
Robust Signal Extraction Methods and Monte Carlo Sensitivity Studies for the Sudbury Neutrino Observatory and SNO+ Experiments

by Alexander Joseph Wright

A thesis submitted to the Department of Physics, Engineering Physics and Astronomy

in conformity with the requirements

for the degree of Doctor of Philosophy

Queen’s University

Kingston, Ontario, Canada

September 2009

Copyright © Alexander Joseph Wright, 2009

Abstract

The third and final phase of the Sudbury Neutrino Observatory (SNO) experiment utilized a series of 3He proportional counters called Neutral Current Detectors (NCDs) to detect the neutrons produced by the neutral current interactions of solar neutrinos in the detector. The number of neutrons detected by the NCDs, and hence the total flux of 8B solar neutrinos, has been determined using two novel signal extraction techniques which were designed to be robust against potential unexpected behaviour in the NCD background. These techniques yield total 8B solar neutrino flux measurements of $5.04^{+0.42}_{-0.40}(\text{stat}) \pm 0.28(\text{syst}) \times 10^{6}\,\text{cm}^{-2}\,\text{s}^{-1}$ and $(4.40\text{--}6.43) \times 10^{6}\,\text{cm}^{-2}\,\text{s}^{-1}$, which are in good agreement with previous SNO results and with solar model predictions, and which confirm that previous NCD analyses were not unduly affected by unexpected background behaviour.

The majority of the hardware from the now-completed SNO experiment will be reused to create a new liquid scintillator based neutrino experiment called SNO+. An important part of the SNO+ physics program will be a search for neutrinoless double beta decay, carried out by dissolving 150Nd into the scintillator. The sensitivity of the SNO+ experiment to neutrinoless double beta decay has been evaluated. If loaded at 0.1% (w/w) with natural neodymium, after 1 kT·a of data taking SNO+ would have a 90% C.L. sensitivity of $T^{0\nu}_{1/2} > 8.0 \times 10^{24}$ a or better 50% of the time; if the experiment were run with neodymium enriched to 50% in 150Nd this limit improves to $57 \times 10^{24}$ a.

Under a reasonable choice for the 150Nd neutrinoless double beta decay matrix element, these half lives correspond to upper limits on the effective Majorana neutrino mass of 112 meV and 42 meV, respectively. These limits are competitive with those expected from all other near-term neutrinoless double beta decay experiments.

Acknowledgements

The author would like to extend his thanks to all of those who have provided him with guidance, support and friendship over the past years.

The Particle Astrophysics Group at Queen’s University provided a wonderfully supportive environment during my tenure as a graduate student. The faculty, including the emeritus faculty, were friendly, knowledgeable and approachable, and always took the time to assist students. The postdoctoral researchers and technical staff were extremely capable and genuinely interested in helping the students with their projects. My fellow graduate students proved to be extremely valuable collaborators, as well as great people to be around. In my experience, the Queen’s Particle Astrophysics Group was composed entirely of excellent physicists who were also good people, which made working as a part of the group extremely rewarding on both the professional and the personal level.

The faculty at Queen’s University, particularly Ian Towner, provided excellent coursework instruction which gave me the theoretical basis necessary to understand the experiments upon which I worked.

The SNO and SNO+ collaborations provided me with outstanding examples of how large collaborations can do excellent science while maintaining amicable and mutually respectful relationships among their members. I understand that this is not always the case in collaborations of this size, and it has made working on these experiments both fruitful and enjoyable.

While individual thanks could, and perhaps should, be given to each person with whom I have worked, I will restrict myself to a further four. First, special mention is due to Mark Chen for his excellent supervision, teaching and guidance. My frequent long discussions with Mark taught me more about physics and about being a good physicist than did any other aspect of my graduate experience. Second, I wish to acknowledge Aksel Hallin who, during the time when our tenures at Queen’s overlapped, was effectively my second supervisor. I have tried in many ways to model myself as an experimentalist after Aksel’s excellent example. Third, I wish to thank Ken Clark for many helpful and enjoyable discussions about both physics and non-physics topics. Finally, I would like to thank Chuck Hearns for his practical advice regarding just about everything. I consider each of these gentlemen to be a close personal friend, and knowing them has enriched me personally as much as it has professionally.

Finally, and most importantly, I would like to thank my family for their support and encouragement over the years. Without the strong foundation which they provided I would not have been able to achieve what I have.

This work was supported financially by the Natural Sciences and Engineering Research Council through the P.G.S. and C.G.S. programs, and the Government of Ontario through the O.G.S.S.T. program.

Statement of Originality

As is always the case in experiments involving large collaborations, achieving the results described in this thesis required the effort of many individuals. As is also generally the case, however, the thesis focuses on those areas towards which the author made particularly important contributions.

The robust signal extraction techniques developed for use on the SNO NCD data, which are described in Chapter 4, were entirely the work of the author. They were developed in the context of discussions within the SNO collaboration about how best to treat the poorly understood NCD background in signal extraction, in which the author played a significant role. The author also participated in the development of the Monte Carlo model of the NCD pulse shapes and energy spectra and made important contributions to the development of a pulse shape analysis technique (which is not described in this thesis) which provided some ability to distinguish alpha and neutron events in the NCDs. In addition, the author played a central role in the development of the semi-empirical method of scope live fraction determination described in Section 5.3.3, and was responsible for defining and evaluating the systematic uncertainties (described in Section 5.2) associated with the use of an empirical neutron PDF in the signal extraction.

On the SNO+ experiment, the author was involved in almost every aspect of the development of the project. The author made important contributions to the development of the LAB liquid scintillator, playing a central role in the development of the LAB - acrylic chemical compatibility tests and the characterization of the LAB optical properties. The author was also responsible for modifying the SNOMAN Monte Carlo code to allow it to simulate the optical properties of a liquid scintillator and for using the modified Monte Carlo to study the light output of SNO+ under different optical conditions (including the addition of the neodymium double beta decay isotope). These efforts are described in Section 6.2.1. The author was also responsible for the initial investigations of the feasibility (both optical and chemical) of loading double beta decay isotope into the LAB via nanoparticle suspension, and assisted in the development of the ultimate organometallic loading technique, both of which are described in Section 6.1.5. At the same time, the author created the physics sensitivity Monte Carlo code described in Section 7.4.1, and used it to investigate the sensitivity of the SNO+ experiment to the low energy solar neutrinos and to demonstrate that it was possible for SNO+ to carry out a competitive search for neutrinoless double beta decay, as discussed in Chapter 7. These studies formed an important part of the physics basis for the ultimately successful SNO+ funding proposals.

Contents

Abstract
Statement of Originality
Table of Contents
List of Tables
List of Figures

Chapter 1. Introduction

Chapter 2. Solar Neutrinos and the Sudbury Neutrino Observatory
2.1 Solar Neutrinos and the Solar Neutrino Problem
2.2 The Sudbury Neutrino Observatory
2.3 The Published SNO Solar Neutrino Results
2.4 Neutrino Oscillations
2.4.1 Vacuum Oscillations
2.4.2 Matter Oscillation
2.4.3 Current Constraints on the Solar Neutrino Mixing Parameters

Chapter 3. The SNO Neutral Current Detector Phase
3.1 The Neutral Current Detectors
3.2 Pulse Production in the NCDs
3.3 NCD Data Acquisition
3.4 NCD Signals and Backgrounds
3.4.1 Neutron Signal
3.4.2 Alpha Backgrounds
3.4.3 Other Physics Backgrounds
3.5 Non-Physics Backgrounds and Data Cleaning
3.5.1 Data Cleaning
3.5.2 NNNAs
3.6 NCD Calibration
3.6.1 Electronics Calibration
3.6.2 Physics Calibration
3.7 Abnormal NCD Behaviour
3.8 Understanding the Alpha Background
3.8.1 Alpha Monte Carlo
3.9 Published NCD Results

Chapter 4. Robust Determination of the NCD Neutron Number
4.1 The “Polynomial Method”
4.1.1 Polynomial Order
4.1.2 Binning
4.1.3 Fit Range
4.2 The “Overlap Method”
4.2.1 Method Description
4.2.2 Fit Range
4.2.3 Binning
4.2.4 Assumed Alpha Shape
4.2.5 Defining the Overlap Limits
4.3 Testing the Methods
4.4 Robust Signal Extraction Summary

Chapter 5. Robust NCD Measurement of the Total Solar Neutrino Flux
5.1 The NCD Neutron Number
5.2 Neutron Number Systematic Uncertainties
5.2.1 Energy Resolution
5.2.2 Energy Scale
5.2.3 MUX Threshold Effects
5.2.4 24Na Statistics
5.2.5 24Na Non-Uniformity
5.2.6 Backgrounds in the 24Na
5.2.7 Neutron PDF Systematic Uncertainties Summary
5.2.8 Verification with 2006 24Na Data
5.3 Live Fraction and Detection Efficiency
5.3.1 Shaper Live Fraction
5.3.2 MUX Live Fraction
5.3.3 Scope Live Fraction
5.3.4 Shaper Threshold Efficiency
5.3.5 MUX Threshold Efficiency
5.3.6 Data Cleaning Efficiency
5.3.7 Efficiency and Live Fraction Summary
5.4 Neutron Backgrounds in the NCDs
5.5 Live Time and Capture Efficiency
5.5.1 Live Time
5.5.2 Capture Efficiency
5.5.3 Live Fraction and Detection Efficiency Summary
5.6 NCD 8B Flux Result
5.6.1 Neutron Rate to Flux Conversion
5.6.2 The NCD 8B Flux Measurement
5.7 SNO Conclusion

Chapter 6. The SNO+ Experiment
6.1 SNO+ Physics
6.1.1 Solar Neutrinos
6.1.2 Reactor Anti-neutrinos
6.1.3 Geo-Neutrinos
6.1.4 Supernova Neutrinos
6.1.5 Neutrinoless Double Beta Decay
6.1.6 Physics Summary
6.2 Experimental Design and Development
6.2.1 Scintillator Development
6.2.2 Acrylic Vessel Hold-Down
6.2.3 SNO+ Current Status and Outlook

Chapter 7. The SNO+ Neutrinoless Double Beta Decay Search
7.1 Introduction to Neutrinoless Double Beta Decay
7.1.1 The Neutrinoless Double Beta Decay Rate
7.1.2 Implications of Detecting Neutrinoless Double Beta Decay
7.1.3 Calculation of Double Beta Decay Matrix Elements
7.2 Neutrinoless Double Beta Decay in SNO+
7.3 Development of Neodymium Loaded Liquid Scintillator
7.4 Simulations to Estimate the SNO+ Sensitivity to Neutrinoless Double Beta Decay
7.4.1 Simulation Technique
7.4.2 Backgrounds Simulated
7.4.3 Fit Range
7.4.4 Quantifying the 0νββ Sensitivity
7.5 Simulated SNO+ 0νββ Sensitivity
7.5.1 Comparison with Other 0νββ Experiments
7.6 Impact of Detector Parameters on the SNO+ 0νββ Sensitivity
7.6.1 Background Levels
7.6.2 Light Output/Energy Resolution
7.6.3 Neodymium Loading Levels
7.6.4 Energy Resolution Systematics
7.6.5 Pileup Backgrounds and Excited State Decays
7.7 Summary of the SNO+ Neutrinoless Double Beta Decay Search
7.8 SNO+ Conclusion

References

List of Tables

4.1 The trials penalty in polynomial selection.
4.2 The effect of the ∆χ² target on superfluous parameter acceptances.
4.3 The effect of the ∆χ² target on false parameter rejections.
4.4 The overlap of the NCD PRL PDFs.
4.5 The fraction of correct overlap neutron confidence intervals in Monte Carlo tests.
4.6 Polynomial order selection in the presence of hypothetical NNNAs.
4.7 The effect of polynomial order on fits to the blind data set.
4.8 Overlap limit tests on the blind data set.
4.9 Tests adding known NNNAs and alphas to the blind data.
5.1 The effect of polynomial order on fits to the full NCD data set.
5.2 Summary of the 24Na systematic uncertainties.
5.3 NCD live fractions and threshold efficiencies.
5.4 Summary of the neutron backgrounds.
5.5 Summary of the systematic uncertainties in the 8B flux measurement.
6.1 The expected SNO+ supernova neutrino signal.
7.1 150Nd nuclear matrix element calculations.
7.2 Baseline SNO+ 0νββ sensitivity with natural neodymium.
7.3 Baseline SNO+ 0νββ sensitivity with enriched neodymium.
7.4 Pileup event fraction as a function of background rate.
7.5 Pileup based limits on the tolerable levels of low energy radioisotopes in SNO+.
7.6 The results of neodymium γ-ray counting.

List of Figures

2.1 The solar fusion reactions.
2.2 The solar neutrino energy spectra.
2.3 The discrepancy between solar model predictions and experimental results.
2.4 The SNO detector.
2.5 The SNO Phase I energy fit.
2.6 The SNO Phase II isotropy fit.
2.7 The SNO neutral current, charged current, and elastic scattering flux results.
2.8 The energy dependence of the solar neutrino survival probability.
2.9 The current experimental constraints on the solar neutrino mixing parameters.
3.1 A schematic diagram of a Neutral Current Detector string.
3.2 The positions of the NCDs within SNO.
3.3 Example neutron pulse shapes.
3.4 Example alpha pulse shapes.
3.5 The neutron energy spectrum.
3.6 The alpha energy spectra.
3.7 Examples of spurious NCD events.
3.8 The effect of the data cleaning cuts on the NCD energy spectrum.
3.9 The energy spectra of some of the identified NNNAs.
3.10 The 4He alpha spectra.
3.11 The Monte Carlo alpha spectrum.
3.12 The NCD energy fit published in PRL.
4.1 Tests of the polynomial choice algorithm.
4.2 The correlation between pull and ∆χ².
4.3 The effect of bin size on the overlap fits.
4.4 The background shape assumed in the overlap fits.
4.5 A comparison of the Monte Carlo and 4He alpha spectra.
4.6 The overlap of different possible background shapes.
4.7 The overlap spread admitted by Monte Carlo alpha spectrum systematics.
4.8 Monte Carlo ensemble tests of the robust signal extraction methods.
4.9 A comparison of the effect of potential NNNAs on the polynomial and standard fit biases.
4.10 A comparison of the effect of NNNAs on the overlap and standard fit biases.
4.11 A comparison of the effect of NNNAs on the overlap and polynomial fit biases.
4.12 A comparison of the effect of NNNAs on the polynomial and standard fit pulls.
4.13 A comparison of the effect of NNNAs on the overlap and standard fit pulls.
4.14 A comparison of the effect of NNNAs on the overlap and polynomial fit pulls.
4.15 Fits to the blind data set.
4.16 Fits to the blind data set with differing overlap limits.
5.1 Final fits to the full NCD data set.
5.2 The changes in the NCD energy calibration constants over time.
5.3 The 24Na energy spectrum after systematic smearing.
5.4 The effect of the MUX threshold on the neutron spectrum.
5.5 The scope live fraction as a function of time since last event.
5.6 A comparison of the current neutral current flux results with the previous SNO results.
6.1 The solar electron neutrino survival probability curve with non-standard interactions.
6.2 Reactor anti-neutrino effects in SNO+.
6.3 The geo-neutrino energy spectrum.
6.4 The results of the Series I LAB - acrylic compatibility tests.
6.5 The current results of the Series II LAB - acrylic compatibility tests.
6.6 A candidate design for the SNO+ acrylic vessel hold-down.
7.1 The limits on the effective Majorana neutrino mass as a function of the minimum neutrino mass.
7.2 The optical attenuation of 0.6% Nd2O3(np) suspension in pseudocumene.
7.3 The simulated light output in SNO+ as a function of neodymium loading.
7.4 The simulated SNO+ energy spectrum including low energy backgrounds.
7.5 The effect of fit range on the SNO+ 0νββ fits.
7.6 The baseline SNO+ 0νββ sensitivity.
7.7 The SNO+ 0νββ signal at ⟨mν⟩ = 270 meV.
7.8 The effect of background levels on the SNO+ 0νββ limits.
7.9 The effect of energy resolution on the SNO+ 0νββ limits.
7.10 The effect of neodymium loading on the SNO+ 0νββ limits.
7.11 The effect of energy resolution uncertainty on the SNO+ 0νββ limits.
7.12 The addition of exponential tails to the simulated SNO+ energy response.
7.13 The effect of energy resolution tails on the SNO+ 0νββ limits.
7.14 The effect of energy resolution tails projected onto the spillover axis.
7.15 SNO+ false 0νββ “detections.”
7.16 Effect of excited state decays on the 2νββ spectrum.
7.17 The distortion of the 2νββ spectrum by event pileup.

Chapter 1. Introduction

In recent years, our understanding of the fundamental properties of neutrinos has been significantly altered. The discovery of the non-zero neutrino mass, which was first suspected as a solution to the Solar Neutrino Problem and which has now been conclusively demonstrated through the observation of neutrino oscillations by a number of experiments, represented the first significant change in the Standard Model of Particle Physics in many years. The existence of a small but non-zero neutrino mass poses an interesting problem for theorists and experimentalists alike; for theorists, the challenge is to explain how such a small mass might arise, while the challenge for experimentalists is to precisely describe the additional neutrino properties and behaviours that are made possible by the non-zero mass. In particular, there is a great deal of interest in determining the absolute scale and ordering of the neutrino masses and in discovering whether neutrinos are Majorana or Dirac particles. Determining these properties, which would fill some of the last gaps in our knowledge of the Standard Model particles, will help us to understand the role that neutrinos played in the evolution of the universe, and perhaps even help to explain the origin of the matter-antimatter asymmetry.

The Sudbury Neutrino Observatory was one of the experiments which first demonstrated the existence of a non-zero neutrino mass by showing conclusively that the solar neutrinos undergo flavour change. After this initial discovery, SNO continued to operate, collecting data to confirm its initial results and to measure more precisely the parameters which describe solar neutrino oscillations. The second chapter of this thesis describes the SNO experiment in the context of its contribution to the solution of the Solar Neutrino Problem. Neutrino oscillations and the formalism describing them are also introduced. Chapter 3 gives further detail about the third and final phase of SNO. During this phase, the neutrons produced by neutrino interactions within the detector were primarily detected using an array of proportional counters. The design and operation of these counters is described, and the results of the phase are summarized.

In determining the number of neutrons captured by the proportional counters, it was necessary to perform a signal extraction in a regime where the behaviour of the neutron signal was understood very well but the behaviour of the background was more difficult to predict. This ill-defined uncertainty in the description of the NCD background posed a serious potential problem to traditional χ² and maximum likelihood signal extraction techniques wherein an incorrect description of the background shape would in general result in a bias in the extracted number of neutrons. Chapter 4 describes the development and validation of two novel signal extraction techniques which were created to help address this problem by minimizing their dependence on prior knowledge of the background distributions. In the first method, a polynomial is used to describe the background shape, with the optimal polynomial order necessary to describe the data selected a posteriori by comparing, using a goodness-of-fit based selection algorithm, fits to the data using different polynomial orders. In the second method, a neutron number confidence interval is determined using only the standard assumption that it is physically unreasonable for the background shape to mimic the behaviour of the neutron signal too closely. In both methods, lessening the dependence of the signal extraction on prior knowledge of the background spectrum makes the method more robust than traditional signal extraction techniques when unexpected background spectra are present in the data, at the cost of a decrease in fit precision. In Chapter 5, these new signal extraction techniques are used to determine the number of neutrons detected by the proportional counter array and hence to produce a new, robust, measurement of the total flux of 8B solar neutrinos.
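To make the first idea concrete, the sketch below implements a posteriori polynomial order selection of the kind described above. It is a minimal illustration and not the thesis analysis code: the neutron PDF, the binned data arrays, and the ∆χ² acceptance target are all placeholders (the actual choice of target, including the trials penalty, is discussed in Chapter 4).

```python
# Minimal sketch of "polynomial method" order selection (illustrative only;
# the neutron PDF, binned data, and delta-chi2 target below are placeholders).
import numpy as np
from scipy.optimize import curve_fit

def make_model(neutron_pdf, order):
    """Model: N_neutrons * neutron PDF + polynomial background of given order."""
    def model(E, n_neutrons, *coeffs):
        return n_neutrons * neutron_pdf(E) + np.polyval(coeffs, E)
    return model

def fit_order(E, counts, neutron_pdf, order):
    """Fit one polynomial order; return the neutron number and the chi2."""
    model = make_model(neutron_pdf, order)
    p0 = [100.0] + [1.0] * (order + 1)           # crude initial guesses
    popt, _ = curve_fit(model, E, counts, p0=p0,
                        sigma=np.sqrt(np.maximum(counts, 1)))
    chi2 = np.sum((counts - model(E, *popt)) ** 2 / np.maximum(counts, 1))
    return popt[0], chi2

def select_order(E, counts, neutron_pdf, max_order=8, delta_chi2_target=4.0):
    """Raise the polynomial order until the chi2 improvement falls below the
    target, then return the neutron number from the last accepted fit."""
    best_n, best_chi2 = fit_order(E, counts, neutron_pdf, 0)
    for order in range(1, max_order + 1):
        n, chi2 = fit_order(E, counts, neutron_pdf, order)
        if best_chi2 - chi2 < delta_chi2_target:
            break                                # extra order not justified
        best_n, best_chi2 = n, chi2
    return best_n
```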

The SNO experiment has now been completed, but investigations of neutrino properties at SNOLAB will continue with the new SNO+ experiment. SNO+ is a liquid scintillator experiment that is currently being constructed from the SNO detector. An introduction to the SNO+ experiment, including its history and development and an overview of its physics potential, is given in Chapter 6. SNO+ will pursue a broad program of neutrino research, including investigation of the low energy solar neutrinos, geo-neutrinos, reactor anti-neutrinos, and supernova neutrinos. In addition, by loading the scintillator with neodymium SNO+ will also stage a search for neutrinoless double beta decay. Detection of neutrinoless double beta decay would demonstrate that neutrinos are Majorana particles and provide information about the absolute neutrino mass. The search for neutrinoless double beta decay thus has the potential to provide answers to two of the most important outstanding questions about the properties of the neutrino. In Chapter 7, the double beta decay process is introduced and the SNO+ double beta decay search is described. The sensitivity of the SNO+ double beta decay search is then estimated using Monte Carlo simulations and the effect of different experimental factors on the sensitivity of the experiment is explored. A comparison of the SNO+ double beta decay sensitivity with that of other near- and medium-term double beta decay experiments is also given.

In summary, this thesis deals with the study of fundamental neutrino properties in two separate but related experiments. The analysis of the data from the final phase of the Sudbury Neutrino Observatory confirms again the observation that neutrinos have mass and helps to further constrain the solar neutrino oscillation parameters. The development of the SNO+ experiment, on the other hand, sets the stage for further investigations of neutrino behaviour, including the staging of a competitive search for neutrinoless double beta decay. Thus, the thesis describes how an experiment which helped to demonstrate that neutrinos are massive particles will, in a new form, remain at the forefront of efforts to learn more about the nature of that mass and will, as a result, continue to expand our knowledge of these particles at the most fundamental level.

Chapter 2. Solar Neutrinos and the Sudbury Neutrino Observatory

2.1 Solar Neutrinos and the Solar Neutrino Problem

Since the early part of the 20th century it has been understood that the energy emitted by the Sun is produced through a series of nuclear fusion reactions [1]. These reactions are now understood to occur in two series, the “pp chain” and the “CNO cycle,” both of which ultimately result in the fusion of protons into 4He nuclei. These reaction series are illustrated in Figure 2.1. It is interesting to note that the early papers on solar fusion make no reference to neutrinos; the solar neutrinos were first discussed in 1948, about 10 years later [2].


Figure 2.1: The nuclear fusion reactions that produce the energy emitted by the Sun. Of these, the pp chain is responsible for the bulk of the solar energy generation. Only the dominant reactions in the CNO cycle are shown.

By the early 1960s, the cross-sections for most of the important solar fusion reactions had been measured well enough, and models of the solar interior were developed to a sufficient extent, that the fluxes and energy spectra of the solar neutrinos could be predicted with some reliability (a modern prediction of the solar neutrino spectra is shown in Figure 2.2). These calculations predicted that the flux of higher energy solar neutrinos was large enough to be detectable by terrestrial experiments; as a result, a chlorine-based solar neutrino experiment was constructed in the Homestake Mine in South Dakota [2].

Figure 2.2: The energy spectra of the solar neutrinos as predicted by solar modelling. The pp chain neutrino fluxes are represented by solid lines, while the CNO neutrino fluxes are represented by dashed lines. Figure from [3].

The flux of solar neutrinos measured by this “Homestake Chlorine Experiment” [4] and subsequent radiochemical [5, 6, 7] and real-time [8, 9] experiments was surprising; in every case, the measured neutrino flux was significantly lower than the solar model predictions. As it became evident that this discrepancy could be ascribed neither to errors in the solar model calculations nor to experimental mistakes, the difference between the measured and predicted solar neutrino fluxes became known as the “Solar Neutrino Problem.” Figure 2.3 shows a modern summary of the discrepancy between the solar neutrino flux measured by the radiochemical and light water experiments and the predictions of the standard Solar Model.

Figure 2.3: The discrepancy between the predictions of the Standard Solar Model and the solar neutrino fluxes measured by the radiochemical (Ga and Cl) and elastic scattering (H2O) experiments that formed the basis of the Solar Neutrino Problem. Figure adapted from [3].

Numerous potential solutions to the solar neutrino problem were discussed over the subsequent years [10]. These could be divided into two groups: those which suggested an incomplete understanding of the Sun, in which case fewer neutrinos than initially expected might be produced, and those which predicted a new behaviour of the neutrino, by which some fraction of the neutrinos produced in the Sun could evade detection by the Earth-based experiments. In the former category were non-standard solar models, which considered, among other ideas, non-standard isotopic abundances in the solar interior, possible effects of magnetic fields and rotations of the solar interior, and thermal disequilibrium between the solar surface and interior. Possible resonances in the cross-sections for the solar fusion reactions were also discussed. The latter category included the possibility of neutrino decay to a light scalar or pseudo-scalar particle and another neutrino flavour [11], the transition of neutrinos to (sterile) right-handed neutrinos as the result of magnetic precession [12], and oscillations between neutrino flavours similar to those that had already been observed in the quark sector [13, 14, 15]¹.

In 1985 it was realized [16] that a heavy water ($^{2}$H$_2$O or “D$_2$O”) based solar neutrino experiment would be able to distinguish between the reduced neutrino production and flavour change scenarios. In such an experiment, neutrinos would be detected primarily through two nuclear interactions with deuterium: the charged current interaction,

$$\nu_e + d \to p + p + e^-, \tag{2.1}$$

which is sensitive only to electron neutrinos, and the neutral current interaction,

$$\nu_{e,\mu,\tau} + d \to \nu_{e,\mu,\tau} + n + p, \tag{2.2}$$

which is equally sensitive to all three neutrino flavours. A heavy water experiment would thus be able to measure the total active solar neutrino flux using the neutral current interaction and, by comparing that flux with the electron neutrino flux measured by the charged current interaction, determine directly whether or not the solar neutrinos undergo flavour change.

2.2 The Sudbury Neutrino Observatory

The Sudbury Neutrino Observatory (SNO) was a heavy water based solar neutrino experiment built to resolve the Solar Neutrino Problem. The detector, shown in Figure 2.4, consisted of 1000 T of heavy water in a 12 m diameter acrylic sphere suspended in 7400 T of ultra-pure light (i.e. isotopically normal) water to shield the experiment from radioactive backgrounds. The active volume was surrounded by an array of about 9500 photomultiplier tubes (PMTs) which were used to detect the Čerenkov light given off by the energetic electrons produced, directly or indirectly, by the neutrino interactions in the heavy water. To shield against background from cosmic radiation, the experiment was located 6800 feet underground in Vale Inco’s Creighton mine near Sudbury, ON. The design of the SNO experiment is described in detail in [17].

1 Neutrino oscillations could account for the observed deficit of solar neutrinos because the early neutrino detectors were either only sensitive to, or primarily sensitive to, electron flavour neutrinos, like those produced in the solar fusion reactions. Neutrino oscillations would cause some of these solar electron neutrinos to appear as muon or tau neutrinos on Earth, thereby lowering the number of neutrinos detected by these experiments.

Figure 2.4: An artist’s depiction of the SNO detector.

In addition to the charged current and neutral current reactions discussed earlier, SNO also detected neutrinos through the neutrino-electron elastic scattering interaction,

$$\nu_{e,\mu,\tau} + e^- \to \nu_{e,\mu,\tau} + e^-. \tag{2.3}$$

This interaction (which is also the neutrino detection channel used in the light water neutrino experiments) is sensitive to all three neutrino flavours, although the cross section is about six times higher for interactions with electron neutrinos.

The charged current and elastic scattering interactions in SNO were detected by observing the Čerenkov light from the directly produced energetic electrons in the PMT array. Because Čerenkov light production is both prompt and highly directional, it was possible to use the relative positions, distribution in time, and number of photons collected by the PMT array to reconstruct the position, direction and energy of the primary electron. In the case of the elastic scattering interaction, the direction of the electron is strongly correlated with the direction of the neutrino, while the direction of the charged current electrons is weakly anti-correlated with the neutrino direction. The direction of an event with respect to the position of the Sun thus provided a powerful means of separating elastic scattering events from the charged current and (isotropic) neutral current signals.

Separation of the charged current and neutral current interactions was more subtle, as it involved separating the Čerenkov signals of the directly produced charged current electrons from the secondary signals produced by the neutral current neutrons. To ensure that the charged current - neutral current separation was done correctly and accurately, the SNO experiment was carried out in three phases, each with a different primary means of performing this separation.

In the first phase of the experiment, which ran from November 1999 to May 2001, the active volume of the detector contained pure heavy water. In this configuration, the neutrons produced by the neutral current interaction were primarily detected through their capture on deuterium nuclei, which resulted in the production of single 6.25 MeV γ-rays. These γ-rays subsequently Compton scattered in the heavy water, each of them typically producing a single electron that emitted Čerenkov light. In Phase I, then, the neutral current and charged current interactions both resulted in the creation of single Čerenkov electrons, meaning that the signals could not be differentiated on an event-by-event basis. However, the energy of the electron produced in the charged current reaction is strongly correlated with the energy of the incident neutrino, resulting in a spectrum of electron energies that is quite different from the electron energy spectrum produced from neutron captures on deuterium (which is independent of neutrino energy). The neutral current and charged current signals were therefore primarily separated by fitting the observed electron energy spectrum to the expected charged current and neutral current spectra [18]. This fit is shown in Figure 2.5.

The main disadvantage to the use of an energy fit to separate the charged current and neutral current signals was that the 8B solar neutrino spectral shape must be assumed in order to calculate the expected charged current spectrum. This was not ideal because (as will be discussed later) neutrino oscillations, if they occurred, were expected to be energy dependent, and would therefore result in distortions of the expected charged current spectrum. Thus, in the second phase of the experiment, which ran from July 2001 to October 2003, about two tonnes of NaCl was dissolved into the heavy water. As 35Cl has a higher neutron capture cross section than deuterium, most of the neutral current neutrons detected in Phase II were captured by 35Cl. Such captures released a total of 8.6 MeV distributed among several γ-rays (2.5 on average), each of which could subsequently Compton scatter. Therefore, in Phase II each neutron capture resulted in the production of multiple Čerenkov electrons. Having multiple Čerenkov electrons meant that the light produced by a neutral current event during Phase II was more isotropic on average than the light produced by a (single electron) charged current event. It was therefore possible to use an isotropy parameter, called β14, to statistically separate the charged current and neutral current signals [20], as shown in Figure 2.6. This made it possible to separate and measure the charged current and neutral current fluxes without relying on their energies. In fact, the separation of the charged current and neutral current fluxes in Phase II was done independently in the different energy bins, which allowed SNO to make a measurement of the charged current spectrum and, through it, the 8B solar neutrino spectrum.

Figure 2.5: The energy fit used in Phase I as the primary charged current - neutral current separation variable. The charged current spectrum, which reflects the 8B solar neutrino energy spectrum, extends to higher energy than the neutral current spectrum, which is simply the energy spectrum produced by neutron captures on 2H. Figure from [19].

Finally, in the third phase of SNO, which ran from November 2004 to November 2006, the neutral current neutrons were primarily detected through an array of 3He proportional counters that was deployed in the heavy water volume. These “Neutral Current Detectors” or “NCDs” provided a completely independent detection channel for the neutral current interactions, and hence reduced the correlation between the charged current and neutral current flux measurements². The NCD measurement of the total solar neutrino flux is the topic of the SNO portion of this thesis.

Figure 2.6: The β14 isotropy parameter fit used in Phase II as the primary charged current - neutral current separation variable. Events which produced light more isotropically in the detector, like neutron captures on 35Cl which subsequently produced multiple γ-rays, had lower values of β14, while less isotropic events like the charged current and elastic scattering interactions which produced only single Čerenkov electrons had higher β14 values. Figure from [20].

2.3 The Published SNO Solar Neutrino Results

The SNO experiment showed conclusively that the solar neutrinos undergo flavour change [18, 20, 21]. The total flux of solar neutrinos, as measured by the neutral current reaction in each phase of the experiment, is shown in Figure 2.7(a). Each of these measurements is in good agreement with the predictions of the Standard Solar Model ($(5.69 \pm 0.91) \times 10^{6}\,\text{cm}^{-2}\,\text{s}^{-1}$ [3]). However, as seen in Figure 2.7 (b) and (c), the charged current and elastic scattering fluxes are significantly lower, indicating that approximately 66% of the electron neutrinos produced in the Sun appear as muon or tau neutrinos on Earth.

The SNO results, in combination with those of the other radiochemical and real-time solar neutrino experiments discussed earlier [4, 5, 6, 7, 8, 9, 22] and the KamLAND reactor anti-neutrino experiment [23], show that the observed neutrino flavour change is well described by matter enhanced “MSW”-type neutrino oscillations (described in the next section) but not by neutrino decay or decoherence models [24]. The Solar Neutrino Problem has thus been resolved and neutrino oscillations are now accepted as a fundamental aspect of neutrino physics.

2 In Phases I and II, the fits were essentially background free in the relevant energy range, and the elastic scattering signal was strongly constrained by its directionality with respect to the direction of the Sun. Thus, increasing the neutral current flux in the fit tended to decrease the charged current flux and vice versa, meaning that the uncertainties on the charged current and neutral current fluxes were highly correlated. Reducing this correlation reduces the uncertainty on the charged current / neutral current flux ratio (the most important number in the neutrino flavour change investigations), and hence increases the sensitivity of the SNO experiment to neutrino oscillations.

Figure 2.7: The flux results from the three phases of SNO [19, 20, 21]. The total solar neutrino flux inferred from the rates of neutral current interactions observed in SNO is shown in (a); (b) and (c) show the total neutrino fluxes inferred from the charged current and elastic scattering event rates under the assumption that the solar neutrino flux is purely νe. For each phase, the best fit point is shown in black, the inner red band indicates the range of the statistical uncertainties and the outer blue band shows the range of the total uncertainty (the quadratic sum of the statistical and systematic uncertainties). There is clearly a significant difference between the neutral current, charged current and elastic scattering fluxes, indicating that the solar neutrinos undergo flavour change (note that the x-axis scales are different in the different plots). The prediction of the Standard Solar Model (BS05, [3]) is shown as a vertical band in (a) for comparison with the SNO result - the solid line shows the central value prediction and the dashed lines represent the 1σ uncertainties. The high statistics measurement of the elastic scattering flux from the 1496-day data set of the Super-Kamiokande experiment [9] is shown in (c) for comparison. The apparently low elastic scattering number from the SNO Phase III result has been investigated and appears to be entirely consistent with a statistical fluctuation which seems to have occurred in a single energy bin.

2.4 Neutrino Oscillations

Excellent reviews of the neutrino oscillation theory relevant to solar neutrino oscillations are given in Chapter 9 of [25] and in [26]. The experimentally important oscillation phenomena are outlined below, with the reader referred to the aforementioned references for more detailed derivations.

2.4.1 Vacuum Oscillations

Standard or “vacuum” neutrino oscillations of the type first proposed by Gribov and Pontecorvo [13] arise, as in the quark sector, from imperfect alignment between the neutrino mass eigenstates (ν1, ν2, and ν3) and flavour eigenstates (νe, νµ, and ντ). The relationship between the mass and flavour eigenstates is given by the so-called Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix, $\hat{U}_{PMNS}$, via

$$\begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix} = \hat{U}_{PMNS} \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix}. \tag{2.4}$$

As the PMNS matrix is believed to be unitary, it can be expressed in its most general form in terms of three rotation angles and one complex phase. For neutrino oscillations, the mixing angles are conveniently chosen to be θ12, θ23, and θ13, and the complex phase is usually labelled δ. The general form of the PMNS matrix can then be written

$$\hat{U} \equiv \begin{pmatrix} U_{e1} & U_{e2} & U_{e3} \\ U_{\mu 1} & U_{\mu 2} & U_{\mu 3} \\ U_{\tau 1} & U_{\tau 2} & U_{\tau 3} \end{pmatrix} \tag{2.5}$$

$$= \begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix} \times \begin{pmatrix} c_{13} & 0 & s_{13}e^{-i\delta} \\ 0 & 1 & 0 \\ -s_{13}e^{i\delta} & 0 & c_{13} \end{pmatrix} \times \begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix} \tag{2.6}$$

$$= \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23} - c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23} - s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23} - c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23} - s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{pmatrix} \tag{2.7}$$

where $s_{ij} \equiv \sin\theta_{ij}$ and $c_{ij} \equiv \cos\theta_{ij}$.

Quantum mechanics dictates that as a superposition of mass eigenstates (e.g. a neutrino produced in a pure flavour eigenstate) propagates, the probability of that particle being detected in any one of the flavour eigenstates changes with time. In fact, these probabilities oscillate, with the frequency of oscillation determined by the difference between the energies of the propagating states. With the assumptions that the neutrino rest masses are small compared to their energies and that the different mass eigenstates propagate with equal momenta, this energy splitting can be expressed in terms of the differences between the mass eigenstates. For a neutrino produced in the electron flavour eigenstate, the probability, $P_{ee}$, that the neutrino is detected in that same eigenstate after propagating for a time t is given by

$$\begin{aligned} P_{ee} = 1 &- \frac{1}{2}\cos^4(\theta_{13})\sin^2(2\theta_{12})\left[1 - \cos\left(\frac{\Delta^2 m_{21}}{2E\hbar}\,t\right)\right] \\ &- \frac{1}{2}\sin^2(2\theta_{13})\left[1 - \cos\left(\frac{\Delta^2 m_{31}}{2E\hbar}\,t\right)\right] \\ &- \frac{1}{2}s_{12}^2\sin^2(2\theta_{13})\left[\cos\left(\frac{\Delta^2 m_{31}}{2E\hbar}\,t\right) - \cos\left(\frac{\Delta^2 m_{21} + \Delta^2 m_{31}}{2E\hbar}\,t\right)\right], \end{aligned} \tag{2.8}$$

where E is the total energy of the propagating neutrino and $\Delta^2 m_{ij} = m_i^2 - m_j^2$ is the difference in the squares of the masses of the neutrino mass eigenstates. Thus, for neutrino oscillations there are three important mass splittings, ∆m12, ∆m23, and ∆m13, although only two of these are independent once their ordering is known.
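As a numerical cross-check of the decomposition in Equations 2.5-2.7, the three rotation matrices can be multiplied out and the result tested for unitarity. The minimal sketch below does exactly that; the angle values are round illustrative numbers, not fit results from this thesis.

```python
# Construct the PMNS matrix from the factored form of Equation 2.6 and
# verify unitarity (angle values below are illustrative, not fits).
import numpy as np

def pmns(theta12, theta23, theta13, delta):
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    R23 = np.array([[1, 0, 0],
                    [0, c23, s23],
                    [0, -s23, c23]], dtype=complex)
    R13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0],
                    [-s12, c12, 0],
                    [0, 0, 1]], dtype=complex)
    return R23 @ R13 @ R12          # same ordering as Equation 2.6

U = pmns(np.radians(34), np.radians(45), np.radians(9), 0.0)
assert np.allclose(U @ U.conj().T, np.eye(3))   # unitarity: U U† = 1
```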

Because the size of the neutrino production region in the Sun is large compared to neutrino oscillation lengths, terrestrial solar neutrino experiments are sensitive only to the phase-averaged survival probability (i.e. the average over t in Equation 2.8):

$$P_{ee} = 1 - \frac{1}{2}\cos^4(\theta_{13})\sin^2(2\theta_{12}) - \frac{1}{2}\sin^2(2\theta_{13}). \tag{2.9}$$

Thus we can see that if neutrino oscillations were restricted to vacuum phenomena, the solar neutrino survival probability would be independent of neutrino energy and would depend only on two mixing angles.
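As a quick numerical check (taking tan²θ12 = 0.47, the value used in Figure 2.8, and neglecting the small θ13), Equation 2.9 gives

$$P_{ee} \approx 1 - \frac{1}{2}\sin^2(2\theta_{12}) \approx 1 - \frac{1}{2}(0.87) \approx 0.56,$$

noticeably higher than the roughly one-third survival probability implied by the SNO flux results for the high energy 8B neutrinos. This already suggests that vacuum oscillations alone cannot describe the solar data.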

The situation is in fact even simpler than this. Although three flavour neutrino oscillations are in general described by 6 parameters (three mixing angles, two mass splittings and one complex phase), it turns out that, because θ13 is small and ∆m12 is significantly less than ∆m23 [27], the PMNS matrix factors naturally, both physically and mathematically, into the form shown in Equation 2.6.

The different terms in this equation are experimentally accessible through different experiments; the θ13 term by short baseline reactor anti-neutrino experiments, the θ23 term by atmospheric neutrino experiments and the θ12 term by solar neutrino experiments and by long-baseline reactor experiments [26]. For solar neutrino experiments, therefore, the two-flavour approximation, in which θ13 is taken to be zero and hence in which only θ12 and ∆m²12 appear, is sufficiently accurate to describe the observed oscillation phenomena. Under this approximation, the phase-averaged electron neutrino survival probability in vacuum is simply

$$P_{ee} = 1 - \frac{1}{2}\sin^2(2\theta_{12}). \tag{2.10}$$

2.4.2 Matter Oscillation

As Wolfenstein [14] and then Mikheyev and Smirnov [15] pointed out, there is an additional oscillation effect when neutrinos propagate through matter. As neutrinos propagate through matter, the neutrino-matter interaction adds an “effective mass” term to the neutrino mass (Wolfenstein used the analogy of matter having an “index of refraction” for neutrino propagation). If all neutrinos experienced identical interactions with matter this would have no effect on oscillation phenomena. However, electron neutrinos do interact differently with matter than the other neutrino flavours due to the possibility of charged current interactions between electrons and electron neutrinos. This means that electron neutrinos gain a different “effective mass” from matter interactions than do the other neutrino flavours, and this difference in effective mass can affect neutrino oscillations.

In the two-flavour approximation, the survival probability for a neutrino propagating in uniform matter is given by

$$P_{ee} = 1 - \frac{1}{2}\sin^2(2\theta_m)\left[1 - \cos\left(\frac{2\pi L \Delta M}{4\pi E \hbar c}\right)\right], \tag{2.11}$$

where the matter mixing angle, $\theta_m$, is given by

$$\sin(2\theta_m) = \frac{\Delta m^2}{\Delta M}\sin(2\theta) \tag{2.12}$$

or by

$$\tan(2\theta_m) = \frac{\Delta m^2 \sin(2\theta)}{\Delta m^2\cos(2\theta) - 2\sqrt{2}\,G_F n_e E}, \tag{2.13}$$

and the difference in the matter energy eigenstates, $\Delta M = E_2 - E_1$, is given by

$$\Delta M = \sqrt{\left(\Delta m^2\cos(2\theta) - 2\sqrt{2}\,G_F n_e E\right)^2 + \left(\Delta m^2\sin(2\theta)\right)^2}. \tag{2.14}$$

In these expressions, $G_F$ is the Fermi constant, $n_e$ is the electron number density, θ and ∆m² are the vacuum mixing angle and the rest mass splitting, respectively (there is only one of each in the two flavour approximation), and $E_2$ is defined to be larger than $E_1$. It can now be seen that, in addition to the mixing angle, the survival probability for neutrinos propagating in uniform matter depends on the neutrino energy and mass splitting, as well as the matter density and the strength of the neutrino-matter interaction. One important feature of matter mixing is that, as can be seen from Equation 2.13 or from Equations 2.12 and 2.14, the matter mixing is maximal when

$$\cos(2\theta) = \frac{2\sqrt{2}\,G_F n_e E}{\Delta m^2}, \tag{2.15}$$

regardless of the size of the vacuum mixing angle.
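The following is a minimal numerical sketch of Equations 2.13 and 2.15. The conversion 2√2 G_F n_e E ≈ 1.52×10⁻⁷ eV² × Y_e ρ[g/cm³] × E[MeV] is a standard approximation, and the electron fraction, density, and mixing values used here are illustrative assumptions rather than thesis inputs.

```python
# Two-flavour matter mixing sketch (Equations 2.13 and 2.15). The factor
# 1.52e-7 eV^2 per (Y_e * g/cm^3 * MeV) is the standard approximation for
# 2*sqrt(2)*G_F*n_e*E; densities and mixing values are illustrative.
import numpy as np

DM2 = 7.6e-5                        # vacuum mass splitting, eV^2
THETA = np.arctan(np.sqrt(0.47))    # vacuum angle from tan^2(theta) = 0.47

def matter_term(E_MeV, rho_gcc, Ye=0.7):
    """A = 2*sqrt(2)*G_F*n_e*E in eV^2 (assumed electron fraction Ye)."""
    return 1.52e-7 * Ye * rho_gcc * E_MeV

def matter_angle(E_MeV, rho_gcc):
    """theta_m from Equation 2.13; arctan2 keeps theta_m in (0, pi/2)."""
    A = matter_term(E_MeV, rho_gcc)
    return 0.5 * np.arctan2(DM2 * np.sin(2 * THETA),
                            DM2 * np.cos(2 * THETA) - A)

def resonance_density(E_MeV, Ye=0.7):
    """Density (g/cm^3) satisfying the resonance condition, Equation 2.15."""
    return DM2 * np.cos(2 * THETA) / (1.52e-7 * Ye * E_MeV)

# A 10 MeV 8B neutrino resonates near ~25 g/cm^3, well below the ~150 g/cm^3
# solar core density, so for these neutrinos the resonance is "fully on".
print(resonance_density(10.0), np.degrees(matter_angle(10.0, 150.0)))
```

Using arctan2 rather than a plain division handles the sign flip of the denominator in Equation 2.13 at the resonance, where θm passes through 45°.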

For neutrinos propagating in non-uniform matter densities (like the neutrinos propagating out of the Sun), determining the survival probability is more challenging and involves continual “re-mapping” of the neutrino mass eigenstates as the density changes. Conceptually, the matter effect can be thought of as changing the relationship between the mass and flavour eigenstates. This means that an electron neutrino produced in the very high matter density of the solar core has a different admixture of mass eigenstates than an electron neutrino produced in vacuum. As the neutrinos produced in the core of the Sun propagate through the smoothly decreasing solar density to the surrounding vacuum, they change adiabatically (i.e. undergo equilibrium re-mappings of the mass eigenstates, but do not jump between mass eigenstates) from the high-density mass-to-flavour-eigenstate relations to the low density relations. Thus, for example, it is possible that νe is primarily ν1 in vacuum and primarily ν2 in the dense matter of the solar core. So when the (primarily) ν2 neutrinos produced as νe in the Sun propagate to the (vacuum-like) Earth (where ν2 has a much smaller νe component) they appear to be primarily non-νe. This “flavour swap” occurs when the matter density at the point of neutrino production is greater than, or equal to, the density given by the resonant mixing condition (Equation 2.15).

Analytic calculation of the solar survival probability can be achieved for simple density profiles, for example in situations where the solar density changes linearly or exponentially with radius.

The linear density approximation is illustrative: the phase-averaged survival probability for electron neutrinos produced on the “high density” side of the resonant region and detected on the “low density” side is in this case given by

$$P_{ee} = \frac{1}{2} + \left(\frac{1}{2} - P_j\right)\cos(2\theta_m)\cos(2\theta), \tag{2.16}$$

where $\theta_m$ corresponds to the matter density at the production point,

$$P_j = \exp\left(-\frac{\pi\Delta m^2 \sin^2(2\theta)}{4E\cos(2\theta)} \left.\frac{n_e}{|dn_e/dr|}\right|_{res}\right), \tag{2.17}$$

and “res” refers to the resonant density region. Here $P_j$ gives the probability of a jump between flavour eigenstates as a result of the change in relationship between the flavour and mass eigenstates³. Note that the survival probability still depends on the neutrino energy, the mixing angle and the neutrino mass splitting.

3 In the adiabatic approximation, the neutrinos do not change mass eigenstates, so any change in the relationship between the mass and flavour eigenstates must appear in the flavour basis.

Numerical calculations which include realistic solar density profiles, which account for inexactness in the adiabatic approximation⁴, and which properly treat neutrinos of lower energy for which a resonant transition does not occur, have been carried out by the SNO collaboration, among others. Figure 2.8 shows the solar neutrino survival probability over the relevant energy range for solar mixing parameters near the current global best-fit values [21]. As can be seen in the Figure, at high energies, the resonant transition is “fully on.” The survival probability is low, changes slowly with energy, and to a large extent is determined only by the mixing angle (i.e. is fairly independent of neutrino energy). At very low energies, the solar electron density is not high enough for the matter effect to be significant and neutrino oscillations are essentially vacuum oscillations. The transition between these extremes is determined by the turn-on in the resonant condition; the position of this “survival upturn” is determined by the solar matter density and the neutrino mass splitting.

[Figure 2.8 panels: Pee versus neutrino energy (0-14 MeV); central curves at tan²θ = 0.47, ∆m² = 7.6×10⁻⁵ eV²; the left panel varies ∆m² (6.6 and 8.6×10⁻⁵ eV²) and the right panel varies tan²θ (0.43 and 0.52).]

Figure 2.8: The solar electron neutrino survival probability as a function of energy. As shown on the left, changing the neutrino mass splitting (the changes shown here are about five times larger than the uncertainty on the global solar + KamLAND best fit) changes the survival probability in the transition region, but not at the high and low energy asymptotes. Changing the mixing angle, on the other hand, changes the survival probability in the high and low energy regions, as shown on the right (these curves show the 1σ variation about the current global best fit mixing angle from [21]). In all cases, the survival probabilities shown are averaged over the solar neutrino production region.

4 For the adiabatic condition to be perfect, the propagation distance through the region of the Sun at which the resonance condition applies must be long compared to the oscillation wavelength at that matter density.

2.4.3 Current Constraints on the Solar Neutrino Mixing Parameters

As described in the previous section, the neutrino oscillations of interest to solar neutrino experiments can be described by two mixing parameters, θ12 and ∆m²12. The current experimental constraints on these parameters, as derived from the SNO results alone, from the combination of the data from all solar neutrino experiments, and from the combination of the data from all of the solar neutrino experiments plus the KamLAND long-baseline reactor neutrino experiments, are shown in Figure 2.9. As can be seen, to a large extent the global constraint on ∆m²12 comes from the reactor anti-neutrino measurement, while the constraint on θ12 comes from the SNO results.

Figure 2.9: The current experimental constraints on the solar neutrino mixing parameters from (a) SNO alone, (b) all solar neutrino experiments, and (c) solar neutrino experiments + KamLAND. Figure from [21]. Note that the axis ranges are different in (a) than in (b) and (c). In (c) alone, the inner blue contour is drawn at the 90% C.L. rather than the 68% C.L. indicated in the legend. This figure includes the information from the NCD results described in Section 3.9.

Chapter 3. The SNO Neutral Current Detector Phase

During Phase III of the SNO experiment, the neutrons produced in the heavy water volume by neutral current neutrino interactions were primarily detected by an array of proportional counters called the “Neutral Current Detectors” or the “NCDs.” In this Chapter, the NCDs and their operation during the third phase of SNO are described, and the previously published NCD results outlined.

3.1 The Neutral Current Detectors

The Neutral Current Detectors were 3He proportional counters which were deployed in the SNO heavy water volume during the third phase of the experiment. The NCDs detected neutrons through the n + 3He → p + 3H + 764 keV neutron capture interaction, which has an extremely large (5333 b) cross section. This large cross section meant that even when deployed in a relatively sparse array the NCDs provided the dominant neutron detection channel during Phase III of SNO.
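As a back-of-the-envelope check on why this cross section makes the counters so efficient, the thermal capture mean free path in the gas can be estimated from the fill pressure and mixture quoted below; the sketch assumes ideal gas behaviour at an assumed room temperature and the thermal (2200 m/s) cross section.

```python
# Back-of-the-envelope neutron capture mean free path in the NCD gas,
# lambda = 1/(n * sigma), using the quoted 5333 b thermal cross section
# and assuming an ideal gas at ~293 K (assumed temperature).
k_B = 1.380649e-23            # Boltzmann constant, J/K
P_He3 = 0.85 * 2.5 * 101325   # 3He partial pressure, Pa (85% of 2.5 atm)
T = 293.0                     # assumed gas temperature, K
n = P_He3 / (k_B * T)         # 3He number density, m^-3
sigma = 5333e-28              # 5333 b in m^2
mfp = 1.0 / (n * sigma)       # mean free path, m
print(f"3He capture mean free path: {100 * mfp:.1f} cm")  # ~3.5 cm
```

A mean free path of a few centimetres, comparable to the 5.08 cm counter diameter quoted below, means a thermal neutron that enters a counter has a high probability of capturing.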

The construction, deployment, and operation of the NCD array and the analysis of the NCD results are described in detail in [17], [21], and especially in [28]. The treatment here largely follows from those works.

As shown in Figure 3.1, a SNO NCD consisted of a 2-3 m long, 5.08 cm diameter thin walled nickel tube which was filled to 2.5 atm with an 85:15 (by pressure) mixture of 3He and CF4. A 50 µm diameter copper anode wire to collect the ionization electrons produced by the passage of charged particles through the gas ran down the centre of each NCD, passing out through the endcaps via fused-silica high-voltage feed-throughs. At the time of their deployment into the SNO detector, 3 to 4 of these “counters” were laser welded end-to-end, producing 9-11 m NCD “strings.” An open-ended inductive delay line adding ∼90 ns to the transit time of reflected pulses was added to the bottom of each string. Below the delay line, a vectran line attached the NCD to two acrylic spheres which were used to attach the NCDs to pre-installed acrylic anchor positions on the bottom of the SNO acrylic vessel. NCD read-out and high-voltage supply were provided via single cables which ran from the top of each string along the upper inside surface of the SNO acrylic vessel and out the neck.

Figure 3.1: A schematic view of an NCD string. In this image the body of the NCD is greatly compressed. Figure from [17].

A total of 36 3He NCDs totalling 398 m were deployed on a 1 m grid within the SNO detector. The positions and names of the NCD strings are shown in Figure 3.2. In addition, 4 NCDs filled with 4He in place of the 3He were deployed to allow the study of non-neutron backgrounds.

21 Figure 3.2: A cross section of the SNO detector at the equator showing the position of each NCD string. The 4He strings were I2, I3, I6, and I7. The smaller circle at the centre shows the position of the neck of the acrylic vessel. Figure from [28].

3.2 Pulse Production in the NCDs

In the NCD analysis, two main observables were important: the total charge collected by the NCD for each event, and the “pulse shape” of each event (which was an oscilloscope trace of the charge leaving the NCD as a function of time). In order to understand these observables, it is necessary to understand the basic ideas behind pulse production in the NCDs.

As ionizing particles like the energetic proton and triton (3H nucleus) produced from neutron capture on 3He passed through the NCD gas, they ionized the gas to produce electron-ion pairs.

These “primary” electron-ion pairs were distributed along the track of the primary particle and, on the timescales relevant to pulse shape creation, were produced essentially instantaneously. When the NCDs were live, the anode wires were held at 1950 V relative to the NCD body, so that the electrons drifted towards the anode wire while the positive ions drifted away from it. The time required for a primary electron to drift to the anode depended on the radius at which the electron was produced; it required about 3.5 µs for an electron to drift across the entire radius of the NCD, and this drift time decreased non-linearly with decreasing radius. Thus, the time profile of the arrival of charge at the anode wire gave a mapping of the radial profile of the primary ionization density.

The arrival of a charge signal at the anode wire resulted in identical pulses propagating up and down the anode wire. The downward-going component reflected from the bottom of the delay line and arrived at the top of the NCD delayed by ∼100 ns relative to the upward-going pulse. For very narrow pulses, the prompt and delayed signals were easily distinguished; for longer pulses the prompt and delayed signals substantially overlapped. The time delay between the prompt and reflected pulse varied with the additional distance travelled by the reflected pulse and could thus, in principle, be used to extract the z-position of an event within an NCD.
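To make this concrete, a minimal sketch of such a z-reconstruction is given below. This is a hypothetical illustration, not the SNO reconstruction software; the propagation speed and delay-line constant are assumed values.

```python
# Hypothetical z-reconstruction from the prompt-to-reflected delay.
# Measuring z upward from the bottom of the string: the prompt pulse
# travels (L - z)/v to the top, while the reflection travels z/v down,
# through the delay line, and L/v back up, so dt = 2*z/v + t_delay.

V_PROP = 0.2         # assumed pulse propagation speed along the wire (m/ns)
T_DELAY_LINE = 90.0  # assumed extra transit time from the delay line (ns)

def z_from_delay(dt_ns, string_length_m=10.0):
    """Estimate the event height z (m above the string bottom) from the
    measured prompt-to-reflected time delay dt_ns."""
    z = V_PROP * (dt_ns - T_DELAY_LINE) / 2.0
    return min(max(z, 0.0), string_length_m)  # clamp to the physical string
```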

In order to record and make use of this pulse shape information, it was necessary that the charge read-out of the NCDs be fast compared to the spread in the electron times-of-arrival. In the NCDs, the physical limit on the readout time was set by the ion drift speed in the gas. This is because the majority of the charge collected by the anode was trapped as “image charge” by the positive ions produced close to the wire by the multiplication avalanches (described below). The image charge was released (and could hence be recorded) only as the ions drifted away from the anode. The total drift time for an avalanche ion to reach the outer cathode wall was several milliseconds, resulting in a decaying “ion tail” on each of the recorded pulses. However, the amount of trapped image charge decreased relatively quickly during the first part of the ion drift (∼50% was released in the first 2 µs [28]), so the radial charge deposition information was accessible to the analysis in the pulse shape.
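The logarithmic character of the ion tail can be made quantitative with the generic cylindrical-proportional-counter result (quoted here as a textbook illustration, not as the SNO-specific pulse model): for anode radius a, cathode radius b, anode voltage V, and ion mobility µ, the image charge released by a time t after the avalanche is approximately

Q(t) / Q_tot ≈ ln(1 + t/t_0) / ln(1 + t_drift/t_0),  with t_0 = a² ln(b/a) / (2µV),

where t_drift is the full ion drift time. Because t_0 is very small, the logarithm rises steeply at early times, which is why a large fraction of the trapped charge is released within the first couple of microseconds even though the complete ion drift takes several milliseconds.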

A final important factor in pulse production by a physics event in an NCD was the space charge effect. In order to produce a detectable signal from the relatively small number of primary electrons created by a typical physics event in the NCD gas, the NCDs were run at a high enough voltage that within about 100 µm of the anode wire the electric field was strong enough for a drifting electron to obtain sufficient energy between subsequent collisions with gas molecules to produce secondary ionization. As each of these secondary electrons could also produce additional ionizations, an “avalanche” of ionizations occurred; as a result, each primary electron resulted in the creation of ∼200 secondary electrons, and hence a MeV-level primary ionization of the gas (resulting in a few tens of thousands of primary electron-ion pairs) could produce a detectable charge on the anode wire. As discussed earlier, the cloud of positively charged ions produced by an electron avalanche drifted away from the anode wire quite slowly, and beneath the cloud (between the ions and the wire) the electric field was reduced.

Depending on the geometry of a physics event in the NCD, it was possible that primary electrons from different parts of the ionization track impinged on the same part of the anode wire at different times. In this case, the electrons which arrived at the anode later passed through the clouds of “space charge” ions produced by the avalanches of the earlier electrons and experienced the reduced electric field beneath. This meant that the later primary electrons, and any secondary avalanche electrons that they produced, gained less energy between collisions and were less likely to produce additional ionizations. As a result, the “gas gain” (i.e. the average number of secondary electrons produced per primary electron) of the later primary electrons was reduced by the space charge effect.

This had two major effects on the analysis. First, it meant that the amount of charge collected by the NCD was not exactly proportional to the energy deposited in the gas; for alpha particle tracks perpendicular to the anode wire (which maximizes the space charge effect), for example, the amount of collected charge was approximately 50% less than the amount collected for an equivalent alpha travelling parallel to the wire. Second, the pulse shapes of high space charge events were noticeably affected, with the amplitude of the later part of the pulse shape significantly suppressed.

Taking all of these factors together, a typical neutron capture in the NCDs resulted in the collection by the anode wire of approximately 0.5 pC of charge over the period of a few microseconds.

This signal is small but nonetheless detectable. To further illustrate the information available in the NCD pulse shapes, some representative neutron pulse shapes are shown in Figure 3.3, and some example alpha pulse shapes are shown in Figure 3.4.
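As a rough order-of-magnitude check on this number (a back-of-the-envelope estimate assuming an illustrative W-value of roughly 35 eV per electron-ion pair for the gas mixture, together with the gas gain of ∼200 quoted above):

N_primary ≈ 764 keV / 35 eV ≈ 2.2 × 10^4 electron-ion pairs,
Q ≈ N_primary × 200 × 1.6 × 10^−19 C ≈ 0.7 pC.

Allowing for space charge losses and for the fact that only part of the image charge was released within the pulse window, this is consistent with the ∼0.5 pC figure above.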

Figure 3.3: Example neutron pulse shapes taken from a neutron calibration run. The narrower pulse was produced by a neutron capture which resulted in the proton and triton tracks being nearly parallel to the anode wire, so all of the charge was collected in a relatively short time. The wider pulse resulted from an interaction where the (back-to-back) proton and triton tracks were nearly perpendicular to the anode wire. The ionization signal arrived at the anode over the period of a few microseconds, and two bumps corresponding to the Bragg peaks (the increases in ionization density which occur near the end of charged particle tracks) of the proton and triton energy loss profiles are visible. Figure from [28].

Figure 3.4: Example alpha pulse shapes taken from a 4He string. The narrower pulse was produced by a track which was essentially parallel to the wire, so all of the charge was collected in a relatively short time. The slow decay after 3 µs is the “ion tail” resulting from the slow release of electrons trapped as “image charge” on the anode wire as the positive ions drift away from the anode. The wider pulse is one in which the alpha track was essentially perpendicular to the anode. As the alpha originated from the inside surface of the nickel tube, the Bragg peak occurred nearest the wire. Therefore the charge from the Bragg peak arrived at the anode first, resulting in a “forward-peaked” pulse shape; the charge from the rest of the pulse continued to arrive over the next 3–3.5 µs. In fact, the effect of the Bragg peak is exaggerated because the derivative of electron drift time with respect to radius increases with radius; thus even a flat ionization profile would result in a “forward-peaked” pulse shape, although to a lesser extent than what is seen here. Figure from [28].

3.3 NCD Data Acquisition

In order to make use of the pulse shape information described above while still being able to record events at sufficient rates to allow high-rate calibrations (and to be sensitive to possible neutrino bursts from Galactic supernovae), the NCD read-out electronics system was divided into two streams. The NCD signals passed through current pre-amplifiers, which converted the current signal into a voltage, and were then split into two parallel streams: the “shaper” stream and the “MUX” stream.

The “shaper-ADC system” was a system of shaping amplifiers, one for each NCD string, that measured the total charge collected by the NCD for each event. The shaper-ADC system maintained a rolling integral of the voltage from each pre-amplifier. This integral was then differentiated, with zero crossings of the derivative used to define maxima in the integrated charge and hence candidate events. A level-crossing discriminator accepted or rejected candidate events based on their total maximum integrated charge (i.e. the value of the charge integral at the time of zero-crossing of the derivative), and the maximum of the charge integral for accepted events was read out by a 12-bit ADC. A “shaper event” was then produced in the SNO data stream. The shaper system was designed to be fast enough (with a dead-time of approximately 235 µs per event) to record data from higher-rate calibration sources and to provide sensitivity to possible supernova neutrino bursts.
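The shaper-style event definition can be illustrated with a short sketch (schematic code, not the actual electronics or DAQ implementation; the window length and threshold are invented values):

```python
import numpy as np

def find_shaper_events(v, window=100, threshold=50.0):
    """Return (sample_index, peak_integral) for each accepted candidate event."""
    v = np.asarray(v, dtype=float)
    integral = np.convolve(v, np.ones(window), mode="same")  # rolling integral
    deriv = np.diff(integral)
    # a positive-to-negative zero crossing of the derivative marks a maximum
    # of the integrated charge, and hence a candidate event
    maxima = np.where((deriv[:-1] > 0) & (deriv[1:] <= 0))[0] + 1
    # level-crossing discriminator on the peak integrated charge
    return [(int(i), float(integral[i])) for i in maxima if integral[i] > threshold]
```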

The “multiplexer” or “MUX” stream recorded the event pulse shapes. Each NCD channel entering the MUX system was monitored by a level-crossing discriminator. Signals from channels that crossed the threshold were passed, with an added voltage offset, to one of four logarithmic amplifiers.

Up to 12 strings were connected to each logarithmic amplifier (the signal from a given string always went through the same amplifier); if multiple strings on the same amplifier had simultaneous MUX triggers, the signals were summed before entering the amplifier. The output from the logarithmic amplifier was then passed to one of two digital oscilloscopes (the choice between the scopes was based on a toggle bit which in principle alternated events between the scopes). At the scope, the event was digitized at 1 GHz, from about 1500 ns before the trigger time to a total of 15 µs, and read out through a GPIB interface. The MUX trigger and scope information then became part of a “MUX event” in the SNO data stream (with a time stamp based on the MUX trigger time). The readout of the oscilloscopes was relatively slow (∼0.6 s), limiting the throughput of the two-scope system to about 1.2 Hz. This was adequate for regular running, but too slow to record pulse shapes for all events during calibration running. In the event that the scope toward which the toggle bit directed a log-amp signal was busy, a “partial MUX event,” containing the MUX trigger information but no oscilloscope trace, was produced. As the MUX trigger had a much smaller dead time than the scope (∼1 ms), these partial MUX events allowed MUX-shaper trigger coincidences to be used as a method of data cleaning even on higher rate data.

The dual shaper-MUX system meant that each physics event produced two events in the SNO data stream. These events were associated with one another offline based on the relative timing of the events (the MUX trigger occurred 8-15 µs before the shaper trigger due to the shaper integration time). Only “correlated events” with both MUX and shaper triggers were considered in the analysis; for neutrino data it was further required that each MUX event have an associated scope trace.
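A minimal sketch of this offline association is shown below (an illustration of the matching logic, assuming time-sorted trigger lists; it is not the SNO offline code):

```python
def correlate(mux_times_us, shaper_times_us, window=(8.0, 15.0)):
    """Pair each MUX trigger with the first shaper trigger 8-15 us later.

    Both input lists are assumed sorted, in microseconds; returns a list of
    (mux_time, shaper_time) "correlated events".
    """
    pairs, j = [], 0
    for t_mux in mux_times_us:
        # skip shaper triggers too early to match this (or any later) MUX event
        while j < len(shaper_times_us) and shaper_times_us[j] < t_mux + window[0]:
            j += 1
        if j < len(shaper_times_us) and shaper_times_us[j] <= t_mux + window[1]:
            pairs.append((t_mux, shaper_times_us[j]))
            j += 1
    return pairs
```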

3.4 NCD Signals and Backgrounds

The NCDs were built to detect the passage through the NCD gas of the energetic proton and triton produced by neutron captures on the 3He; they were, however, sensitive to the passage of any ionizing particle. This meant that background events, in particular alpha particles produced by the decay of radioactive isotopes in the NCD components, were also present in the NCD signal. It was originally hoped that the neutron and alpha events would be distinguishable based on differences in their pulse shapes. A good deal of effort has gone into realizing this separation, and some promising results have been obtained. This pulse shape separation will not, however, be discussed here. Instead, as in the previously published NCD analysis ([21], discussed below), the neutron and alpha signals will be separated based on their energy spectra. Therefore, it is the energy spectra of the neutrons, alphas, and other backgrounds which are of principal interest to the current discussion.

3.4.1 Neutron Signal

The capture of a neutron by a 3He nucleus produces a back-to-back proton-triton pair with a total kinetic energy of 764 keV,

3He + n → p + 3H + 764 keV.    (3.1)

In the NCDs, the proton and triton resulting from a neutron capture propagated through the NCD gas, in many cases depositing the entire 764 keV into the gas. It was also possible, however, that either the proton or the triton (or, with a tiny geometric probability, both) impacted the nickel body or the anode wire of the NCD before depositing all of its energy in the gas1. This geometric effect resulted in a distinctive energy spectrum, shown in Figure 3.5, which extended downward from 764 keV.
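The origin of this spectrum is easy to demonstrate with a toy Monte Carlo (purely illustrative; the real analysis used the full SNOMAN simulation described in Section 3.8.1). Captures are generated uniformly over the counter cross section, the proton and triton are emitted back-to-back in a random transverse direction, and each track is truncated at the wall. Straight tracks with uniform dE/dx in a 2D transverse-plane geometry are assumed, which is crude but reproduces the full energy peak at 764 keV and the edges at 191 keV (proton energy absorbed by the wall) and 573 keV (triton energy absorbed by the wall):

```python
import numpy as np

R = 2.54                     # counter inner radius (cm)
RANGE_P, E_P = 1.0, 0.573    # proton range (cm) and kinetic energy (MeV)
RANGE_T, E_T = 0.3, 0.191    # triton range (cm) and kinetic energy (MeV)

rng = np.random.default_rng(1)

def chord_to_wall(r0, u):
    """Distance from point r0 to the wall along unit direction u (2D)."""
    b = float(np.dot(r0, u))
    return -b + np.sqrt(b * b - (float(np.dot(r0, r0)) - R * R))

def deposited_energy():
    r0 = np.array([R * np.sqrt(rng.random()), 0.0])  # uniform over the disc
    phi = rng.uniform(0.0, 2.0 * np.pi)
    u = np.array([np.cos(phi), np.sin(phi)])         # proton direction
    e = 0.0
    for full_range, e_full, d in ((RANGE_P, E_P, u), (RANGE_T, E_T, -u)):
        track = min(full_range, chord_to_wall(r0, d))  # truncate at the wall
        e += e_full * track / full_range               # uniform dE/dx
    return e

energies = [deposited_energy() for _ in range(100_000)]
```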

[Plot: neutron calibration events / 25 keV vs. energy (MeV).]

Figure 3.5: The energy spectrum produced in the NCDs by a source of neutrons uniformly distributed throughout the SNO heavy water volume. The “full energy peak,” where the full energy of both the proton and triton are deposited in the gas, is visible at 764 keV. The total absorption of the proton’s energy through collision with the NCD wall results in the 191 keV shoulder, while the total absorption of the triton’s energy results in the shoulder at 573 keV. This higher energy shoulder is significantly enhanced by the downward smearing of the full energy peak by the space charge effect.

1 The propagation distance of the proton in the NCD gas was ∼1 cm, while the triton travelled ∼3 mm.

3.4.2 Alpha Backgrounds

The majority of the physics signals detected by the NCDs were alpha particles emitted by radioactive isotopes in and on the NCD components. The energy of a typical alpha particle was several MeV, and hence most of these alphas were significantly higher in energy than the neutron peak. However, some alpha particles were degraded by passing through nickel or copper before entering the NCD gas, while others passed through the NCD gas on a chord and struck the nickel wall before depositing all of their energy. Both of these effects could cause alpha particles to appear in the neutron region of the energy spectrum (in the final data there were about five times as many alpha events as neutron events in the neutron energy window). The alpha backgrounds in the NCDs can be grouped into four main categories:

1. Bulk Alphas: 238U and 232Th chain decays in the bulk of the nickel NCD bodies produced alphas with energies as high as 8.8 MeV. Due to degradation of the alphas as they propagated out of the bulk of the nickel and into the NCD gas, and to space charge effects, a significant fraction of these events fell into the neutron energy region.

2. Surface Alphas: During NCD construction, radioactive daughters of 222Rn in the laboratory air plated out on the inside surface of the NCD nickel. The 222Rn decay chain includes 210Po, which emits a 5.3 MeV alpha and which is supported by 210Pb with a 22-year half-life. In order for these surface 210Po alphas to appear in the neutron energy window, it was necessary for them to re-enter the nickel wall after traversing a short chord through the NCD gas. This relatively unlikely track geometry meant that a much smaller fraction of surface events than bulk events appeared in the neutron energy region. However, a higher overall number of surface events made the total contributions of the two classes in the neutron energy region similar.

3. Wire Alphas: Surface and bulk events originating from the anode wire were also detected by the NCDs and had noticeably different pulse shapes than alphas originating from the nickel. Even surface wire alphas could be degraded by passing on a chord through the (very thin) anode wire before entering the gas, and wire alphas were much more likely to experience significant space charge effects than were alphas from the nickel. Wire alphas are thought to have been responsible for approximately 2% of the alpha events recorded by the array [29]. However, a lack of “chord tracks” through the NCD gas meant that a much smaller fraction of wire alphas appeared in the neutron energy region compared to the other types of alpha.

4. Endcap Alphas: Quartz feed-through tubes at either end of each NCD counter created, by design, 2.5 cm long dead regions from which charge could not be collected (this avoided complications due to the non-uniform electric fields at the ends of the counters). However, alpha tracks originating from within the dead endcap regions and propagating into the live region (or vice versa) had significantly different pulse shapes and energy spectra than “normal” alphas. Under the assumption that the alpha activity was uniformly distributed within the NCDs, endcap alphas were expected to form only a few percent of the total alpha population.

The alpha backgrounds observed in the NCDs were dominated by surface and bulk alpha events from the nickel, while the wire and endcap alphas were thought to be present at the few percent level [29]. The energy spectra of these dominant components, as simulated by the SNOMAN Monte Carlo software package (described in Section 3.8.1), are shown in Figure 3.6.

3.4.3 Other Physics Backgrounds

Apart from neutrons and alphas, the NCDs were also sensitive to high energy electrons, both those produced by beta decays and those produced by the Compton scattering of γ-rays. Because electrons are relatively weakly ionizing, the path length required for an electron to deposit enough energy in the NCD gas to pass threshold was of the order of the length of the counters, which in turn meant that electrons were expected to provide a small component of the background at the very low end of the neutron energy region, decreasing quickly with energy. Because electrons could scatter multiple times, both within the gas and off the nickel walls, however, they had the ability to produce a wide variety of pulse shapes.

[Plot: normalized M.C. alpha spectrum / 10 keV vs. energy (MeV), showing the bulk and surface alpha components.]

Figure 3.6: The energy spectra of bulk and surface alphas, as predicted by the SNOMAN Monte Carlo. The energy spectra of bulk 238U and 232Th chain decays are very similar in the neutron energy region, so these chains were not treated individually in most of the NCD analysis. The energy of the 210Po peak is degraded slightly by space charge effects. The peak in the surface alpha spectrum around 2.5 MeV is composed of full energy polonium events with geometries that induced significant space charge effects.

3.5 Non-Physics Backgrounds and Data Cleaning

In addition to the physics backgrounds discussed above, the NCD system also recorded a large number of spurious events. These pulses took many forms and were generated by a variety of sources, not all of which are understood. Examples of these spurious pulses are shown in Figure 3.7.

3.5.1 Data Cleaning

In order to remove as many of these spurious events as possible from the analyzed data set, an extensive series of data cleaning cuts was developed.

The simplest data cleaning cut, which removed a large fraction of the spurious triggers, was the requirement that an analyzed event be “correlated” (i.e. have both a shaper and a MUX trigger which occurred within the normal coincidence time window). This cut removed electronics noise generated upstream of the divergence of the MUX and shaper data streams, very narrow events which did not have enough integral charge to trigger the shaper, and very wide events which did not have sufficient amplitude to trigger the MUX.

32 Figure 3.7: Examples of spurious NCD events: (a) shows oscillatory noise, likely from electrical pick-up in the NCD electronics, (b) shows what is likely a micro-discharge from within the pre-amplifier, (c) shows a breakdown event from the delay line, and (d) shows a “flat trace” trigger from baseline fluctuations. Signals thought to originate from the NCD are plotted in µA, while those which are thought to originate in or above the pre-amplifier are plotted in mV. Figures from [28].

The next level of data cleaning removed events that occurred as part of a burst (MUX channel bursts, shaper channel bursts, and NCD events in conjunction with PMT bursts were all removed), and those events which occurred in conjunction with the passage of a muon through the detector.

Finally, the oscilloscope traces of the remaining events2 were passed through two complementary data cleaning paths, one of which examined the pulses in the time domain and one of which operated in the frequency domain. These data cleaning paths cut events which were too narrow to be physics events, oscillatory events with multiple zero crossings, and “flat trace” events, among others. The frequency-domain cuts also checked for the presence of the decaying ion tail in the latter part of the pulse, which should be a signature of all physics events.
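By way of illustration, a schematic version of time-domain cuts of this kind might look like the sketch below; the thresholds are invented, and the actual cut tuning and the frequency-domain path are not reproduced here:

```python
import numpy as np

def passes_time_domain_cuts(trace, dt_ns=1.0):
    """Schematic time-domain data cleaning: flat-trace, width, oscillation cuts."""
    trace = np.asarray(trace, dtype=float)
    pulse = trace - np.median(trace[:500])     # subtract pre-trigger baseline
    noise = np.std(pulse[:500]) + 1e-12
    amp = np.max(np.abs(pulse))
    if amp < 5.0 * noise:                      # "flat trace" triggers
        return False
    width_ns = np.count_nonzero(np.abs(pulse) > 0.1 * amp) * dt_ns
    if width_ns < 100.0:                       # too narrow to be a physics event
        return False
    signs = np.sign(pulse[np.abs(pulse) > 0.5 * noise])
    if np.count_nonzero(np.diff(signs) != 0) > 20:  # oscillatory pick-up
        return False
    return True
```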

The combination of all of the data cleaning cuts was very effective at removing the spurious instrumental backgrounds, with a neutron sacrifice that was measured to be less than 1%. The effect of the data cleaning cuts on the NCD energy spectrum is shown in Figure 3.8. Unfortunately, unlike the case of the PMT data cleaning cuts, it proved impossible to develop for the NCDs a method of setting a numerical limit on the residual contamination of spurious events in the data after the data cleaning cuts were applied. Therefore, while it is believed that any residual contamination is negligible, this cannot be conclusively demonstrated.

2 For neutrino data it was also required that all events have a scope trace, so that the following checks could be performed. For high-rate calibration data, however, most events did not have scope traces due to the relatively long scope readout time. Therefore, in calibration runs the scope-based data cleaning checks were not required.

Figure 3.8: The effect of the data cleaning cuts on the NCD energy spectrum. Figure from [28]. The neutron full energy peak appears at 0.76 MeV. This Figure shows only a subset of the NCD data.

3.5.2 NNNAs

In fact, there are several examples in the data of unexplained spurious events which pass the cleaning cuts in significant numbers. The NCDs upon which these events (which were colloquially called “Non-Neutron Non-Alphas” or “NNNAs”) were observed are listed below, along with brief descriptions of the NCD behaviour. The energy spectra of some of these strings are shown in Figure 3.9.

• J3 (String 26): This was the first string upon which NNNAs were identified, based on a significant distortion in the neutron-region energy spectrum (shown in Figure 3.9(a)). The events on this string were originally claimed to be “missing their ion tail,” and were in some cases distinguishable by eye. Nonetheless, the NNNA pulse shapes were similar enough to physics events that it proved impossible to reject these events on a pulse shape basis (in fact, pulse shape based analysis consistently classified these events as more similar to neutrons than to alphas). The rate of NNNA events on J3 was constant in time, raising the possibility that they may in fact have been produced by an as yet unidentified radioactive background.

• N4 (String 0): As in the case of J3, the N4 NNNAs were identified based on a distortion in the energy spectrum of the string (see Figure 3.9(b)). Unlike the J3 events, however, the N4 NNNAs were not uniform in time, occurring mostly in low rate bursts during a few runs.

• K7 (String 8): This string exhibited low rate bursts of events with pulse shapes similar to those seen on N4 [30]. As seen in Figure 3.9(c), however, the distortion in the K7 energy spectrum extends to higher energies than does the N4 distortion.

• I7 (String 3): This 4He string exhibited high-rate bursts of shaper activity, some of which also resulted in MUX triggers. Some of these events had pulse shapes similar to the other NNNAs that were identified. The energy spectrum for this string is not shown in Figure 3.9 because the string was operated at different shaper thresholds, which resulted in significant distortion in the low energy spectrum.

• K5 (String 18): This string was observed to produce events with pulse shapes similar to “J3-type” NNNA events. The energy spectrum of the string is not shown in Figure 3.9 because a leak in one of the connectors between two of the NCD segments resulted in a significant change in the gas gain in one counter. This string therefore has significant distortions in the neutron region energy spectrum even without the NNNAs.

• K2 (String 31) and M8 (String 1): Both of these strings were observed (in hand scans of the NCD pulse shapes) to produce events with pulse shapes similar to “J3-type” NNNAs.

Searches for NNNAs on strings besides those listed above were carried out by comparing the energy spectra (and the distributions of other pulse shape parameters) between the different strings [31]. These searches “detected” the previously identified NNNA strings, and no others. Additional searches for non-Poissonian event timing distributions [32], which correctly identified the known NNNAs on K7, I7 and N4, also did not identify any other affected strings. For additional NNNAs to have been present in the array and not detected by these searches, they would have to either have been very low rate or more-or-less uniformly distributed across the NCD array and fairly similar in spectrum to either the alpha or the neutron events. Again, however, while no additional NNNAs were “discovered” by these searches, no limits on their possible number were produced either3. The negative results of these searches do, however, give some confidence that there was no gross effect from NNNAs on strings other than those listed above.

Of the NNNA strings listed above, all but J3 and N4 were excluded from the analysis because of other observed defects (see Section 3.7). Although no causal relationships were determined between the presence of the NNNAs and the other observed defects on these strings, it was deemed reasonable to assume that the NNNAs were linked in some way to the observed defects and were therefore unlikely to appear on strings which did not display those defects. J3 and N4, on the other hand, were excluded from the analysis solely on the basis of their observed NNNAs. As the cause of these NNNAs was not known and they were only noticed based on their relatively high rates, there was the potential that similar behaviour may have occurred, undetected, on the strings that were included in the analysis. Therefore, in the previously published analysis [21] empirical PDF4 distributions representing the NNNA shapes observed on J3 and N4 were included in the fit to the data from the rest of the array.

3 It is difficult in these types of searches to decide on the criteria for “detection” of an NNNA. In the burst search, for example, several strings (M8, K8, J6, N3, J3, and K2) showed non-Poissonian behaviour at roughly 50% of the level of strings 0 and 8 [32], but below the criteria that had been set for a “detection.” It is equally difficult to define the criteria on the “allowable” NCD behaviour that would be necessary to set a limit on the possible number of NNNAs. Limits can be placed on the NNNA contamination under specific assumptions about their uniformity, both in the array and in time, and their similarity to the alpha and neutron shapes [33, 34]. It was decided, however, that the assumption of any specific set of criteria was difficult to justify, and so these limits were not used in the published analysis.

[Plots: blind data events / 100 keV vs. energy (MeV), comparing (a) J3 to the average of the other ‘J’ strings, (b) N4 to the other ‘N’ strings, (c) K7 to the other ‘K’ strings, (d) K2 to the other ‘K’ strings, and (e) M8 to the other ‘M’ strings.]

Figure 3.9: The energy spectra of some of the strings known to contain NNNAs compared to the average spectra of the other strings in the same NCD “ring” as the affected string (which were expected to have identical neutron signals). Note that K7, K2, and K5 were omitted from each other’s comparison spectra. Changes in the level of the alpha background above and beneath the neutron peak should be expected due to the different concentrations of alpha emitters in the different strings. The shapes of the alpha spectra, however, should be reasonably similar. The data shown are from the full NCD blinded data set (described in Section 4.3) with all of the data cleaning cuts applied, including some that were specifically designed to target NNNAs. In all cases, with the possible exception of M8, significant distortions appear to be present in the energy spectra of these strings.

The origin of the NNNA events remains unclear, although the different temporal behaviour and energy spectra of the different NNNAs suggest that they do not all have the same cause. The lack of evidence, in very careful searches, for similar events in the rest of the array is comforting, and means that any NNNAs present would have to have very specific distributions in time, space and energy.

Still, the observation of these unexplained events on a significant portion of the array, coupled with the lack of a quantitative limit on the residual contamination in the rest of the array, lends some uncertainty to the prediction of the background shape in the rest of the array that was required in the signal extraction. This uncertainty was one of the primary motivations for the development of the robust signal extraction techniques which make up the bulk of the work in the SNO portion of this thesis, and which are described in the next chapter.

3.6 NCD Calibration

The response of the NCD array to physics events, especially to neutron captures, was measured and monitored through an extensive calibration program. In fact, the neutron energy PDF that was used in the ultimate signal extractions was drawn directly from calibration data.

4 “PDFs” or “Probability Distribution Functions” contain the assumed distributions of each signal and background population in each of the observed variables. In performing a fit to the data, these PDFs form the set of distributions that are compared to the data in each variable to determine the number of each class of signal and background events consistent with the observations.

3.6.1 Electronics Calibration

The response of the NCD electronics (the pre-amplifiers and higher) could be measured by injecting known waveforms into pulser inputs on each pre-amplifier. These “electronics calibrations” or “ECAs” were carried out weekly, with more extensive ECA scans monthly. The ECAs allowed the electronics gain and linearity and the trigger thresholds to be monitored, and provided a means of measuring the amplification and offset parameters of the logarithmic amplifiers. The electronics parameters from a given ECA calibration were applied to the data between that calibration and the next; interpolation and back-calibration were not performed.
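As an illustration of how log-amp parameters of the kind measured by the ECAs enter the calibration, a “de-logging” step might look like the following; the functional form and parameter names are assumptions for illustration, not the documented SNO calibration constants:

```python
import numpy as np

def delog_trace(v_logged_mV, a, b, v_offset_mV):
    """Invert an assumed log-amp response,
        v_logged = a * log10(1 + (v + v_offset)/b),
    to recover the linear pre-amplifier voltage v (mV)."""
    return b * (10.0 ** (np.asarray(v_logged_mV) / a) - 1.0) - v_offset_mV
```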

3.6.2 Physics Calibration

In addition to the ECAs, extensive neutron source calibrations were performed in order to study the response of the array to neutron events. Encapsulated sources (241AmBe and 252Cf) could be moved in two two-dimensional planes (which contained the primary axes of the SNO detector) using the SNO calibration source manipulator system (described in [17]). Extensive scans designed to illuminate all of the NCD counters as uniformly as possible were carried out four times during the third phase of SNO, with shorter scans (to check array stability) every few months.

In addition, on two occasions, small amounts of 24Na in the form of neutron-activated NaCl brine were mixed into the D2O active volume. 24Na beta decays to 24Mg with a half-life of about 15 hours, releasing a 2.75 MeV γ-ray. These γ-rays had sufficient energy to photo-dissociate deuterons, and thus produced a very nearly uniform source of neutrons in the D2O volume.

The physics calibrations allowed the absolute neutron capture efficiency and the neutron detection efficiency of the NCD array to be determined, both directly and through comparison with Monte Carlo. In addition, the physics calibrations allowed the shaper-ADC values to be converted to energies (the ADC values were scaled to place the neutron peak in neutron scans at 764 keV), and provided the neutron energy spectrum that was used as the neutron PDF in the NCD signal extraction.

3.7 Abnormal NCD Behaviour

Of the 36 3He strings deployed in the array, six strings were identified as behaving differently from the rest of the array in ways important to the analysis. As a result, the data from these strings, although not necessarily unrecoverable, have been excluded from all analyses to date. The affected NCDs were:

• J3 and N4: As discussed earlier, these strings exhibited significant numbers of “NNNA” events which passed the data cleaning cuts. These strings were omitted from the analysis on this basis.

• K5: This string developed a leak between one of its counter sections and the adjacent connecting volume, allowing 3He to flow from the counter into this dead volume. This resulted in a gradual reduction of the gas gain of that counter, and hence a time-dependent distortion of the K5 neutron energy spectrum.

• K2 and M8: At various times during running, these strings showed behaviours consistent with the read-out electronics becoming electrically disconnected from the bulk of the NCD. When these strings were removed from the detector, investigation showed that the “resistive couplers” (springy connectors which were used to both electrically connect and provide impedance matching between the top of the NCD string and the bottom of the readout cable — see Figure 3.1) were only loosely connected to the NCD feed-throughs.

• K7: The energy calibration of this string was observed to be unstable during neutron calibrations.

In the rest of this thesis, data from “the NCD array” should be assumed to exclude these strings (and the 4He strings) unless otherwise noted.

3.8 Understanding the Alpha Background

As mentioned earlier, the separation between the neutron signal and the (alpha dominated) background was primarily carried out by fits to the NCD energy spectrum. This meant that understanding the shape of the alpha energy spectrum was of critical importance. It was also one of the most challenging aspects of the analysis.

The in-situ data available from the 4He strings to study the alpha background in the neutron energy region, while valuable, were quite limited. As can be seen in Figure 3.10, of the four 4He detectors, only three produced useful data5. The energy calibration of the 4He strings was also challenging, as they had no neutron peak to use for energy calibration. Instead, the energy scale and offset of each 4He string was set by hand such that the 210Po peak and the low energy threshold cut-off occurred at approximately the same energies as in the 3He strings. It should also be noted that the three counters in I3 were not as well gain matched as in most strings, meaning that the alpha spectral features were broadened on this string. The main limitation of the 4He data, however, was statistical; even with I7 included, there were only about 2600 events on the 4He strings in the neutron energy window. More than two thirds of these events came from I6, which had an inordinately high level of 210Po surface activity.

Attempting to determine the alpha energy spectrum in the neutron energy region from the 4He strings is thus quite challenging. Even considering only surface and bulk contributions (and neglecting the other families of alphas described in Section 3.4.2), the statistical uncertainties on the extracted alpha PDFs were too large to be particularly useful.

Attempts were made to supplement the alpha data set using additional 4He NCD strings that were run (in air) in the underground control room, and by taking data with a test NCD (which was made of stainless steel and had a feed-through to allow the insertion of sources) at Los Alamos National Laboratory. Both of these supplementary sources, however, ran in very different background environments, and with data acquisition and read-out systems which differed significantly from the deployed array. These electronics changes, especially the differences in impedance matching in the electronics chain and the differences in the reflection timing, affected the pulse shapes significantly, and hence made the application of the data cleaning algorithms to these supplementary data difficult. The distribution of the alpha activity in the NCDs used to take these supplementary data was also quite different from the assumed distribution of alpha activity in the deployed array. Therefore, while these additional data were useful in studying the alpha behaviour, they could not be used to directly provide a background PDF for the signal extraction.

5 The shaper threshold of I7 was significantly raised to suppress continuing shaper bursts. This meant that events were no longer recorded in the neutron energy region, so the data from this string were excluded from the analysis.

[Plots: events / 125 keV vs. energy (MeV) for the four 4He strings: (a) I7, (b) I6, (c) I3, (d) I2.]

Figure 3.10: The 4He alpha spectra. The energy scale of each string has been hand corrected so that the 210Po peak and the low energy threshold cut-off occur at approximately the same energies as in the 3He strings. The spectrum of I7 is distorted as a result of the string being operated with a significantly raised shaper threshold for the majority of its operating period. Multiple 210Po peaks can be seen in I3 as a result of gain mismatch between the three NCD counters which made up the string.

3.8.1 Alpha Monte Carlo

In order to produce background PDFs for the signal extraction, then, a good deal of effort was put towards modelling pulse production by the NCDs. The NCD pulse shape Monte Carlo, which was implemented as part of the SNOMAN [17] Monte Carlo package, is described in detail in [29] and [35].

The goal of the Monte Carlo was to model not only the energy spectra of the different classes of alphas and neutrons, but their pulse shapes as well. As a result, the Monte Carlo tracked not just the energy deposition of the primary particles in the NCD gas, but the full pulse production process including the propagation of the primary electrons to the anode wire, the avalanche amplification and charge collection, and the propagation of the resulting pulse through the electronics chain.

In the absence of the space charge effect, the energy spectrum of a given class of physics events would be relatively easy to model, as the charge recorded by the shapers would simply be proportional to the energy lost by the primary particle in the NCD gas. In the presence of a significant space charge effect, however, the charge recorded by the shapers, and hence the reconstructed energy of the event, depends on the details of the propagation of the primary particle, the drifting of the charge through the NCD gas, and the charge multiplication.

As described in Section 3.2, the “space charge effect” refers to the reduction in gas gain experienced by an electron which impinges on a portion of the anode wire which is surrounded by a cloud of positive ions produced by the multiplication of an earlier electron. In attempting to model this process, the spatial distribution of the ion clouds generated by the electrons from a given segment of the primary track is critically important. It is also difficult to predict. In the Monte Carlo simulation, the lateral diffusion of the primary electrons as they drifted to the anode wire was modelled using known diffusion constants and drift speeds, while the lateral and radial spread of the ion density of an avalanche were parameters which were tuned using high energy alpha and neutron calibration data. For each segment of the anode wire, the space charge was assumed to form a complete cylinder around the wire, which slowly drifted outward according to the ion mobility. Within the cylinder of space charge, the change in the electric field could easily be calculated, and from this the change in the gas gain was determined using the Diethorn model [36].
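For reference, the Diethorn model expresses the gas gain G of a cylindrical counter in terms of its operating parameters; in its common textbook form (quoted generically here, rather than from the SNO-specific implementation of [36]),

ln G = [V / ln(b/a)] · (ln 2 / ∆V) · ln[ V / (K p a ln(b/a)) ],

where V is the anode voltage, a and b are the anode and cathode radii, p is the gas pressure, ∆V is the potential difference an electron traverses per generation of multiplication, and K is the minimum field-to-pressure ratio required for multiplication (∆V and K characterize the gas mixture). A cylinder of space charge that reduces the local field then acts, in this picture, like a reduction in the effective V, and hence in the gain.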

This parameterization of the space charge effect was approximate. In reality, for example, the space charge cloud from a given electron avalanche was expected to have an RMS azimuthal spread of ∼20° [37], and hence not to wrap completely around the anode wire6. There may also have been electron capture effects as the primary electrons drifted through the ions produced by the other primary ionizations and through any space charge ions present. On average, however, the Monte Carlo treatment of the space charge effect was thought to be sufficient for spectral modelling under standard running conditions [35].

Having developed a model of charge collection in a single event, the other required Monte Carlo input in simulating the alpha spectrum is the physical distribution of alphas within the NCDs. As little information is available about the distribution of alpha activity along the length of the NCDs, both the surface and bulk alpha distributions were assumed in the Monte Carlo to be uniform along the length of the NCDs7. Wire alphas and endcap alphas were included in the simulations, with their numbers estimated through fits to the alpha spectrum above the neutron energy region. The ratio of surface to bulk alpha activity was also determined via string-by-string fits to the high energy alpha spectrum. In these same high-energy alpha fits, it was found that the depth distribution of the alpha activity within the nickel had a significant effect on the alpha spectrum. In particular, it was found that a better description of the data could be obtained by introducing both surface and bulk activity distributions which decayed exponentially with depth [29]. This distribution is difficult to motivate physically, but was accepted based on the improvement in the high energy fits.

6 This would result in a stronger but more local space charge effect, which might, for example, cause a significant change in tracks headed straight towards the wire.
7 The possibility of spectral distortions in the neutron energy region due to potentially enhanced activity in the endcap regions was considered unlikely, while spectral distortions on single strings due to local depositions of activity at fixed depth would likely have been detected by the NNNA searches described earlier.

As described above, the challenges in simulating the alpha energy spectrum are many. Nevertheless, based on comparisons with the 4He data, with higher energy alphas on the 3He strings, with neutron calibration data, and with the supplementary alpha data described earlier, the Monte Carlo was deemed to provide a sufficiently accurate description of the data to be used in the signal extraction above 0.4 MeV8. Systematic uncertainties in the Monte Carlo alpha energy spectrum were set by varying the tuned parameters (the electron avalanche parameters described earlier, the ion mobility, which was measured using neutron calibration data, and the surface to bulk alpha activity ratios) and parameterizing their effect on the neutron region energy spectrum as polynomials9. Due to their questionable physical origin, the differences between the exponentially distributed surface and bulk activities and the more expected uniform bulk and purely surface distributions were also taken as systematic uncertainties. Finally, due to the much larger uncertainties in the modelling of individual pulse shapes (which will not be discussed here), the entire action of the scope-based data cleaning cuts on the Monte Carlo data was taken as a systematic uncertainty. The final Monte Carlo alpha spectrum, along with the range of variation encompassed by the systematic uncertainties described above, is shown in Figure 3.11.

8 Below this point the systematic uncertainties in the Monte Carlo became difficult to reliably estimate. It is also in this low energy region where other backgrounds like electrons and other data cleaning leakage events were most likely to occur.
9 The alpha systematic uncertainty polynomials ranged from zeroth order for the avalanche gradient and ion mobility systematic uncertainties to as high as third order for the Po depth distribution and data cleaning systematic uncertainties.

3.9 Published NCD Results

The first, and to date only, SNO publication to make use of the NCD data was the “NCD PRL” [21]. This analysis consisted of a joint fit of the NCD-phase NCD and PMT data, with the neutral current neutron flux constrained to be equal in the two detection channels.

Figure 3.11: The Monte Carlo alpha spectrum near the neutron energy region. The line shows the central value Monte Carlo, while the band shows the range consistent with ±1σ variations in the Monte Carlo systematic uncertainties.

The PMT-side signal extraction was carried out in a manner similar to the signal extractions in previous phases [20, 19],10 with the fitted observables being event energy, event radius, and the cosine of the angle between the reconstructed event direction and the position of the Sun. The PMT PDFs were taken from the Monte Carlo, and the energy spectrum of the charged current events (and hence the energy spectrum of the 8B neutrinos) was not constrained in the fit, but was extracted from the data.

As discussed earlier, the only observable used in the NCD fits was event energy. The spectrum of events from the 2005 24Na spike calibration was taken to be the neutron PDF, while the Monte Carlo alpha PDF described in the previous section was used to describe the alpha backgrounds. The previously discussed NNNAs on J3 and N4 were represented in the fits by skew Gaussian curves.

As was also described earlier, the Monte Carlo systematic uncertainty estimates were considered reliable above 0.4 MeV, so the NCD fits were carried out over the 0.4 - 1.4 MeV range.

The fit minimization was carried out using a Markov-Chain Monte Carlo type optimization procedure, which allowed the most important systematic uncertainties to vary within the fits. For the NCDs, these systematic uncertainties included the energy scale and resolution, the systematic uncertainties in the Monte Carlo alpha spectrum, and the widths and amplitudes of the two NNNA PDFs. These “floated” systematic uncertainties were constrained, via penalty terms in the likelihood, where external information was available (the energy scale, for example, was constrained to be within 1% of the nominal value, while the alpha spectrum systematic uncertainties were constrained to ±1σ). A complete description of the NCD PRL Markov-Chain Monte Carlo signal extraction procedure and results can be found in [38, 39].

10 Note, however, that the optical characterization of the detector and the event position, direction, and energy estimators had to be updated to take into account the light shadowing of the NCDs.
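Schematically, floating a systematic parameter s with an external estimate s_0 ± σ_s amounts to adding a Gaussian penalty to the negative log likelihood (this is the generic form of such a constraint, not the exact expression used in [21]):

−ln L_total(N, s) = −ln L_data(N, s) + (s − s_0)² / (2σ_s²),

so that the fit can pull s away from its nominal value only if doing so improves the agreement with the data by more than the penalty cost.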

The inclusion of the Monte Carlo systematic uncertainties and the NNNA PDFs created a good deal of freedom in the low energy part of the NCD background energy spectrum; indeed, the best fit included a large number of NNNAs (571 +162/−175), while distorting the alpha spectrum significantly downward at low energy, as shown in Figure 3.12. Nevertheless, the neutron energy spectrum was sufficiently unique that the fitted number of NCD neutrons was relatively stable, with 983 +77/−76 neutral current neutrons (in addition to the 185 +25/−22 background neutrons expected based on external constraints (see Section 5.4)) being fitted. This corresponds to a total solar neutrino flux of 5.54 +0.33/−0.31 (stat) +0.36/−0.34 (syst) × 10^6 cm^−2 s^−1 (see Section 5.3 for information about the conversion of the detected number of neutral current neutrons into neutrino flux).

On the PMT side, 267 +24/−22 neutral current neutron captures, 1867 +91/−101 charged current events and 171 +24/−22 elastic scattering events were fitted above the 6.0 MeV (kinetic) energy threshold. Under the assumption that the solar neutrino flux is entirely νe, these correspond to fluxes of 1.67 +0.05/−0.04 (stat) +0.07/−0.08 (syst) × 10^6 cm^−2 s^−1 and 1.77 +0.24/−0.21 (stat) +0.09/−0.10 (syst) × 10^6 cm^−2 s^−1, respectively. The PMT-side data and signal extraction are described more fully in [21].

As discussed earlier, the most useful value in studying neutrino oscillations is the ratio of the charged current flux, φCC, and the neutral current flux, φNC. For the NCD phase results, this ratio is

φCC / φNC = 0.301 ± 0.033.    (3.2)

[Plot: counts / 0.05 MeV vs. shaper energy (MeV), showing the NCD data, the best fit, and the NC neutron, background neutron, alpha, and J3/N4 NNNA components.]

Figure 3.12: The fitted NCD energy spectrum from the NCD PRL fits. The freedom in the background PDFs can be seen to have resulted in a significant, and unphysical, downward distortion in the alpha energy spectrum which is taken up by an increase in the number of fitted NNNAs. This freedom in the background spectrum does not, however, result in any particular instability in the fitted number of neutrons.

The constraints on the solar neutrino oscillation parameters that resulted from including the NCD results in the global neutrino oscillation analysis were shown in Figure 2.9, earlier, while comparisons of the NCD phase flux results to the results from the previous phases of SNO were shown in Figure 2.7.

Chapter 4. Robust Determination of the NCD Neutron Number

As described in the previous Chapter, one of the most significant challenges in extracting the NCD neutron number was constraining the shape of the background energy spectrum. The systematic uncertainties in the Monte Carlo alpha spectrum were relatively large and difficult to reliably evaluate, and the possibility of unexpected backgrounds, both physics and non-physics, could not be completely ruled out. In this Chapter, two alternative signal extraction procedures, the “polynomial method” and the “overlap method,” which were developed with the aim of successfully extracting the number of neutrons from the NCD energy spectrum while making minimal assumptions about the background energy spectrum, are introduced. With these robust techniques, the NCD energy fit can be extended to include the full neutron energy region, which both slightly increases the statistics in the extracted neutron number and includes in the fit additional information which may help to constrain the background behaviour under the neutron peak. The price of reducing the dependence of the fit on prior knowledge of the background shape is expected to be an increase in the uncertainty in the extracted number of neutrons; the benefit is a fit that is less susceptible to biases in the event that an unexpected background shape did occur in the data. In addition to what is presented here, the history and development of the overlap and polynomial methods are also described in [40] and [41].

4.1 The “Polynomial Method”

A standard approach to the extraction of a well-defined and quickly varying signal from a relatively smoothly varying background is the use of a polynomial to model the background in the fit. In the case of the NCD energy spectrum, we must modify the polynomial to account for the low energy threshold cutoff (this is accomplished by multiplying the polynomial by an inverted Fermi Function, the position and steepness of which are floated in the fit). The use of a polynomial to represent the background in the fit requires only three inputs from the experimenter: the shape of the signal (in this instance the neutron energy PDF, which is obtained from calibration data), the order of the polynomial to be used, and the parameter range over which the fit should occur.
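A sketch of this background parameterization is given below (the variable names are mine, and the surrounding fit machinery is not reproduced):

```python
import numpy as np

def background_model(E, poly_coeffs, E_thresh, steepness):
    """Polynomial background with a smooth low energy threshold roll-off.

    The 'inverted' Fermi function rises from 0 below E_thresh to 1 above it:
        f(E) = 1 / (1 + exp(-(E - E_thresh) / steepness))
    """
    poly = np.polyval(poly_coeffs, E)
    fermi = 1.0 / (1.0 + np.exp(-(E - E_thresh) / steepness))
    return poly * fermi

# Example: a 2nd order polynomial background with a threshold near 0.2 MeV
E = np.linspace(0.1, 1.4, 27)
bkg = background_model(E, poly_coeffs=[50.0, -200.0, 250.0],
                       E_thresh=0.2, steepness=0.02)
```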

4.1.1 Polynomial Order

Due to the uncertainty in the background spectrum, the order of the polynomial needed to accurately represent the background in the NCD fit is not known a priori; it is, however, of fundamental importance. Using a polynomial of insufficient order will in general result in fit biases, while using a polynomial with more degrees of freedom than needed unnecessarily inflates the fit uncertainties.

The solution adopted here is to perform fits to the data with polynomials of different orders and to use information from the fits themselves to infer the best polynomial to use.

The obvious criterion to use in this type of a posteriori model selection is the “goodness of fit” between the models and the data. It is easily demonstrated, however, that adding an additional degree of freedom to a polynomial fit to data cannot result in a worse fit and almost always leads to a better fit. It is necessary, therefore, to determine whether the improvement in the fit resulting from an additional degree of freedom is large enough to justify that degree of freedom or not.

A standard method of making this determination involves the “likelihood ratio,”

λ = L_i / L_j ,    (4.1)

where L_i and L_j are the likelihoods of the best fits of model i and model j, respectively, to the data. In the event that one of the models is a nested subset of the other (for example, a 4th order polynomial can be considered a nested subset of a 5th order polynomial) and the model with fewer degrees of freedom is already a good description of the data, the quantity −2 ln(λ) is well approximated by a χ² distribution [42]:

P(x) ≈ [2^(−ν/2) / Γ(ν/2)] x^(ν/2−1) e^(−x/2) ,  with x = −2 ln(λ),    (4.2)

where ν is the number of additional parameters in model j compared to model i. Further, in the regime of Gaussian statistics (within which the NCD energy spectrum lies for a broad range of possible bin widths), the log-likelihood is equivalent to the χ² up to an additive constant [43]. It is therefore possible to replace the likelihood ratio in Equation 4.2 with a χ² difference:

P(∆χ²) ≈ [2^(−ν/2) / Γ(ν/2)] (∆χ²)^(ν/2−1) e^(−∆χ²/2) ,  with ∆χ² = χ²_i − χ²_j,    (4.3)

where χ²_i and χ²_j are the χ²’s of the best fits of models i and j to the data (here model j is assumed to have the larger number of free parameters).

Equation 4.3 thus provides a means by which the statistical significance of the decrease in χ² resulting from the use of a higher order polynomial can be evaluated. For example, the requirement that the addition of a single additional degree of freedom reduce the fit χ² by 1.0 or more has approximately a 68% chance of rejecting a superfluous parameter. Indeed, this is the origin of the standard “rule of thumb” in parameter significance testing that a parameter must produce a ∆χ² of at least 1.0 to be justified. In selecting the best polynomial to use, however, we cannot simply add orders one at a time until the ∆χ² of one addition fails the comparison criteria; if testing superfluous parameters this would work fine, but when testing polynomials of insufficient order to correctly describe the data it is possible that adding one order does not improve the fit much (or even at all) while adding the next order makes a significant improvement in the fit1. If the “true” polynomial order required to properly represent the data is high, this would lead to a significant probability that one of the χ² comparisons would fail the ∆χ² test, and hence that subsequent testing would be stopped, before an appropriate description of the background is reached.

1 As an example, consider the 3-point data set y(1)=0, y(2)=1, y(3)=0 with unit uncertainties (in fact, the demonstration works for any non-zero uncertainty, as long as the uncertainties on all three bins are equal). The best-fit χ²’s for fits to 0th and 1st order polynomials can easily be determined analytically to both be equal to 2/3. Thus, the first order term acts as a superfluous parameter, and tests only comparing adjacent χ²’s would have a good chance of truncating the series at 0th order, while a second order polynomial (of course) fits the data perfectly.

To avoid this problem, it is necessary to scan all interesting polynomial orders and compare their fit χ²'s. In the beginning, the lowest (0th) order polynomial is assumed to correctly describe the background. The χ² of the 0th order fit is sequentially compared with the χ²'s of the higher order fits, and for each comparison, if the Δχ² between that fit and the reference (0th order) fit is larger than the target Δχ² for that number of additional degrees of freedom, the higher order polynomial is taken to be the correct description of the background. That higher order polynomial fit is then used as the reference fit in subsequent comparisons. The “reference fit” after all of the comparisons have been completed is taken to be the correct description of the background. The Δχ² targets are set such that in a single comparison between models differing by n degrees of freedom, the chance of superfluous parameters being rejected is independent of n (i.e. based on Equation 4.3, the Δχ² targets are set by locating a certain confidence level in the relevant χ² distribution). Changing the Δχ² target with n is necessary because, although the average Δχ² for n added superfluous parameters is n, the shape of the cumulative probability distribution changes with n. For example, one superfluous parameter produces a Δχ² of less than 1.0 68.3% of the time, while for two extra parameters the 68.3% upper confidence limit on Δχ² is 2.3, not 2.0. Thus, setting Δχ² targets based only on a fixed change in χ² per degree of freedom becomes significantly biased towards falsely accepting large-n comparisons.

¹ As an example, consider the 3-point data set y(1) = 0, y(2) = 1, y(3) = 0 with unit uncertainties (in fact, the demonstration works for any non-zero uncertainty, as long as the uncertainties on all three bins are equal). The best-fit χ²'s for fits to 0th and 1st order polynomials can easily be determined analytically to both be equal to 2/3. Thus, the first order term acts as a superfluous parameter, and tests only comparing adjacent χ²'s would have a good chance of truncating the series at 0th order, while a second order polynomial (of course) fits the data perfectly.
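Since the targets are just quantiles of the χ²_ν distribution, they are straightforward to compute. The following minimal Python sketch uses SciPy's χ² quantile function; the function name and the 68.3% default are illustrative choices, not the thesis analysis code.

    # Minimal sketch: per-comparison delta-chi^2 targets as chi^2_nu quantiles.
    # The function name and defaults are illustrative, not from the thesis code.
    from scipy.stats import chi2

    def dchi2_target(n_extra_dof, confidence=0.683):
        """Delta-chi^2 exceeded by n_extra_dof superfluous parameters only
        (1 - confidence) of the time."""
        return chi2.ppf(confidence, df=n_extra_dof)

    print(dchi2_target(1))  # ~1.0, the usual one-parameter rule of thumb
    print(dchi2_target(2))  # ~2.3, as quoted in the text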

Unfortunately, the multiple comparisons between different polynomial orders necessary to select the “best” order in the manner described above incur a significant trials factor in the probability that superfluous parameters are accepted in at least one of these comparisons. As can be seen in Table 4.1, when several superfluous polynomial orders are tested sequentially, the chance that none of them are accepted is significantly less than the chance that any one of them is rejected. This is because the average improvement in χ² when a superfluous degree of freedom is added is 1.0, even though 68% of the time the improvement is less than 1.0. Thus, relatively large jumps in χ² occur with some frequency, and over several successive parameter additions the chance of hitting one of these jumps, and hence falsely accepting too high a polynomial order, can become large. Figure 4.1 shows the same effect in tests of the polynomial order choice procedure using toy Monte Carlo energy spectra.

To mitigate the trials factor, and hence avoid an excessive probability of falsely accepting superfluous parameters, two complementary approaches can be taken. First, the number of tested polynomial orders can be reduced, which lowers the number of χ² comparisons made, and hence the probability of a false acceptance.

n       1     2     5     7     10
P_rej   0.68  0.57  0.43  0.39  0.35

Table 4.1: The probability, P_rej, that after sequentially testing nested models with up to n superfluous degrees of freedom, none of the superfluous models have been accepted by the χ² comparison tests. The acceptance criteria on χ² were set such that any single comparison between models differing by n superfluous degrees of freedom would have a 68% chance of rejecting the superfluous parameters. The calculation was carried out using repeated Monte Carlo sampling of the single-order χ² distribution, in order to properly calculate the conditional probability that a comparison with n superfluous parameters fails the χ² improvement test, given that the comparisons with all lower numbers of unnecessary parameters had also failed.
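This trials factor is easy to reproduce numerically. The sketch below is an assumed re-implementation of the Table 4.1 calculation in Python: each added superfluous parameter is modelled as an independent χ²(1)-distributed improvement, and the cumulative improvement after n additions is compared with the n-degree-of-freedom target.

    # Assumed re-implementation of the Table 4.1 calculation: each superfluous
    # parameter contributes an independent chi^2(1) improvement, and a trial
    # "rejects everything" only if every cumulative improvement stays below
    # its n-degree-of-freedom target.
    import numpy as np
    from scipy.stats import chi2

    def prob_all_rejected(n_max, confidence=0.68, trials=200_000, seed=0):
        rng = np.random.default_rng(seed)
        improvements = rng.chisquare(1, size=(trials, n_max))
        cumulative = np.cumsum(improvements, axis=1)
        targets = chi2.ppf(confidence, df=np.arange(1, n_max + 1))
        return np.all(cumulative < targets, axis=1).mean()

    for n in (1, 2, 5, 7, 10):
        print(n, prob_all_rejected(n))  # should roughly reproduce Table 4.1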

[Figure 4.1: two panels; (a) average fit χ² vs. polynomial order; (b) number of times each polynomial order was selected.]

Figure 4.1: The results of testing the polynomial order selection algorithm against Monte Carlo data. Multiple high-statistics (∼100 times the number of events in the NCD energy spectrum below 1.5 MeV) data sets were generated with the underlying third-order distribution 90 + 35x − 20x² + 10x³. The average χ² of the fit of each polynomial order to the fake data sets is shown in (a). As can be seen, as the polynomial order is increased, the fits improve dramatically with each order below the “true” order (the average χ² for order zero, ∼5000, is off-scale); above the true order, the χ² improves by approximately the expected 1.0 per superfluous parameter. The frequency with which a given polynomial order was selected (again based on χ² selection criteria that would reject a given set of n superfluous parameters 68% of the time in a single test) is shown in (b). As can be seen, the correct order is chosen 381 times, within error of the 390 times expected from Table 4.1; the remaining choices are approximately uniformly distributed over the higher orders, which is also as expected.

This is a reasonable thing to do, as there is no reason to expect that a tenth order polynomial should be required to fit the background (the Monte Carlo and the data from the 4He strings suggest that at most a third order polynomial should be needed to describe the alpha spectrum); we therefore truncate the polynomial order tests at sixth order, which reduces the chance of a false selection while still giving the fit significant ability to adapt to unexpected background shapes.

The second approach to reducing the trials factor is to increase the change in χ² required for additional parameters to be accepted. Table 4.2 shows the probability that the “correct” model is still accepted after consecutively adding and testing six superfluous parameters under different χ² acceptance criteria. As expected, the probability of correctly rejecting all six superfluous parameter tests increases as the probability of rejecting any single superfluous parameter is increased. It is therefore possible to find, through iteration, a single-test superfluous parameter rejection probability, and hence a set of Δχ² targets, that will produce any desired overall probability that multiple superfluous parameter tests are all rejected. We thus set the Δχ² targets by choosing the total, rather than the single-trial, probability of accepting superfluous parameters².

Single test rejection probability   0.68  0.90  0.95  0.99
Required Δχ² for n = 1              1.0   2.7   3.8   6.6
Six test rejection probability      0.42  0.77  0.88  0.97

Table 4.2: The change in the probability that χ² tests of six consecutive superfluous parameters are all rejected as the χ² target is changed. The Δχ² targets are set based on the probability, shown in the first row, of rejecting any n superfluous parameters in a single comparison. The resulting Δχ² target for n = 1 is shown in the second row to illustrate the dependence between the selected superfluous acceptance probability and the Δχ² target (note that, as described in the text, the Δχ² target does not scale linearly with n). The final row shows the probability that consecutive tests of 6 nested superfluous parameters are all rejected. As can be seen, the probability of correctly rejecting multiple tests increases quickly as the single test rejection probability increases.

² There is an additional subtlety in implementing this algorithm: although we test up to polynomial order six, not all of these polynomials represent superfluous parameters; only those above the “correct” order are superfluous. Therefore, to give the correct probability of accepting superfluous parameters, the Δχ² targets should be set based on the “correct” polynomial order (e.g. if a 4th order polynomial is chosen as the best description of the background, the Δχ² targets should have been computed based on two, rather than six, sequential tests of superfluous parameters). The Δχ² targets used to choose the correct polynomial order therefore depend on which order is chosen. To resolve this co-dependence, the best polynomial order is chosen iteratively, first using Δχ² targets appropriate for six superfluous parameter tests, and subsequently with Δχ² targets set based on the polynomial order chosen in the last iteration. This is continued until a self-consistent best polynomial and Δχ² target are found (in practice, it was relatively rare for the selected best order to change beyond the first iteration).

The remaining question, then, is the total probability with which superfluous parameters should be rejected. Obviously, from the point of view of minimizing the uncertainty on the final neutron number, superfluous parameters should be avoided as completely as possible. As the Δχ² requirements are made more stringent, however, the selection criteria will accept as sufficient models which describe the data more poorly; as an example, Table 4.3 shows the number of times insufficient, correct, and excessive polynomial orders were chosen in Monte Carlo experiments with different Δχ² targets³. Thus in choosing the Δχ² target, it is necessary to balance the desire to have the lowest possible polynomial order against the risk of bias associated with selecting a model which describes the background more poorly. The bias risks are likely relatively low, as those “incomplete” models must provide a reasonably good description of the data in order to be selected. Nevertheless, the conservative thing to do is to accept a relatively high probability of accepting superfluous parameters (and hence having unnecessarily high uncertainties). In keeping with the spirit of the “rule of thumb” that in a single parameter test a model is accepted if there is a “1-sigma” chance or less that the attendant improvement in χ² is due to statistical fluctuations alone, we choose here to set the Δχ² targets so that the total probability that all superfluous parameters are rejected is 0.683.

Total probability to reject superfluous models             0.68  0.90  0.95  0.99
Frequency of selection of insufficient polynomial order      39   148   242   411
Frequency of selection of correct polynomial order          624   743   707   565
Frequency of selection of excessive polynomial order        337   109    51    24

Table 4.3: The results of polynomial selection in Monte Carlo experiments similar to those described in Figure 4.1, under different Δχ² targets. These results are only illustrative, as the probability of selecting polynomials of insufficient order will depend on the precise shape of the background. Still, it can be seen that while increasing the Δχ² targets does indeed reduce the number of times superfluous parameters are accepted, it also increases the probability of choosing an insufficient description of the background.

³ Note that the excess of selections of excessive polynomial orders in the 0.99 column of Table 4.3 is due to the numerical precision of the Δχ² determination. In order to make the multi-trial selection probability of a superfluous model 0.99, the single trial probability had to be closer to 1.0 than the precision of the Monte Carlo numerical integration technique used.

Uncertainty Correction

A final, unexpected effect that must be taken into account in the polynomial method is a slight broadening of the pulls distribution⁴ by the polynomial selection procedure. In Monte Carlo ensemble tests, it was observed that if many Monte Carlo data sets were fitted with a fixed polynomial order, the pulls distribution was correct (independent of polynomial order), while after performing the polynomial selection procedure inherent in the polynomial method, the distribution of pulls was slightly wider than expected. Investigation showed that this broadening was the result of a correlation, shown in Figure 4.2, between large changes in χ² and large changes in pull between fits of different polynomial orders. Thus, in selecting for large Δχ² we tend to select for large changes in the neutron number (which can be either towards or away from the true number). This additional scatter is responsible for the observed widening of the pulls distribution.

[Figure 4.2: two panels; (a) Δχ² vs. change in pull; (b) distribution of (true − fitted) number of “signal” events / fit uncertainty.]

Figure 4.2: The relationship between the Δχ² and the change in pull in the number of extracted “signal” events between fits with subsequent polynomial orders is shown in (a). A correlation is clearly present, and in selecting for large Δχ²'s, large changes in the pull are also selected, resulting in the slightly widened pulls distribution shown in (b). The Monte Carlo data sets used to produce this plot contained on average 7500 alpha background events below 1.5 MeV (distributed according to the 4He alpha spectrum) and 1000 signal events distributed according to “NNNA shape 1” in Figure 4.6.

Further investigation showed that the broadening effect was independent of neutron and background statistics within at least a factor of ten of the expected statistics in the data, and did not depend on the signal shape. As a result, the width of the pulls distribution in Figure 4.2 was taken to be representative of the effect, and the uncertainties in subsequent polynomial fits were increased by 5% to compensate for it.

⁴ The “pull” of a fit is defined as the difference between the true and fitted value of a parameter divided by the fit uncertainty. If the uncertainties are properly estimated, the distribution of pulls for many fits should be described by a Gaussian curve with a standard deviation of 1.0.

Polynomial Choice Summary

To summarize the polynomial fit procedure: the data are repeatedly fitted to the (calibration-data derived) neutron energy PDF, with different order polynomials (between 0th and 6th) used to represent the background. For each polynomial, the χ² between the data and the best fit model is recorded. The results of the different fits are then sequentially compared based on their χ²'s, with a fit with more degrees of freedom being preferred only if the associated Δχ² is larger than the target Δχ² for a comparison over the appropriate number of degrees of freedom. The polynomial which passes the χ² comparison test relative to all preceding orders, and relative to which all subsequent orders fail, is taken to be the best description of the background. The Δχ² targets are set such that the total probability that the selected polynomial order is no higher than necessary is 0.683.
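To make the procedure concrete, a minimal Python sketch is given below; `fit_chi2` and `dchi2_target` are assumed stand-ins for the actual signal-plus-polynomial fit and the (iteratively tuned) Δχ² target function, neither of which is reproduced from the thesis code.

    # Sketch of the polynomial order selection summarized above. `fit_chi2`
    # and `dchi2_target` are assumed stand-ins, not the actual analysis code.
    def select_polynomial_order(data, fit_chi2, dchi2_target, max_order=6):
        """Return the lowest polynomial order justified by the chi^2 comparisons."""
        chi2s = [fit_chi2(data, order) for order in range(max_order + 1)]
        reference = 0  # start with the 0th order fit as the reference
        for order in range(1, max_order + 1):
            extra_dof = order - reference
            if chi2s[reference] - chi2s[order] > dchi2_target(extra_dof):
                reference = order  # this fit becomes the new reference
        return reference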

4.1.2 Binning

The binning of the data has very little effect on the polynomial fits; in Monte Carlo studies, no change was seen between fits with bin widths as small as 6.25 keV and as large as 50 keV. As bin widths get very large, the features of the neutron spectrum become less distinct and the ability to distinguish some background shapes may be degraded. As bin widths become narrower, the increase in the number of bins means that the fits take longer to perform. In what follows, the data are binned in 25 keV bins; this gives several bins across the neutron peak without unnecessarily slowing the fits.

4.1.3 Fit Range

The fit range of the polynomial fits is also not critical. Extending the fit range, especially to higher energies, has the potential to help constrain the background in the neutron energy region. This constraint is only helpful, however, if the background is smoothly varying; if the background varies quickly, extending the fit range will only mean that higher order polynomials are needed to correctly describe the background. Based on the Monte Carlo and data from the 4He strings, the alpha spectrum is expected to be relatively smooth below about 1.5 - 2.0 MeV. A “step” feature, due to surface alpha events which are nearly perpendicular to the wire and hence experience very large space charge effects, occurs at energies above this. In order to avoid this known effect, the upper limit of the polynomial fit range is set at 1.5 MeV. The low end of the fit range is set by the requirement that the number of events in each bin must be high enough for Gaussian statistics, and hence the χ² tests that are used to select the polynomial order, to apply. Typically, Gaussian statistics are assumed to be a good approximation provided that all bins have more than 5-10 events [43]. In NCD data binned at 25 keV, this criterion is satisfied for bins above 0.15 - 0.2 MeV. Therefore, the lower limit of the fit range is chosen to be 0.2 MeV.

4.2 The “Overlap Method”

The Overlap Method for extracting the number of neutrons from the NCD signal was designed to capitalize on the unique energy spectrum of the NCD neutron signal. The idea is that by defining a comparison metric which describes the similarity between two distributions, one can determine the maximum and minimum number of neutrons consistent with the data under the condition that the residual background should not be too similar to the neutron shape. In this way, it proves possible to extract a neutron number while supplying no information at all about the background, other than that it is different from the neutrons (although as we shall see, adding information about the background does reduce the uncertainty in the extracted number of neutrons).

4.2.1 Method Description

The comparison metric used in the overlap method is a simple one called the “overlap,” O, which defines the similarity between two distributions L and K:

$$O_{(K,L)} = \frac{1}{BW} \frac{\sum_i K_i L_i}{\sum_i K_i \sum_i L_i}, \qquad (4.4)$$

where X_i is the entry of distribution X in bin i, the sums span the energy range over which the comparison is to be made, and BW is the bin width of the histograms (dividing by the bin width makes the overlap value independent of binning). In what follows, the overlap is always defined relative to the neutron energy PDF, P^N, so the second subscript on the overlap is dropped and the second normalization sum is superfluous as the PDF is pre-normalized. The overlap then becomes

$$O_L = \frac{1}{BW} \frac{\sum_i L_i P_i^N}{\sum_i L_i}. \qquad (4.5)$$

Now, consider a data set comprised of neutron events (“N”) and background events (“B”). This latter class will be dominated by alphas, but could also include instrumental events, etc. The overlap of the data (“D”) will then be given by

$$O_D = \frac{1}{BW} \frac{\sum_i D_i P_i^N}{\sum_i D_i} = \frac{1}{BW} \frac{\sum_i (N_i + B_i) P_i^N}{\sum_i (N_i + B_i)} = \frac{1}{N_N + N_B} \left( N_N O_N + N_B O_B \right). \qquad (4.6)$$

We see, therefore, that the linear definition of the overlap is useful in that it allows the overlap of a distribution to be determined from the sum of the overlaps of the constituent distributions.
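Equation 4.5 is straightforward to evaluate on binned spectra; the short Python transcription below assumes NumPy arrays and a neutron PDF pre-normalized to unit sum over the comparison range.

    # Direct transcription of Equation 4.5 for binned spectra. `pdf_n` is
    # assumed to be pre-normalized so that its bin contents sum to one.
    import numpy as np

    def overlap(counts, pdf_n, bin_width):
        """Overlap of a binned distribution with the neutron energy PDF."""
        counts = np.asarray(counts, dtype=float)
        return np.sum(counts * pdf_n) / (bin_width * np.sum(counts))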

In using the overlap as a signal extraction tool, we subtract a certain number of neutrons, n, from the data (using the assumed neutron PDF shape) and calculate the overlap value for the residual, O_R(n). Using Equation 4.6, this is simply

$$O_R(n) = \frac{1}{N_N + N_B - n} \left( (N_N - n) O_N + N_B O_B \right). \qquad (4.7)$$

We see that O_R(n = N_N) = O_B, so that if we knew O_B we could determine the number of neutrons in the data by determining the n for which O_R(n) = O_B (this is actually algebraically calculable from O_D, O_N, O_B and (N_N + N_B)). Unfortunately, the same poor understanding of the background shape that necessitated consideration of the overlap method in the first place denies us precise knowledge of O_B. This, however, is the situation for which the overlap method was designed. In lieu of exact knowledge of the background shape and hence of O_B, the method provides a convenient way to map a prior probability distribution in O_B into a probability distribution in n.

Without a complete physical understanding of the background, the most straightforward probability distribution that can be assigned to the background is a flat distribution which is felt to span the physically reasonable set of possible background spectra. The upper and lower overlap limits, O_lim^upper and O_lim^lower, define the points at which it is felt, based on the set of backgrounds that have been observed, experience in modelling the response of the NCDs, and experience in working with similar types of detectors in other contexts, that the background looks too much like the neutron shape to be physically reasonable⁵.

Having established upper and lower limits on the overlap⁶ (and noting that higher allowed overlaps will be associated with lower estimates of the number of neutrons, and vice versa), Equation 4.7 can be used to determine the maximum and minimum number of neutrons consistent with the data such that the residual overlap does not exceed those limits:

$$n_{lower} = N_N + N_B \frac{O_B - O_{lim}^{upper}}{O_N - O_{lim}^{upper}} \qquad (4.8)$$

and

$$n_{upper} = N_N + N_B \frac{O_B - O_{lim}^{lower}}{O_N - O_{lim}^{lower}}. \qquad (4.9)$$

We see that if N_B = 0, the confidence region collapses to N_N, the true neutron number⁷; that for O_B = O_lim^upper or O_B = O_lim^lower we have n_lower = N_N or n_upper = N_N, respectively; and that n_lower ≤ N_N ≤ n_upper provided O_lim^lower ≤ O_B ≤ O_lim^upper, as desired. The size of the confidence interval is given by

$$n_{upper} - n_{lower} = N_B \frac{(O_{lim}^{upper} - O_{lim}^{lower})(O_N - O_B)}{(O_N - O_{lim}^{upper})(O_N - O_{lim}^{lower})}. \qquad (4.10)$$

Overall, then, a signal extraction based on overlap limits contains all of the features one might expect from a signal extraction:

⁵ The upper limit, which can be thought of in terms of a neutron-shaped “bump” appearing on an otherwise innocuous background, is more intuitive than the lower limit. However, a neutron-shaped depression in an otherwise smooth background (which leads to the lower limit on the overlap) is just as physically unreasonable as a neutron-shaped bump.
⁶ The actual determination of these limits is discussed in Section 4.2.5.
⁷ Note, however, that this situation is not practically achievable due to statistical fluctuations; these will be discussed later, but it should be noted at this point that statistical fluctuations about the assumed neutron shape are counted as part of the background.

• The extracted neutron confidence interval contains the true number of neutrons provided that the overlap of the background falls within the overlap limits.

• A more precise knowledge of the background (reflected in a smaller spread in the overlap limits) leads to a more precise determination of the number of neutrons.

• Both a larger amount of background and a greater similarity between the background and neutron shapes (reflected in a smaller difference between the overlap limits and the neutron overlap) lead to greater uncertainty in the number of neutrons.

Finally, we note that simple confidence limits on the residual overlap yield simple confidence limits on the number of neutrons. Thus, while we expect that the true number of neutrons lies within the region we determine, we have no reason to expect that the true value is more likely to occur near the middle of the region than it is to occur near either of the ends.
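As noted above, the limits are algebraically calculable from the measurable quantities. Using N_D O_D = N_N O_N + N_B O_B, where N_D = N_N + N_B is the total number of events in the data and O_D is the data overlap, solving O_R(n) = O_lim for n gives n = N_D (O_D − O_lim)/(O_N − O_lim), which reproduces Equations 4.8 and 4.9. A minimal Python sketch (function and argument names are illustrative):

    # Minimal sketch of the linear-normalization neutron limits, written in
    # terms of measurable quantities: total event count n_events, data overlap
    # o_data, and neutron self-overlap o_n. Algebraically equivalent to
    # Equations 4.8 and 4.9.
    def neutron_limits(n_events, o_data, o_n, o_lim_lower, o_lim_upper):
        n_lower = n_events * (o_data - o_lim_upper) / (o_n - o_lim_upper)
        n_upper = n_events * (o_data - o_lim_lower) / (o_n - o_lim_lower)
        return n_lower, n_upper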

As can be seen in Equation 4.10, the size of the confidence region for the extracted number of neutrons depends linearly on the number of background events. One might expect, therefore, that the precision of the neutron number determination could be significantly improved by subtracting from the data the large component of the alpha background that is understood based on modelling and the 4He data. Separating the background into N_A alpha events (by definition, these are the alpha events that follow the predicted energy spectrum; any alphas which do not are counted in the next category) and N_o “other” events, we find that

$$O_R(n) = \frac{1}{N_N + N_o + N_A - n} \left( (N_N - n) O_N + N_o O_o + N_A O_A \right), \qquad (4.11)$$

from which comes

$$n_{upper} = N_N + N_A \frac{O_A - O_{lim}^{lower}}{O_N - O_{lim}^{lower}} + N_o \frac{O_o - O_{lim}^{lower}}{O_N - O_{lim}^{lower}} \qquad (4.12)$$

(with n_lower similar) and

$$n_{upper} - n_{lower} = \frac{O_{lim}^{upper} - O_{lim}^{lower}}{(O_N - O_{lim}^{upper})(O_N - O_{lim}^{lower})} \left( N_A (O_N - O_A) + N_o (O_N - O_o) \right). \qquad (4.13)$$

Subtracting the alpha background (using the alpha shape determined from the Monte Carlo and/or the 4He strings and normalizing it by fitting above the neutron energy region, for example) is equivalent to setting N_A = 0 in these equations. We therefore recover Equations 4.7-4.10 with N_B → N_o and O_B → O_o. Because N_o < N_B, for the same overlap limits, and under the assumption that O_o is not significantly nearer O_N than O_B was, subtracting the alpha component does indeed improve the precision of the neutron extraction very significantly⁸.

Unfortunately, subtraction of the alpha background brings to the fore an inherent difficulty with the normalization scheme used in the above derivation; there is nothing to prevent the denominator in Equation 4.11 from being small or negative. When a large number of alphas are present this does not pose a problem, but once the alphas are subtracted (N_A → 0), the denominator is just (N_N − n + N_o). As the “other events” category now includes, by definition, the component of the alpha background that we do not understand, any NNNAs that may be present, and the statistical fluctuations in the original data, N_o can be small or negative. In this case, as n is scanned over the region near N_N, a discontinuous divergence is encountered across which the sign of the overlap changes. To circumvent this problem, a new normalization convention must be introduced in the definition of the overlap:

$$O_L^{new} = \frac{1}{BW} \frac{\sum_i L_i P_i^N}{\sum_i |L_i|}, \qquad (4.14)$$

so the overlap of the alpha-subtracted data is now

$$O(n) = \frac{(N_N - n) O_N + N_o O_o}{\sum_i \left| (N_N - n) P_i^N + N_o P_i^o \right|}, \qquad (4.15)$$

where the “other background” PDF P_i^o (normalized such that Σ_i P_i^o = 1) has been introduced, and the subscript R has been dropped to differentiate overlaps calculated using the old and new normalization conventions⁹. Unfortunately, the new normalization convention has destroyed the linearity which led to the analytic relations derived earlier between n and the overlap limits.

⁸ The assumption that the overlap limit would not need to be changed after the subtraction of the alpha background is, of course, a poor one. The following discussion, however, renders this point moot.
⁹ We note that it is still possible to have a divergence with this new normalization if N_o = 0. However, this would require that the data agree exactly with the assumed alpha and neutron shapes, to the extent of having no statistical fluctuations. This possibility is statistically negligible.

However, the feature that allowed the overlap to be used as a signal extraction technique, that is, the monotonic decrease of O(n) as n increases, remains. Writing the alpha-subtracted residual in bin i as D_i = (N_N − n)P_i^N + N_o P_i^o,

$$\begin{aligned} \frac{dO(n)}{dn} &= -\frac{O_N}{\sum_i |D_i|} - \frac{(N_N - n)O_N + N_o O_o}{\left(\sum_i |D_i|\right)^2} \frac{d}{dn}\left(\sum_i |D_i|\right) \\ &\leq -\frac{O_N}{\sum_i |D_i|} + \frac{(N_N - n)O_N + N_o O_o}{\left(\sum_i |D_i|\right)^2} \qquad (4.16) \\ &= -\frac{O_N}{\sum_i |D_i|} + \frac{O(n)}{\sum_i |D_i|} \\ &\leq 0. \end{aligned}$$

The second line in this sequence follows from the observation¹⁰ that −1 ≤ d(Σ_i |D_i|)/dn ≤ 1, and the last line follows from the assumption¹¹ that O(n) ≤ O_N. Given that O(n = N_N) = O_o, this monotonic decrease shows that the true number of neutrons will still lie between the upper and lower neutron limits defined by the overlap method provided that the true overlap of the background lies between the overlap limits.

To summarize, in order to be able to subtract the alpha component from the data it is necessary to introduce a normalization convention to the definition of the overlap which sacrifices the linearity of the equation and hence the analytic relationships between the overlap limits and the limits of the neutron confidence region. However, the confidence region can still be defined numerically (by scanning n to find the limiting positions of O(n)), and it can still be demonstrated that provided the background overlap lies between the overlap limits, the true neutron number will fall within the extracted confidence interval. The “new” overlap thus still represents a viable signal extraction tool and, based on Monte Carlo studies, the subtraction of the alpha background thus enabled does indeed result in a substantial reduction in the width of the extracted confidence interval, even after the overlap range is expanded to take into account the significant reduction in our knowledge of the background after alpha subtraction. As a result, it is the alpha subtraction method with the new normalization convention that is used in the following analysis.
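Concretely, the numerical limit-finding can be sketched as a scan over n, evaluating the new-normalization overlap of the alpha-subtracted residual at each step; the helper names and scan grid below are illustrative assumptions, not the thesis code.

    # Sketch of the numerical limit-finding described above. `data` is the
    # alpha-subtracted spectrum; helper and grid choices are illustrative.
    import numpy as np

    def overlap_new(residual, pdf_n, bin_width):
        """Equation 4.14: overlap with the absolute-value normalization."""
        return np.sum(residual * pdf_n) / (bin_width * np.sum(np.abs(residual)))

    def neutron_limits_scan(data, pdf_n, bin_width, o_lo, o_hi, n_grid):
        """Smallest and largest n keeping the residual overlap inside limits."""
        allowed = [n for n in n_grid
                   if o_lo <= overlap_new(data - n * pdf_n, pdf_n, bin_width) <= o_hi]
        return min(allowed), max(allowed)

Because O(n) decreases monotonically with n, the allowed region found by the scan is a single contiguous interval.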

¹⁰ If all the D_i are positive, d(Σ_i |D_i|)/dn = −1; if all the D_i are negative, d(Σ_i |D_i|)/dn = 1.
¹¹ It is technically possible to have O(n) > O_N when O_o > O_N; this could occur, for example, for a “single-bin” background immediately beneath the neutron peak. We neglect this possibility here as it will certainly lie outside of our assumed overlap interval.

Statistical Uncertainty

As was noted after Equations 4.8 and 4.9, if the number of background events in the data is zero, the neutron confidence interval returned by the overlap method collapses to the number of events in the data. That is to say, the overlap method as described to this point provides a confidence region describing the number of neutrons present in a specific data set; the statistical uncertainty associated with the conversion between the number of neutrons present in a specific data set and the “true” number expected based on the underlying distribution is not included. It is therefore necessary to widen the confidence region returned by the overlap method, by an amount equivalent to adding the width of the original confidence region and the statistical uncertainty on the number of extracted neutrons in quadrature, in order to include this effect.
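In code, this widening is a one-line quadrature sum; taking √n counting statistics for the statistical term is an illustrative assumption made here, not a detail quoted from the thesis.

    # Quadrature widening of the overlap confidence region. Using sqrt(n) for
    # the statistical uncertainty is an illustrative (Poisson) assumption.
    import math

    def widened_half_width(half_width, n_neutrons):
        return math.sqrt(half_width**2 + n_neutrons)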

4.2.2 Fit Range

As the overlap method (by design) does not impose any information about the background other than its dissimilarity to the neutron shape, there is no benefit in including the data outside the neutron energy region in the fit (any data outside this region will have a “zero” weighting when multiplied by the neutron PDF value, and so will not contribute to the overlap results). Therefore, the overlap fits are performed over 0.2 - 1.0 MeV.

4.2.3 Binning

The bin width used in the overlap fits is important. In the ideal situation, where the data are well described by the assumed background and neutron shapes, in the vicinity of the true neutron number the residual being used to calculate the overlap will be dominated by statistical fluctuations. These fluctuations, which on average will themselves have an overlap of zero, will thus determine the normalization used in Equation 4.14. In determining the upper and lower limits on the neutron number, neutrons can be thought of as being added to, and subtracted from, the residual at the true neutron number until the residual overlaps meet the overlap limits. The normalization increases or decreases the contribution of each added neutron to Equation 4.14, and hence changes the number of neutrons that must be added and subtracted to reach the overlap limits. Thus, as the bin widths narrow and the sum of the absolute values of the statistical fluctuations in the bins increases, the width of the neutron confidence region will also increase. From this point of view, then, the bin widths should be as large as possible.

On the other hand, as the bin widths get very large, the distinct neutron shape is “washed out” by the binning. This means that the overlap of a given physics distribution with the neutron shape increases as bin widths increase (in the limit of single bin histograms, any background shape has the same overlap as the neutron shape). Thus, in order to encompass the same set of physics backgrounds, the overlap limits must be increased as the bin width increases. This effect increases the neutron confidence interval as bin width increases, and hence makes smaller bin widths preferable.

Figure 4.3 gives an illustrative example of the opposing effects described above. In this figure, we see that when both of these effects are considered, the width of the neutron confidence interval (and hence the size of the neutron number uncertainty) has a shallow minimum for bin widths in the 50 - 90 keV range. This result is slightly dependent on the shape of the background, as the rate at which the overlap of the background increases with bin width depends on the background shape being considered. The change in the background overlap with bin width is quite slow, however, compared to the effect of the changing sum of the statistical uncertainties, so the position of the minimum is unlikely to change significantly. We therefore choose 50 keV (approximately at the beginning of the steep increase due to the statistical effect) as the bin width for the overlap fits.

4.2.4 Assumed Alpha Shape

The background shape that was assumed in carrying out the overlap fits was obtained from data from the 4He strings. For ease of manipulation, the 4He data were fitted using the polynomial method to provide an analytic form for the assumed background shape. The polynomial method selected a first order polynomial to describe the 4He data; the data and fit are shown in Figure 4.4.

For reference, the (non-normalized) best fit form of the background, B, as a function of the energy, E (in MeV), is

$$B(E) = \frac{16.87 + 6.18E}{e^{56.847(0.199272 - E)} + 1}. \qquad (4.17)$$

[Figure 4.3: three panels; (a) Constant Overlap Limit: average half length of neutron confidence interval vs. bin width (keV); (b) Change in Overlap with Bin Width: overlap of test background shape vs. bin width (keV); (c) Constant Physics Limit: average half length of neutron confidence interval vs. bin width (keV).]

Figure 4.3: The effect of bin size on the overlap fits. In (a), the half-width of the neutron confidence interval is shown for overlap fits to Monte Carlo data sets (each data set consisted of about 7500 alpha background events between 0 and 1.5 MeV drawn from the 4He distribution and 1000 neutron events drawn from the energy spectrum of the 24Na spike data) with different bin widths and constant overlap limits. Each point on the plot represents the average of 1000 Monte Carlo tests. The increase in the neutron confidence interval at small bin widths, as a result of the increase in the total statistical fluctuations in the data as described in the text, can clearly be seen. In (b), the increase in the overlap of a constant physics data shape (in this case a broad Gaussian centred just below the neutron peak; this is “NNNA shape 1” as described in Figure 4.6) as the bin width increases is shown. Finally, (c) shows the half-widths of the neutron confidence regions resulting from Monte Carlo data sets similar to those in (a), except that this time the overlap limits for each binning are scaled by the results shown in (b) (i.e. (c) shows the change in confidence region with binning when a constant background shape limit, as opposed to a constant overlap limit, is applied). A shallow minimum is visible at around a bin width of 50 keV.


In the overlap fits, the normalization of this assumed background shape was determined by fitting to the data above the overlap window (i.e. from 1.0 - 1.5 MeV).
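The normalization step amounts to scaling the assumed shape so that it matches the data in the 1.0 - 1.5 MeV window. A simple bin-count ratio, sketched below in Python, is one way to do this; the thesis fit may differ in detail.

    # One way to normalize the assumed background shape to the data above the
    # overlap window; a simple bin-count ratio is used here for illustration.
    import numpy as np

    def normalize_background(data, shape, bin_centres, lo=1.0, hi=1.5):
        window = (bin_centres >= lo) & (bin_centres < hi)
        scale = np.sum(data[window]) / np.sum(shape[window])
        return scale * shape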

[Figure 4.4: 4He string alphas per 20 keV vs. energy (MeV), from 0 to 2 MeV, with the fitted background shape overlaid.]

Figure 4.4: The 4He string based alpha spectrum, and the fit to the first order polynomial (convolved with an inverted Fermi function to simulate the threshold cutoff) that was used as the assumed background shape in the overlap fits.

It should be noted that initial attempts were made to use the Monte Carlo-derived alpha PDF that was used in the NCD PRL as the assumed alpha shape. However, as can be seen in Figure 4.5, the 4He shape provides a better description of the data, and hence a narrower neutron confidence interval, when tested on a blinded NCD data set¹².

4.2.5 Defining the Overlap Limits

In defining the overlap limits, one can take two approaches; the limits can be based either on the range of possible background shapes which seem a priori reasonable, or they can be based on a rough expectation of what the background shape actually is.

¹² The blind data used here were produced using the “Wilkerson-Elliott blindness procedure” which was used in all SNO analyses. In this procedure, an unknown fraction (<30%) of events are excluded at random from the data, and the muon-follower data cleaning cut is intentionally partially spoiled to allow an unknown number of muon follower events (which are primarily neutrons) into the analyzed data.

[Figure 4.5: blind data events per 50 keV vs. energy (MeV), with the Monte Carlo alpha shape and the 4He alpha shape overlaid.]

Figure 4.5: A comparison of the extrapolated 4He and Monte Carlo alpha spectra with the (blind) 3He data. The assumed spectra are fitted to the data between 1.0 and 1.5 MeV and extrapolated to lower energy, as is done in the overlap method. The 4He shape can be seen to give better agreement with the blind data at low energies (where the neutron spectrum gives very little contribution).

In the first case, the overlap limits are set by quantifying the point at which the background seems “too similar” to the neutron shape to be physically acceptable. In this case, the overlap method simply serves as a mechanism to map our prior expectations for the types of background that seem reasonable into a range of allowed neutron numbers. The range of reasonable backgrounds upon which the overlap limits are based is, of course, a somewhat subjective notion which will vary from person to person. Still, there is some consistency in the background behaviours that people consider reasonable, and in the event that no other information is available, the overlap method allows a signal extraction to be carried out on this basis alone.

In the second case, the overlap limits are based on a firmer understanding of the expected background shape. If, for example, one were able to define even roughly the PDFs that might be used in a more conventional signal extraction, the overlap limits could be based on the range of background overlaps allowed by the background PDFs and their systematics. Used in this way, the overlap method will yield the “correct” answer in any situation where the conventional signal extraction would, and thus simply provides an additional method of evaluating systematic uncertainties (in a very conservative way). More than that, however, the overlap method “automatically includes” in that systematic uncertainty any background shape within that overlap range, not just those that would be considered in a traditional systematics determination. The overlap method thus has the potential to be correct even where a traditional analysis might not be. This generality, of course, comes at the cost of sacrificing some specific information about the expected background shape, presumably with an attendant increase in neutron uncertainty. One would not, therefore, opt for this approach when the systematics are small and well defined. When the systematics are not well defined, however, the overlap approach could provide a reasonable alternative.

Before proceeding further, it is useful to develop an intuition for the behaviour of the overlap variable. In general, of course, the overlap is larger for those backgrounds which have a larger fraction of their area under the neutron peak. By definition, the overlap of a flat line with the neutron shape is 1.0, while the overlap of the neutron shape with itself is 3.06 and the overlap of the assumed alpha shape (with the “assumed background subtraction” turned off) is 1.04. To provide further examples, a set of hypothetical “NNNA shapes” was prepared and their overlaps calculated; the results are shown in Figure 4.6. These shapes will also be used in the next section to test the response of the different signal extraction routines to unexpected shapes in the backgrounds. The maximum achievable overlap (for a single bin background directly beneath the neutron peak) is 4.99.

If the overlap limits are to be set using simply our intuition about reasonable behaviours of the overlap, Figure 4.6 is all that is required. In the opinion of the author, for example, shapes 2, 6, and 7 are too similar to the neutron shape to be physically reasonable, while shapes 1 and 5 (and their inverses) seem reasonable. Therefore, the overlap limits might be chosen as -1.2 and 1.2.

To proceed along the second path, it is necessary to determine the overlaps of the background PDFs that were used in the published NCD PRL. These PDFs consisted of a Monte Carlo alpha spectrum, with eight associated polynomial systematic uncertainties, and two “NNNA PDFs” produced by fitting Gaussian curves to the spectra of the known NNNA strings after subtracting the average spectrum from “clean” strings. As discussed earlier, the PDFs and their associated systematics were only vetted in the 0.4 - 1.4 MeV energy range. However, the produced PDFs do extend down to 0.2 MeV, and that full range was used in determining their overlaps.

[Figure 4.6: the nine hypothetical NNNA shapes vs. energy (MeV), each overlaid on the neutron spectrum, with overlaps: Shape 1, O = 1.17; Shape 2, O = 2.57; Shape 3, O = 0.94; Shape 4, O = 1.00; Shape 5, O = 1.22; Shape 6, O = 1.61; Shape 7, O = 2.21; Shape 8, O = 0.34; Shape 9, O = 0.26.]

Figure 4.6: The “NNNA” shapes used in calculating the sensitivity of the signal extraction methods to unexpected backgrounds. The 2005 24Na energy spectrum is overlaid with each background shape for comparison, and the overlap, O, of each shape with the neutron spectrum is also given. The overlap is seen to have the expected behaviour, with higher overlaps for those shapes which “look more” like the neutron shape.

The overlaps of the different PDFs are shown in Table 4.4.

PDF                                                   Overlap
Central Alpha Monte Carlo                             -0.14
Alpha MC +1σ Po alpha depth systematic                -0.24
Alpha MC -1σ Po alpha depth systematic                 0.31
Alpha MC +1σ Bulk alpha depth systematic              -0.48
Alpha MC -1σ Bulk alpha depth systematic               0.44
Alpha MC +1σ Drift time systematic                    -0.24
Alpha MC -1σ Drift time systematic                     0.03
Alpha MC +1σ Avalanche width offset systematic        -0.32
Alpha MC -1σ Avalanche width offset systematic         0.08
Alpha MC +1σ Avalanche gradient offset systematic     -0.14
Alpha MC -1σ Avalanche gradient offset systematic     -0.14
Alpha MC +1σ Po/bulk fraction systematic              -0.32
Alpha MC -1σ Po/bulk fraction systematic               0.11
Alpha MC +1σ Ion mobility systematic                  -0.14
Alpha MC -1σ Ion mobility systematic                  -0.14
Alpha MC +1σ Data cleaning systematic                  0.38
Alpha MC -1σ Data cleaning systematic                 -0.27
String 26 NNNA                                         0.48
String 0 NNNA                                          0.46

Table 4.4: The overlaps of the PDFs used in the signal extraction published in the NCD PRL. The systematic PDFs themselves are shown in [35].

The systematic uncertainties in the Monte Carlo alpha energy spectrum were parameterized as multiplicative, rather than additive, changes to the spectrum [35]. This means that the overlap of the alpha shape produced by combinations of these systematics will not necessarily lie within the overlap range spanned by the individual corrections. To check for this possibility, combinations of the different systematic uncertainties, with each of them randomly varied within its ±1σ constraints, were applied to the Monte Carlo alpha spectrum and the overlap recorded. The result is shown in Figure 4.7, where we see that overlap limits of -0.5 and 0.6 span the bulk of the possible overlaps. To be conservative, however (and to have symmetric overlap limits), we choose overlap limits of -1.0 and 1.0 as spanning practically all of the backgrounds allowed by the Monte Carlo.
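A sketch of this check is given below. The ±1σ systematic variations are represented here by their fully varied spectra, and the linear scaling between the central and ±1σ shapes as a function of the random Gaussian shift is an assumption of this sketch, not a documented detail of the analysis.

    # Sketch of the systematic-variation study: each multiplicative systematic
    # is randomly varied (Gaussian, in units of sigma), applied to the central
    # Monte Carlo alpha spectrum, and the overlap of the result is recorded.
    # The linear scaling between the central and +/-1 sigma shapes is assumed.
    import numpy as np

    def sample_systematic_overlaps(central, plus1sigma, minus1sigma, pdf_n,
                                   bin_width, n_samples=10000, seed=0):
        rng = np.random.default_rng(seed)
        overlaps = np.empty(n_samples)
        for k in range(n_samples):
            spectrum = central.copy()
            for up, down in zip(plus1sigma, minus1sigma):
                shift = rng.standard_normal()  # size of this variation, in sigma
                ratio = (up if shift >= 0 else down) / central
                spectrum *= 1.0 + abs(shift) * (ratio - 1.0)
            overlaps[k] = np.sum(spectrum * pdf_n) / (bin_width * np.sum(spectrum))
        return overlaps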

71 0.05

0.04

0.03

0.02 Relative Frequency of Occurance

0.01

0 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 Overlap

Figure 4.7: The spread in the overlaps of the different Monte Carlo alpha spectra admitted by the Monte Carlo systematic uncertainties.

In summary, then, we have produced two possible sets of overlap limits using two different techniques. The first technique, in which the overlap limits are chosen based on the author’s intuition for the range of background shapes that are physically reasonable, yields overlap limits of -1.2 and 1.2.

The second technique, which bases the overlap limits on the assumption that the true background should not be more like the neutron shape than what might be expected from the PDFs used in the NCD PRL signal extraction (including their systematic uncertainties), yields overlap limits of -1.0 and 1.0.

4.3 Testing the Methods

To ensure that the polynomial and overlap methods function as desired and to gain some intuition for how the fits behave when faced with unexpected backgrounds, it is helpful to carry out studies using Monte Carlo data. Beyond that, the methods can be tested using a blinded set of actual NCD data.

In the simplest tests, the polynomial and overlap methods were used to fit fake data sets composed only of neutrons and alphas drawn from the assumed distributions. Again, each data set contained on average 1000 neutrons and 7500 alphas below 1.5 MeV. As can be seen in Figure 4.8, both the polynomial and overlap methods have mean biases less than one event, and consistent with zero¹³. Recall that if the statistical spread in the bias distribution is correctly described by the fit uncertainties, the pulls distribution should have a width (“sigma”) of 1.0. As can be seen, the pulls distribution of the polynomial method is correct. The distribution of pulls for the overlap method has a width of less than one; this is not unexpected, however, as it simply shows that the spread in the chosen overlap limits was larger than the spread required to describe the background present in the simulated data sets. Repeating the overlap tests with overlap limits of -0.65 and 0.65, for example, increases the width of the overlap pulls distribution to 1.03, while increasing the overlap range to -1.2 to 1.2 narrows the width of the pulls distribution to 0.59.

Having established that both signal extraction methods work as anticipated on data drawn from well understood signal and background distributions, it is now interesting to study their behaviour when unexpected background shapes are present. To this end, Monte Carlo ensemble tests were carried out with “fake NNNAs” added to the data. The NNNAs were drawn from the spectra shown in Figure 4.6, and an average of 200 NNNAs was added to each fake data set (in addition to 1000 neutrons and 7300 alphas drawn from the expected distributions). The choice of 200 NNNAs is meant to be illustrative; the effect of the NNNAs should scale roughly as the number present.

Each fake data set was then fitted using the polynomial method, the overlap method¹⁴, and a “standard method,” which was a simple binned likelihood fit to the expected neutron and alpha PDFs. Comparisons between the results of the polynomial, overlap and standard fits in both bias and pull are shown in Figures 4.9-4.14.

In these tests, it appears that both the overlap and polynomial methods react to the “unexpected” backgrounds in the expected manner. In the overlap fits, the presence of the unexpected background shape shifts the position of the true neutron number with respect to the overlap-defined neutron limits (increasing the size of the “biases”), while at the same time increasing the neutron limit spread

¹³ In order to easily compare the overlap method with the polynomial method, a “fitted number of neutrons” for the overlap method was defined as the average of the upper and lower neutron number limits, with a “fit uncertainty” equal to half of the spread between the limits. Note that there is no reason to expect the resulting distributions to have small biases; all that should really be expected is that if the true background overlap lies between the overlap limits, the overlap pulls should lie between -1 and 1.
¹⁴ The overlap limits used in these comparisons were -1.0 and 1.0.

[Figure 4.8: four panels; (a) Polynomial Bias; (b) Polynomial Pull; (c) Overlap Bias; (d) Overlap Pull.]

Figure 4.8: The results of simple Monte Carlo ensemble tests of the polynomial and overlap methods (the overlap limits were -1.0 and 1.0). Each fake data set consisted of, on average, 1000 neutrons and 7500 alphas below 1.5 MeV drawn from the assumed alpha and neutron spectra. The fitted width of the polynomial pulls distribution is 1.00005 ± 0.007.

[Figure 4.9: polynomial fit bias vs. standard fit bias for NNNA shapes 1-9.]

Figure 4.9: Comparison of the effect of potential NNNAs on polynomial and standard fit biases. Note that the average biases of the polynomial method are significantly smaller than in the standard signal extraction, while the RMS spread in the bias (which should be approximately equal to the fit uncertainty) is larger for the polynomial method.

[Figure 4.10: overlap fit bias vs. standard fit bias for NNNA shapes 1-9.]

Figure 4.10: Comparison of the effect of potential NNNAs on overlap and standard fit biases. The behaviour of the standard and overlap fits can be seen to be quite correlated. The overlap biases can be seen in several cases to be larger than the biases in the standard fits. It is difficult, however, to make meaningful comparisons with the means of the overlap fit ranges, as the chief way in which the overlap method responds to unexpected backgrounds is by increasing the width of the neutron confidence interval. This should be more evident in the pulls distribution (Figure 4.13), and in the fraction of test fits in which the true number of neutrons is contained in the neutron confidence interval (Table 4.5).

[Figure 4.11: polynomial fit bias vs. overlap fit bias for NNNA shapes 1-9.]

Figure 4.11: Comparison of the effect of potential NNNAs on overlap and polynomial fit biases. As expected from the previous comparisons, the polynomial fits have significantly smaller average biases.

[Figure 4.12 panels: scatter plots of Polynomial Fit Pull versus Standard Fit Pull for NNNA Shapes 1–9, with axes spanning -5 to 5; per-panel mean and RMS statistics omitted.]

Figure 4.12: Comparison of the effect of potential NNNAs on polynomial and standard fit pulls. Some deviation from an RMS of 1.0 can be seen in both the standard and polynomial fits when NNNAs are present; however, the deviations are not particularly large, indicating that the fit uncertainties do a reasonably good job of predicting the statistical spread in the fit values even when NNNAs are present. The polynomial fits have significantly smaller average pulls.

[Figure 4.13 panels: scatter plots of Overlap Fit Pull versus Standard Fit Pull for NNNA Shapes 1–9, with axes spanning -5 to 5; per-panel mean and RMS statistics omitted.]

Figure 4.13: Comparison of the effect of potential NNNAs on overlap and standard fit pulls. Here the “pull” of the overlap is defined as the mean of the confidence interval divided by half the width of the interval. Based on the definition of the overlap neutron limits, all that should really be expected of such a “pull” is that it should be between -1 and 1 when the background overlap falls between the overlap limits. Still, the mean pull of the overlap fits is seen to be smaller than the mean pull of the standard fit (in spite of the larger average overlap biases in Figure 4.10), indicating that the widths of the overlap confidence intervals are properly increasing in the presence of the NNNAs to help to take their effect into account.

[Figure 4.14 panels: scatter plots of Polynomial Fit Pull versus Overlap Fit Pull for NNNA Shapes 1–9, with axes spanning -5 to 5; per-panel mean and RMS statistics omitted.]

Figure 4.14: Comparison of the effect of potential NNNAs on overlap and polynomial fit pulls. In some cases the polynomial method has a smaller mean bias and in others the overlap fit has a smaller mean bias. While comparisons to the “pulls” of the overlap fits should again not be taken too seriously, this does indicate that the preferable method can depend on the shape of the background.

(which reduces the size of the “pulls”). The overlap limits used in these fits (-1.0, 1.0) are small compared to the overlaps of most of the NNNA test shapes; as a result, the increase in the width of the neutron confidence region is not necessarily sufficient to encompass the change produced by the NNNAs. Table 4.5 shows the fraction of test data sets for which the true number of neutrons falls within the overlap neutron confidence intervals for overlap fits using different overlap limits. As can be seen in the Table, the neutron limits tend to encompass the true neutron number provided that the overlap limits encompass (or almost encompass) the background overlap, as expected.

Overlap Limits                (-1.0, 1.0)   (-1.2, 1.2)   (-2.0, 2.0)
NNNA Shape (Overlap)
  none (–)                    0.85          0.92          1.00
  1 (1.17)                    0.79          0.87          1.00
  2 (2.57)                    0.11          0.19          0.89
  3 (0.94)                    0.76          0.87          1.00
  4 (1.00)                    0.76          0.86          1.00
  5 (1.22)                    0.65          0.77          1.00
  6 (1.61)                    0.52          0.67          0.99
  7 (2.21)                    0.19          0.31          0.94
  8 (0.34)                    0.96          0.99          1.00
  9 (0.26)                    0.96          0.99          1.00
No NNNA “uncertainty”         69            97            210

Table 4.5: The fraction of Monte Carlo ensemble tests in which the true neutron number was contained in the neutron confidence region defined by the overlap method. It can be seen that the overlap fit limits have a good chance of encompassing the true neutron number provided that the overlap limits encompass, or nearly encompass, the background overlap. The effect is not perfect because the overlap of a specific NNNA background can differ from the overlap of the underlying distribution due to statistical fluctuations. The average “uncertainties” (the half widths of the neutron confidence intervals) for fits to Monte Carlo data sets with no NNNAs added are shown to give an idea of how they change with overlap limits.
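For illustration, the coverage fractions in Table 4.5 amount to simple bookkeeping over the ensemble of fits. The sketch below tallies such a fraction; the confidence intervals used here are randomly generated stand-ins, not the actual Monte Carlo fit results.

```python
# Sketch: tallying a Table 4.5-style coverage fraction from an ensemble of
# overlap fits. The intervals below are synthetic stand-ins for the real
# Monte Carlo ensemble results.
import numpy as np

rng = np.random.default_rng(0)
n_true = 1200  # true neutron count injected into each fake data set

# Hypothetical overlap-fit neutron confidence intervals for 1000 ensemble fits.
lower = n_true + rng.normal(-90.0, 60.0, size=1000)
upper = n_true + rng.normal(+90.0, 60.0, size=1000)

covered = (lower <= n_true) & (n_true <= upper)
print(f"coverage fraction = {covered.mean():.2f}")
```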

The polynomial tests also react as expected: as seen in Table 4.6, in the presence of the hypothetical NNNAs, higher order polynomials are chosen, on average, than when the NNNAs are not present. The fits also appear to be correctly adapting to the unexpected backgrounds, as seen in the reduced average biases and pulls compared to the standard fits.

Finally, the inclusion of the “standard” fitting technique in the ensemble tests allows comparison of the performance of the new methods with standard techniques.

NNNA         None   1      2      3      4      5      6      7      8      9
Avg. Order   1.76   1.99   2.45   2.10   1.98   1.95   3.29   2.58   2.97   3.04

Table 4.6: The average polynomial order selected in the ensemble tests which included hypothetical NNNAs (the number in the first row refers to the number of the NNNA shape as shown in Figure 4.6). It can be seen that the polynomial method chooses higher order polynomials, on average, when the NNNAs are present.

Both the polynomial and overlap fits have smaller pulls in the presence of the NNNAs than does the “standard” fit, at the cost of larger fit uncertainties. In the overlap method, the fit biases tend to be as large as or larger than those of the standard fits, but with the fit “uncertainties” in the overlap fits increasing to reduce the pull. The polynomial method tends to have smaller biases and slightly larger uncertainties than the standard fits. In general, then, both the polynomial and overlap methods appear to be fulfilling their intended purpose of increasing the robustness of the fit results against unexpected background shapes. As might be expected, this increased robustness, which is achieved by relying less strongly on prior knowledge of the background energy spectrum, comes at the cost of increasing the fit uncertainties.

Tests Using Blind Data

Finally, the new fit procedures were tested using a blinded set of NCD data. As described earlier, this blind data set consisted of a subset of about 93% of the NCD data from which an unknown fraction (<30%) of individual events was randomly excluded, and for which the data cleaning cuts designed to reject secondary events resulting from the passage of a muon through the detector were intentionally spoiled to allow an unknown number of single muon follower events (typically neutrons) into the data set. Only data from the 3He strings that were included in the NCD PRL were included in this blind data set.

The best fits to the blind data set using the polynomial method and the overlap method with overlap limits -1.0 and 1.0 are shown in Figure 4.15. The polynomial fit chose order five as the best description of the background, with 1188 ± 73 neutrons fitted and a χ2 of 32.0/45, while the overlap method returned (1290, 1537) as the neutron confidence interval. The standard fit with the 4He background PDF fitted 1399 ± 57 neutrons with a χ2 of 64.5/50. Based on Figure 4.15, it appears that the discrepancy between the polynomial fit and the other fit methods is due to an apparent “step” in the background spectrum just below the neutron peak. The standard fit, lacking the freedom in the PDFs included in the NCD PRL, was unable to account for this background and “undershot” the data in this area, fitting a larger number of neutrons as a result. As can be seen in Figure 4.15 and Table 4.7, the lower order polynomial fits also encountered this problem; as the polynomial order increases, the background shape begins to form a “bump” at energies just below the neutron peak, and the number of fitted neutrons decreases. A bump can also clearly be seen in the upper and lower residual backgrounds determined by the overlap fit. In looking at Figure 4.15 (b), it is clear that the overlap limits used were not sufficient to encompass the background present; the lower residual background clearly exhibits the expected inverted neutron peak, but the upper residual background does not show a significant (positive) peak. We thus expect the upper limit on the neutron confidence region to be safe while the lower limit should be too high. And, in comparison with the polynomial fit, at least, this is what is observed.

Polynomial Order   Fitted Neutrons   Fit χ2/NDF
0                  1287 +59/−58      100.4/50
1                  1370 +62/−61      63.0/49
2                  1308 +64/−63      47.0/48
3                  1278 +67/−66      44.8/47
4                  1286 +68/−67      44.5/46
5                  1188 +73/−73      32.0/45
6                  1190 +73/−73      31.9/44

Table 4.7: The results of fits to the blind data set using different polynomial orders to represent the background. Order 5 was selected by the polynomial fitting procedure as the best description of the background in the data.

That the chosen overlap limits do not appear to adequately describe the background present in the blind NCD data is disappointing, and indicates that the true background in the NCD data was more “neutron-like” than expected based on the alpha background model. Indeed, as shown in Figure 4.16 (a), even using -1.2 and 1.2 as the overlap limits (as suggested earlier by subjectively examining the test NNNA shapes) appears to give too low an upper background limit.

[Figure 4.15 panels: (a) Polynomial Fit, showing the blind data with the 1st, 3rd, and 5th (best) order polynomial backgrounds; (b) Overlap Fit, showing the full NCD data with the 4He-derived alpha shape and the upper and lower background limits; (c) Standard Fit, showing the blind data with the standard fit and its background; event counts per energy bin versus Energy (MeV).]

Figure 4.15: The results of fits to the blind data set. The polynomial fit results including the best fit 5th order background are shown in (a). The 1st and 3rd order polynomial backgrounds are also shown for comparison. The upper and lower limit backgrounds determined by the overlap limit are shown in (b), along with the 4He-based alpha shape that was subtracted in calculating the overlaps. Finally, the results of the “standard” PDF-based fit (including only the 24Na neutron spectrum and 4He alpha spectrum) are shown in (c) for comparison.

This seems to be the result of a step or bump in the background at around 0.5 MeV, which increases the fraction of the residual background that falls beneath the neutron peak and hence the background overlap15.

To continue to use the overlap method in the NCD signal extraction, then, a new set of overlap limits must be produced. This can be accomplished by changing the limits and examining the upper and lower limit backgrounds until evidence of a neutron peak is shown in both16. Carrying out this procedure yields overlap limits of -0.3 and 1.7, the upper and lower limit backgrounds of which are shown in Figure 4.16 (b)17. The increase in the width of the neutron confidence interval associated with this change in overlap limits is shown in Table 4.8.

[Figure 4.16 panels: (a) Overlap Limits (-1.2, 1.2) and (b) Overlap Limits (-0.3, 1.7), each showing the full NCD data, the 4He-derived alpha shape, and the upper and lower background limits; Data Events / 50 keV versus Energy (MeV).]

Figure 4.16: The upper and lower limit backgrounds of fits to the blind data with different overlap limits. In (a), the upper limit overlap does not show definite evidence of the neutron shape, while it does in (b). Also, as the neutron “dip” in (a) appears to be larger than required, the lower overlap limit has also been raised between (a) and (b). Therefore, -0.3 and 1.7 seem to be a reasonable set of overlap limits to describe the background present in the blind data.

15 It should be noted that the observed effect could also be due to the higher energy “shoulder” in the neutron spectrum being much larger in the data than in the 24Na PDF. In this case, the adaptive fit techniques would be attempting to accommodate the inaccuracy in the neutron PDF as background, and hence biasing the fits. There is, however, no physical reason that the NCD neutron energy spectrum should be systematically different during the 24Na running than during neutrino running, while there are plenty of plausible mechanisms to account for the observed distortion as a background effect. It is therefore reasonable to proceed under the assumption that what is observed is an effect in the background and not in the signal.

16 That this procedure is possible (and necessary) points out the fact that the overlap metric is not identical to what appears to be “neutron like” by eye. The latter seems to weight the few bins around the neutron peak much more heavily. It is possible, therefore, that an improved comparison metric could be devised that corresponds more closely to how the judgement of “neutron-ness” is made by eye.

17 Note that the neutron number was not used (or known) in setting the overlap limits “by eye,” so there was little risk of consciously (or unconsciously) biasing the results.

Overlap Limits   Neutron Confidence Interval
(-1.0, 1.0)      (1290, 1537)
(-1.2, 1.2)      (1253, 1565)
(-0.3, 1.7)      (1119, 1450)

Table 4.8: The neutron confidence intervals extracted from the blind data set using different overlap limits. It can be seen that as the upper overlap limit is raised (allowing more neutron shape to appear in the upper limit background, as seen in Figure 4.16), the lower limit of the neutron confidence region decreases.

Having to define the overlap limits using the blind data in this way is frustrating and highlights the difficulty in setting these limits. This is not, however, necessarily a weakness of the method itself as much as it is a symptom of the same poor understanding of the background shape that motivated consideration of the overlap method in the first place. Thus, while it would certainly have been preferable to use overlap limits that were set independently of the data, it turns out that this was not possible because of the same unexpected behaviour in the background that was feared when the overlap method was developed.

With this third choice of overlap limits, the polynomial and overlap methods agree well. One further check that can be done is to examine the stability of the fit results when known NNNAs and alpha events are added to the data sets. Given that the rate of capture of neutral current neutrons on the different 3He strings is expected to be essentially identical18, including the data from J3 and N4 (the “known” NNNA strings) in the blind data set is expected to increase the number of detected neutrons by approximately 2/30 = 6.7%. Adding the 4He data should not change the neutron number but will change the statistical fluctuations of the background. Comparing fits to the blind data before and after adding these additional events is not straightforward, as the fit results are expected to be partially statistically correlated. As a result, Monte Carlo ensemble tests were used to predict the expected change and statistical fluctuation in the fitted neutron number in these tests, under the assumption that the added events were either all alphas (in the case of the 4He events) or the expected number of neutrons with the balance of the added events alphas (in the case of

18 Monte Carlo studies of uniform neutron capture indicate that the number of neutrons captured by each 3He string is almost exactly identical, in spite of the different string lengths. Non-uniform neutron backgrounds will make this estimate slightly incorrect, but it is still more than good enough for the present comparison.

the NNNA strings). In the case of the 4He events we fully expect the observed change in the fit parameters to agree with the Monte Carlo predictions19, while for the NNNA data the changes in the fit results will agree with the Monte Carlo predictions if the added NNNAs do not systematically affect the fits. The predicted and observed changes in the fit parameters under the addition of the data from the NNNA strings and the data from the 4He strings are shown in Table 4.9.

                                       Observed Change   Expected Change
NNNAs    Polynomial Neutron Number            64             81 ± 18
         Overlap Upper Neutron Limit         142             81 ± 17
         Overlap Lower Neutron Limit         -44             74 ± 25
Alphas   Polynomial Neutron Number           -48             10 ± 50
         Overlap Upper Neutron Limit         -10            3.2 ± 32
         Overlap Lower Neutron Limit         -33            −24 ± 50

Table 4.9: The changes observed after adding either the data from the 4He strings (“Alphas”) or the data from the known NNNA strings (“NNNAs”) to the blind data set compared to the changes expected based on Monte Carlo studies of adding similar numbers of known events (neutrons and alphas) to the (real) blind data set. When the NNNA string data were added, the number of neutrons returned by the polynomial method increased by roughly the amount expected from the Monte Carlo, indicating that the NNNAs did not pull the polynomial fits any more than alpha events (although the chosen polynomial order did increase by one). On the other hand, adding the NNNAs had a much more significant effect on the overlap method than would be expected from the addition of alpha and neutron events; that is to say, the addition of background events with an unexpected distribution caused the overlap method to return a wider neutron confidence interval, as it was designed to do. In contrast, when the 4He events were added to the blind data, both the polynomial and overlap fits behaved as expected, with the change in the polynomial fit being 1.2σ from expectation. The non-zero central value in the expected change in the polynomial neutron number with the addition of the 4He data was not expected, and is likely due to a statistical fluctuation in the (real) blind data that is “covered up” by the addition of more background events; repeating the tests with random initial data sets in place of the true blind data yielded expected changes in the polynomial neutron number consistent with zero. The asymmetric expected change in the overlap neutron limits is the result of the asymmetric overlap limits (-0.3, 1.7) being used.
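The significances quoted in the caption follow directly from the table entries. A quick check, with the values transcribed from Table 4.9:

```python
# Significance of each observed change in Table 4.9: (observed - expected) / sigma.
rows = [
    ("NNNAs:  polynomial neutron number", 64, 81, 18),
    ("NNNAs:  overlap upper limit",       142, 81, 17),
    ("NNNAs:  overlap lower limit",       -44, 74, 25),
    ("Alphas: polynomial neutron number", -48, 10, 50),
    ("Alphas: overlap upper limit",       -10, 3.2, 32),
    ("Alphas: overlap lower limit",       -33, -24, 50),
]
for name, obs, exp, sigma in rows:
    print(f"{name:35s} {(obs - exp) / sigma:+.1f} sigma")
```

The polynomial fit with added alphas comes out at −1.2σ, matching the 1.2σ deviation cited in the caption.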

4.4 Robust Signal Extraction Summary

Two new signal extraction techniques, the polynomial method and the overlap method, have been developed for use in extracting the number of neutron events from the NCD energy spectrum.

19 In fact, this is not really a valid test of the overlap method, as the 4He spectrum was used to produce the alpha shape that is assumed in calculating the overlaps. In the case of the overlap method, then, the addition of the 4He events to the blind data can test the implementation of the overlap algorithm, but not the algorithm itself.

The methods were explicitly developed to capitalize on the unique and well-defined neutron energy spectrum, while minimizing their dependence on specific prior knowledge of the background spectrum. This has the effect of making the polynomial and overlap methods more robust than standard PDF fits against unexpected shapes in the background. In the case of the polynomial fits, the background fit is able to adapt to an unexpected background shape by choosing higher order polynomials if necessary. In the case of the overlap method, the presence of such an unexpected shape results in the inflation of the neutron confidence region such that the true number of neutrons remains within the confidence region (provided that the overlap of the actual background lies within the overlap limits). In both methods, the price for the increased fit robustness is an increase in the fit uncertainty compared to a standard PDF fit.

The two methods have very different conditions for yielding the correct (within uncertainty) number of neutrons. The polynomial method will be correct provided that the background is smoothly varying enough to be well described by a polynomial, while the overlap method is correct if the overlap of the background in the data falls between the overlap limits. This means that the polynomial method can be expected to work well for slowly varying backgrounds, while the overlap technique might be better for more rapidly varying backgrounds.

Three different methods of determining the overlap limits were explored. Examining the background PDF used in the signal extraction published in the NCD PRL yielded overlap limits of -1.0 and 1.0, while examining a set of example NNNAs using the author’s experience in modelling the NCDs and their backgrounds suggested that -1.2 and 1.2 might be useful limits. Examination of the blind NCD data, however, suggested that the presence of a “bump” in the background at slightly lower energy than the neutron peak rendered both of these ranges too narrow20. Examining the upper and lower background limits in the blind data set suggested that overlap limits of -0.3 and 1.7 should span the actual background present in the blind data set, and hence (likely) in the full NCD data set.

20 Note that this observation does not necessarily imply that the published NCD results were significantly biased; it appears that the narrower energy range used in fitting the published data, combined with the freedom in the published PDFs, allows the fit to adapt to any background “bump” in much the same way as the polynomial method does.

Both the polynomial and overlap methods have been tested using Monte Carlo data, including data containing example NNNAs, and seem to function as designed. Tests on a blind set of NCD data also yielded reasonable results, once the overlap limits were adjusted as described above. The fits also reacted as expected both to the addition of data from the known NNNA strings and to the inclusion of data from the 4He strings with the blind data set. In all, both the polynomial and overlap methods appear to function as designed, and as a result these methods appear to represent valid new techniques for extracting the number of neutrons from the poorly defined NCD background.

Chapter 5. Robust NCD Measurement of the Total Solar Neutrino Flux

Having developed two methods with some ability to extract the number of neutrons from a poorly defined background, it is now possible to use these methods to extract the number of neutrons from the full NCD signal, and hence to make an NCD-only deduction of the neutral current flux. In doing so, additional systematic uncertainties associated with the use of the 24Na PDF as the neutron shape must be evaluated and a number of conversion factors dealing with detector live time, efficiency, and neutron backgrounds must be derived.

5.1 The NCD Neutron Number

Applying the polynomial method to the full NCD data set yields 1156 +78/−77 neutrons (from a 5th order polynomial with a fit χ2 of 33.9 for 45 degrees of freedom), while applying the overlap method (with overlap limits of -0.3 and 1.7) to the full NCD data yields a neutron confidence interval of (1058, 1394). The fitted polynomial and overlap limit backgrounds are shown in Figure 5.1, while the results of the polynomial fits of different order are shown in Table 5.1.

From these results it seems that the full NCD data are substantially similar to the blind data, as expected. The spectral feature which appeared in the overlap backgrounds at around 0.5 MeV in the blind data appears to still be present in the full data set. The polynomial method still chooses a higher order background, presumably to accommodate this feature, and the overlap limits chosen based on the blind data result in overlap limit backgrounds which appear by eye to nicely bracket the range of reasonable backgrounds. The polynomial and overlap results are in good agreement, with the polynomial result being more precise.

[Figure 5.1 panels: (a) Polynomial Fit, showing the full NCD data with the best (5th order) polynomial background; (b) Overlap Fit, showing the full NCD data with the 4He-derived alpha shape and the upper and lower background limits; event counts per energy bin versus Energy (MeV).]

Figure 5.1: The results of fits to the full NCD data set. The polynomial fit results including the best fit 5th order background are shown in (a). The upper and lower limit backgrounds determined by the overlap limit are shown in (b), along with the 4He-based alpha shape that was subtracted in calculating the overlaps.

Polynomial Order   Fitted Neutrons   Fit χ2/NDF
0                  1213 +61/−61      115.7/50
1                  1322 +65/−64      56.7/49
2                  1258 +68/−66      43.8/48
3                  1246 +71/−70      43.3/47
4                  1245 +72/−72      43.4/46
5                  1156 +78/−77      33.9/45
6                  1156 +78/−77      33.9/44

Table 5.1: The results of fits to the NCD data set using different polynomial orders to represent the background. Order 5 was selected by the polynomial selection procedure as the best description of the background in the data.

5.2 Neutron Number Systematic Uncertainties

A significant advantage to the use of the overlap and polynomial methods in determining the number of neutrons detected by the NCDs is that many possible fit systematic uncertainties, especially those involving the shape of the background spectrum, are automatically treated by the fits and are therefore included in the fit “statistical” uncertainties. In fact, the only uncertainties in the extracted number of neutrons which must be treated independently of the fit itself are the systematic uncertainties associated with the use of the 24Na spectrum to represent the total neutron spectrum in the data. The potential contributions to that systematic are considered individually below.

5.2.1 Energy Resolution

As the 2005 24Na spike was taken over a relatively short period of time (∼4 days), it is possible that the energy calibration of the NCDs was different during the 24Na data taking period than it was, on average, during neutrino data taking. In particular, it is possible that slow drifts in detector or electronics gain, which are periodically corrected by energy calibrations, could have resulted in a broadening of the energy resolution in the data compared to that of the 24Na spike data.

As described earlier, the NCD energy scale was set by two calibrations. First, calibration pulses were injected into the pre-amplifiers as part of the “NCD ECA” calibrations in order to measure the gain and offset of each electronics channel, thus providing a conversion between Q, the charge recorded by the shaper, and q, the charge collected by the anode wire. Second, neutron source runs were used to determine the relationship between q, the collected charge, and E, the energy deposited in the NCD gas, by fitting the position of the neutron peak in “q,” and setting that peak position to be 764 keV (note that some average space charge effect was included in this conversion). The energy of an NCD event with shaper charge Q was thus taken to be [44]

    E = Fq2E (Q − a0) / a1 ,        (5.1)

where a0 and a1 are the ECA offset and slope parameters, and Fq2E is the charge-to-energy conversion taken from the neutron calibration data. The calibration constants were determined individually for each string, and were applied to data taken from the time of that calibration until the next calibration. Note that it was not possible to combine a1 and Fq2E because the ECA and neutron calibrations were not done at the same time, and because there is a strong correlation between a0 and a1.
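For concreteness, Eq. 5.1 amounts to the following two-step conversion. The constants used here are illustrative placeholders only; the real values were determined per string and per calibration period.

```python
# Sketch of the Eq. 5.1 energy calibration: shaper charge Q -> anode charge q
# (via the ECA constants a0, a1) -> deposited energy E (via F_q2E from the
# neutron source calibration). All numerical values are made up.
def ncd_event_energy(Q, a0, a1, F_q2E):
    """Return E = F_q2E * (Q - a0) / a1 (Eq. 5.1)."""
    q = (Q - a0) / a1   # ECA conversion: shaper charge to collected charge
    return F_q2E * q    # neutron calibration: collected charge to energy

# Illustrative constants chosen so that shaper ADC bin 130 lands near the
# 764 keV neutron peak, as in the text.
print(ncd_event_energy(Q=130.0, a0=2.0, a1=0.13, F_q2E=0.776))  # ~764 (keV)
```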

If any array-wide drift occurred in either the electronics gain or the NCD gain over time, one would expect to see a corresponding trend in the calibration constants described above. However, as can be seen in Figure 5.2, no such trend exists. Instead, it appears that the calibrations changed in a random manner within a relatively narrow spread. This suggests that no significant array-wide drifts occurred, and that the changes between the measurements simply reflect fluctuations in the energy scale. Depending on the time scale of these fluctuations, then, the average energy resolution during the 24Na data taking period may have been the same as, or narrower than, the average resolution during neutrino running.

To estimate the systematic uncertainty due to the 24Na possibly having a narrower energy resolution than the data, the 24Na distribution was widened by convolving it with the distributions shown in Figure 5.2 (this was done by shifting the energy of each event in the PDF by an amount determined by randomly sampling the Figure 5.2 distributions). This is equivalent to assuming that the 24Na energy resolution was perfect and that Figure 5.2 gives a good measure of the relative width of the energy resolution in the data. The fits were repeated using this broadened neutron PDF, and the changes in the neutron fit parameters were taken as the associated systematic uncertainties1.

As can be seen in Figure 5.3, the changes in the Fq2E parameter had a greater effect on the neutron PDF than did the changes in the a0 and a1 parameters, and this was reflected in a significantly larger effect of Fq2E on the neutron fits. In the case of the polynomial fit, for example, convolving the PDF with the Fq2E parameter changes increased the number of fitted neutrons by 1.9%, while the a0 and a1 changes increased the fit value by only 0.4%. Because the Fq2E PDF broadening is independent of energy while the a0 and a1 broadening is energy dependent, these tests are not evaluating exactly the same systematic, and hence both effects are included in Table 5.2 and the overall uncertainty.

1 Here, and in the evaluation of other systematic uncertainties where the systematic effect was applied by a Monte Carlo procedure, there was not a unique systematically varied 24Na PDF due to fluctuations in the Monte Carlo systematic uncertainty application. In these cases, the “apply systematic and refit” procedure was repeated 100 times, and a distribution of systematically varied fit parameters created. The systematic uncertainty was then set by defining the range about each central value parameter which contained 68% of the systematically varied fit parameters, and the half-width of that range was taken to be the systematic uncertainty.

[Figure 5.2 panels: (a), (b), and (c), histograms of the fractional changes in the calibration constants for “All Runs” and “24Na Runs”; Number of Occurrences versus Fractional Change in Fq2E (a) and Fractional Change in Peak Energy (b and c).]

Figure 5.2: The fractional changes in the NCD energy calibration constants between subsequent measurements. The changes shown are between subsequent measurements of that constant on the same string; however, the changes for all of the analyzed strings are shown on the same plot. The changes in Fq2E are shown in (a). Because the changes in the a0 and a1 parameters were correlated, (b) and (c) show the fractional changes in the reported energy for an event in shaper ADC bin 130 (near the neutron peak) and ADC bin 20 (near the low end of the neutron tail), respectively, that result from the correlated changes of these parameters with each measurement. It can be seen that the changes in the a0 and a1 parameters have a larger fractional effect on lower energy events, primarily due to the proportionately larger influence of the a0 offset on these events. The changes in the calibration constants between measurements before and after the 2005 24Na spike are also shown for comparison. The means and RMS’s shown are for the full set of neutrino runs.
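A minimal sketch of this “smear and refit” procedure, under the assumptions stated in the comments, is given below. The fit function and input arrays are hypothetical stand-ins for the real signal extraction and calibration data; the structure (jitter each event, refit, take the 68% half-width over 100 repetitions) follows the description above and footnote 1.

```python
# Sketch of the resolution systematic: jitter each 24Na PDF event energy by a
# fractional change sampled from the measured calibration-change distribution
# (Figure 5.2), refit, and take the half-width of the central 68% of the
# refitted values as the systematic uncertainty.
import numpy as np

def smear_and_refit(event_energies, frac_changes, fit_neutrons, n_rep=100, seed=1):
    rng = np.random.default_rng(seed)
    refits = []
    for _ in range(n_rep):
        jitter = rng.choice(frac_changes, size=event_energies.size)
        refits.append(fit_neutrons(event_energies * (1.0 + jitter)))
    lo, hi = np.percentile(refits, [16, 84])
    return 0.5 * (hi - lo)  # systematic uncertainty on the fit parameter

# Toy illustration: the "fit" is just a count in a window around the peak.
rng = np.random.default_rng(0)
energies = rng.normal(0.764, 0.05, size=17000)   # toy 24Na events (MeV)
changes = rng.normal(0.0, 0.01, size=500)        # toy Figure 5.2 distribution
print(smear_and_refit(energies, changes, lambda E: ((E > 0.65) & (E < 0.90)).sum()))
```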

[Figure 5.3 panels: (a) Fq2E Changes and (b) a0 and a1 Changes, each showing the original 24Na energy PDF and the PDF after convolution; 24Na Events / 25 keV versus Energy (MeV).]

Figure 5.3: The change in the 24Na energy PDF after smearing with (a) the observed changes in the Fq2E calibration constant and (b) the observed changes in the a0 and a1 constants. The smearing was accomplished by randomly choosing, from the distributions used to create Figure 5.2, a change in the relevant energy calibration constant for each event in the 24Na. In the case of the a0 and a1 constants, applying the change to the 24Na event required converting the event energy back to q using a representative Fq2E. After this, the “old” a0 and a1 constants were used to determine Q, and the “new” a0 and a1 constants (and the same representative Fq2E) were then used to convert that Q back to the (jittered) E. In both of these smearings, no attempt was made to correlate the string from which the event was taken with the string from which the change in calibration constant occurred.

5.2.2 Energy Scale

It is also possible that the calibration of the energy scale was slightly different during 24Na running than it was, on average, during neutrino running. The possible change in the energy scale between the 24Na data and the neutrino data can be estimated by comparing the mean change in the calibration parameters before and after the 24Na spike and during neutrino running (i.e. by comparing the means of the distributions shown in Figure 5.2). The mean change in the Fq2E parameter between measurements before and after the 24Na spike is 0.1%, while the corresponding change due to a0 and a1 is 0.1% in ADC bin 20 and 0.03% in ADC bin 130 (see the Figure 5.2 caption for approximate calibrations of these ADC bins). In the total data, the mean change in Fq2E is -0.2%, and the mean changes due to a0 and a1 are less than 0.1%. The difference in the mean energy scale between the 24Na spike and the full NCD data (dominated by the Fq2E) can therefore reasonably be assumed to be less than 0.3%. To estimate the effect of this potential change in energy scale on the number of fitted neutrons, the 24Na PDF was shifted up and down (event-by-event) in energy by 0.3% and the data re-fitted. The largest change in each neutron fit parameter as a result of these shifts was taken to be the energy scale related systematic uncertainty associated with that parameter; these are tabulated in Table 5.2.

5.2.3 MUX Threshold Effects

During the time of the 2005 24Na spike, the MUX thresholds of all of the analyzed strings were set to their nominal values. During the rest of the neutrino running period, however, the MUX thresholds on various strings were occasionally raised by different amounts to suppress occasional periods of increased noise. The effect on the data is expected to be small, as only 2.8% of neutron events are expected to have been captured by one of these strings during a time when the threshold was raised [45]. Nevertheless, as can be seen in Figure 5.4, the neutron spectrum will be different on strings with higher MUX thresholds than in the 24Na PDF.

To investigate the possible effect of this MUX threshold distortion on the signal extraction, the 24Na PDF was rebuilt, with events randomly rejected (based on the known fraction of array live time spent in each raised MUX threshold setting and the energy dependent raised MUX threshold efficiencies inferred from the spectra shown in Figure 5.4) to simulate 24Na data taking with MUX threshold live time fractions equivalent to the neutrino data taking period. The effect of the simulated threshold increases on the neutron PDF is so small as to be indistinguishable by eye, and the smallness of the change is reflected in the effect on the neutron fits. In fact, as seen in Table 5.2, the spectral effect of the raised MUX thresholds changed the neutron fit parameters by 0.2% or less.

[Figure 5.4: Calibration Event Rate (Normalized) versus Energy (MeV) for the nominal, +6, and +12 MUX threshold settings.]

Figure 5.4: The summed spectrum of AmBe source data at several positions within the SNO detector. For each source position, data were taken with all of the MUX thresholds set to their nominal values, raised by 6 units and raised by 12 units (a “MUX threshold unit” is the software setting used to determine the MUX threshold). As the MUX threshold is increased, the neutron spectrum is distorted as more and more lower energy events fail to trigger the MUX (the curves in the Figure are normalized such that the neutron peak heights are equal; it may actually be that even at the peak energies the MUX efficiency is reduced.)

5.2.4 24Na Statistics

The 2005 24Na spike data contain approximately 17,000 events in the neutron energy region. As this is a large but not overwhelming number of events, the statistical uncertainty in the 24Na PDF could have an important effect on the signal extraction. To test this, the bin entries of the 24Na PDF were repeatedly jittered (according to Poisson statistics) and the data re-fitted with these new PDFs. As shown in Table 5.2, this leads to a systematic uncertainty of just less than 1.0%.
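For illustration, a minimal sketch of the Poisson jittering follows, with a toy binned PDF standing in for the real 24Na histogram; the actual refitting machinery is not reproduced here.

```python
# Sketch of the PDF-statistics systematic: repeatedly fluctuate the 24Na PDF
# bin contents according to Poisson statistics, then refit the data with each
# jittered, renormalized PDF. The binned PDF here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)
pdf_counts = rng.poisson(17000 / 60.0, size=60)      # toy binned 24Na PDF (~17k events)

jittered_pdfs = []
for _ in range(100):
    jittered = rng.poisson(pdf_counts)               # Poisson-fluctuate each bin
    jittered_pdfs.append(jittered / jittered.sum())  # renormalize to a PDF
# Each jittered PDF would be passed to the signal extraction; the 68% half-width
# of the refitted neutron numbers over the 100 repetitions gives the systematic.
print(len(jittered_pdfs), jittered_pdfs[0][:3])
```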

5.2.5 24Na Non-Uniformity

A significant concern for the NCD neutron efficiency measurement was the possibility that the 24Na brine may not have mixed completely uniformly within the D2O volume. In particular, there was concern that a “dead layer” free from brine might have existed along the inside of the acrylic vessel.

This would have increased the brine concentration in the interior regions of the detector and hence increased the apparent neutron capture efficiency. From the point of view of the neutron PDF, non-uniformity of the brine in the detector could change the string-to-string distribution of neutron events (which, along with the z-distribution of the neutrons within a string, is known from tests with point source data to have significant effects on the neutron spectrum2). Thankfully, however, the neutron PDF turns out to be relatively insensitive to such non-uniformities; it has been shown [46] that re-weighting the contributions of the different strings according to a PMT-derived estimate of the 24Na non-uniformity changes the results of polynomial-type fits by less than 0.1%. The effect on the overlap limits is expected to be similar, and the smallness of the uncertainty makes it unnecessary to re-evaluate the systematic in these cases - 0.1% will simply be adopted for the overlap limit uncertainties as well.

5.2.6 Backgrounds in the 24Na

A final consequence of using a data-derived PDF for the neutron shape is the possibility that events besides 24Na neutrons will be present in the PDF. These could include alpha events, neutron events from sources besides the 24Na, and non-physics backgrounds.

Alpha Contamination

The rate of alpha decays in the NCD array should not have changed during the 24Na calibration, and we know that alphas dominated the neutron region during neutrino data taking. We can therefore use the neutrino data rate to estimate the alpha contamination of the 24Na PDF. Treating every event below 1 MeV in the neutrino data as an alpha event, we find an alpha rate of 15.8/day, which translates to 64 alpha events below 1 MeV in the 24Na neutron PDF, or about 0.3% of the total PDF. This represents a direct uncertainty on the extracted number of neutrons, as it means that some small number of alphas in the data will be “recruited” as neutrons to make up this portion of the PDF (in fact, this should be a correction rather than an uncertainty, but it is small enough to simply be taken as an uncertainty).

2 Presumably these effects are dominated by calibration differences between strings and by gain differences between the counters within a string.

98 Other Neutrons

Non-uniform background neutron sources could influence the 24Na PDF in a manner similar to non-uniform mixing of the 24Na brine. The total neutron background rate, however, is much less than the alpha rate, meaning that the “other neutron” contribution to the PDF will be less than 0.3%, from above. This is a much smaller non-uniformity than the non-uniformity of the 24Na brine itself, which was already found to be negligible.

Other Backgrounds

Finally, it is possible that the 24Na PDF could contain non-physics events which pass the data cleaning cuts. If the rate of these events was the same during 24Na data taking as during neutrino running, the total number of such events would be negligible (and would already be included in the alpha contamination systematic discussed above). It has been shown, however, that the rate of events failing the data cleaning cuts decayed during the 24Na data taking period with a half-life very similar to that of 24Na [46]. It is therefore possible, if there is significant leakage through the data cleaning cuts, that the leakage events occurred in the 24Na PDF at a rate proportionally comparable to their rate in data. This would be a most unfortunate situation, as the leakage events in the data would then be fitted out as neutrons. The possibility of such a “just so” scenario seems remote, however, and will be neglected here as it has been in other NCD analyses. Nevertheless, the lack of quantitative estimates of the number of spurious events passing the data cleaning cuts means that no quantitative limit can be placed on their effect.

5.2.7 Neutron PDF Systematic Uncertainties Summary

A summary of the different systematic uncertainties associated with the use of the 24Na neutron PDF is shown in Table 5.2. As can be seen, the systematic uncertainties are dominated by the possible differences in the average energy scale and resolution between the 24Na neutron PDF and the data, and by the statistical uncertainty in the neutron PDF. The magnitude of these systematic uncertainties as evaluated here is very similar to (but slightly smaller than) their magnitudes as evaluated in [21] and [46].

Neutron Fit Parameter           Polynomial Fit   Overlap Lower   Overlap Upper
Energy Scale                    0.5%             1.7%            1.0%
Energy Resolution (Fq2E)        1.9%             1.6%            1.0%
Energy Resolution (a0 and a1)   0.4%             0.2%            0.1%
MUX Threshold                   0.2%             0.1%            0.1%
24Na Non-Uniformity             0.1%             0.1%            0.1%
24Na PDF Statistics             0.9%             0.8%            0.5%
Alpha Contamination             0.3%             0.3%            0.3%
Total                           2.2%             2.5%            1.5%

Table 5.2: The final contributions of the different 24Na PDF systematic uncertainties to the different neutron extraction parameters.

In applying these systematic uncertainties to the polynomial and overlap fits, they are assumed to represent the standard deviations of Gaussian curves centred about the central value fit parameter.

In the case of the polynomial method, where the fit statistical uncertainties are interpreted in a similar way, the statistical and systematic errors can then be combined in quadrature in the usual way. For the overlap method, however, recall that the confidence region should not be expected to represent the width of a Gaussian distribution with the most probable value of the true neutron number at the centre; to be consistent with the flat prior that was assumed on the background overlap, all that can be expected is that the true number of neutrons should lie between the upper and lower neutron limits. In considering the systematic uncertainties on the overlap neutron limits, then, the upper neutron limit should be shifted further up to include the systematic uncertainty; taking the systematic uncertainty as describing the width of a Gaussian centred about the original upper overlap neutron limit, the new upper neutron limit should be chosen such that 68.3% of the systematic distribution lies below that value. As 68.3% of a normal Gaussian distribution lies below 0.48σ, for a systematic uncertainty of 1.5% the upper overlap limit should be raised by 1.5 × 0.48 = 0.7%. The lower overlap neutron limit should be lowered in a similar way. Note, however, that additional systematic uncertainties which will follow should be added in quadrature with these before the limits are shifted. For now, therefore, we will keep track of them simply as an uncertainty on each limit. Thus, after including the systematic uncertainties due to the use of the 24Na neutron PDF, the number of neutrons fitted using the polynomial technique is 1156 +78/−77 (stat) ± 25 (syst), while the neutron confidence interval defined by the overlap method is (1058 ± 26, 1394 ± 21).
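The 0.48σ factor used above can be checked directly; it is simply the 68.3% quantile of the unit Gaussian:

```python
# Check of the shift factor: 68.3% of a unit Gaussian lies below
# norm.ppf(0.683) ~ 0.48 sigma, so a 1.5% systematic shifts the upper
# overlap neutron limit up by about 0.7%.
from scipy.stats import norm

print(norm.ppf(0.683))        # ~0.476
print(1.5 * norm.ppf(0.683))  # ~0.71, i.e. the quoted 0.7% shift
```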

5.2.8 Verification with 2006 24Na Data

As a check that the systematic uncertainty estimates described above are reasonable, the data were refitted using the 2006 24Na spike data as the neutron PDF. This spike has fewer events than the 2005 spike, and so should be expected to have even larger uncertainties due to statistical fluctuations in the PDF. It was also recorded using a different set of energy calibration constants, so any offsets in energy scale might be expected to be different. As the 2006 spike also took place during a limited period of time, however, the energy resolution uncertainty might be expected to be similar (under the model that the resolution is “perfect” during the 24Na data taking period relative to the changes observed during the rest of the data taking).

After switching to the 2006 24Na PDF, the number of neutrons returned by the polynomial fit decreased by 0.8%, the upper overlap neutron limit decreased by 2.1% and the lower overlap neutron limit decreased by 2.2%. These changes are reasonably consistent with the expectations from Table 5.2. Even if the extreme assumption that the energy resolution is perfect during the 24Na data taking periods is accepted and the energy resolution contributions are removed from Table 5.2, the changes expected based on the systematic uncertainties are 1.1%, 1.2%, and 1.9%, respectively, which is not so different from that observed (the largest deviation is 1.8σ, in the upper overlap limit). This suggests that the evaluation of the systematic uncertainties was indeed reasonable.

5.3 Live Fraction and Detection Efficiency

Not all neutrons that were captured by the NCDs during a period of time when the detector was operating produced events which were included in the analysis. Some neutron captures failed to trigger either the MUX or shaper systems and hence were not recorded, while others occurred while some portion of the system was reading out a previous event and was hence unable to accommodate the new event. Still other events were successfully recorded but then erroneously rejected by the data cleaning algorithms. The number of neutrons extracted from the NCD data must therefore be corrected to account for the detector live fractions and the trigger and data cleaning efficiencies.

5.3.1 Shaper Live Fraction

The shaper dead time after an event on any one string was quite short (236 ± 5 µs), and applied to all of the strings [28]. The 1.4x10^6 shaper triggers during the NCD phase thus translated to 330 ± 7 s of dead time (or (9.9 ± 0.2)x10^-4 % of the total run time) during neutrino running. This dead time is negligible.
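The arithmetic is easy to verify. The total run time is not quoted directly here, so the value used below is inferred from the stated dead-time fraction and should be treated as approximate:

```python
# Check of the shaper dead-time numbers quoted above.
n_triggers = 1.4e6          # shaper triggers during the NCD phase
dead_per_event = 236e-6     # s per trigger
dead_time = n_triggers * dead_per_event
print(f"total dead time = {dead_time:.0f} s")                 # ~330 s

run_time = 3.3e7            # s; inferred from the quoted fraction (~1 year)
print(f"dead fraction = {100 * dead_time / run_time:.1e} %")  # ~1e-3 %
```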

5.3.2 MUX Live Fraction

The MUX dead time per event was ∼1 ms [28], but was somewhat variable. In order to determine the MUX live time, then, a system of live time clocks was used. One of these “live time scalers” (all of which were based on a local 10 MHz oscillator) was never inhibited, and thus gave a measure of the full live time of the run. A second scaler was inhibited while the MUX system was locked out by the MUX controller, and hence recorded the MUX live time during the run. Comparing the output of these clocks for each run and performing a time weighted average gives a total average MUX live fraction of 0.9980 ± 0.0001 for the NCD data taking period [47]. During the period of time the random pulser (described in the next section) was running, the random pulser could be used to verify the live time scaler result; the two methods agree to within 7x10^-3 %.
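For illustration, a sketch of the scaler bookkeeping with hypothetical per-run counts:

```python
# Sketch of the MUX live-fraction calculation: per run, the ratio of the
# MUX-live scaler to the free-running scaler (both driven by the same 10 MHz
# oscillator), combined in a run-time-weighted average. Counts are made up.
import numpy as np

free_counts = np.array([3.600e12, 1.800e12, 5.400e12])  # uninhibited scaler
live_counts = np.array([3.593e12, 1.796e12, 5.389e12])  # inhibited during MUX lockout

run_times = free_counts / 1.0e7           # run durations in s (10 MHz clock)
live_frac = live_counts / free_counts     # per-run MUX live fraction
avg = np.sum(live_frac * run_times) / np.sum(run_times)
print(f"average MUX live fraction = {avg:.4f}")
```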

5.3.3 Scope Live Fraction

The readout time for a scope trace, and hence the dead time associated with a scope event, was quite long (∼0.6 s). To mitigate this dead time, two scopes were used, with a flip-flop bit used to alternate MUX events between the scopes. Thus, in principle, any event which occurred more than 0.6 s from the second most recent scope event was expected to find a live scope. In practice, however, the scope read-out time was quite variable and the flip-flop bit did not always function exactly as intended. Therefore, an empirical scope live fraction determination was required.

The scope live fraction is defined as the fraction of time that an event triggering the MUX would find a live scope. Thus, partial MUX events can be used to define the times during which the scope system was dead, while scope events define the times during which the system was live3. To provide the random sampling necessary to obtain an accurate measurement of the scope live fraction, a “random pulser” system was installed [28], which randomly pulsed a spare NCD channel (with a height-adjustable square wave pulse) at an average rate of 0.01 Hz. For the data taking period during which the random pulser operated, then, the scope live fraction could be determined by dividing the number of scope events on the random pulser channel by the sum of the number of scope and partial MUX events.

Unfortunately, the random pulser was not installed during the first part of the data taking period, meaning that empirical scope live fraction measurements were not available during this time. In addition, bursts of events were occasionally observed on the random pulser channel [48]. In order to confirm the random pulser results and to extrapolate them to the complete data taking period, a semi-empirical scope live fraction determination was developed. In this semi-empirical method, the scope and partial MUX events were used to determine the scope live fraction as a function of the time since the most recent event on each of the scopes, as shown in Figure 5.5. The average scope live fraction for the data taking period was then integrated by sampling this distribution using a Monte Carlo technique in which the real timeline of the data taking period was repeatedly randomly sampled. For each randomly sampled time, the probability that the scope was live was determined (from the live fraction distribution shown in Figure 5.5(c)) based on the difference between the sampled time and the most recent (real) event on each scope. In this way, the average scope live fraction as experienced by a uniform signal could be calculated. During the period of time when the random pulser was operating, the scope live fraction determined by the random pulser (0.960 ± 0.001) was in good agreement with the results from the semi-empirical calculation (0.959 ± 0.001). Applying the semi-empirical technique to the entire data taking period yielded an average scope live fraction of 0.957 ± 0.004 [49].

3 Note that the scope live fraction is not simply given by the number of scope events divided by the sum of the number of scope events and the number of partial MUX events because the partial MUX events tended to occur in bursts, while neutrino events are expected to be uniform in time. Using this simple procedure would therefore overweight the bursty data periods and underestimate the live fraction relevant to the neutrino signal.
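A sketch of this semi-empirical Monte Carlo integration is given below. The event times and the live-probability lookup are crude stand-ins for the real data timeline and the Figure 5.5(c) distribution:

```python
# Sketch of the semi-empirical scope live-fraction integration: sample random
# times on the data-taking timeline and look up the live probability from the
# times since the two most recent scope events. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
scope_times = np.sort(rng.uniform(0.0, 1.0e5, size=2000))  # toy scope event times (s)

def live_probability(dt_min, dt_other):
    # Crude stand-in for the Figure 5.5(c) lookup: a scope is assumed live
    # once ~0.6 s has elapsed since its last event.
    return 1.0 if dt_other > 0.6 else 0.0

probs = []
for t in rng.uniform(scope_times[2], scope_times[-1], size=20000):
    i = np.searchsorted(scope_times, t)
    dt = t - scope_times[i - 2:i]       # times since two most recent events
    probs.append(live_probability(dt.min(), dt.max()))
print(f"average scope live fraction = {np.mean(probs):.3f}")
```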

5.3.4 Shaper Threshold Efficiency

The shaper thresholds were intentionally set significantly below the lower edge of the neutron spectrum, and on the NCDs used in the analysis the shaper thresholds were never raised. Thus, when no energy cut is applied to the data, the shaper threshold efficiency was assumed to be 1.0.

5.3.5 MUX Threshold Efficiency

The probability that a neutron event had sufficient amplitude to pass the MUX threshold was measured by comparing, in dead time corrected AmBe point source calibration data, the number of events detected by the MUX and shaper systems. Given that the shaper threshold efficiency was assumed to be perfect, the MUX efficiency could be inferred by the difference in the number of observed events. This was done individually for each of the strings at each of the MUX threshold settings (the MUX thresholds on some of the strings were raised from time to time to mitigate episodes of bursty behaviour), and a live time weighted average was calculated. Over the full neutron energy range, the average MUX threshold efficiency was 0.98284 ± 0.00037 [45].

5.3.6 Data Cleaning Efficiency

The data cleaning cuts rejected a small but finite fraction of real neutron events, and these “false rejections” must be corrected for. The efficiency with which neutron events passed the data cleaning cuts was estimated from the fraction of events in neutron calibration runs (AmBe and Cf) which passed the data cleaning cuts. The use of high rate source data limited the fraction of non-neutron events in the data, and hence the possibility that the sacrifice was overestimated because of correct rejections of non-physics events. This effect was further reduced through the use of timing cuts which selected neutron-like events (i.e. events which appeared to be part of a fission burst for Cf source data and those preceded by a PMT event consistent with a 4.4 MeV γ-ray for the AmBe source).

The average efficiency for neutron events to pass the data cleaning cuts was thus determined to be 0.98338 ± 0.00021 [50]; this number was assumed to be stable in time.

[Figure 5.5 appears here: panels (a) Scope Events and (b) Partial MUX Events, event timing distributions in log10(∆t_min) vs. log10(∆t_other); panel (c) Scope Live Fraction.]

Figure 5.5: The empirical distributions used in calculating the semi-empirical scope live fraction. ∆t_min is the time (in seconds) between the given event and the most recent event recorded by either scope, while ∆t_other is the time in seconds between the most recent event recorded by either scope and the most recent preceding event on the other scope. (a) and (b) show the timing distributions of the scope events and the partial MUX events, respectively. Overall, the scopes tend to be live if an event occurs more than ∼0.6 s after the previous event on at least one of the scopes, as expected. However, there are also unexpected events, including some events for which the scope system was dead more than 10 s after the most recent event on either scope. Dividing the data shown in (a) by the sum of the data in (a) and (b) yields the average scope live fraction as a function of the time from the most recent events on both scopes. This distribution, shown in (c), was then used in the semi-empirical scope live fraction determination described in the text.

5.3.7 Efficiency and Live Fraction Summary

A summary of the live fraction and threshold efficiency corrections is given in Table 5.3. The total correction can be multiplied directly into the polynomial fit central values and the overlap neutron limits, with the uncertainties combined in the standard manner. The polynomial fit result for the number of neutron captures by the NCDs is then 1252^{+85}_{-83} (stat) ± 28 (syst), and the overlap neutron capture confidence interval is (1146 ± 29, 1510 ± 24).
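The total correction in Table 5.3 below is simply the product of the individual factors, with relative uncertainties combined in quadrature; a minimal sketch, using the values from Table 5.3:

```python
import math

# (value, absolute uncertainty) for each live fraction / efficiency
factors = [(0.9980, 0.0001),    # LF_MUX
           (0.957, 0.004),      # LF_scope
           (0.98284, 0.00037),  # eps_MUX
           (0.98338, 0.00021)]  # eps_data cleaning

total = math.prod(v for v, _ in factors)
rel_unc = math.sqrt(sum((u / v) ** 2 for v, u in factors))
print(f"{total:.3f} +/- {total * rel_unc:.3f}")  # 0.923 +/- 0.004
```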

Source             Correction Factor
LF_shaper          –
LF_MUX             0.9980 ± 0.0001
LF_scope           0.957 ± 0.004
ε_shaper           –
ε_MUX              0.98284 ± 0.00037
ε_data cleaning    0.98338 ± 0.00021
Total Correction   0.923 ± 0.004

Table 5.3: A summary of the NCD live fractions (LF) and efficiencies (ε) discussed in the text.

5.4 Neutron Backgrounds in the NCDs

When a neutron was captured by the NCDs, no distinction could be made between those neutrons which resulted from neutral current interactions and those originating from other sources. In order to extract the neutral current flux, therefore, the number of neutron captures in the NCDs produced by sources other than the 8B solar neutrino neutral current interaction must be estimated and subtracted from the observed number. The additional sources of neutrons which have been identified are briefly described below. The rate of neutron production by each of these background sources was evaluated, and the capture efficiencies of those neutrons by the NCD array were estimated using point- and distributed-source verified Monte Carlo (with the exception of the uniform sources, where the 24Na-derived neutron capture efficiency, described in the next Section, was used directly). The resulting expected numbers of neutron captures in the NCDs are given in Table 5.4.

• D2O Radioactivity: Gamma rays with energies greater than 2.2 MeV have the potential to photo-dissociate a deuteron, producing a free neutron. Thus, radioactive impurities in the heavy water volume, in particular 208Tl and 214Bi, have the potential to provide a neutron background that is indistinguishable from the neutral current neutrons. The levels of these contaminants were monitored through PMT-based in-situ analyses and through water assays [51], and their neutron contributions calculated by Monte Carlo.

• NCD Bulk and Cables: Gammas from radioactive impurities in the NCDs and NCD readout cables could propagate to the heavy water volume and produce background neutrons. The 238U and 232Th chain impurities in the NCD components were assayed [28], and the resulting background contributions calculated.

• NCD “Hot-spots”: Three positions on two NCD strings were identified by the PMT-based in-situ low energy background analysis as having relatively high-level local contaminations of 238U and 232Th chain activity, which were likely the result of handling during deployment. Upon the removal of the NCDs, these hot-spot sections were counted in an external alpha counting system and assayed by elution of the contamination in a modified version of the water assay technique in order to accurately determine the background contribution of this activity [51].

• H2O and Acrylic Vessel Radioactivity: Gamma rays from decays in the light water outside of the acrylic vessel and from the acrylic vessel itself could also propagate into the heavy water and produce neutrons through photo-dissociation. The contributions of these reactions are constrained through in-situ measurements of the light water and acrylic vessel activity, and through radio-assays of the light water contamination [52].

• Acrylic Vessel Neutrons: During the construction of the SNO experiment, the acrylic vessel was exposed for a significant period of time to (HEPA filtered) mine air, which has high levels of 222Rn. As a result, radon daughter isotopes plated onto the inner and outer surfaces of the acrylic vessel, where some of them were slightly embedded through recoil from subsequent alpha decays. This resulted in long-lived 210Po activity (supported by 210Pb, with a 22 a half-life) on the acrylic vessel surfaces, which produces a 5.3 MeV alpha particle. These alphas are energetic enough to produce neutrons via (α, n) interactions with 13C in the acrylic and 17O and 18O in the water. Because the 210Pb daughters do not produce decay products energetic enough to be directly detected by the PMT system, it was not possible to directly monitor this background. However, during Phase II of SNO, the neutron capture distance in the salty D2O was short enough that the external-source neutron contribution could be extracted from the radial profile of the detected events [20]; from this the acrylic vessel neutron production rate was determined. In the NCD phase, the neutron capture length was too long for the external neutron component to be identified. As a result, the salt phase measurement was used. To confirm that this was reasonable, the alpha activity of the inner surface of the neck of the acrylic vessel was measured with silicon detectors before and after the NCD phase [53]. This activity was found to be consistent (within a large uncertainty, which was applied to the extrapolated salt acrylic vessel neutron rate), supporting the hypothesis that the production rate of neutrons by acrylic vessel surface contamination was constant between Phases II and III.

• NCD (α, n): Radon daughter activity on the outer surfaces of the NCDs had the potential to produce neutrons through (α, n) interactions with 17O and 18O in the water. To estimate this contribution, the surface activities of several sections of the NCDs were counted with a custom-built gas proportional chamber [54].

• Atmospheric Neutrinos: Cosmic-ray induced atmospheric neutrinos could also interact with deuterium to produce neutrons. The flux of these neutrinos is much lower than that of the solar neutrinos, but their energies, and hence their interaction cross-sections, are higher. This background contribution was evaluated using Monte Carlo calculations based on published estimates of the atmospheric neutrino flux [55].

• hep Neutrinos: The hep neutrino flux is predicted to be 718 ± 162 times smaller than the 8B flux [3], with an average neutral current interaction cross-section that is 2.8 ± 0.4 times larger [25]. Scaling the expected neutron capture rate from 8B (described below) thus gives 4.1 ± 1.1 (syst) as the expected hep contribution to neutron capture in the NCDs; the arithmetic is sketched after this list.

• Other Neutron Backgrounds: Finally, a variety of other sub-dominant neutron production mechanisms, each of which is expected to produce fewer than two neutron captures in the NCD array over the course of the NCD phase, have been evaluated [55]. These include neutron production by cosmogenically created isotopes, the neutral current interactions of reactor anti-neutrinos, geo-neutrinos and CNO solar neutrinos, deuteron photo-dissociation by cosmogenically produced 16N, and the spontaneous fission of, and (α, n) interactions produced by, background radioactivity in the heavy water.

Source                 NCD Captures
D2O                    33.3 ± 5.5
NCD bulk and cables    39.3 ± 15.0
NCD “hot-spots”        74.7 ± 7.5
H2O and A.V. bulk      23.8 ± 17.1
A.V. surface           23.7 ± 16.8
NCD surface            2.1 ± 0.47
Atmospheric ν          15.8 ± 3.1
hep solar ν            4.1 ± 2.3
Other                  2.7 ± 0.4
Total                  219.4 ± 33.4

Table 5.4: A summary of the neutron events expected in the NCDs from sources other than the neutral current interactions. The uncertainties on the individual contributions are systematic only; both statistical and systematic uncertainties are included in the total.

The expected number of background neutron captures must be subtracted from the number of NCD neutron captures determined earlier. The uncertainty in the number of background neutron captures becomes part of the systematic uncertainty in the number of neutrino-induced neutron captures. The number of neutron captures in the NCDs resulting from 8B solar neutrino neutral current interactions is thus determined to be 1033^{+85}_{-83} (stat) ± 44 (syst) based on the results of the polynomial fit, and to lie in the range (927 ± 44, 1290 ± 41) based on the overlap method.
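As a sanity check of these numbers (a minimal sketch; the quadrature combination assumes the correction and background systematics are independent):

```python
import math

# Polynomial fit, after live fraction / efficiency corrections (Section 5.3.7)
n_corrected, syst_corr = 1252, 28
n_background, syst_bkg = 219.4, 33.4  # Table 5.4

n_signal = n_corrected - n_background
syst = math.sqrt(syst_corr**2 + syst_bkg**2)
print(f"{n_signal:.0f} +/- {syst:.0f} (syst)")  # 1033 +/- 44
```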

5.5 Live Time and Capture Efficiency

Two final numbers are required to convert the number of observed 8B neutral current neutron captures into an 8B neutron production rate: the length of time for which the detector was operated and the efficiency with which uniformly produced neutrons were captured by the NCDs.

5.5.1 Live Time

The total live time included in the NCD phase data set was measured using both 10 MHz and 50 MHz scaler clocks (similar to the 50 MHz clock described earlier in the determination of the MUX live fraction). This live time was then corrected for the periods of time that were removed offline by muon- and burst-related data cleaning cuts (about 2% in total). The total live time thus determined was 385.17 ± 0.14 days [56]. This number was verified using the data from the NCD random pulser (during the period of time for which the random pulser was operational) and the data from a 5 Hz random trigger that was applied to the PMTs.

5.5.2 Capture Efficiency

The efficiency with which the NCD array captured neutrons produced uniformly throughout the D2O volume was determined in two ways. The primary determination used the 24Na spikes; given that the injected activity of 24Na was known,⁴ the number of neutrons produced by the 24Na could be calculated. The neutron capture efficiency was then the ratio of the number of neutrons detected to the number produced (corrected for threshold efficiencies and dead time to avoid double counting these corrections). The neutron capture efficiency thus determined was 0.211 ± 0.007, with the uncertainty dominated by the measurement of the injected brine activity and by uncertainty in the uniformity of the distribution of the brine in the heavy water volume [58].

⁴ Samples of the injected brine were counted in a sealed container at the centre of the SNO detector (where their activities could be compared to sources of known strength), and on a low background detector in the SNOLAB facility to determine the injected activity [57].

To confirm this distributed source measurement, the SNOMAN Monte Carlo (which had been extensively verified against neutron point source calibration data) was also used to calculate the capture efficiency for uniform neutrons. The Monte Carlo capture efficiency was 0.210 ± 0.003 [57], in good agreement with the 24Na value.

5.5.3 Live Fraction and Detection Efficiency Summary

The rate of 8B neutral current interactions in the heavy water volume can now be calculated by dividing the number of 8B neutral current neutron captures in the NCDs by the live time and the capture efficiency. Uncertainties in the correction factors become part of the systematic uncertainty in the neutral current rate. The polynomial fit results give an 8B solar neutrino neutral current interaction rate of 12.71^{+1.05}_{-1.02} (stat) ± 0.69 (syst) d⁻¹, while the overlap method gives an 8B solar neutrino neutral current interaction rate in the range (11.41 ± 0.66, 15.87 ± 0.73) d⁻¹.
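The conversion itself is a single division; a minimal sketch using the numbers quoted above:

```python
n_captures = 1033        # 8B NC neutron captures (polynomial fit)
live_time_days = 385.17  # Section 5.5.1
capture_eff = 0.211      # Section 5.5.2

rate = n_captures / (live_time_days * capture_eff)
print(f"NC rate ~ {rate:.2f} neutrons produced per day")  # ~12.71
```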

5.6 NCD 8B Flux Result

Having determined the neutral current interaction rate, it is now possible to determine the total solar neutrino flux.

5.6.1 Neutron Rate to Flux Conversion

The conversion from neutron production rate to solar neutrino flux is carried out by calculating, using the SNOMAN Monte Carlo, the expected neutron production rate in SNO and then scaling to the observed neutron production rate. For the NCD phase, the Monte Carlo used an 8B neutrino flux of 5.145 × 10^6 cm^-2 s^-1 and the Winter-Friedman 8B neutrino energy spectrum [59]. The neutrino-deuteron neutral current interaction cross-section was taken from [60], with radiative corrections as calculated in [61].⁵ The SNO heavy water target is taken to be a sphere of radius 600.54 cm with an average deuteron density (after correcting for the NCD displacement) of 6.6352 × 10^28 m^-3. The Monte Carlo includes the effects of the Earth’s orbital eccentricity, properly weighted by live time.

⁵ In order to do this, the partial radiative corrections in [60] had to be removed by applying a correction factor of 1/1.024 [62].

Taking all of this into account, the average expected neutron production rate in the SNO heavy water during the NCD data taking period is 12.96 ± 0.13 neutrons per day [62].

5.6.2 The NCD 8B Flux Measurement

The total 8B flux can now be obtained by scaling the Monte Carlo neutrino flux by the ratio of the observed to the predicted neutron production rates. The 8B flux as measured by the polynomial method is thus determined to be 5.04^{+0.42}_{-0.40} (stat) ± 0.28 (syst) × 10^6 cm^-2 s^-1, and the range of fluxes allowed by the results of the overlap method is (4.53 ± 0.27, 6.30 ± 0.28) × 10^6 cm^-2 s^-1. Now that all of the systematic uncertainties are included, it is possible to combine the systematic uncertainties with the confidence range (as described in Section 5.2.7) to determine a final overlap-based flux confidence region of (4.40, 6.43) × 10^6 cm^-2 s^-1. A summary of the contributions to the systematic uncertainty of each of the flux parameters is given in Table 5.5.
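This final scaling is another one-line calculation (a minimal sketch; the inputs are the numbers quoted above):

```python
mc_flux = 5.145e6  # 8B flux assumed in the Monte Carlo (cm^-2 s^-1)
mc_rate = 12.96    # predicted neutron production rate (per day)
obs_rate = 12.71   # observed rate, polynomial fit (per day)

flux = mc_flux * obs_rate / mc_rate
print(f"8B flux ~ {flux:.2e} cm^-2 s^-1")  # ~5.05e6, matching 5.04 within rounding
```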

Flux Parameter                          Polynomial Fit   Overlap Lower   Overlap Upper
24Na PDF systematics                    2.67%            3.09%           1.17%
Detection efficiency & live fraction    0.52%            0.53%           0.50%
Neutron background                      3.23%            3.60%           2.59%
Live time & capture efficiency          3.32%            3.32%           3.32%
Target & cross-section                  1.0%             1.0%            1.0%
Total systematic                        5.46%            5.89%           4.51%

Table 5.5: The contributions of the different sources of systematic uncertainty to the 8B flux parameters. Note that the fractional contribution of a given systematic to the total flux can be different than its fractional value as evaluated in earlier sections due to the subtraction of the neutron backgrounds. Note also that the systematic uncertainties are not dominated by any single source, but have approximately equal contributions from several sources.

As seen in Figure 5.6, the polynomial and overlap results agree well with the previously published SNO results and with the Standard Solar Model prediction. The total uncertainty for the polynomial method is 9.9%. This is slightly larger than the 8.8% total uncertainty in the NCD PRL [21]. However, this is a relatively modest increase in uncertainty given the relative simplicity and robustness of the technique compared to the published signal extraction. It is interesting to note that the central value fit results of the polynomial method and the previously published NCD phase analysis differ at approximately the 1σ level. Although not particularly alarming, this is a larger change than one would expect in fitting identical data using similar fitting procedures. This suggests either that there was tension between the PMT and NCD data in the published fit, resulting in a central fit value that was pulled relative to the NCD-only best fit, or that the NCD background spectrum determined by the polynomial fit was slightly different from the backgrounds that the PDFs used in the published analysis could accommodate. In either case, however, the relatively small difference suggests that any effect in the published result is unlikely to be more than 1σ.

As predicted, the overlap method had a significantly larger uncertainty (the half range of the total confidence interval was 18.6% of the central value) than either the polynomial fit or the published result. Still, the confidence region agrees well with the other results, and a sensitivity of roughly 20% seems reasonable given how little information about the backgrounds had to be supplied to the method.

[Figure 5.6 appears here: neutral current flux (× 10^6 cm^-2 s^-1) measured by SNO Phase I, SNO Phase II, SNO Phase III, the Polynomial Fit, and the Overlap Fit.]

Figure 5.6: A comparison of the results of the polynomial and overlap methods with the previously published SNO neutral current flux results [19, 20, 21] and the BS05 SSM predictions [3]. The current results can be seen to agree well with both the previously published results and the SSM prediction.

5.7 SNO Conclusion

Two new signal extraction techniques have been developed which allow the number of detected neutrons to be extracted from NCD data using minimal information about the poorly understood background spectra in the NCDs. This has the effect of making these signal extraction techniques more robust against unexpected behaviour in the background, at the cost of increasing the uncertainty in the extracted number of neutrons. These new robust signal extraction techniques were applied to the NCD data, yielding a total 8B solar neutrino flux confidence interval of (4.40, 6.43) × 10^6 cm^-2 s^-1 from the overlap method and an 8B solar neutrino flux of 5.04^{+0.42}_{-0.40} (stat) ± 0.28 (syst) × 10^6 cm^-2 s^-1 from the polynomial method. The published NCD phase flux result (5.54^{+0.33}_{-0.31} (stat) ^{+0.36}_{-0.34} (syst) × 10^6 cm^-2 s^-1) agrees well with the overlap confidence interval and is higher than the polynomial result by approximately 1σ. This deviation appears to be the result of an unexpected step or bump in the background of the NCD data at around 0.5 MeV. Overall, though, the fairly good agreement between the published analysis and the results of the robust signal extraction methods confirms that the published result was not unduly affected by unexpected background behaviour.

Chapter 6. The SNO+ Experiment

The SNO+ experiment is a new liquid scintillator based neutrino experiment that is being constructed in SNOLAB using the SNO detector. A liquid scintillator is an organic liquid that produces light when excited by the passage of a charged particle. In fact, liquid scintillators produce significantly more light than the water Čerenkov process, which will make SNO+ sensitive to events of much lower energy than those that were detectable by SNO. This low energy sensitivity will allow SNO+ to investigate a new regime of interesting physics.

Building SNO+ from the remaining SNO apparatus will mean that SNO+ can be constructed much more quickly, and at much lower expense, than a comparable experiment in another location.

SNO+ will also inherit many of the features that made SNO successful, including the relatively high photocathode (PMT) coverage, the excellent shielding provided by the large amount of external light water, the low rate of cosmogenic backgrounds afforded by the very deep location, and the very clean environment that was maintained within the SNO laboratory. Therefore, the completion of the SNO experiment presented a compelling opportunity to create a new large-volume liquid scintillator experiment, and as a result the SNO+ project was begun.

6.1 SNO+ Physics

The sensitivity of a scintillator experiment to low energy events combines with some important aspects of the SNOLAB site to give SNO+ the ability to make some extremely interesting physics measurements. The different physics signals of interest to SNO+ are briefly introduced below.

6.1.1 Solar Neutrinos

Although the Solar Neutrino Problem is now resolved, there is still interesting physics to be done by observing solar neutrinos. With an energy threshold below 0.5 MeV, SNO+ will be sensitive to a number of different solar neutrino fluxes. Of particular interest will be the pep and CNO neutrinos.

pep Neutrinos

As discussed in Section 2.1, the major advantage of the SNO experiment compared to previous solar neutrino experiments in studying neutrino oscillations was SNO’s ability to normalize its charged current flux measurement with its own neutral current flux measurement. This made SNO “model independent,” whereas the other solar neutrino experiments had to rely on the Standard Solar Model (SSM) for their normalization. As SNO+ will detect solar neutrinos only through the elastic scattering interaction (Equation 2.3), it too will have to be normalized against the SSM. This, in turn, means that the ability of SNO+ to study solar neutrino oscillations could be limited by the precision with which the neutrino flux is predicted by the SSM. From the point of view of neutrino oscillations, then, the two fluxes which are most precisely predicted by the SSM, namely the pp and the pep fluxes (see Figure 2.2), are the most interesting. The pp neutrinos are not measurable in SNO+ due to the background from 14C intrinsic to the scintillator, but a SNO+ measurement of the pep neutrinos should be possible.

Using the pep neutrinos to probe the solar neutrino survival probability is particularly interesting because, at 1.44 MeV, the pep flux falls on the upturn of the solar neutrino survival probability (Figure 2.8). From the point of view of MSW oscillations this region is interesting because the energy at which the survival probability upturn occurs depends on the neutrino mass splitting. Therefore, a measurement of the pep neutrino flux, and hence the electron neutrino survival probability at 1.44 MeV, could help to improve the precision of the solar neutrino measurement of ∆m²₁₂.

Probing the upturn in the electron neutrino survival probability is also interesting because it is in this region that the detection of new physics is most likely. This is because at low neutrino energies the solar matter density is insufficient to produce significant changes in the neutrino propagation Hamiltonian, and what are essentially vacuum oscillations occur. At high energies, on the other hand, the resonant MSW transition is fully “turned on” and the survival probability is essentially independent of the neutrino-matter interaction. That is, as long as the neutrinos are interacting with the solar matter strongly enough for the resonant transition to occur, the actual mechanism through which the neutrinos are interacting is relatively unimportant. Therefore, it is in the transition region between the low energy vacuum oscillations and the “full matter effect” at higher energy that the details of the neutrino-matter interaction have the greatest effect on the survival probability.

A number of non-standard models have been proposed, including non-standard neutrino-matter interactions [63] and variations in the neutrino mass depending on the local neutrino density [64], which can greatly affect the electron neutrino survival probability in the transition region while leaving the high and low energy regions within current experimental bounds (see Figure 6.1). Measurement of the pep electron neutrino survival probability would help to constrain these models.

Figure 6.1: The solar electron neutrino survival probability curve with non-standard neutrino- matter interactions. The difference between the non-standard model solar electron neutrino survival probability (“LMA-0”) and the standard MSW curve (“LMA-1”) can be seen to be greatest in the transition region, while the models agree well in the high and low energy regions. The solid and dotted lines represent survival probabilities averaged over different neutrino production regions in the Sun. Figure from [63].

For further discussion of the implications of low energy solar neutrino measurements on our understanding of the Standard Solar Model and on neutrino oscillations, the reader is referred to [65].

Monte Carlo simulations suggest [66] that SNO+ should be able to make an 8.5% (statistical) measurement of the pep solar neutrino flux with 1 kT·a of data,¹ which would thus provide a measurement of the solar neutrino survival probability at 1.44 MeV of similar precision. This should begin to provide constraints on the non-standard neutrino interaction models.

It should be noted that the depth of the SNOLAB site has an important impact on the SNO+ pep measurement. For experiments in shallower locations, cosmogenic backgrounds, in particular 11C, provide a difficult background to the pep measurement [68]. At the depth of SNO+, however, this cosmogenic background is almost negligible.

It should also be noted that the BOREXINO experiment (which has recently published a low energy solar neutrino survival probability based on a measurement of the 7Be neutrino flux [22], and has presented a low energy 8B neutrino flux measurement [69]) will also attempt to measure the pep flux. Although 11C is a significant background in BOREXINO, the collaboration has developed a neutron-tagging technique [70] which may allow the pep measurement to be made. Ultimately, however, the larger size and lack of 11C background in SNO+ should permit SNO+ to make a more precise measurement.²

¹ These simulations were carried out assuming that SNO+ would have no ability to discriminate between alpha and beta events based on their timing distributions, while recent studies [67] suggested that the alpha-beta discrimination might be better than 90%. Also, these simulations were carried out at background levels higher than those which have subsequently been demonstrated by BOREXINO. It is likely, therefore, that a repeated analysis would demonstrate an improved SNO+ pep sensitivity.
² Provided that SNO+ achieves radio-purity comparable to BOREXINO.

CNO Neutrinos

Where a measurement of the pep neutrinos would provide interesting information about neutrino physics, measurement of the CNO neutrinos would provide interesting information about solar physics. As mentioned in Section 2.1, there are two possible chains of nuclear reactions that account for the energy production of the Sun: the pp chain, which produces the hep, 8B, pep, 7Be, and pp neutrinos, and the CNO cycle. pp chain reactions are thought to be responsible for generating about 98.5% of the Sun’s energy, with only 1.5% being supplied by the CNO cycle [71]. The CNO contribution is still quite uncertain, however, and depends strongly on the density of the “metals” (i.e. N and O, and especially C) in the solar interior. Thus, a measurement of the CNO neutrino flux would not only constrain the contribution of the CNO cycle to the solar energy generation, but also provide a measurement of the solar metallicity. That solar metallicity measurement, in turn, would be helpful in resolving the discrepancy which has recently developed between spectroscopic determinations of the solar metallicity and the metallicities required in solar models to correctly reproduce helioseismological observations [72].³

³ When the metallicities supported by the two measurements are used in the SSM, the predicted CNO neutrino flux changes by 30-40% [72].

Monte Carlo simulations predict that SNO+ should be able to make approximately a 20% measurement of the CNO flux after 1 kT·a of solar neutrino data. This would be good enough to confirm the CNO contribution to solar energy generation and to provide some information to the metallicity controversy. It is again possible that BOREXINO will make a measurement of the CNO flux before SNO+, but again the larger mass and lower cosmogenic background in SNO+ should ultimately allow SNO+ to make a more precise measurement.

6.1.2 Reactor Anti-neutrinos

Nuclear reactors used for power generation emit large fluxes of electron anti-neutrinos as a result of the beta decay of the fission products. Because the number of decays is related to the thermal power produced by the reactor and this thermal power is carefully measured, the anti-neutrino flux leaving each reactor is well known. The energy spectrum of the anti-neutrinos emitted by a reactor is also known quite accurately, although it does depend slightly on the composition of the nuclear fuel used.

It should be noted at this juncture that anti-neutrinos are detected in a liquid scintillator through the inverse beta decay reaction

p + ν̄_e → n + e⁺.    (6.1)

The positron is produced with a kinetic energy 1.804 MeV less than the energy of the anti-neutrino, Eν, and is annihilated by a nearby electron almost immediately, so that a total of (Eν − 0.782) MeV is deposited in the detector. The neutron diffuses through the detector for ∼200 µs before being captured by another nucleus, usually a proton, which releases another burst of energy (a 2.2 MeV γ-ray in the case of a proton capture). This “delayed coincidence signal” of two events separated by a fairly well defined time interval provides a powerful method for extracting anti-neutrino signals from single event backgrounds.
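A toy version of such a coincidence tag is sketched below. This is illustrative Python only; the 10-1000 µs window and the energy cuts are representative assumptions, not the SNO+ selection.

```python
def tag_ibd_candidates(events, window=(10e-6, 1000e-6),
                       prompt_cut=(0.9, 9.0), delayed_cut=(1.8, 2.6)):
    """Pair prompt/delayed events consistent with inverse beta decay.

    events -- list of (time_s, energy_MeV), sorted in time
    Prompt: positron deposit of E_nu - 0.782 MeV; delayed: the 2.2 MeV
    n-capture gamma arriving tens to hundreds of microseconds later.
    """
    pairs = []
    for i, (t1, e1) in enumerate(events):
        if not prompt_cut[0] <= e1 <= prompt_cut[1]:
            continue
        for t2, e2 in events[i + 1:]:
            dt = t2 - t1
            if dt > window[1]:
                break  # events are time-ordered, no later match possible
            if dt >= window[0] and delayed_cut[0] <= e2 <= delayed_cut[1]:
                pairs.append((t1, t2))
    return pairs
```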

As they propagate through the Earth, reactor anti-neutrinos undergo what are essentially vacuum oscillations. The oscillation lengths for reactor anti-neutrinos are on the order of 100 km and depend on the energy of the neutrino, as shown in Equation 2.8. This relatively accessible oscillation length, combined with the well understood source flux and spectrum, makes detecting reactor anti-neutrinos with a detector a few hundred kilometres from the reactor an extremely powerful method for studying neutrino oscillations.
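For orientation, the standard two-flavour vacuum survival probability (a textbook approximation, not a formula quoted from this thesis) already shows why baselines of a few hundred kilometres are interesting; the parameter values below are illustrative only:

```python
import math

def survival_probability(E_MeV, L_km, sin2_2theta=0.86, dm2_eV2=7.6e-5):
    """Two-flavour vacuum anti-nu_e survival probability:
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])
    """
    E_GeV = E_MeV / 1000.0
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Illustrative: a 4 MeV reactor anti-neutrino at the Bruce (~240 km) and
# Pickering/Darlington (~330 km) baselines discussed below.
for L in (240.0, 330.0):
    print(L, round(survival_probability(4.0, L), 3))
```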

Indeed, this was the primary physics motivation for the construction of the KamLAND detector, a kilo-tonne scale liquid scintillator experiment in Japan, which has succeeded in measuring the reactor anti-neutrino oscillation spectrum [23] and extracting from it the most precise measurement to date of ∆m²₁₂. KamLAND sits at an average distance of 180 km from several Japanese and Korean nuclear reactors, with 9 reactor sites within 300 km. This gives KamLAND a high reactor anti-neutrino flux, but “averages out” the neutrino oscillation features.

SNO+, on the other hand, will be 240 km from the Bruce nuclear generating station and about 330 km from reactors in Pickering and Darlington. This greater distance will combine with the lower power produced by these reactors to give SNO+ a much lower anti-neutrino interaction rate than KamLAND.⁴ However, it happens that the SNO+ reactor-detector baselines are such that the second oscillation minimum in the Bruce reactor energy spectrum coincides with the third oscillation minimum in the Pickering and Darlington oscillation spectra, producing a clear dip in the expected SNO+ reactor anti-neutrino spectrum (see Figure 6.2). This Figure also shows that the position of this combined oscillation feature is quite sensitive to ∆m²₁₂. The presence of a fairly obvious oscillation feature which has good sensitivity to the mass splitting gives SNO+ better sensitivity to ∆m²₁₂ than might be predicted given the low statistics expected in the measurement. It has even been suggested [74] that the SNO+ measurement of ∆m²₁₂ may rival the KamLAND measurement in precision, even given the reduced statistics of the SNO+ measurement.

⁴ In fact, the SNO+ reactor anti-neutrino interaction rate is expected to be approximately 147 events/kT·a after neutrino oscillations [73], which is approximately 30% of the KamLAND rate.

Figure 6.2: The expected spectrum of reactor anti-neutrinos at SNO+ for three different values of ∆m²₁₂. The spectrum can be seen to contain clear oscillation features. The chosen values of ∆m²₁₂ are approximately the current global (total solar + KamLAND) best fit value and the 1σ variations about this value. It can be seen that the oscillation features in the expected SNO+ spectrum are sufficiently sensitive to the oscillation parameters to produce noticeable differences in the SNO+ spectrum even within the uncertainty bounds of the KamLAND-dominated global result. Figure from [74].

6.1.3 Geo-Neutrinos

The energy released from nuclear decays of long lived isotopes⁵ within the Earth’s mantle and crust is thought to be responsible for replenishing a large portion (50-100%) [75] of the heat emitted by the Earth. As can be seen, however, this contribution is not well known, and constraining it would provide useful information for those attempting to understand the Earth’s thermal history. SNO+ may be able to contribute to answering this question, as each of these decay chains contains at least one beta decay, and each beta decay produces an electron anti-neutrino. These geologically produced anti-neutrinos, which are generally referred to as “geo-neutrinos,” should be detectable in SNO+.

⁵ Especially 40K and isotopes in the 238U and 232Th decay chains.

The geo-neutrino energies are quite low (see Figure 6.3), but are well within the reach of scintillator experiments. Also, as discussed earlier, the coincidence signal used to detect anti-neutrinos in a scintillator experiment makes the measurement robust against most backgrounds.

Figure 6.3: The expected spectrum of geo-neutrinos in SNO+. The reactor anti-neutrino background is also shown for comparison.

The KamLAND experiment has already published initial studies of the geo-neutrino flux at Kamioka [76, 23]. These measurements are important first steps in our understanding of these terrestrial anti-neutrinos but, because of the low rate of the geo-neutrino signal, background from the high KamLAND reactor anti-neutrino rate discussed above, and (α, n) backgrounds produced by alpha decays of radon daughters dissolved directly into the KamLAND scintillator,⁶ it will be some time before the KamLAND measurement is able to usefully constrain the radiogenic heat production.

6 (α, n) reactions can produce a coincidence background through the scattering of the high energy neutron when it is first produced (this produces a signal through the recoil of the scattered proton) followed by the neutron being captured by another proton. KamLAND is currently working to remove these radon daughters from their scintillator via re-distillation; once this is complete the only background to the geo-neutrino measurement should be the reactor anti-neutrinos.

Accurately measuring the geo-neutrino flux should be easier in SNO+ than in KamLAND. For one thing, the rate of geo-neutrino interactions in SNO+ is predicted to be significantly higher than at KamLAND (42 /kT·a for SNO+ vs. 31 /kT·a for KamLAND [77, 73]), due to differences in the regional geology of the two sites. SNO+ has also taken care in the design of the scintillator handling and purification systems to prevent, as much as possible, the ingress of radon, which should prevent (α, n) reactions from significantly affecting the SNO+ geo-neutrino measurement.⁷ Also, as discussed earlier, the reactor anti-neutrino flux at SNOLAB is significantly lower than at Kamioka (the location of KamLAND). The combination of these factors should make it possible for SNO+ to make a fairly precise measurement of the geo-neutrino flux at SNOLAB.

Another important point about a SNO+ geo-neutrino measurement is that its interpretation would be relatively straightforward. This is because the SNOLAB local geology consists of uniform crust which, because of the Sudbury area’s unique geological history and the extensive geological studies that have been carried out as a result, is among the best understood in the world. This makes the contribution of the local crust to the SNO+ geo-neutrino flux fairly well constrained, and hence the de-convolution of the crust and mantle contributions to the geo-neutrino signal is less uncertain at SNO+ than it is at other locations. Having the ability to compare geo-neutrino flux measurements from Sudbury, Kamioka⁸ and Gran Sasso⁹ would also be useful in de-convolving the geo-neutrino contributions from the different regions of the Earth’s interior.

⁷ The design of the SNO+ scintillator purification and handling systems closely follows the design used in the BOREXINO experiment [78], which successfully produced scintillator with radon daughters at the level of a few thousand decays/kT·a [22].
⁸ Kamioka sits at the boundary between continental crust, which is believed to have relatively high levels of the 238U, 232Th and 40K geo-neutrino emitters, and oceanic crust, which is expected to be almost devoid of these elements.
⁹ Gran Sasso is the location of the BOREXINO experiment, which should make a relatively low background, but fairly low statistics, measurement of the geo-neutrino flux [77].

6.1.4 Supernova Neutrinos

When a star undergoes a Type IIA supernova explosion, more than 99% of its gravitational binding energy is carried away by neutrinos [79] in a neutrino burst which emits more neutrinos in the span of a few seconds than are released in total during the rest of the star’s life. The neutrinos are predicted to be evenly divided between the flavours and between particles and anti-particles, and to have an average energy of about 15 MeV [80]. The neutrino flux is so large, in fact, that even a supernova at Galactic distances should produce a large number of events in SNO+. Indeed, a total of 24 neutrinos from supernova 1987a, which occurred about 50 kpc away from Earth in the Large Magellanic Cloud, were detected by three neutrino detectors which were active at the time (see [81] and references therein). A supernova effectively provides a “point source” of neutrinos, both in space and in time, which propagate out through the supernova remnants and then traverse a long but well defined path through the interstellar medium to Earth. Supernova neutrinos can thus carry with them information about the supernova itself, about the interstellar medium through which they have propagated, and about neutrino propagation over very large distances. As a result, they are interesting to many different areas of physics. As an example, the information from the 24 neutrinos from supernova 1987a has been used for everything from testing neutrino oscillation models [81] to setting limits on the neutrino mass [82] to constraining the size of possible compact extra dimensions in the universe [83]. In fact, [82] is entitled “Yet Another Paper on Sn1987a....”. If neutrinos from a supernova within our Galaxy were detected, the much greater number of observed events would provide even more information to these diverse areas of physics.

As a large volume scintillator experiment, SNO+ would be a very good supernova detector. The reactions that could be used by SNO+ to detect supernova neutrinos are listed in Table 6.1 below, along with the number of interactions of each type expected from a “standard” type IIA 3 × 10^53 erg supernova 10 kpc from the Earth. As can be seen, a SNO+ detection of a Galactic supernova would have good statistics and would provide some ability to distinguish ν_e, ν̄_e, ν_µ,τ and ν̄_µ,τ. This flavour separation is expected to provide even more information about supernova physics and neutrino oscillations.

Interaction                                # Events / kT   Reference
ν_e + e⁻ → ν_e + e⁻                        8               [80]
ν̄_e + e⁻ → ν̄_e + e⁻                        3               [80]
ν_µ,τ + e⁻ → ν_µ,τ + e⁻                    4               [80]
ν̄_µ,τ + e⁻ → ν̄_µ,τ + e⁻                    2               [84]
ν̄_e + p → n + e⁺                           263             [84]
ν_e + 12C → 12N + e⁻                       27              [85]
ν̄_e + 12C → 12B + e⁺                       7               [85]
ν_x + 12C → 12C*(15.11 MeV) + ν_x          58              [85]
ν_x + p → ν_x + p                          273**           [86]

Table 6.1: The expected number of neutrinos detected through different interaction channels in SNO+ for a 10 kpc type IIA supernova, assuming MSW oscillations. Note that there is one high statistics charged current channel and one high statistics neutral current channel. (**assuming a 0.2 MeV threshold)

6.1.5 Neutrinoless Double Beta Decay

Many people consider the most important open question in neutrino physics to be the question of whether neutrinos are Majorana or Dirac particles. Dirac particles, which include most familiar particles, are distinct from their corresponding anti-particles. Majorana particles, on the other hand, are identical to their corresponding anti-particles except for their helicities. Many theorists believe that neutrinos should be Majorana particles, as the theory of Majorana masses admits a mechanism that naturally explains the smallness of the neutrino mass [87].

Attempts to detect the (possible) Majorana nature of neutrinos focus on the double beta decay process (“2νββ”),

(A, Z) → (A, Z + 2) + 2e⁻ + 2ν̄_e.    (6.2)

If neutrinos are Majorana particles, the anti-neutrino emitted by one of the neutrons can be absorbed as a neutrino by the other. The resulting process, in which no neutrinos are emitted, is neutrinoless double beta decay (“0νββ”),

(A, Z) → (A, Z + 2) + 2e⁻.    (6.3)

One experiment [88] claims to have detected neutrinoless double beta decay in 76Ge with a neutrino mass in the range 0.27-0.37 eV (90% C.L.). The initial claims by this experiment [89], however, were extremely controversial [90], and experiments to confirm or refute the claim are considered necessary.

SNO+ will be able to make an important contribution to this effort by loading 150Nd, a double beta decay isotope, into the liquid scintillator. The large mass of isotope thus obtained will be sufficient to give the SNO+ double beta decay search competitive, and potentially leading, sensitivity to neutrinoless double beta decay compared to other proposed and existing experiments. The SNO+ double beta decay search is described in detail in Chapter 7.

6.1.6 Physics Summary

The SNO+ experiment has the potential to carry out interesting studies of a number of different physics signals. These include the low energy solar neutrinos, geo-neutrinos, reactor anti-neutrinos and supernova neutrinos, as well as neutrinoless double beta decay.

Of these topics, all but the neutrinoless double beta decay search had been fully appreciated for some time. Indeed, even the 2001 NSERC Subatomic Physics Five Year Plan included a scintillator experiment built from the SNO detector (called ’SNO+’ purely by coincidence) to “study low energy neutrinos” [91]. As described in the next chapter, however, the potential sensitivity of the experiment to neutrinoless double beta decay only began to be appreciated in the late 1990’s. The addition of the neutrinoless double beta decay search to this already very interesting physics program provided the final impetus to stimulate the formation of the SNO+ collaboration and the development of the SNO+ experiment.

6.2 Experimental Design and Development

Having established that the SNO+ experiment was scientifically interesting, it remained to be demonstrated that the experiment was practically realizable. The two main prerequisites to the successful staging of the experiment were the identification of a liquid scintillator suitable for long term use in the acrylic vessel and the design of a hold-down system to hold the now buoyant acrylic vessel (A.V.) in position.

6.2.1 Scintillator Development

The liquid scintillators used in large scintillator experiments typically have two or three components: a solvent, an aromatic hydrocarbon which is efficiently excited by the passage of charged particles and which forms the bulk of the scintillator; a fluor, typically 2,5-diphenyloxazole (PPO) at the level of a few grams per litre, which “collects” (typically via dipole interactions) the excitations from the solvent and emits them as photons with high efficiency; and sometimes a wavelength shifter which, at the level of a few mg/L, captures the light emitted by the fluor and re-emits it at longer wavelengths, where there is less chance of it being re-absorbed by the other scintillator components. In addition, scintillators are sometimes diluted in inert solvents like mineral oil to reduce their cost or improve their chemical compatibility.

Unfortunately, most aromatic liquids, like the pseudocumene liquid scintillator that has been used in previous large liquid scintillator experiments, are quite aggressive towards acrylic. Pseudocumene diluted to 20% in mineral oil had been used in contact with acrylic for extended periods in the Palo Verde experiment [92], but even there anecdotal accounts suggest that visible changes occurred over time in both the acrylic and the immediately adjacent scintillator. In the higher-stress application of the SNO+ acrylic vessel, such degradation may not be acceptable.

Therefore, a campaign was begun to test different scintillation solvents for suitability for use in SNO+. An ideal scintillator would be acrylic compatible, have good light output, provide little attenuation to its own emitted light, and have as high a density as possible (to reduce the A.V. buoyancy problem). Initial testing involved exposing small samples of acrylic to the different solvents and monitoring for changes in the acrylic and in the optical behaviour of the scintillator. The initial tests also compared the amount of light produced by the different scintillators (with 2 g/L PPO fluor) under irradiation from a γ-ray source. In these initial tests two scintillators stood out: diisopropylnaphthalene (DIN) and linear alkyl benzene (LAB). Of these, DIN had a slightly higher light output and a higher density (0.99 g/cm³ vs. 0.86 g/cm³ for LAB), but a slight increase in the amount of 90° scattering observed in the acrylic-exposed DIN raised fears that chemical attack might be causing small particles of acrylic to become suspended in the scintillator. In addition, measurements of the optical attenuation lengths of the candidate scintillators showed that LAB exhibited significantly less attenuation than DIN in the important wavelength region, and was thus more suited for use in large detectors like SNO+. As a result, LAB was selected for further, more careful, testing. A complete description of the initial scintillator tests can be found in [93].

Acrylic Compatibility

In order to determine more conclusively whether LAB was compatible with acrylic, two more series of compatibility studies were begun. In one test, carried out at Brookhaven National Laboratory, bars of acrylic were stressed by bending them across a fulcrum to a surface stress of 3000 PSI, in a test technique described in [94]. Absorbent pads were used to hold solvent in contact with the stressed surface. A high level of tensile stress (for comparison, the maximum design stress in the SNO A.V. (in tension) was 600 PSI) increases the susceptibility of the acrylic to chemical attack to such a degree that this test is considered to be passed if no change is observed in the acrylic after a solvent exposure of 4 hrs. To confirm the sensitivity of the test, acrylic was first exposed to pseudocumene, which, as mentioned earlier, is known to be incompatible with acrylic. The acrylic bar exposed to pseudocumene failed (broke) after one hour. Next, pseudocumene diluted to 20% in mineral oil (which, as mentioned above, was known to be “acrylic compatible” from the Palo Verde experiment) was tried. The diluted pseudocumene passed the test, but the exposed acrylic was observed to have softened after 3 days of exposure. Finally, a test with LAB showed no change in the acrylic after more than 30 days of exposure. In this series of tests, then, LAB was found to have less interaction with acrylic than diluted pseudocumene, a known “acrylic compatible” solvent.

The second series of compatibility tests was based on the ASTM D 543 chemical compatibility test [95]. In these tests, acrylic “dogbones” (which were designed to be destructively tested in order to determine the acrylic elastic modulus and failure stress, as described in [96]) were exposed to LAB for long periods of time. There were three series of dogbones:

• Series I: A series of dogbones cut from a commercial (cast, UV absorbing) acrylic sheet. These dogbones were soaked, unstressed, in pure LAB. Five sets of four dogbones were made.

• Series II: A series of dogbones cut from a sheet of SNO acrylic, with all of the machining done at Queen’s University. These dogbones were mounted on custom built strain jigs which stressed the outside surface of the dogbones to 1200 PSI (twice the SNO design maximum) and soaked in pure LAB. Seven sets of four dogbones were made, of which five were mounted on strain jigs and soaked (the others were broken as a control).

• Series III: A larger series of 600 dogbones constructed from SNO acrylic. Half of these dogbones were produced¹⁰ with a SNO-type acrylic-to-acrylic bond in the centre of the “neck” of the dogbone. These dogbones were soaked, on strain jigs, in three groups: one group in water (as a control), one group in SNO+ scintillator (LAB + 2 g/L PPO), and one group in neodymium-loaded SNO+ scintillator.¹¹

The first series of dogbones, which have now all been broken, soaked in LAB for up to 42 months. As seen in Figure 6.4, the acrylic properties initially appeared to be stable, but in the final test a decrease was observed in the failure stress; the elastic modulus appears to have remained unchanged.

[Figure 6.4 appears here: (a) yield strength (PSI) and (b) elastic modulus (PSI) as a function of LAB soak time (months).]

Figure 6.4: The results of the Series I LAB-acrylic compatibility tests. The elastic modulus of the acrylic appears to be unchanged with exposure, while the failure stress in the final group of dogbones tested exhibited a decrease. Four dogbones were broken in each test; the mean results of each test are shown with error bars representing the RMS spread between the individual dogbones measured within that test.

This pattern is repeated in the current results of the Series II tests, as seen in Figure 6.5. Two sets of Series II dogbones remain to be tested.

¹⁰ By Reynold’s Polymer Technology, the same company which bonded the SNO A.V.
¹¹ This group is currently soaking in scintillator only; the Nd-carboxylate will be added at a later date.

[Figure 6.5 appears here: (a) yield strength (PSI) and (b) elastic modulus (PSI) as a function of LAB soak time (months).]

Figure 6.5: The current results of the Series II LAB-acrylic compatibility tests. As in the Series I tests, the elastic modulus of the acrylic appears to be unchanged with exposure, while the failure stress appears to decrease.

Finally, the Series III tests have just been started, with the intention of maintaining them as long-term exposure tests for the duration of the SNO+ experiment. A single set of dogbones has so far been tested, after 10 months’ exposure, with no change seen in the LAB-exposed samples (both bonded and non-bonded). There was, however, an unexplained decrease of about 15% in the failure stress of the non-bonded, water-exposed acrylic.

The observed change in the acrylic failure stress after prolonged exposure to LAB is unsettling but, at least at the observed levels, it is likely tolerable. The most important quantity identified in the finite element calculations that have been used to study the acrylic vessel stability is the elastic modulus, and that still appears to be stable after LAB exposure. In addition, the majority of the acrylic vessel is, by design, under compression rather than under tension, so the failure stress under tension is less important [97]. At some point, of course, continued weakening of the acrylic through LAB exposure would become troublesome. Efforts are being made to understand the cause of the observed change in the acrylic failure stress,¹² and the remaining dogbones will continue to be tested periodically to monitor for continuing changes in the acrylic. The observed, and likely extrapolated, changes in the failure stress will also be included in the finite element analysis of the acrylic vessel to confirm that they will not compromise the integrity of the vessel.

¹² It could, for example, be caused by the formation of surface cracks as the result of chemical attack by the LAB, or it could simply be the result of stresses induced in the acrylic surfaces as they swell due to LAB absorption. Clearly these different underlying mechanisms have different implications for the SNO+ acrylic vessel.

Scintillator Purification

Concurrently with the development of the acrylic compatibility tests, investigations were carried out into purification methods for the LAB. These purification trials, which were principally based on techniques that proved highly successful in the purification of the scintillator in the BOREXINO experiment [78], showed that the standard vacuum distillation and Al2O3 column purification techniques were effective both in improving the optical transmission of the LAB and in reducing the level of radioactive contamination in the scintillator [66]. Based on this information, a full scale vacuum distillation based scintillator purification plant has been designed,¹³ and construction activities for the installation of the plant in SNOLAB have begun.

¹³ The SNO+ scintillator purification plant was designed by the same company which designed and built the BOREXINO scintillator purification plant.

Optical Characterization

The amount of light detected by a large liquid scintillator experiment like SNO+ after the passage of a charged particle is more difficult to predict than it would be in a smaller detector. Very pure scintillators can have absorption and scattering lengths of many metres at the important wavelengths, so in smaller detectors the amount of light detected basically depends only on the amount of light initially produced and on the detector geometry. In a larger experiment like SNO+, on the other hand, the path lengths for photons through the scintillator are of the order of the attenuation lengths, so a typical detected photon has been absorbed and re-emitted by the scintillator more than once (1.4 times on average for photons produced at the centre of the detector in the SNOMAN simulations described below). The wavelength and direction of the photon change with each re-emission. Thus, precise measurements of long attenuation and scattering lengths in the scintillator are necessary to achieve a reliable prediction of the scintillator light output, and Monte Carlo simulations must be used.

In order to provide such simulations for SNO+, the SNOMAN particle energy deposition and photon tracking code, which was developed for the SNO experiment and which, as a result, already contained excellent models of the SNO geometry and the optical response of the PMT array, was modified to include scintillator optical effects. As scintillator is much more optically active than the water for which the code was originally designed, this required fairly significant modifications to the photon production and propagation routines. The scintillator optical properties which are important to the light output of a large scintillation experiment, and which were therefore accommodated in the simulation, include the following (a toy propagation sketch follows the list):

1. Primary Light Output: This is the average number of photons initially produced when a given amount of energy is deposited in the scintillator (this number includes both the fraction of the energy deposited in the scintillator that creates electronic excitations capable of producing photons and the transfer and emission efficiencies described below). The light output depends on the composition of the scintillator cocktail and the type of particle responsible for the excitation.

2. Absorption Length: The (wavelength dependent) average distance travelled by a photon before it is absorbed by one of the scintillator components. The absorption of each scintillator component must be treated separately because the re-emission probability is different for each component.

3. Scattering Length: The (wavelength dependent) average distance travelled by a photon before it scatters (typically by Rayleigh scattering in clean scintillator), changing its direction but not its wavelength. The scattering length is common to the scintillator cocktail, and can vary with the component concentrations.

4. Quantum Efficiency: The efficiency with which a scintillator component converts electronic excitations into emitted photons. The fluor especially is chosen to have a high quantum efficiency. The quantum efficiency also gives the probability that an absorbed photon is re-emitted.

5. Non-Radiative Transfer Efficiency: The component-specific efficiency with which a molecular electronic excitation in a molecule of one scintillator component is transferred (typically by dipole interactions) to another scintillator component. This is especially important in the transfer of primary particle excitations from the solvent to the fluor, but it also plays a role in the re-emission of captured photons. The non-radiative transfer efficiency varies with the concentrations of the different scintillator components.

6. Emission Spectrum: The distribution which describes the relative probabilities of a photon emitted by a given scintillator component having different wavelengths.

7. Emission Time: The average time between the absorption of a photon by a scintillator component and its subsequent re-emission. This is not important in light output predictions, but it is necessary to simulate event reconstruction within the detector.

In a fairly extensive campaign, the properties listed have been measured or estimated for the LAB + 2 g/L PPO SNO+ scintillator cocktail (and the same cocktail with 0.1% Nd-carboxylate) [98, 93]; work is ongoing to improve some of these measurements, especially the long absorption and scattering lengths in LAB and Nd-LAB, and the LAB quantum efficiency. With the current best measurements incorporated, the SNOMAN-based simulation suggests that the light output for the SNO+ solar neutrino phase is adequate (∼520 PMT hits for a 1 MeV electron at the centre of the detector) and fairly robust against the uncertain measurements described above.
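To make the absorption/re-emission random walk described above concrete, the following is a minimal, purely illustrative Monte Carlo sketch (not the SNOMAN implementation): a single photon competes between absorption and Rayleigh scattering, with re-emission governed by an assumed quantum efficiency. The property values, the one-dimensional geometry, and the emission-spectrum helper are all invented for illustration.

```python
import random

# Illustrative (made-up) wavelength-independent properties; in the real
# simulation these are wavelength-dependent curves per scintillator component.
ABSORPTION_LENGTH_M = 10.0   # mean distance before absorption
SCATTERING_LENGTH_M = 30.0   # mean distance before Rayleigh scattering
QUANTUM_EFFICIENCY  = 0.8    # probability an absorbed photon is re-emitted
DETECTOR_RADIUS_M   = 6.0    # roughly the acrylic vessel scale

def sample_emission_wavelength():
    # Placeholder for sampling a component's emission spectrum.
    return random.gauss(420.0, 20.0)  # nm

def track_photon():
    """Follow one photon until it is lost or escapes; count re-emissions."""
    position, direction = 0.0, 1.0  # 1-D toy geometry for brevity
    re_emissions = 0
    while abs(position) < DETECTOR_RADIUS_M:
        # Distances to the next absorption and scattering are exponential.
        d_abs = random.expovariate(1.0 / ABSORPTION_LENGTH_M)
        d_sca = random.expovariate(1.0 / SCATTERING_LENGTH_M)
        position += direction * min(d_abs, d_sca)
        if d_abs < d_sca:
            # Absorbed: re-emitted (new wavelength, new direction) with prob. QE.
            if random.random() > QUANTUM_EFFICIENCY:
                return None  # photon lost
            re_emissions += 1
            _ = sample_emission_wavelength()
            direction = random.choice((-1.0, 1.0))
        else:
            # Rayleigh scattering: new direction, same wavelength.
            direction = random.choice((-1.0, 1.0))
    return re_emissions

counts = [track_photon() for _ in range(10000)]
survived = [c for c in counts if c is not None]
print("escape fraction:", len(survived) / len(counts))
print("mean re-emissions among escapees:", sum(survived) / len(survived))
```

Even this toy version shows the qualitative point of the text: when the detector size is comparable to the attenuation lengths, a detected photon has typically been absorbed and re-emitted at least once, so the detected light depends on the full set of optical properties rather than on the primary light output alone.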

6.2.2 Acrylic Vessel Hold-Down

Apart from its apparent acrylic compatibility, the other major contributing factor in selecting LAB from the scintillator candidates studied in the original scintillator selection trials was the emerging understanding that the ∼15% density mismatch between LAB and the surrounding light water could likely be tolerated by the SNO acrylic vessel. This began with an engineering opinion [99] that because the A.V. had been designed to support, with a wide safety margin, an internal over-density of ∼10%, and because the junction at the neck of the vessel was re-enforced such that the vessel should behave structurally as a complete sphere, there was no fundamental reason that the vessel should not be expected to tolerate a ∼15% internal under-density with a carefully designed hold-down system.

Continued work resulted in the creation of several candidate hold-down net designs, an example of which is shown in Figure 6.6. Finite-element calculations have been carried out based on these designs [100] and indicate that the A.V., thus anchored, would withstand even 100% internal under-density compared to the surrounding light water, provided that it were perfectly spherical. Surveying of the real A.V. in order to determine its actual deviation from sphericity, and studies of second order effects like potential increases in pressure on the A.V. beneath rope junctions, are ongoing.

Nevertheless, it appears that the ∼15% internal under-density associated with the use of LAB should be safely tolerated by the SNO+ acrylic vessel, and this opinion was supported by an independent engineering review of the A.V. hold-down design [101].

Figure 6.6: The results of a finite-element analysis of one of the candidate SNO+ acrylic vessel rope net hold-down configurations. The colour indicates the level of stress in the acrylic. Figure from [100].

6.2.3 SNO+ Current Status and Outlook

The SNO+ project is moving forward. The collaboration currently has about 65 members at 14 institutions in Canada, the United States, Great Britain, Portugal, and Germany. The project has received annual development funding from the Canadian Natural Sciences and Engineering Research Council since 2005, and support from the National Science Foundation for SNO+ research activities in the U.S. since 2007. The Canadian FedNor northern development agency provided additional seed funding in 2008. In June of 2009, capital funding for the SNO+ project was received from the Canada Foundation for Innovation to cover the final design, purchase and installation of the acrylic vessel hold-down net, the purchase and installation of the scintillator purification system, and the purchase of the liquid scintillator. Preparation for the installation of the scintillator purification plant has already begun, and work to prepare the SNO+ cavity for the installation of the acrylic vessel hold-down is also underway.

On the current schedule, the SNO+ detector is predicted to be filled and operational by 2011.

After a few months of “shakedown” operation, (natural) neodymium carboxylate will be added to the scintillator and the neutrinoless double beta decay experiment will be carried out. Later, perhaps in 2015, the neodymium will be removed and the SNO+ solar neutrino phase will begin. If at any point during this time enriched neodymium were to become available it would be installed at once, as would nanoparticle neodymium loading if it should be perfected. The geo-neutrino and reactor anti-neutrino signals should be detectable during both phases, and if a Galactic supernova were to occur it would be detectable in either phase. On the current schedule, then, the exciting scientific journey of the SNO+ experiment should begin in a few years’ time.

Chapter 7. The SNO+ Neutrinoless Double Beta Decay Search

7.1 Introduction to Neutrinoless Double Beta Decay

“Two neutrino” double beta decay (2νββ) is a rare second order weak process,

$$(Z, A) \rightarrow (Z+2, A) + e_1^- + e_2^- + \bar{\nu}_{e1} + \bar{\nu}_{e2}, \qquad (7.1)$$

which is observed when single beta decay from the initial (Z, A) nucleus to the intermediate (Z+1, A) nucleus is energetically disallowed, but the double beta decay is not. This situation occurs naturally in some even-even nuclei, and two-neutrino double beta decay has in fact been observed for some isotopes, including 48Ca, 76Ge, 96Zr, and 150Nd [102].

Another potential decay mode for these double beta decay isotopes is neutrinoless double beta decay (0νββ),

$$(Z, A) \rightarrow (Z+2, A) + e_1^- + e_2^-. \qquad (7.2)$$

In this process, rather than emitting two anti-neutrinos, the decaying nucleons instead exchange a virtual neutrino1. In order for this process, which explicitly violates lepton number conservation, to occur, the neutrino must be massive and invariant under charge conjugation. Such “Majorana” neutrinos differ from the corresponding anti-neutrinos only in their helicity. As discussed in the earlier sections, neutrino oscillation experiments have demonstrated that neutrinos are massive; the search for neutrinoless double beta decay thus seeks to determine whether or not neutrinos are invariant under charge conjugation (i.e. whether they are “Majorana Particles” or “Dirac Particles”).

1 It is also possible that neutrinoless double beta decay could involve the emission of a neutral boson, $(Z, A) \rightarrow (Z+2, A) + e_1^- + e_2^- + \chi$, or arise via right-handed weak currents or the exchange of exotic heavy particles [103, 104]. Such exotic processes are, however, beyond the scope of this thesis and will not be discussed further.

7.1.1 The Neutrinoless Double Beta Decay Rate

A detailed description of the theoretical mechanism behind neutrinoless double beta decay is beyond the scope of this work. The overview given below highlights the experimentally important points, and is largely drawn from the excellent reviews given in [103], [105] and [104]. For more details, the reader is referred to those works and the references contained therein.

The half-life for neutrinoless double beta decay, $T^{0\nu}_{1/2}$, is given to good approximation (for $0^+ \rightarrow 0^+$ transitions, which form the experimentally interesting cases) by

$$\left[ T^{0\nu}_{1/2} \right]^{-1} = G^{0\nu}(E_0, Z) \left| M^{0\nu} \right|^2 \langle m_\nu \rangle^2, \qquad (7.3)$$

where the matrix element $M^{0\nu}$ and the effective neutrino mass $\langle m_\nu \rangle$ will be discussed further below. The phase space factor is given by

$$G^{0\nu} \propto \int F(Z, \epsilon_1) F(Z, \epsilon_2)\, p_1 p_2\, \epsilon_1 \epsilon_2\, \delta(E_0 - \epsilon_1 - \epsilon_2)\, d\epsilon_1\, d\epsilon_2, \qquad (7.4)$$

where $F(Z, \epsilon)$ is the Fermi Function, $E_0$ is the energy released during the double beta decay, and the $p_i$'s and $\epsilon_i$'s are the momenta and total energies, respectively, of the two emitted electrons. $G^{0\nu}$ is in principle exactly calculable.
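As a crude illustration of why $G^{0\nu}$ is exactly calculable, the delta function in Equation 7.4 can be used to reduce the integral to one dimension and evaluate it numerically. The sketch below does this with the simple non-relativistic ("primitive") Fermi function $F(Z, \epsilon) \approx 2\pi\eta/(1 - e^{-2\pi\eta})$, $\eta = Z\alpha/\beta$ — an assumption made here for brevity, not the treatment used in the published phase space calculations.

```python
import math

ALPHA = 1.0 / 137.036   # fine structure constant
ME = 0.511              # electron mass (MeV)

def fermi_nonrel(Z, e_tot):
    """Primitive non-relativistic Fermi function; e_tot is total energy in MeV."""
    p = math.sqrt(e_tot**2 - ME**2)
    beta = p / e_tot
    eta = Z * ALPHA / beta
    return 2 * math.pi * eta / (1.0 - math.exp(-2 * math.pi * eta))

def phase_space(Z, E0, steps=2000):
    """G ~ integral of F(e1)F(e2) p1 p2 e1 e2 de1, with e2 = E0 - e1 after the
    delta function is integrated out. E0 is the endpoint plus two rest masses."""
    total = 0.0
    de = (E0 - 2 * ME) / steps
    for i in range(1, steps):
        e1 = ME + i * de
        e2 = E0 - e1
        p1 = math.sqrt(e1**2 - ME**2)
        p2 = math.sqrt(e2**2 - ME**2)
        total += fermi_nonrel(Z, e1) * fermi_nonrel(Z, e2) * p1 * p2 * e1 * e2 * de
    return total

# Relative phase space for the 150Nd endpoint (Q = 3.37 MeV, daughter Z = 62)
# versus a hypothetical Q = 2.5 MeV decay: higher endpoints are strongly favoured.
g_nd = phase_space(62, 3.37 + 2 * ME)
g_lo = phase_space(62, 2.50 + 2 * ME)
print("phase space ratio (Q=3.37 / Q=2.5):", g_nd / g_lo)
```

The strong growth of the phase space with endpoint energy is one of the reasons, discussed later in this chapter, that high-Q isotopes like 150Nd are attractive.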

In the case of two neutrino double beta decay, the matrix elements are given by the standard overlap integrals between the initial, intermediate, and final nuclear states ($|i\rangle$, $|m\rangle$, and $|f\rangle$, respectively):

$$M^{2\nu} = \sum_m \frac{\langle f | \sigma \tau_+ | m \rangle \langle m | \sigma \tau_+ | i \rangle}{E_m - (M_i + M_f)/2}, \qquad (7.5)$$

where the sum is over the ($1^+$) states of the intermediate nucleus with energy $E_m$, $M_i$ and $M_f$ are the nuclear masses of the initial and final states, $\tau_+$ is the isospin raising operator (describing the conversion of neutrons to protons) and $\sigma$ is a Pauli matrix (which describes the spin changes inherent to this Gamow-Teller type transition). Therefore, the matrix element of the two-neutrino double beta decay can essentially be calculated as the product of the standard matrix elements for two independent beta decays. In neutrinoless double beta decay, on the other hand, this simplification is not possible because the requisite neutrino exchange couples the two decays; the emission of a

Majorana neutrino at position x, which is subsequently re-absorbed at position y, adds the following to the matrix element:

$$-i \int \frac{d^4q}{(2\pi)^4}\, e^{-iq(x-y)}\, \bar{e}(x)\, \gamma_\rho P_L\, \frac{q^\mu \gamma_\mu + m}{q^2 - m^2}\, P_L \gamma_\sigma\, e^c(y), \qquad (7.6)$$

where q is the momentum 4-vector and m is the mass of the neutrino, $\bar{e}(x)$ and $e^c(y)$ are electron creation operators, the γ's are the standard Dirac matrices, and $P_L = (1 - \gamma_5)/2$ is the chirality projector which “picks out” the proper chirality component (which for Majorana neutrinos is similar to “picking out” the neutrino or anti-neutrino component, as discussed above) at the two vertices.

Note the presence of the neutrino mass term in the numerator. Integrating this expression over the neutrino energy and momentum yields the “neutrino potential” terms

∞ 2m q sin(qr) H(r) = dq , (7.7) πr w(w + E (M + M )/2 (  )/2 Z0 m − i f  1 − 2 where r = x y and w = q 2 + m2. These potentials are then averaged to give H(r) = (H (r) + − →− + p H−(r))/2, which appears in the matrix element:

$$M^{0\nu} = \langle f |\, \sum_{lk} H(r_{lk})\, \tau_{+l} \tau_{+k} \left( \vec{\sigma}_l \cdot \vec{\sigma}_k - \frac{g_V^2}{g_A^2} \right) | i \rangle, \qquad (7.8)$$

where the sum is over nucleons which could participate in the decay, $g_V$ and $g_A$ are the vector and axial coupling constants, respectively, and the multiplicative neutrino mass factor m has been factored out2. The 0νββ half-life is thus

$$\left[ T^{0\nu}_{1/2} \right]^{-1} = G^{0\nu}(E_0, Z)\, \langle m M^{0\nu} \rangle^2. \qquad (7.9)$$

The neutrino mass m which multiplies Equation 7.9 now needs to be considered in more detail, in light of the misalignment between the neutrino mass and flavour eigenstates discussed in Section 2.4. The neutrino coupling at each of the electron creation vertices in the neutrinoless double beta decay occurs through the weak interaction, which in this case couples to the electron neutrino flavour eigenstate, and from Equation 2.4 we see that $|\nu_e\rangle = U_{e1} |\nu_1\rangle + U_{e2} |\nu_2\rangle + U_{e3} |\nu_3\rangle$. Thus, the neutrinoless double beta decay matrix element actually contains three “copies” of Equation 7.8, one for each mass eigenstate, with each of them weighted by the mass of that eigenstate times the square3 of the appropriate entry in the PMNS matrix. The actual mass term in Equation 7.9 is thus a sum,

$$m = \sum_j m_j U_{ej}^2, \qquad (7.10)$$

where $m_j$ is the mass of the $j^{\rm th}$ neutrino mass eigenstate. The neutrino mass and nuclear matrix element terms in Equation 7.9 are usually then separated to yield Equation 7.3 with the “effective Majorana neutrino mass” defined as

$$\langle m_\nu \rangle = \left| \sum_j m_j U_{ej}^2 \right|. \qquad (7.11)$$

2 This formula is actually an approximation considering only s-type electron wave-functions and non-relativistic nucleons, but is considered quite accurate for $0^+ \rightarrow 0^+$ transitions.

A final important point is that for Majorana neutrinos, the PMNS matrix is more general than in the Dirac case introduced in Section 2.4. In particular, the Majorana PMNS matrix contains two additional complex phases (provided that there are three light neutrino mass eigenstates) such that

$$\hat{U}_{Majorana} = \hat{U}_{Dirac} \times \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^{i\alpha} & 0 \\ 0 & 0 & e^{i\beta} \end{pmatrix}. \qquad (7.12)$$

These complex terms thus appear (squared) in the sum in Equation 7.11 and lead to the possibility of destructive interference between the mass eigenstates in the effective neutrino mass. The neutrino mass splittings that have been measured by the oscillation experiments can, however, still be used to set upper and lower limits on $\langle m_\nu \rangle$ as a function of the minimum neutrino mass. These are shown in Figure 7.1.
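The structure of Equations 7.10-7.12 is simple enough to evaluate directly. The sketch below computes $\langle m_\nu \rangle$ for the normal-hierarchy ordering as a function of the minimum neutrino mass, scanning the unknown Majorana phases to produce the kind of allowed band shown in Figure 7.1. The mixing and splitting values are round numbers of roughly the magnitude discussed in this thesis, not a fit.

```python
import math
import cmath

# Round illustrative oscillation parameters (tan^2(theta12) ~ 0.45, small theta13).
S12_SQ = 0.31            # sin^2(theta12)
S13_SQ = 0.02            # sin^2(theta13), assumed small
DM21_SQ = 8.0e-5         # eV^2, solar splitting
DM31_SQ = 2.5e-3         # eV^2, atmospheric splitting

def effective_mass(m_min_ev, alpha, beta):
    """<m_nu> = | sum_j m_j U_ej^2 | for the normal hierarchy (Eqs. 7.10-7.12)."""
    m1 = m_min_ev
    m2 = math.sqrt(m1**2 + DM21_SQ)
    m3 = math.sqrt(m1**2 + DM31_SQ)
    # U_ej^2 terms, with the Majorana phases appearing squared (e^{2i*phase}).
    u1_sq = (1 - S12_SQ) * (1 - S13_SQ)
    u2_sq = S12_SQ * (1 - S13_SQ) * cmath.exp(2j * alpha)
    u3_sq = S13_SQ * cmath.exp(2j * beta)
    return abs(m1 * u1_sq + m2 * u2_sq + m3 * u3_sq)

def band(m_min_ev, steps=60):
    """Scan the unknown phases to find the allowed range of <m_nu>."""
    values = [effective_mass(m_min_ev, a * math.pi / steps, b * math.pi / steps)
              for a in range(steps + 1) for b in range(steps + 1)]
    return min(values), max(values)

for m_min in (0.001, 0.01, 0.05, 0.2):  # eV
    lo, hi = band(m_min)
    print(f"m_min = {m_min*1000:6.1f} meV -> <m_nu> in [{lo*1000:.1f}, {hi*1000:.1f}] meV")
```

The lower edge of the band drops towards zero where the phase-driven cancellation discussed above can be complete, which is exactly the behaviour of the normal-hierarchy region in Figure 7.1.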

7.1.2 Implications of Detecting Neutrinoless Double Beta Decay

The detection of neutrinoless double beta decay would have important implications for particle physics and cosmology. First of all, the detection of neutrinoless double beta decay would demonstrate that neutrinos are Majorana particles. This would confirm the existence of the two additional complex phases in the PMNS matrix, making it more natural that large CP violation in the lepton sector could be responsible for the observed matter-antimatter asymmetry in the Universe. Majorana neutrinos would also admit the “see-saw mechanism” [87], which naturally explains a small but finite neutrino mass (which is otherwise difficult to explain theoretically).

The second important implication of the detection of neutrinoless double beta decay would be a measurement of the neutrino mass. Although the effective Majorana neutrino mass that would be measured does not necessarily correspond exactly to the physical neutrino mass (because of the possible cancellations due to the complex phases discussed above), given a measurement of $\langle m_\nu \rangle$, the limiting relationships shown in Figure 7.1 can be used to determine an allowed range for the physical neutrino mass.

3 The squared term is necessary because there are two neutrino coupling vertices.

Figure 7.1: The limits on the effective Majorana neutrino mass as a function of the minimum neutrino mass. As the splittings between the mass eigenstates are known but their ordering is not, there are two potential scenarios: the “normal” hierarchy, in which the largest mass splitting occurs between the two heaviest mass eigenstates, and the “inverted” hierarchy, in which the largest mass splitting occurs between the two lightest mass eigenstates. These scenarios are essentially indistinguishable if the minimum neutrino mass is large compared to both mass splittings, which is referred to as the “degenerate” scenario. Figure from [105].

7.1.3 Calculation of Double Beta Decay Matrix Elements

In order to convert between observed neutrinoless double beta decay rates (or rate limits) and bounds (or limits) on the effective Majorana neutrino mass, it is necessary to evaluate the phase space and nuclear matrix element terms in Equation 7.3. As mentioned earlier, the phase space term can be calculated exactly. The matrix element term, on the other hand, is much more challenging to calculate, especially for spin-deformed nuclei like 150Nd. Two main techniques are used in numerically evaluating $M^{0\nu}$: Nuclear Shell Model (NSM) calculations, and calculations based on the Quasiparticle Random Phase Approximation (QRPA). Again, only very brief outlines of these approaches are given here, with the reader referred to [103, 105, 104] and the references therein for more information.

In NSM calculations, the nucleons are individually modelled and their interactions described by an effective Hamiltonian which is tuned to properly reproduce the observed energy level splittings and transition probabilities of that nucleus. NSM calculations can in principle account for nuclear deformation. However, computers are not yet powerful enough to treat more than a few nucleons at a time in NSM calculations; the effects of the remaining nucleons must then be treated in some approximate way.

In QRPA, on the other hand, more significant approximations are made, which means that the calculation requires more empirical tuning but which allows more nucleons to be included in the calculation. At its most basic level, QRPA involves replacing interacting pairs of nucleons with boson “quasi-particles” and modelling the excited intermediate nuclear states as phonon-like states in a vacuum. These approximations mean that a larger number of nucleons can be treated in a given calculation (although there is still usually assumed to be an “inert core” of inner nucleons), but they also mean that the particle-particle correlations are no longer treated properly. This has an effect on the calculated nuclear transition rates, and so an empirical correction factor $g_{pp}$, which must be tuned against data from other types of nuclear transitions, is introduced to compensate. It turns out that the predicted rate of 2νββ decays depends strongly on $g_{pp}$, and much of the work on QRPA involves attempting to lessen its sensitivity to this parameter and developing new ways to fix $g_{pp}$. One popular recent approach is renormalized QRPA, or RQRPA. Nuclear deformations can in principle be included in QRPA and RQRPA calculations “by hand,” but this has not yet been attempted for 0νββ matrix elements.

The uncertainties in both NSM and QRPA calculations of $M^{0\nu}$ are both relatively large and poorly defined. For lack of a better method of estimating the uncertainties, the spread in the $M^{0\nu}$ values calculated by different methods, and by different evaluations using the same method, is often used. In the smaller, undeformed (and presumably simpler to calculate) nuclei, like 76Ge, the calculated values of $M^{0\nu}$ differ by factors of more than three [103, 104]; in the larger spin-deformed nuclei, the uncertainties in the matrix elements are likely larger, but have not yet been even approximately defined.

150Nd, the isotope of interest to SNO+, falls into this category of large, spin-deformed nuclei. Several attempts at evaluating its matrix element using QRPA/RQRPA have been made [106, 107, 108, 109], all of which have neglected the deformity of the nucleus. In addition, a single NSM calculation has been attempted [110], with the basis of nucleons severely truncated in order to make the calculation tractable. That calculation did, however, attempt to include nuclear deformation effects. The results of these different matrix element calculations (in terms of the 0νββ half-life that they predict for $\langle m_\nu \rangle$ = 50 meV) are shown in Table 7.1.

Calculation Method | $T^{0\nu}_{1/2}$ for $\langle m_\nu \rangle$ = 50 meV (a) | Reference
QRPA               | 1.0x10^25                                                | [106]
RQRPA              | 2.0x10^25                                                | [107]
RQRPA              | 2.0x10^25                                                | [108]
RQRPA              | (2.0 - 4.4)x10^25                                        | [109]
QRPA               | (1.6 - 3.5)x10^25                                        | [109]
NSM                | 4.2x10^26                                                | [110]

Table 7.1: The $T^{0\nu}_{1/2}$ values predicted for 150Nd at $\langle m_\nu \rangle$ = 50 meV for several different calculations of $M^{0\nu}$. In some cases, a range of allowed values (set by the level of variance in the numerical calculation, by uncertainty in the experimental measurements of $T^{2\nu}_{1/2}$ that were used to tune the calculations, and by uncertainty in the axial coupling constant used) is shown. In all of these calculations the nucleus is assumed to be spherical.

7.2 Neutrinoless Double Beta Decay in SNO+

The search for neutrinoless double beta decay involves attempting to detect a small, mono-energetic neutrinoless double beta decay peak at the endpoint of a much larger two-neutrino double beta decay continuum. Double beta decay experiments thus require good energy resolution (so that the 2νββ continuum is not “smeared out” to such a degree that the 0νββ signal region is not statistically accessible at an interesting level), low background in the 0νββ energy region, and large mass (so that a reasonable number of the extremely rare 0νββ events might be expected).

In SNO+, the double beta decay search will be carried out by suspending neodymium in the LAB scintillator. Natural neodymium contains 5.9% 150Nd, which is a double beta decay isotope decaying to 150Sm with an endpoint energy of 3.37 MeV. The 150Nd 2νββ half-life has been measured to be (9.2 ± 0.8)x10^18 a4, and the current experimental 90% C.L. lower limit on the 0νββ half-life is 1.8x10^22 a [102]. At the 0.1% (w/w) loading that optical studies suggest will be tolerable in SNO+, the 860 tonnes of SNO+ liquid scintillator would hold 48 kg of 150Nd isotope. This is a competitive amount of isotope compared to other near-term double beta decay experiments (see Section 7.5.1). The SNO+ collaboration is also investigating the possibility of obtaining neodymium enriched in 150Nd5; neodymium enriched to 50% 150Nd would mean 430 kg of isotope in SNO+, making the experiment one of the largest double beta decay searches currently envisioned.

4 There may in fact be an additional uncertainty on the 2νββ half-life due to the fact that the NEMO3 experiment [102], which is responsible for the best current $T^{2\nu}_{1/2}$ measurement, explicitly rejected events with co-incident γ-rays [111]. To the extent of the NEMO3 γ-ray detection efficiency, then, the reported $T^{2\nu}_{1/2}$ includes only decays to the ground state of 150Sm. Therefore, for the purposes of SNO+, it may be necessary to add (some fraction of) the $1.33^{+0.45}_{-0.26} \times 10^{20}$ a branch to the first 150Sm $0^+$ excited state [112] to the 2νββ rate inferred from the NEMO3 result.
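The isotope masses quoted above follow from simple arithmetic, reproduced in the snippet below as a check; note that the 48 kg figure corresponds to the 5.6% 150Nd mass fraction assumed in the baseline simulations of Section 7.5.

```python
SCINTILLATOR_MASS_KG = 860e3   # 860 tonnes of LAB
ND_LOADING = 0.001             # 0.1% (w/w) neodymium

def isotope_mass_kg(nd150_fraction):
    """Mass of 150Nd in the detector for a given 150Nd mass fraction."""
    return SCINTILLATOR_MASS_KG * ND_LOADING * nd150_fraction

print("natural Nd (5.6%% 150Nd): %.0f kg" % isotope_mass_kg(0.056))   # ~48 kg
print("enriched Nd (50%% 150Nd): %.0f kg" % isotope_mass_kg(0.50))    # ~430 kg
```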

7.3 Development of Neodymium Loaded Liquid Scintillator

In early 2004, the idea of dissolving neutrinoless double beta decay isotope into a SNO-based scintillator experiment began to receive significant attention within the SNO+ working group. A similar idea, involving the dissolution of Xe gas into a large liquid scintillation detector, had been proposed earlier [114, 115]. The principal advantage of this type of experiment would be scale; even a few percent loading in a kilo-tonne scale scintillator experiment gives tens of tonnes of material, well beyond the near-term reach of any other double beta decay search technique. Xe was an interesting element for this type of experiment because of its relatively high (∼2%) solubility in scintillator and the relative ease with which it can be enriched in the 136Xe double beta decay isotope. Xe also has the advantage that it does not chemically interact with the scintillator and does not significantly absorb light in the wavelength regime important to liquid scintillator function. On the other hand, the predicted 0νββ rate for 136Xe is quite low, even at large loading, and the predicted ratio of 0νββ / 2νββ rates is also low6. In addition, an endpoint energy of ∼2.5 MeV makes the Xe measurement particularly sensitive to the 2.6 MeV γ-ray background from external 208Tl, and to internal 214Bi background.

Other double beta decay isotopes exist which have higher endpoint energies, better 0νββ / 2νββ ratios, and higher rates than 136Xe [103]. However, the loading of these isotopes into liquid scintillator is more challenging. One potential solution, which was studied extensively by the SNO+ group during this time, was the possibility of loading the double beta decay isotope into the liquid scintillator as a nanocrystalline suspension. The impetus here was the observed effect that for very small nanoparticles, the size of the particles can have a significant effect on the optical activity of the constituent atoms. In particular, as the nanoparticle radius decreases, in general so does the wavelength of the absorbed and emitted light [116]. The thought, then, was that for sufficiently small nanoparticles the optical absorption might be pushed beyond the important region for optical transmission in a liquid scintillator. The amount of scattering is also expected to decrease as the particles are made smaller [117], so for sufficiently small nanoparticles one might expect to regain the optical benefits of Xe loading, but with the decay benefits of another isotope7.

5 Neodymium enrichment is currently only possible through the atomic vapour laser isotope separation (AVLIS) process [113]. A large AVLIS facility called MENPHIS for uranium enrichment has been developed in France; discussions about the potential to use this facility for neodymium enrichment are ongoing, but at the present time are not promising. Possible alternative AVLIS facilities in Russia are also being investigated.
6 As will be seen, the 2νββ signal is one of the dominant backgrounds to the 0νββ signal in scintillator experiments, so the 0νββ / 2νββ ratio is a very important consideration in choosing an isotope for a scintillator based neutrinoless double beta decay search.

To investigate this possibility, suspensions of various types of nanoparticles in water and pseudocumene8 were obtained from a commercial source9. These nanoparticles were all nominally smaller than 3 nm in diameter and were loaded into the liquids at approximately 0.5% by mass using proprietary surfactant techniques. By this time, 150Nd had emerged as a leading candidate for the double beta decay experiment based on its high endpoint energy, high predicted 0νββ / 2νββ ratio, reasonable natural abundance, and high predicted 0νββ decay rate, so neodymium was paid particular attention. As was hoped, the nanoparticle neodymium suspensions exhibited significant reductions in neodymium atomic absorption, as can be seen in Figure 7.2.

The stability of the nanoparticle suspensions was variable, however, with many of them (especially those in water) forming visible crystalline precipitates after periods of days to weeks. The Nd2O3(np) in pseudocumene has not formed a visible precipitate after more than 5 years. Changes were observed in its optical attenuation over time, however, along with the potential plate-out of material onto the sides of the storage vessel. Also, the lack of optical activity in the neodymium meant that the loading level claimed by the company could not be confirmed. A brief summary of the nanoparticle investigations can be found in [118].

Although the nanoparticle loading technique is promising, and is in fact still being pursued by some members of the SNO+ collaboration, the instability seen in the nanoparticle suspensions, coupled with the necessity of having a viable liquid scintillator experiment demonstrated and ready to begin when the SNO experiment was completed, meant that other avenues of neodymium loading were pursued. In particular, work following previously developed (although still experimental) techniques of loading neodymium [119] and gadolinium [120] into organic solution via organometallic compounds quickly led to the successful creation of a stable neodymium-carboxylate loaded liquid scintillator. As shown in Figure 7.2, the neodymium atomic absorption is fully present in these organometallic compounds. Parallel efforts to add scintillator and neodymium optical properties to the SNOMAN Monte Carlo simulation code (described in Section 6.2.1) showed that neodymium loading of 0.1% w/w might still be possible in spite of this absorption (see Figure 7.3).

7 Nanoparticle loading is also attractive as it would likely be relatively material non-specific, meaning that a number of different double beta decay isotopes could be run in sequence.
8 This is the liquid scintillator used in the BOREXINO experiment and, diluted with mineral oil, in the KamLAND experiment.
9 Applied NanoWorks, Watervliet NY. Now called Auterra Inc.

Figure 7.2: The optical attenuation of a 0.6% Nd2O3(np) suspension in pseudocumene (plotted as the attenuation coefficient A, in cm^-1, over the 350-600 nm wavelength range) compared to the attenuation measured for neodymium loaded into the LAB scintillator (discussed below) in organometallic form (as Nd-carboxylate); the measurement was made at 0.1% neodymium loading and scaled to 0.5%. The increase in attenuation in the nanoparticle sample at low wavelengths is the absorption of the pseudocumene; the corresponding LAB absorption has been subtracted out of the Nd-carboxylate reference. The strong neodymium absorption lines visible in the carboxylate sample appear to be absent in the nanoparticle suspension.

Work is ongoing to confirm that the light output of the actual neodymium-carboxylate loaded LAB scintillator that will be used in SNO+ will be sufficient at 0.1% loading. The primary light output of neodymium-loaded scintillator has been shown to be equal to that of unloaded scintillator to within ∼10% through direct measurements in the SNO detector10. The overall simulated light output of the neodymium-loaded scintillator, however, is currently lower (∼90 PMT hits for a 1 MeV central electron) than would be ideal, and quite dependent on some of the scintillator optical properties that are the most challenging to measure accurately (it is especially sensitive to the long scintillator absorption and scattering lengths and the re-emitted photon spectra). It may well be that with improved measurements of these properties, the simulated light output in the SNO+ double beta decay phase will increase to match the early predictions shown in Figure 7.3; if not, the neodymium loading may have to be reduced to increase the amount of detected light. For now, however, it is assumed that the target of 400 NHit/MeV at 0.1% loading can be met.

Figure 7.3: The simulated light output (in NHit/MeV) for electrons at the centre of SNO+ as a function of neodymium loading (in weight %). This plot was made using the scintillator optical properties of pseudocumene with 1.5 g/L PPO wavelength shifter as measured by the BOREXINO collaboration [121] and the neodymium absorption spectrum as measured in a NdCl3(aq) solution.

10 A clear acrylic cylinder “source” was constructed, filled with scintillator, and deployed into the light water filled SNO detector. By placing an AmBe neutron source near the cylinder, the light output of the scintillator could be observed using the SNO photomultiplier array. Repeating this measurement with both neodymium-loaded and unloaded scintillator allowed the primary light outputs to be compared (the cylinder was too small for the other scintillator optical properties to be important). See [122] for more information.

7.4 Simulations to Estimate the SNO+ Sensitivity to Neutrinoless Double Beta Decay

In order to estimate the sensitivity of the SNO+ experiment to neutrinoless double beta decay (and the other physics signals of interest) a standalone Monte Carlo simulation was developed which allowed the sensitivity of the experiment to be studied under different detector conditions.

7.4.1 Simulation Technique

The physics sensitivity simulations discussed here were separate from the detailed SNOMAN-based simulations of the SNO+ light output described earlier. Work on producing a detailed Monte Carlo (including event-by-event simulation of the energy deposition and light propagation within the detector, and employing the same event reconstruction algorithms that will be used on the data) is ongoing. These studies will give a more detailed understanding of the factors influencing the SNO+ physics sensitivity. However, these detailed simulations are not yet sufficiently developed to be used in physics sensitivity predictions, and (especially in the early stages of an experiment) a great deal can be learned from significantly simpler and faster simulations. The simulations described here are just such simpler simulations, and they have been used to demonstrate the SNO+ physics potential and to guide the development of the experiment.

In these simulations, high statistics spectra of the expected energy deposition in the scintillator by the radioactive backgrounds of interest were first simulated using the EGS4 (a standard particle interaction code) portion of SNOMAN, and the expected energy spectra of solar neutrino interactions (including the effects of MSW oscillations) were calculated using the QPhysics neutrino propagation and elastic scattering code developed by the SNO collaboration. These spectra were generated only once and stored as binned histograms. The 2νββ spectrum was taken as having the approximate analytic form

$$\frac{dN}{dK} \sim K (T_0 - K)^5 \left( 1 + 2K + \frac{4K^2}{3} + \frac{K^3}{3} + \frac{K^4}{30} \right), \qquad (7.13)$$

where K is the sum of the electron energies and $T_0$ is the endpoint energy of the decay [123]11.

Together, these distributions provided a set of “known spectra” upon which the simulations were based.

When performing a simulation, the “known spectra” described above were used to create two additional distributions: the “data distributions” from which the fake data sets were drawn, and the PDFs that were used to fit the fake data sets. These new distributions were created by applying a simulated detector energy resolution to the “known spectra.” The standard energy resolution applied was a Gaussian smearing (applied as a numerical convolution between the binned energy deposition spectra and a Gaussian curve), the width of which was determined by counting statistics on the number of PMTs expected to detect photons for an event of a given energy12. It was possible in the simulations to apply different energy resolutions to the data distributions and the PDFs (in order to study the effects of an improper understanding of the detector energy resolution) and to add exponential “tails” to the data distributions (in order to test the experiment’s sensitivity to possible systematic effects in energy reconstruction).
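As a rough illustration of this smearing step, the sketch below convolves a binned spectrum with an energy-dependent Gaussian whose width follows the counting-statistics scaling described in footnote 12 (σ/E ≈ 1/√NHit, at an assumed 400 NHit/MeV). This is an illustrative reimplementation, not the thesis code.

```python
import math

NHIT_PER_MEV = 400.0  # assumed light output

def sigma_mev(energy_mev):
    """Counting-statistics resolution: sigma/E ~ 1/sqrt(NHit)."""
    nhit = NHIT_PER_MEV * energy_mev
    return energy_mev / math.sqrt(nhit)

def smear_spectrum(bin_centres, contents):
    """Numerically convolve a binned spectrum with a Gaussian resolution."""
    smeared = [0.0] * len(contents)
    width = bin_centres[1] - bin_centres[0]
    for e_true, n in zip(bin_centres, contents):
        if n == 0.0:
            continue
        s = sigma_mev(e_true)
        for j, e_obs in enumerate(bin_centres):
            gauss = math.exp(-0.5 * ((e_obs - e_true) / s) ** 2)
            smeared[j] += n * gauss * width / (s * math.sqrt(2 * math.pi))
    return smeared

# Toy example: a mono-energetic line near the 3.37 MeV endpoint smears into a
# peak of width ~3.37/sqrt(400*3.37) ~ 0.09 MeV.
centres = [0.01 * k + 0.005 for k in range(500)]   # 0-5 MeV in 10 keV bins
spectrum = [0.0] * 500
spectrum[337] = 1.0e4                              # line at ~3.37 MeV
print("peak sigma (MeV):", sigma_mev(3.37))
smeared = smear_spectrum(centres, spectrum)
print("content in the highest smeared bin:", max(smeared))
```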

To generate fake data sets, each of the data distributions described earlier was scaled to contain the average number of events of that type desired in the fake data set. For each bin in the fake data set, the average contribution of a given class of events was then simply the entry of the same bin in the relevant (scaled) data distribution13. The true contribution of that class of event to that bin in the fake data set was then determined by randomly drawing a number from a Poisson distribution with that mean. Especially for large signals, this method is much quicker computationally than filling the data histogram by repeatedly randomly sampling the data distribution histograms to simulate individual events.

11 This formula is approximate in that non-relativistic Coulomb effects are assumed in the Fermi Functions; including the relativistic effects numerically has been shown to shift the peak in the 2νββ spectrum lower in energy and to steepen the spectrum near the endpoint [104].
12 Energy reconstruction in SNO+, as in SNO, will consist of a mapping between the energy deposited in the scintillator by an event and the number of PMTs which detect photons as a result (“NHits”). The limit on the obtainable energy resolution is then set by the statistical fluctuations in the number of PMTs hit. For example, at a light output of 400 NHit/MeV, 2 MeV electrons are expected to produce an average of 800 PMT hits with a 1σ statistical fluctuation of √800, or about 28. So the 1σ fractional energy resolution is expected to be approximately 1/√NHit. The actual resolution will be broadened somewhat by effects such as the position-dependent energy response of the detector. These detailed effects will be one of the main subjects of study of the detailed simulation code once it is completed.
13 Note that for this procedure to work properly, the fake data histogram and the data distribution histograms had to be identically binned.
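A minimal sketch of this bin-by-bin Poisson draw (again an illustrative reconstruction, with made-up event counts rather than the rates of Section 7.5) might look like:

```python
import math
import random

def poisson_draw(mean):
    """Poisson random draw via Knuth's method (adequate for modest means)."""
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def make_fake_data(pdfs, expected_counts):
    """Draw one fake data set: for each bin, sum Poisson draws whose means
    come from the identically-binned, scaled data distributions."""
    n_bins = len(next(iter(pdfs.values())))
    data = [0] * n_bins
    for name, shape in pdfs.items():
        norm = sum(shape)
        for i, s in enumerate(shape):
            data[i] += poisson_draw(expected_counts[name] * s / norm)
    return data

# Toy example with two flat "spectra" and made-up normalizations.
pdfs = {"2vbb": [1.0] * 40, "8B": [1.0] * 40}
fake = make_fake_data(pdfs, {"2vbb": 4000, "8B": 200})
print("total events in fake data set:", sum(fake))
```

The computational advantage noted in the text is visible here: the cost scales with the number of bins, not with the (potentially enormous) number of simulated 2νββ events.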

7.4.2 Backgrounds Simulated

Only internal backgrounds (i.e. backgrounds originating from the LAB scintillator itself) were included in this simulation. External backgrounds, principally neutrons and γ-rays produced in the light water and acrylic vessel which then propagate into the scintillator volume, will also be present in the experiment, but their numbers fall off quickly with distance from the edge of the detector. Independent simulations [124] suggest that the rate of external background events, dominated by 208Tl from the acrylic vessel, will equal the rate of 0νββ events approximately 1.5 m in from the acrylic vessel. For this reason, SNO+ will likely operate with a fiducial radius of 4.5 m (i.e. only those events which reconstruct within 450 cm of the centre of the detector will be included in the analysis), and neglecting the external backgrounds within this fiducial radius is a reasonable first approximation.

As shown in Figure 7.4, many potential internal backgrounds, including 40K, 39Ar, 85Kr, and all of the isotopes in the 238U and 232Th chains, were included in the simulations. These isotopes (along with 14C, which has a lower energy endpoint) are believed to account for the radiogenic backgrounds observed in the KamLAND [125] and BOREXINO [22] experiments. Most of these backgrounds have decay energies somewhat lower than the 0νββ peak and were included in the simulation to make possible low energy solar neutrino sensitivity studies [66]. As will be seen, the SNO+ neutrinoless double beta decay search is essentially insensitive to the energy spectrum below 2.5-3.0 MeV, so the only internal backgrounds important to the double beta decay sensitivity are 208Tl in the 232Th chain and 214Bi in the 238U chain.

Additional internal backgrounds from in situ cosmogenic activation are also possible, but studies of the potential cosmogenic activation of both scintillator [68] and neodymium [126, 127] have so far failed to identify any isotopes with half-lives long enough and decay energies high enough to pose a significant concern to the double beta decay measurement14.

14 Although, with a 285 day half-life, 144Ce (which supports 144Pr, which has a 3 MeV beta decay) produced in the neodymium before it is brought underground may be a concern. This is currently under investigation.

[Figure 7.4 plot: events / 10 keV versus energy (MeV) over 0-5 MeV, showing the total spectrum, the pep, CNO, 7Be, and 8B solar neutrinos, the 2ν and 0ν double beta decay signals, and the simulated radioactive background isotopes.]

Figure 7.4: A spectrum generated by the SNO+ simulation including low energy signals and backgrounds. The backgrounds below about 3.0 MeV are unimportant to the determination of the SNO+ 0νββ sensitivity (due to the overwhelming 2νββ statistics in this region), but have an important impact on the sensitivity to low energy solar neutrinos. Due to space restrictions, not all simulated decays are listed in the legend. The event rates in this simulation differ slightly from the levels described in the text. This plot includes the detector energy resolution expected at a light output of 400 NHit/MeV, and assumes $\langle m_\nu \rangle \sim$ 170 meV. The spectra of any low energy backgrounds which may be added to the detector along with the neodymium are not included in the simulation.

A final (and rather amusing, from the point of view of one who was also involved with SNO) important background to the 0νββ signal in SNO+ will be the elastic scattering interaction of 8B solar neutrinos.

7.4.3 Fit Range

The energy range over which the fits to the energy spectrum are performed might be expected to have some effect on the predicted sensitivity of the experiment. In particular, as the lower limit of the fit range is extended to lower energies, more and more of the 2νββ signal is included in the fit, and hence the constraint on the 2νββ background in the 0νββ signal region might be expected to improve. Similarly, raising the upper limit of the fit range might be expected to give a statistically better constraint on the 8B elastic scattering background. As can be seen in Figure 7.5, including the signal below about 3.0 MeV in the fit gives limited statistical gain (because the number of 2νββ events increases quickly with decreasing energy). The 8B elastic scattering, on the other hand, has a low enough rate that there is benefit in extending the fit range up to about 5.0 MeV. In the actual

SNO+ experiment, the fits will likely be performed over as wide an energy range as possible in order to ensure the statistically best possible result. Here, however, in the interests of computational speed the fit range was restricted to 3.0 - 5.0 MeV.

[Figure 7.5 plots: panels (a) and (b), each showing the 90% C.L. lower limit on $T^{0\nu}_{1/2}$ (x10^24 a) as a function of (a) the lower limit and (b) the upper limit of the fit range (MeV).]

Figure 7.5: The effect of fit range on the SNO+ 0νββ fits. In (a), the upper limit of the fit range was held at 5.0 MeV and the lower limit was varied. In (b) the lower limit of the fit range was fixed at 2.8 MeV and the upper limit was varied. As can be seen, there is little statistical benefit in extending the fit range below 3.0 MeV or above 5.0 MeV.

7.4.4 Quantifying the 0νββ Sensitivity

The sensitivities of neutrinoless double beta decay experiments are typically expressed in terms of the limit that the experiment would set on the 0νββ signal in the absence of a detection. Thus, to determine the SNO+ sensitivity using the simulations described above, no 0νββ signal was included in the fake data sets, but the 0νββ PDF was included in the fits. The fitted number of 0νββ events and its uncertainty were then used to define a neutrinoless double beta decay limit at the required confidence level.

Thus, defining the 0νββ limit for a single fake data set was simple. There is, however, an additional subtlety associated with defining the neutrinoless double beta decay sensitivity of an experiment. This is because statistical fluctuations will cause variation in the 0νββ limits set by different fake data sets (or equivalently by different runs of the same real experiment). In order to define an overall sensitivity for an experiment, then, it is necessary to define not only the confidence limit being used, but also the frequency with which that confidence limit, as set by a single iteration of the experiment, will satisfy the stated criteria15. There appears to be a consensus in the community that 0νββ sensitivities be given at the 90% C.L. and a frequency of 50%.
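Operationally, this "X% C.L., Y% frequency" definition reduces to taking a quantile over an ensemble of fitted limits. The sketch below illustrates it for the Gaussian case described later in footnote 16 (fits centred on zero with a fixed uncertainty σ), using a made-up σ of roughly the scale of Table 7.2 rather than an actual fit output.

```python
import random
import statistics

Z_90_ONE_SIDED = 1.28  # one-sided 90% C.L. Gaussian quantile

def run_ensemble(sigma_fit, n_experiments=5000):
    """Simulate many 'experiments': each fits N_0vbb consistent with zero,
    then sets a 90% C.L. upper limit of N_fit + 1.28*sigma."""
    limits = []
    for _ in range(n_experiments):
        n_fit = random.gauss(0.0, sigma_fit)   # statistical scatter of the fit
        limits.append(n_fit + Z_90_ONE_SIDED * sigma_fit)
    return limits

sigma = 34.1  # illustrative fit uncertainty, of the scale seen at 5 kT*a
limits = run_ensemble(sigma)
# The quoted sensitivity is the limit achieved 50% of the time: the median.
print("median 90% C.L. upper limit on N_0vbb:", statistics.median(limits))
print("(expected ~1.28*sigma =", Z_90_ONE_SIDED * sigma, ")")
```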

There is also some uncertainty about whether 0νββ limits should be given in terms of limits on the half-lives (or equivalently the decay rates) of the isotope in question, or whether they should be converted into effective Majorana neutrino masses using Equation 7.3. The latter is preferable in that it allows experiments using different isotopes to be compared according to the most interesting physics parameter; it suffers the drawback, however, that the limits thus inherit the uncertainty in the calculation of the matrix elements.

In Section 7.5, the SNO+ 0νββ limits given are the levels at which there is a 50% chance (based on multiple simulations) of a single experiment achieving a 90% C.L. on $T^{0\nu}_{1/2}$ that high or higher. Tables of the average 0νββ fit central values and uncertainties will also be given, so that other confidence limits or frequencies can easily be computed if desired16. The conversion to limits on the effective Majorana neutrino mass will only be given in the section involving comparisons to other experiments; for a given matrix element calculation, however, the conversion is easily made using the half-life values for $\langle m_\nu \rangle$ = 50 meV in Table 7.1. In Section 7.6, the 0νββ levels given are the 95% C.L., 50% frequency lower limits on $T^{0\nu}_{1/2}$17.

15 That is, it is necessary to state limits of the form “the experiment is expected to set an X% C.L. upper limit on the 0νββ signal that is less than Y, Z% of the time.”
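Since Equation 7.3 gives $T^{0\nu}_{1/2} \propto 1/\langle m_\nu \rangle^2$ for a fixed matrix element, this conversion only requires one reference point, such as the $\langle m_\nu \rangle$ = 50 meV half-lives of Table 7.1. A small sketch of the conversion:

```python
import math

def mass_limit_mev(t_limit_a, t_at_50mev_a):
    """<m_nu> upper limit from a T_1/2 lower limit, using T ~ 1/<m_nu>^2
    anchored at the Table 7.1 prediction for <m_nu> = 50 meV."""
    return 50.0 * math.sqrt(t_at_50mev_a / t_limit_a)

# Example: the baseline matrix element used in this chapter corresponds to
# T_1/2 = 4.0e25 a at 50 meV; a limit of T_1/2 > 8.0e24 a then gives:
print(mass_limit_mev(8.0e24, 4.0e25), "meV")   # ~112 meV
```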

7.5 Simulated SNO+ 0νββ Sensitivity

The “baseline” sensitivity of the SNO+ experiment to neutrinoless double beta decay is defined to be the sensitivity when all of the detector parameters are at their nominal values. The nominal detector conditions assumed for SNO+ are:

• A light output of 400 NHit/MeV, taken from the optical simulations described earlier.

• A neodymium loading of 0.1% (w/w), with a 150Nd isotopic abundance of 5.6% in natural neodymium and 50% in enriched neodymium.

• A 208Tl rate of 383 events/kT·a (equivalent to 8.3x10^-18 g/g 232Th in the scintillator)18.

• A 214Bi rate of 6682 events/kT·a (equivalent to 1.7x10^-17 g/g 238U in the scintillator).

• An 8B elastic scattering rate of 2099 events/kT·a19. The 8B elastic scattering energy spectrum was assumed to have MSW distortions with $\Delta m^2_{12} = 8 \times 10^{-5}$ eV$^2$ and $\tan^2 \theta_{12} = 0.45$.

• An assumed $T^{2\nu}_{1/2}$ of 9.2x10^18 a (the central value of the best current measurement [102]), yielding 1.7x10^7 2νββ events/kT·a for natural neodymium loading and 1.5x10^8 2νββ events/kT·a for enriched neodymium.

• A live time of 1 kT·a.

• Identical energy resolution in the data and PDF distributions (i.e. perfect knowledge of the energy spectra that underlie the data).

• Where conversion to effective neutrino mass is necessary, a matrix element corresponding to $T^{0\nu}_{1/2}$ = 4.0x10^25 a at $\langle m_\nu \rangle$ = 50 meV (the upper edge of the QRPA/RQRPA calculations, although the current range of calculations extends from a factor of four lower to a factor of ten higher) is used.

These conditions can be assumed to apply to all simulation results shown in this thesis unless stated otherwise.

16 It is possible to do this using just these averages because the uncertainties on the fitted numbers of 0νββ events are remarkably tightly grouped and independent of the statistical fluctuations in the central values of the fit. Thus, the distribution of upper limits at a given confidence level is essentially identical to the distribution of fit central values shifted upwards by a given amount. For example, if the distribution of fitted central values is a Gaussian of width σ centred about zero, the distribution of 90% upper confidence limits will be a Gaussian of width σ centred about 1.3σ; thus 50% of the 90% C.L. upper limits will lie within 1.3σ of zero, while 90% of them will lie within 2x1.3σ of zero.
17 The change in confidence level relative to Section 7.5 was due to a coding error which resulted in two-sided, rather than one-sided, 90% C.L.'s being applied in Section 7.6.
18 The levels of 232Th and 238U are taken to be the levels observed in the BOREXINO experiment. BOREXINO observes (1.6 ± 0.1)x10^-17 g/g 238U equivalent in their scintillator using the 214Bi-214Po delayed coincidence and (6.8 ± 1.5)x10^-18 g/g 232Th equivalent using the 212Bi-212Po delayed coincidence [22].
19 This is the Standard Solar Model prediction after MSW oscillations with $\Delta m^2_{12} = 8 \times 10^{-5}$ eV$^2$ and $\tan^2 \theta_{12} = 0.45$.
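For reference, the baseline conditions above map naturally onto a small configuration structure of the kind such a toy simulation might consume; the field names here are invented for illustration.

```python
# Illustrative baseline configuration (field names invented; values taken from
# the nominal SNO+ conditions listed above).
BASELINE = {
    "light_output_nhit_per_mev": 400,
    "nd_loading_w_w": 0.001,
    "nd150_abundance": {"natural": 0.056, "enriched": 0.50},
    "rates_per_kt_a": {
        "tl208": 383,
        "bi214": 6682,
        "b8_elastic_scattering": 2099,
        "2vbb_natural": 1.7e7,
        "2vbb_enriched": 1.5e8,
    },
    "t_half_2vbb_a": 9.2e18,
    "live_time_kt_a": 1.0,
    "fit_range_mev": (3.0, 5.0),
    "matrix_element_t_half_at_50mev_a": 4.0e25,
}
```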

Under these default conditions, the 50% frequency 90% C.L. upper limits that SNO+ could set on the 0νββ signal are shown in Figure 7.6(a) for different live times and for both natural and enriched neodymium. These results are also given in Tables 7.2 and 7.3, along with the corresponding average numbers of fitted 0νββ events and their uncertainties. Figure 7.6(b) shows the same limits in terms of $\langle m_\nu \rangle$.

The first goal for the next generation of neutrinoless double beta decay experiments is to test the controversial result from [88], which claims detection of 0νββ with $\langle m_\nu \rangle \gtrsim$ 270 meV at 90% C.L. This should be well within the reach of the SNO+ double beta decay search using either natural or enriched neodymium. For interest, Figure 7.7 shows simulated SNO+ spectra near the endpoint for both natural and enriched neodymium at $\langle m_\nu \rangle$ = 270 meV.

Looking beyond the 270 meV target, the next interesting $\langle m_\nu \rangle$ target for neutrinoless double beta decay searches is the 10-40 meV range, which (as shown in Figure 7.1) tests the possibility of an inverted neutrino mass hierarchy. Depending on the actual 150Nd matrix element, SNO+ could probe the upper edges of this interesting region with natural neodymium, or explore a reasonable fraction of it using enriched neodymium.

[Figure 7.6 plots: panel (a) the 90% C.L. lower limit on $T^{0\nu}_{1/2}$ (x10^24 a) and panel (b) the 90% C.L. upper limit on $\langle m_\nu \rangle$ (meV), each plotted against livetime (kT·a) for natural and enriched neodymium.]

Figure 7.6: The baseline SNO+ 0νββ sensitivity as a function of live time. As SNO+ will have a real mass of 860 tonnes, a fiducial volume cut that rejects events in ∼50% of the active volume, and will likely spend about 25% of its time performing calibrations and other maintenance tasks, it will take approximately three real years of running to accumulate one kT·a of data. Note that the range of variation in the 150Nd matrix element calculations to date admits variation in the $\langle m_\nu \rangle$ curves shown in (b) within a factor of 2 lower and a factor of 3 higher. 1000 fake data sets were thrown and fitted in generating each point on the plots.

[Figure 7.7 plots: events / 50 keV versus energy (MeV) over 3.0-5.0 MeV, showing the data, the 8B solar neutrino, 214Bi, 208Tl, 2ν double beta, and 0ν double beta components; panel (a) natural neodymium, panel (b) enriched neodymium.]

Figure 7.7: The simulated SNO+ spectrum near the endpoint for $\langle m_\nu \rangle$ = 270 meV after one kT·a of data taking. The 0νββ peak is easily detected with both natural and enriched neodymium, with the fitted number of 0νββ events 7σ from zero after one kT·a of data in the case of natural neodymium, and 25σ from zero in the case of enriched neodymium.

Live time (kT·a) | $T^{0\nu}_{1/2}$ Limit (x10^24 a) | $\langle m_\nu \rangle$ Limit (meV) | $\langle {\rm NFit}_{0\nu\beta\beta} \rangle$ | $\langle \Delta {\rm NFit}_{0\nu\beta\beta} \rangle$
5.00 | 17.8 |  75 |  0.0 ± 1.1 | 34.10 ± 0.02
3.75 | 15.9 |  79 | -1.1 ± 0.9 | 29.57 ± 0.02
2.81 | 13.5 |  86 | -0.4 ± 0.8 | 25.64 ± 0.02
2.11 | 11.3 |  94 |  0.6 ± 0.7 | 22.29 ± 0.02
1.58 |  9.8 | 101 |  0.3 ± 0.6 | 19.36 ± 0.02
1.19 |  8.4 | 109 |  0.5 ± 0.5 | 16.83 ± 0.02
0.89 |  7.5 | 115 | -0.2 ± 0.4 | 14.60 ± 0.02
0.67 |  6.3 | 126 |  0.3 ± 0.4 | 12.71 ± 0.02
0.50 |  5.7 | 132 | -0.5 ± 0.3 | 11.00 ± 0.02
0.38 |  4.8 | 144 | -0.2 ± 0.3 |  9.58 ± 0.02
0.28 |  4.2 | 155 | -0.2 ± 0.3 |  8.33 ± 0.02
0.21 |  3.5 | 169 |  0.1 ± 0.2 |  7.31 ± 0.02
0.16 |  3.1 | 180 | -0.1 ± 0.2 |  6.35 ± 0.02
0.12 |  2.6 | 197 |  0.1 ± 0.2 |  5.52 ± 0.02
0.09 |  2.2 | 214 |  0.2 ± 0.1 |  4.84 ± 0.03

Table 7.2: The baseline SNO+ sensitivity to 0νββ with natural neodymium for different run times. The second (third) column gives the 90% C.L. lower (upper) limit that could be placed on $T^{0\nu}_{1/2}$ ($\langle m_\nu \rangle$) 50% of the time after that amount of live time. The fourth and fifth columns give the average fit value ($\langle {\rm NFit}_{0\nu\beta\beta} \rangle$) and fit uncertainty ($\langle \Delta {\rm NFit}_{0\nu\beta\beta} \rangle$) for the number of 0νββ events in the simulations at each live time; these values can be used to calculate other sensitivity limits if desired.

Live time (kT·a) | $T^{0\nu}_{1/2}$ Limit (x10^24 a) | $\langle m_\nu \rangle$ Limit (meV) | $\langle {\rm NFit}_{0\nu\beta\beta} \rangle$ | $\langle \Delta {\rm NFit}_{0\nu\beta\beta} \rangle$
5.00 | 128.1 | 28 | -0.6 ± 1.4 | 42.85 ± 0.03
3.75 | 103.1 | 31 |  2.9 ± 1.1 | 37.22 ± 0.03
2.81 |  90.2 | 33 |  2.0 ± 1.0 | 32.30 ± 0.03
2.11 |  79.9 | 35 |  0.8 ± 0.9 | 28.02 ± 0.03
1.58 |  71.1 | 38 | -0.1 ± 0.8 | 24.29 ± 0.03
1.19 |  63.1 | 40 | -0.8 ± 0.7 | 21.05 ± 0.03
0.89 |  52.5 | 44 |  0.1 ± 0.6 | 18.32 ± 0.03
0.67 |  48.3 | 46 | -1.1 ± 0.5 | 15.89 ± 0.03
0.50 |  39.1 | 51 |  0.1 ± 0.4 | 13.85 ± 0.03
0.38 |  33.9 | 54 | -0.0 ± 0.4 | 12.04 ± 0.03
0.28 |  29.8 | 58 | -0.2 ± 0.3 | 10.44 ± 0.03
0.21 |  24.3 | 64 |  0.3 ± 0.3 |  9.17 ± 0.03
0.16 |  21.4 | 68 |  0.1 ± 0.2 |  7.96 ± 0.03
0.12 |  19.0 | 72 | -0.2 ± 0.2 |  6.93 ± 0.03
0.09 |  15.8 | 80 |  0.1 ± 0.2 |  6.07 ± 0.03

Table 7.3: The baseline SNO+ sensitivity to 0νββ with enriched neodymium for different run times. The second (third) column gives the 90% C.L. lower (upper) limit that could be placed on $T^{0\nu}_{1/2}$ ($\langle m_\nu \rangle$) with 50% frequency. The fourth and fifth columns give the average fit value and uncertainty for the number of 0νββ events in the simulations at each live time; these values can be used to calculate other sensitivity limits if desired.

7.5.1 Comparison with Other 0νββ Experiments

A review of the 0νββ experiments currently being built or considered is given in [105]. Only a few of these experiments with the greatest potential to produce competitive 0νββ limits in the near- and medium-term are discussed here.

• GERDA: The GERDA experiment [128] will search for double beta decay in 76Ge using enriched germanium detectors. The experiment is similar to, and uses the enriched germanium detectors from, the Heidelberg-Moscow experiment [129], from which the current 0νββ detection claim originated. GERDA will also use the enriched germanium detectors from the IGEX experiment [130]. GERDA will ultimately operate using approximately 50 kg of enriched germanium, and is expected to reach a sensitivity of $\langle m_\nu \rangle \lesssim$ 125 meV ($T^{0\nu}_{1/2}$(76Ge) $\gtrsim$ 1.4x10^26 a) with 100 kg·a of data. The experiment is currently under construction and is expected to be fully operational within the next few years.

• MAJORANA: The Majorana experiment [131] will be a larger (∼500 kg) enriched germanium experiment which should reach a sensitivity of $\langle m_\nu \rangle \lesssim$ 60 meV ($T^{0\nu}_{1/2}$(76Ge) $\gtrsim$ 5.5x10^26 a) after about one live year of data taking. Majorana is currently in the development/proposal stage and will likely not be operational for several years.

• CUORE: The CUORE experiment [132] will search for 0νββ in 130Te using an array of cryogenic TeO2 crystal bolometers. The CUORE experiment will contain 740 kg of TeO2, which should allow the experiment to reach a sensitivity of $\langle m_\nu \rangle \lesssim$ 45 meV ($T^{0\nu}_{1/2}$(130Te) $\gtrsim$ 2.5x10^26 a) in 5 years' exposure. The use of enriched tellurium, which is currently being explored, could improve the sensitivity to $\langle m_\nu \rangle \lesssim$ 30 meV. CUORE has received Italian funding and is currently under construction, with data taking scheduled to begin in 2011. The prototype CUORICINO experiment [133] has been operating for several years and has reached a sensitivity of $\langle m_\nu \rangle \lesssim$ 400 meV ($T^{0\nu}_{1/2}$(130Te) $\geq$ 3.0x10^24 a).

• EXO200: The EXO200 experiment [134] will search for 0νββ in 136Xe using a liquid xenon time projection chamber which will measure both scintillation light and charge production. The experiment, which will contain 200 kg of enriched xenon (80% 136Xe), is currently being deployed and should become operational in the near future. The experiment is expected to reach a sensitivity of $\langle m_\nu \rangle \lesssim$ 300 meV.

• MOON: MOON [135] is a proposed experiment which will search for neutrinoless double beta decay using thin foil sources sandwiched between thin position sensitive detectors (possibly multi-wire proportional chambers) and plastic scintillator calorimeters. MOON will study double beta decay in 82Se, 100Mo, and/or 150Nd, and has a target design sensitivity of $\langle m_\nu \rangle \lesssim$ 50 meV. The MOON experiment is likely several years away from operation.

• SuperNEMO: SuperNEMO [136] will build on the successful operation of the NEMO3 detector [102] to search for 0νββ in 82Se and/or 150Nd. SuperNEMO will consist of thin foil sources within gas tracking chambers to allow reconstruction of the ββ tracks. SuperNEMO will contain 100-200 kg of isotope and should reach a sensitivity of $\langle m_\nu \rangle \lesssim$ 50-100 meV ($T^{0\nu}_{1/2} \gtrsim$ 10^26 a). SuperNEMO is currently in the proposal stage and will likely not be operational for several years.

As can be seen from the list above, there is a set of “near-term” experiments which should soon become operational and which are expected to reach 0νββ sensitivities corresponding to $\langle m_\nu \rangle \lesssim$ 150-300 meV in the next three to five years. Another generation of experiments, with sensitivities in the $\langle m_\nu \rangle \lesssim$ 50 meV region, is currently at the proposal stage or beginning construction.

With natural neodymium, SNO+ offers significantly better 0νββ sensitivity than the near-term generation of experiments (for any of the currently computed matrix elements) on an approximately similar time scale. However, the next generation of experiments, likely led by CUORE, would soon eclipse this sensitivity. If enriched neodymium could be obtained, however, SNO+ would have competitive and (depending on the matrix elements) perhaps leading sensitivity even compared to the medium-term generation of experiments.

7.6 Impact of Detector Parameters on the SNO+ 0νββ Sensitivity

In designing different aspects of the SNO+ experiment, from the target scintillator light output and radio-purity levels to the calibration systems, it is necessary to understand how the sensitivity of the experiment changes with different detector parameters. One of the most important uses of these simple simulations has been to provide feedback to guide the development of various aspects of the

SNO+ project. Some of these studies are described below.

7.6.1 Background Levels

As previously mentioned, three backgrounds (apart from the irreducible 2νββ background) are relevant to the SNO+ double beta decay experiment: 208Tl, 214Bi, and the 8B solar neutrinos. From Figure 7.7, it might reasonably be expected that at the baseline SNO+ levels, 208Tl and the 8B solar neutrinos are the dominant backgrounds, with 214Bi being significantly less important. Figure 7.8 shows the effect of varying the level of each of these backgrounds in turn.²⁰

If the Tl and Bi backgrounds were both completely eliminated (but their PDFs left in the fit), the SNO+ T^{0ν}_{1/2} lower limits would improve to 7.7×10^24 a for natural neodymium and 52×10^24 a for enriched; if the 8B solar neutrino background could simultaneously be reduced to 50% (10%) of its nominal value, the T^{0ν}_{1/2} limits would increase further to 10.2×10^24 a (17.6×10^24 a) for natural neodymium and 64.2×10^24 a (94.8×10^24 a) for enriched.

It is also interesting to note that varying the 150Nd 2νββ half-life within its ∼9% measurement uncertainty has only a very small effect on the 0νββ limits.

²⁰ The magnitude of the 8B solar neutrino signal could in principle be reduced by confining the double beta decay isotope within a BOREXINO-style "balloon" in the centre of SNO+.

[Figure 7.8: three panels, (a) Varying 208Tl Level, (b) Varying 214Bi Level, (c) Varying 8B Solar ν Level; each plots the 95% C.L. lower limit on T^{0ν}_{1/2} (×10^24 a) for natural and enriched Nd against the background scaling factor.]

Figure 7.8: The effect of varying the background levels on the SNO+ 0νββ limits. A background scaling of 1.0 corresponds to the baseline level of that background. Varying the important backgrounds one at a time confirms that 208Tl and the 8B solar neutrinos, along with the 2νββ signal, are the dominant backgrounds to the 0νββ signal. 214Bi has such a weak effect that its level must be increased by orders of magnitude before it begins to noticeably impact the 0νββ limit.

7.6.2 Light Output/Energy Resolution

As discussed earlier, one of the most important parameters of a 0νββ experiment is the energy resolution, as this determines the extent to which the 2νββ signal is "smeared into" the 0νββ energy region. In SNO+, the energy resolution will primarily be determined by the amount of light collected by the detector. Figure 7.9 shows the lower limits on T^{0ν}_{1/2} that could be achieved in SNO+ after 1 kT·a of data with different light output levels. Note that the light output is changed in both the data and the PDFs, so that while the energy resolution is varied it is still assumed to be perfectly understood.
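The light-output scaling can be made concrete with a short calculation. The sketch below is a minimal illustration assuming pure NHit counting statistics, σ_E = √(E/N_h) (the resolution model used later in Equation 7.14), and taking the 150Nd summed-energy endpoint at 3.37 MeV; with the default 400 NHit/MeV it reproduces the ∼2.7% endpoint resolution quoted in Section 7.6.4:

    import math

    def frac_resolution(e_mev, nhit_per_mev=400.0):
        """Fractional energy resolution sigma_E/E assuming pure NHit
        counting statistics: sigma_E = sqrt(E / N_h)."""
        sigma = math.sqrt(e_mev / nhit_per_mev)
        return sigma / e_mev

    Q = 3.37  # MeV, 150Nd summed-energy endpoint (assumed here)
    for nhit in (200, 400, 800, 1200):
        print(f"{nhit:5d} NHit/MeV -> {100 * frac_resolution(Q, nhit):.2f} % at the endpoint")
    # 400 NHit/MeV gives ~2.7 %; doubling the light output improves the
    # fractional resolution by a factor of 1/sqrt(2), as in Figure 7.9.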

[Figure 7.9: 95% C.L. lower limit on T^{0ν}_{1/2} (×10^24 a) for natural and enriched Nd versus light output (NHit/MeV).]

Figure 7.9: The effect of energy resolution on the SNO+ 0νββ limits (for an otherwise "baseline" detector and 1 kT·a of data). Increasing the light output (which improves the fractional energy resolution according to 1/√NHit) improves the 0νββ limits by reducing the amount of 2νββ signal in the 0νββ energy region. Note that it is assumed in all cases that the energy response of the detector is perfectly understood.

7.6.3 Neodymium Loading Levels

The effect of changing the amount of 150Nd in the detector can be seen by comparing the limits set using enriched and natural neodymium. Nevertheless, it may be useful to have specific plots of the 0νββ limits as a function of neodymium loading, since once the light output of the neodymium-loaded scintillator is better predicted, an optimization of the neodymium loading level should be performed (optimization is necessary because increasing the loading increases the amount of isotope but decreases the light output). These plots are shown in Figure 7.10.

[Figure 7.10: 95% C.L. lower limit on T^{0ν}_{1/2} (×10^24 a) for natural and enriched Nd versus Nd loading level in LAB (% w/w).]

Figure 7.10: The effect of the neodymium loading level on the SNO+ 0νββ limits. Note that natural neodymium is assumed to be 5.6% (w/w) 150Nd, while enriched neodymium is taken to be 50% 150Nd. Although obvious, it is worth noting again that if the loading could be increased by a factor of 10 without reducing the light output (using nanoparticles, perhaps), 1% loading with natural neodymium offers approximately the same “performance” as 0.1% loading with enriched neodymium.

7.6.4 Energy Resolution Systematics

In setting a limit on the 0νββ signal in a data set, it is very important that the spectral shapes of the backgrounds be well understood. In particular, any small distortions in the 2νββ spectrum, if not included in the 2νββ PDF, can have a significant effect on the fitted number of 0νββ events and hence the 0νββ limit. One obvious potential cause of an imperfect understanding of the background spectra would be an incorrect characterization of the energy response of the detector. It is therefore useful to estimate the precision that will be necessary in the energy calibration of SNO+ in order not to negatively affect the 0νββ limits.

The first check is to determine how well the average energy resolution of the detector must be understood. This can be done by simply changing the width of the energy resolution function that was used to broaden the PDF distributions in the simulations relative to the resolution that was applied to the data (note, however, that both energy resolutions were still assumed to be Gaussian).

As can be seen in Figure 7.11(b), overestimating the width of the detector energy response in making the PDFs results in a positive bias in the number of fitted 0νββ events, as the fitter attempts to fill the empty space beneath the (broader) 2νββ PDF; underestimating the width of the energy resolution has the opposite effect. As the 95% C.L. upper limit on the average number of fitted 0νββ events (which is used to create the T^{0ν}_{1/2} limit) is taken to be 1.6σ above the average fitted 0νββ value, these biases in the fitted 0νββ values are reflected in the T^{0ν}_{1/2} limits, as shown in Figure 7.11(a). In order to keep this mean bias in the fitted number of 0νββ events less than about 5 (or about 0.3σ in any single experiment) and hence prevent a large effect on the 0νββ limits, the energy resolution must be known to within about 0.1% (absolute) at the endpoint. As the overall fractional energy resolution at the endpoint is 2.7%, this is equivalent to the requirement that the average energy resolution be known to better than ∼4% of its value. As SNO, with its lower light output, was able to model the average energy resolution at approximately the 1.5% level [20], this seems to be an achievable requirement for SNO+.
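The way a resolution mis-modelling bias feeds through the 1.6σ limit construction can be sketched numerically. This is a hedged illustration: the ∼16 and ∼19 single-trial uncertainties are taken from the Figure 7.11 caption, and the overall half-life normalization, which cancels in the ratio, is left out:

    # Effect of a mean fit bias on the 95% C.L. event upper limit,
    # S_up = <fitted 0vbb events> + 1.6 sigma, as described in the text.
    def event_upper_limit(mean_bias, sigma_single_trial):
        return mean_bias + 1.6 * sigma_single_trial

    for label, sigma in (("natural Nd", 16.0), ("enriched Nd", 19.0)):
        unbiased = event_upper_limit(0.0, sigma)
        biased = event_upper_limit(5.0, sigma)   # the ~5-event tolerance above
        # T_1/2 limits scale inversely with S_up, so the limit weakens by:
        print(f"{label}: S_up {unbiased:.1f} -> {biased:.1f} "
              f"({100 * (1 - unbiased / biased):.0f} % weaker half-life limit)")

Note that the 5-event tolerance corresponds to 5/16 ≈ 0.3σ for natural neodymium, consistent with the single-experiment statement in the text.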

[Figure 7.11: two panels, (a) 95% C.L. lower limit on T^{0ν}_{1/2} (×10^24 a) and (b) mean number of fitted 0νββ events, each for natural and enriched Nd, versus (Assumed − Actual) ΔE_endpoint (absolute fraction).]

Figure 7.11: The effect of energy resolution on the SNO+ 0νββ limits. The change in the 95% C.L. lower limits on T^{0ν}_{1/2} that could be set by SNO+ as the difference between the true and assumed energy resolutions is varied is shown in (a). The changes in the T^{0ν}_{1/2} limits are primarily driven by changes in the mean number of 0νββ events fitted from the fake data sets (which actually contained no 0νββ signal), shown in (b). The effect is larger in the case of enriched neodymium because the (relatively flat and hence less sensitive to energy resolution) 208Tl and 8B solar neutrino backgrounds have a fractionally larger effect in the fits with natural neodymium. Note that the error bars in (b) represent the uncertainty in the mean number of events fitted in 1000 fake data sets, not the uncertainty in the fitted number of 0νββ events in any one trial; the single-trial uncertainty on the number of fitted 0νββ events is ∼16 for natural neodymium and ∼19 for enriched.

A much more dangerous energy systematic for SNO+ will likely be a small fraction of events which, because of detector asymmetries, do not follow the average Gaussian energy resolution distribution but instead fall as outliers.²¹ To study this potential effect, exponential tails were added to the energy response function which was applied to the data distribution (but not to the response function applied to the PDFs), so that the probability, P, of reconstructing an event in the data with true energy E at energy E′ became

P(E′, E) = (1 − A) (1/(√(2π)σ)) e^{−(E′−E)²/(2σ²)} + (A|B|/2) e^{−|B(E′−E)|},    (7.14)

where σ = √(E/N_h) and N_h is the expected number of NHits/MeV (400 by default). The A and B parameters describe the fraction of events which fall into the exponential tails and the (inverse) decay length of the tails, respectively. To illustrate the effect, the response of the detector to a mono-energetic signal with and without exponential tails added to the response function is shown in Figure 7.12.
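For illustration, Equation 7.14 can be transcribed directly into code. The sketch below is a minimal rendering, not the analysis code, with the two-sided exponential term normalized as written above and the default N_h = 400 NHits/MeV; it evaluates the response for a 3.0 MeV deposit, as in Figure 7.12:

    import math

    def response(e_prime, e_true, A=0.01, B=1.0, nhit_per_mev=400.0):
        """Probability density (per MeV) of reconstructing a deposit of
        true energy e_true at energy e_prime: Gaussian core plus
        exponential tails (Equation 7.14)."""
        sigma = math.sqrt(e_true / nhit_per_mev)   # NHit counting statistics
        gauss = math.exp(-0.5 * ((e_prime - e_true) / sigma) ** 2) \
                / (math.sqrt(2 * math.pi) * sigma)
        tails = 0.5 * abs(B) * math.exp(-abs(B * (e_prime - e_true)))
        return (1.0 - A) * gauss + A * tails

    # Mono-energetic 3.0 MeV signal: the tails dominate once |E' - E|
    # exceeds a few sigma, as in Figure 7.12.
    for e in (3.0, 3.3, 3.7):
        print(f"P({e:.1f} | 3.0 MeV) = {response(e, 3.0):.3e} /MeV")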

[Figure 7.12: relative detector response (per 50 keV, log scale) versus energy (MeV), with and without energy tails.]

Figure 7.12: The simulated energy response of SNO+ to a mono-energetic signal at 3.0 MeV with and without the addition of exponential tails (with A = 0.01, B = 1.0 MeV⁻¹).

From Figure 7.12, one might reasonably expect the effects of the A and B parameters to be correlated; that is, it seems reasonable to expect that SNO+ could tolerate a much larger fraction of backgrounds in these exponential tails (i.e. a larger A) when the tails decay quickly (i.e. when B is larger) compared to when they decay slowly. As shown in Figure 7.13, that is exactly the case; for natural neodymium, more than 1% of 2νββ events can be accommodated in exponential tails with a decay length of 0.15 MeV without negatively affecting the 0νββ limit, while less than 1×10⁻⁴ % of events can be allowed to fall in exponential tails with 1 MeV decays (like those shown in Figure 7.12).

²¹ As an example, consider an event having a reconstructed position directly adjacent to an AV hold-down rope, but being in reality 15 cm (about the position resolution that can be expected in SNO+) away from it. If the energy estimation code attempts to correct for the assumed shadowing of the rope, it will significantly overestimate the energy of that event. Such "unusual" events can result in the addition of non-Gaussian "tails" in the energy resolution.

[Figure 7.13: two colour maps, (a) Natural Neodymium and (b) Enriched Neodymium, of the T^{0ν}_{1/2} limit versus log10(A) and B (MeV⁻¹).]

Figure 7.13: The effect of energy resolution tails on the SNO+ 0νββ limits. The colour axis shows the T^{0ν}_{1/2} sensitivity limit as a function of the A and B exponential tail parameters (defined in the text and Equation 7.14). The red areas in the lower right portion of both plots correspond to the baseline SNO+ sensitivities; the lower T^{0ν}_{1/2} limits in the top left corners show the reduction in 0νββ sensitivity. As expected, quickly decaying tails in the resolution (larger B) have a smaller effect on the 0νββ fits, and so more events (larger A) can be allowed to fall into such tails without affecting the 0νββ limits. As in Figure 7.11, natural neodymium is less sensitive to energy resolution effects, presumably because of the smaller effect of the 2νββ background in natural neodymium relative to the other sources of background.

As shown in Figure 7.14, the correlated behaviour of the A and B parameters can be simplified by projecting the effects of these parameters onto the "spillover" axis, where spillover is defined to be the number of 2νββ events which are smeared into the 3.0-3.7 MeV window (which contains the 0νββ signal) by the data energy resolution function in excess of the number expected by the PDF energy resolution function.²² As can be seen in the figure, the correlated statements about the allowable limits on the A and B parameters can be replaced by the much simpler statement that fewer than about 100 unexpected 2νββ events can be mis-reconstructed into the 3.0-3.7 MeV window.

²² That is, spillover is the difference between the true and assumed fractions of 2νββ events which fall into the 3.0-3.7 MeV window, multiplied by the total number of 2νββ events in the data set.
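The spillover count defined in footnote 22 can be estimated numerically from the response model of Equation 7.14. The following toy Monte Carlo is a sketch only: the energy sample, statistics, and tail parameters are hypothetical stand-ins for the full simulated 2νββ spectrum used in the thesis. It smears each event once with the "true" tailed response and once with the assumed Gaussian-only response, and reports the excess in the 3.0-3.7 MeV window:

    import math, random

    random.seed(1)

    def smear(e_true, A, B, nhit_per_mev=400.0):
        """Draw a reconstructed energy from the Equation 7.14 response."""
        if random.random() < A:                    # exponential (Laplace) tails
            sign = 1.0 if random.random() < 0.5 else -1.0
            return e_true + sign * random.expovariate(abs(B))
        sigma = math.sqrt(e_true / nhit_per_mev)   # Gaussian core
        return random.gauss(e_true, sigma)

    def spillover(energies, A, B, lo=3.0, hi=3.7):
        """Excess counts in [lo, hi) from the tailed response relative to
        the assumed (A = 0) Gaussian-only response."""
        n_true = sum(lo <= smear(e, A, B) < hi for e in energies)
        n_assumed = sum(lo <= smear(e, 0.0, B) < hi for e in energies)
        return n_true - n_assumed

    # Toy 2vbb sample with energies below the window, so only upward
    # tails matter (a hypothetical stand-in for the simulated spectrum).
    toy = [random.uniform(1.0, 2.5) for _ in range(200_000)]
    print(spillover(toy, A=0.0005, B=3.0), "spillover events in the toy sample")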

Achieving this low number of mis-reconstructed events will likely be the greatest challenge to the successful operation of the SNO+ experiment. Thankfully, however, it is really only the upward tails²³ which are of concern to the experiment,²⁴ and upward tails are much easier to suppress in event reconstruction than downward tails. This is because while there are a number of detector geometry effects which can artificially reduce the amount of light collected for (and hence the apparent energy of) a given event compared to the detector average, there are very few physical processes which will significantly increase the amount of collected light. Upward tails in reconstructed event energy, then, tend to be the result of overcorrection for light loss mechanisms in estimating the energy of an event (as in the example given earlier of an event which has its position falsely reconstructed as being adjacent to an AV hold-down rope), rather than real non-statistical upward fluctuations in the amount of light collected. To mitigate this effect, then, in creating the SNO+ energy estimators every effort should be made to ensure that the estimators return the lowest energy consistent with the observed event. This essentially sacrifices those events which actually do have additional light loss, and accepts the attendant downward tails in the reconstructed energy, as the price for minimizing the number of events whose energies are overestimated.

Another concern about spillover events is the potential, shown in Figure 7.14(b), that they can lead to false "detections" of 0νββ at very high confidence levels. As discussed earlier, when unexpected events are present in the 0νββ energy region, one of the only PDFs that the fit can use to attempt to accommodate these events is the 0νββ PDF; thus, in the presence of large numbers of unexpected backgrounds, the fitted number of 0νββ events can be far from zero. This effect is illustrated in Figure 7.15, which shows endpoint fits for simulated SNO+ experiments with different numbers of spillover events.

²³ Meaning those tails which extend the main distribution towards higher energy.
²⁴ This is simply due to the much higher statistics in the 2νββ signal below the 0νββ peak compared to the background statistics above the peak. Smearing a tiny fraction of the 2νββ signal upwards has a much larger effect in the 0νββ energy window than smearing a large fraction of the higher-energy backgrounds downward.

[Figure 7.14: three panels, (a) 95% C.L. lower limit on T^{0ν}_{1/2}, (b) number of fitted 0νββ events, and (c) fit χ², each for natural and enriched Nd, versus the number of 'spillover' events (log scales).]

Figure 7.14: The effect of energy resolution "tails" as projected on the spillover axis. Each point in these plots represents a single simulated SNO+ experiment. Different A and B parameters were used for each simulated experiment, with log10(A) drawn randomly from (−5, −1) and B drawn randomly from (0.5, 8.5). It can be seen in (a) that the range of T^{0ν}_{1/2} limits in Figure 7.13 projects relatively neatly onto the spillover axis. The fitted 0νββ values, shown in (b), demonstrate that, as in the case of the average energy resolution systematic (Figure 7.11), the non-Gaussian tails principally affect the T^{0ν}_{1/2} limits by increasing the number of 0νββ events fitted out. The error bars plotted in (b) are the single-fit uncertainties, so at high spillover the simulated SNO+ experiments would "falsely detect" 0νββ decay. As shown in (c), however, the attempt by the fits to use 0νββ events to compensate for the presence of the additional spillover events is not perfect, resulting in an increase in the fit χ²'s. The horizontal line in (c) indicates the χ² level above which the fits (which all have 35 degrees of freedom) have a χ² probability of less than 1%.

As shown in Figure 7.14(c), the imperfect description of the unexpected 2νββ smearing provided by the 0νββ shape leads to an increase in the fit χ², such that the most egregious false detections could be rejected by a cut on the fit χ² probability. There is a worrisome region, however, between a few hundred and a few thousand spillover events, where false detections are possible without unduly poor fit χ²'s. This means that it would not be impossible for SNO+ to unwittingly claim a false detection of 0νββ as the result of an insufficient understanding of the detector energy response. This possibility does not, of course, affect the ability of the SNO+ experiment to interpret its results in terms of upper limits on the 0νββ process, but it is something that the collaboration will have to be extremely conscious of in the event that a 0νββ detection claim is ever considered.

[Figure 7.15: two panels, (a) and (b), events per 50 keV versus energy from 3 to 5 MeV, showing the data, 8B solar ν, 214Bi, 208Tl, 2ν double beta, and 0ν double beta components.]

Figure 7.15: The 0νββ fit region for two simulated SNO+ experiments with natural neodymium which show false "detections" of 0νββ as a result of spillover events. In (a) a simulated data set with A = 0.0005 and B = 3.0 is shown. The fit to this data set, which contains 542 spillover events but no real 0νββ events, returns 107 ± 25 0νββ events (for a 4.3σ "detection") with a χ² of 50 in 35 degrees of freedom. The fit slightly under-predicts the number of events in the 3.5-3.8 MeV region, but on the whole there is little in this plot to raise concern about the validity of the fit. (b) shows a more extreme case, with A = 0.01 and B = 3 giving 10832 spillover events. In this case, 3027 ± 73 0νββ events are fitted out, but the fit would certainly be rejected as a bad description of the data, as it has a χ² of 953 in 35 degrees of freedom.

Energy Resolution Systematics Conclusion

It remains to be seen whether the SNO+ energy response characterization can be carried out sufficiently precisely to provide interesting 0νββ limits, or demonstrated robustly enough to make credible 0νββ detection claims. The detailed simulations under development may help to provide some information about the potential for success, and will certainly provide a means by which event reconstruction algorithms can be developed, but it is unlikely that success can be gauged in a meaningful way until SNO+ is built and operated. At the current time, no fundamental factor has been identified which would prevent SNO+ from achieving the required understanding of its energy resolution. Nevertheless, the development of event reconstruction algorithms and plans for the energy calibration of the detector will have to be a major focus of the collaboration as the project moves forward.

7.6.5 Pileup Backgrounds and Excited State Decays

Apart from precisely understanding the energy response of the detector, in order to produce interesting and accurate limits on the 0νββ process it will also be necessary for SNO+ to precisely predict the underlying shape of the 2νββ spectrum. Two effects which could have an important impact on this underlying shape are 2νββ decays to excited states of 150Sm and pileup between 2νββ events and low energy backgrounds.

Excited State Decays

The 2νββ decay of 150Nd to the first 0+ excited state of 150Sm (0.740 MeV above the ground state) has been observed via detection of the de-excitation γ-rays [112], and determined to have a half-life of (1.33 +0.45 −0.26)×10^20 a. Other excited state 2νββ decays of 150Nd are possible but have not been observed; limits on several of these processes are also set in [112]. In SNO+, which will have virtually 100% γ-ray detection efficiency, the energy observed in such an excited state 2νββ decay will be the sum of the 2νββ decay energy and the energy of the de-excitation γ-rays.²⁵ Thus, SNO+ would observe the energy spectrum of the lower-endpoint excited state 2νββ spectrum shifted upwards by the γ energy, such that the endpoint of the excited state decay coincides almost exactly with the endpoint of the ground state decay.²⁶ As lower-endpoint 2νββ spectra approach their endpoints more steeply, excited state decays have the potential to affect the shape of the 2νββ spectrum at the endpoint, as shown in Figure 7.16. Based on the quick calculations used to produce Figure 7.16 (and after applying energy resolution smearing in the standard way), it is found that if all of the 2νββ decays were ground state decays there would be 2.0×10^4 (2.3×10^3) events/kT·a in the 3.0-3.7 MeV window for enriched (natural) neodymium, while the corresponding numbers would be 6.4×10^4 (7.3×10^3) events/kT·a if all of the 2νββ decays went to the 0.74 MeV excited state. The spillover studies described in Section 7.6.4 suggest that there must be fewer than ∼100 unexpected events in the 3.0-3.7 MeV window. In order to meet this constraint, the branching ratio to the 0.74 MeV excited state decay must be known to about 0.2% absolute (or 3% of its value) for enriched neodymium, while for natural neodymium it need only be measured to 2% absolute or 30% of its value (similar to the precision of the existing measurement). Finally, shifting the excited state endpoint down by 1 keV in the calculation (to simulate very much exaggerated electron quenching) changes the number of 2νββ events in the spillover window to 6.5×10^4 events/kT·a for enriched neodymium. This is a relatively small change, and suggests that the additional energy "lost" through electron quenching for each additional electron produced need only be measured to ∼100 eV (even less precision is required in the case of natural neodymium). These targets should be confirmed by more careful simulation of the actual effect of the change in the 2νββ endpoint shape on the 0νββ limits, but these "spillover"-based estimates give an idea of the level of understanding of the excited state decays that will be required in SNO+.

²⁵ Decays to the first 0+ excited state typically release one 0.33 MeV and one 0.41 MeV γ, with a ∼2% branch to a single 0.74 MeV photon [112].
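For a feel for the endpoint-shape effect, the spectra in Figure 7.16 can be approximated with the generic Primakoff-Rosen form for the summed-electron 2νββ spectrum, dN/dK ∝ K(K⁴ + 10K³ + 40K² + 60K + 30)(Q − K)⁵ with K in units of m_ec². This is a textbook approximation rather than the calculation actually used in the thesis; the 3.37 MeV endpoint and the 7% excited-state branch follow the surrounding text and the Figure 7.16 caption:

    import numpy as np

    ME = 0.511           # electron mass, MeV
    Q_GS = 3.37          # assumed 150Nd ground-state endpoint, MeV
    E_EXC = 0.740        # first 0+ excited state of 150Sm, MeV
    BRANCH = 0.07        # excited-state branch assumed in Figure 7.16

    def pr_shape(k_mev, q_mev):
        """Primakoff-Rosen approximation to the summed 2vbb spectrum."""
        k, q = k_mev / ME, q_mev / ME         # work in units of m_e c^2
        s = k * (k**4 + 10*k**3 + 40*k**2 + 60*k + 30) * (q - k)**5
        return np.where((k_mev > 0) & (k_mev < q_mev), s, 0.0)

    e = np.arange(0.0, Q_GS, 0.01)            # visible energy, MeV
    ground = pr_shape(e, Q_GS)
    # Excited-state decays: lower-endpoint spectrum shifted up by the
    # gamma energy, so its endpoint coincides with the ground state one.
    excited = pr_shape(e - E_EXC, Q_GS - E_EXC)
    for spec in (ground, excited):
        spec /= spec.sum()                    # normalize each shape
    total = (1 - BRANCH) * ground + BRANCH * excited

    win = (e >= 3.0) & (e < Q_GS)             # the 0vbb search region
    print(f"fraction of decays above 3.0 MeV: ground {ground[win].sum():.2e}, "
          f"excited {excited[win].sum():.2e}")

The steeper approach of the narrower excited-state spectrum to the common endpoint gives the few-times-larger near-endpoint fraction quoted above.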

Decays to other 150Sm excited states are also expected, but have so far not been observed; in [112], lower limits between 2×10^20 a and 8×10^20 a are placed on the half-lives of decays to other 0+ and 2+ excited states. Calculations of the 2νββ spectra resulting from decays to these other excited states were not performed. However, the calculations described above for decays to the first 0+ excited state suggest that if lower limits of about 4×10^20 a (4×10^21 a) could be set on the half-lives of the decays to these states, their effect on the endpoint of the 2νββ spectrum would be negligible for a SNO+ double beta decay search with natural (enriched) neodymium. Again, these constraints are similar to the current experimental limits for natural neodymium, but more stringent branching ratio measurements would be required if SNO+ were to use enriched neodymium.

²⁶ The endpoints do not coincide exactly because of electron quenching in the scintillator. This is the observed effect that two electrons of a given energy create slightly less total light in a scintillator than a single electron with twice the energy.

[Figure 7.16: normalized 2νββ amplitude (per 10 keV) versus energy, (a) over the full range and (b) near the endpoint, for decays to the ground state, decays to the 1st 0+ excited state, and the total decay shape.]

Figure 7.16: The 2νββ spectrum over the full energy range (a) and near the endpoint (b), showing the difference between the spectrum generated by decays to the ground state of 150Sm and the spectrum generated by decays to the first 0+ excited state (neglecting electron quenching). The "total" spectrum is a weighted sum of the ground state and excited state spectra, assuming a 7% branching ratio to the excited state.

Pileup Backgrounds

Another effect which might change the shape of the 2νββ spectrum in the endpoint region is pileup. In SNO+, an event window will be approximately 400 ns wide, and all PMT hits which occur during this window are recorded as being from a single event. Thus, if two events should occur within about 400 ns of one another, they will appear in the data stream as a single event. For example, if two events occur 200 ns apart and the first event is sufficiently energetic to trigger the detector, 400 ns of the first event and about the first 200 ns of the second event will be recorded.²⁷ If the first event does not trigger the detector but the second one does, 400 ns of the second event will be recorded along with the "tail" of the first event.²⁸

²⁷ The shape of the roll-off at the end of the 400 ns event window (or even the length of the window) is not particularly well understood. For the purposes of discussion the window is assumed here to be square and exactly 400 ns; as pileup will ultimately be treated phenomenologically, the exact shape of the trigger window is unimportant.

Pileup is particularly worrisome in the context of pileup between a 2νββ event and another background event. This has the potential to artificially increase the energy of a (hopefully) small fraction of the 2νββ events, distorting the 2νββ spectrum and adding a “bump” in the 0νββ energy region, as illustrated in Figure 7.17 for two potential real pileup scenarios. In order to get an initial estimate of the severity of the pileup problem, quick simulations were performed in which it was assumed that the full energy of events that occurred within a 400 ns window (i.e. 200 ns before and

200 ns after) of a 2νββ event was summed with the energy of the 2νββ event. This rather extreme assumption can be expected to overestimate the average effect of each pileup event but underestimate the total number of pileup events. However, these assumptions simplify the calculations considerably, as the pileup spectrum is then simply the convolution of the 2νββ spectrum and the spectrum of the other background events. Also, under this assumption, the probability P (n) of the pileup of n of the other background events with a 2νββ event is simply the Poisson probability of n other background events occurring in a given 400 ns window,

P(n) = e^{−m} m^n / n!,    (7.15)

where m is the mean number of other background events in a 400 ns window.
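Equation 7.15, with m equal to the background rate multiplied by the 400 ns window, directly reproduces the pileup fractions tabulated in Table 7.4 below. A minimal sketch:

    import math

    WINDOW = 400e-9  # s, assumed square event window (see footnote 27)

    def pileup_probability(rate_hz, n):
        """Poisson probability of n background events in one event window
        (Equation 7.15, with m = rate * window)."""
        m = rate_hz * WINDOW
        return math.exp(-m) * m**n / math.factorial(n)

    for rate in (1, 10, 100, 1_000, 10_000, 100_000):
        frac = 1.0 - pileup_probability(rate, 0)   # P(at least one partner)
        print(f"{rate:7d} Hz -> {100 * frac:.2g} % of events pile up")
    # At 1 kHz this gives ~0.04 %, consistent with the ~0.05 % calibration
    # pileup quoted later in this section; at 100 kHz the non-linearity
    # noted in Table 7.4 appears.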

Any relatively high rate background will form significant numbers of 2νββ pileup events. Table

7.4 shows the fraction of 2νββ events expected to have associated pileup backgrounds at different background rates. The fraction of these pileup events which end up in the 0νββ energy region, of course, depends on the energy of the secondary events; the higher the energy of the secondary event, the lower the energy of the 2νββ events that pileup would push into the 0νββ window. As the rate of 2νββ events increases steeply as energy decreases, the tolerable rate of potential “pileup partners” decreases quickly as their energy increases.

The seriousness of a pileup background can be estimated using the "spillover" metric described in Section 7.6.4 (i.e. by determining the number of unexpected events that appear in the 3.0-3.7 MeV window as the result of pileup after applying energy resolution).

²⁸ Although the majority of the light from an event is expected to be collected in the first 200 ns after the event, scattering and re-emission processes will lead to some photons "trickling in" up to 500 ns after the event.

[Figure 7.17: expected 2νββ rate per (10 keV·kT·a) versus energy, (a) for 144Nd pileup and (b) for 138La pileup, showing the 2ν spectrum, the 2ν spectrum with pileup, and the 0ν spectrum at ⟨m_ν⟩ = 150 meV.]

Figure 7.17: The distortion of the 2νββ spectrum for enriched neodymium from two potential pileup sources: the 1.9 MeV alpha decay of 144Nd (with an assumed alpha quenching factor of 10) and the 1.4 MeV electron capture of 138La. The 144Nd rate is assumed to be 9500 Hz (about what would be expected if the 144Nd abundance were not reduced in the enrichment process) and the 138La rate is taken to be 0.9 Hz (which is the rate expected at the upper allowed limit defined in Table 7.5). The 0νββ spectrum at ⟨m_ν⟩ = 150 meV is shown for comparison.

The pileup of two 2νββ events with themselves, for example, will add about 45 events/kT·a to the spillover window for enriched neodymium, and less than one event/kT·a for natural neodymium. These pileup rates are tolerable. At the levels targeted for the SNO+ solar phase [66], the internal 238U and 232Th chain and 40K backgrounds will be negligible from a pileup perspective. External backgrounds at their currently predicted rates [124] will contribute significant, but likely tolerable, pileup (the external backgrounds will likely cause more pileup events than the double 2νββ pileup, but less than the 144Nd discussed below). The 14C intrinsic to the scintillator is a potential problem, depending on its level. In atmospheric carbon, 14C occurs at approximately the 1 ppt level; this would be disastrous for SNO+, as each 2νββ event would have on average more than 60 14C events (which are beta decays with a 0.156 MeV endpoint) in coincidence with it. Thankfully, however, in scintillator produced from fossil fuels (which have been underground for a very long time) the 14C level is significantly reduced; the BOREXINO scintillator, for example, is observed to have a 14C concentration of 1.94×10⁻¹⁸ g/g [137]. At this level, the expected rate of pileup events appearing in the spillover window should be approximately 2 /kT·a for enriched neodymium and < 1 /kT·a for natural neodymium.²⁹ Therefore, 14C pileup events should not be a problem in SNO+ as long as the LAB 14C content does not exceed about 1×10⁻¹⁷ g/g.

Background Rate (Hz)   Pileup Fraction (%)
1                      4.0×10⁻⁵
10                     4.0×10⁻⁴
100                    4.0×10⁻³
1000                   0.040
10000                  0.40
100000                 3.8

Table 7.4: The fraction of events in SNO+ which will have a background event within a ±200 ns window. The pileup fraction depends approximately linearly on the event rate at low rate, but becomes non-linear quickly once the rate increases. Note that at higher rates, double and triple pileup events can also become significant.
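Under the full-energy-summing assumption described above, the single-pileup spectrum is simply the Poisson single-coincidence probability times the convolution of the normalized 2νββ and background spectra. The sketch below is a toy illustration (the spectral shapes are hypothetical stand-ins; the 9500 Hz rate is the 144Nd rate assumed in Figure 7.17, relevant to the discussion that follows):

    import numpy as np

    def pileup_spectrum(s2nu, bkg, rate_hz, window=400e-9):
        """Single-pileup spectrum: Poisson P(n = 1) (Eq. 7.15) times the
        convolution of the normalized 2vbb and background spectra
        (energies simply add under the full-summing assumption)."""
        m = rate_hz * window
        p1 = m * np.exp(-m)
        return p1 * np.convolve(s2nu / s2nu.sum(), bkg / bkg.sum())

    # Toy shapes on a 10 keV grid (hypothetical stand-ins): a crude
    # 2vbb-like endpoint falloff and a quenched alpha line at 0.19 MeV.
    e = np.arange(0.0, 3.37, 0.01)
    s2nu = (3.37 - e) ** 5
    alpha = np.zeros(40)
    alpha[19] = 1.0                            # 0.19 MeV in 10 keV bins
    spec = pileup_spectrum(s2nu, alpha, rate_hz=9500.0)
    print(f"pileup probability per 2vbb event: {spec.sum():.2e}")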

The addition of the neodymium to the scintillator has the potential to introduce additional low energy backgrounds. 144Nd, for example, which has 21.8% natural abundance, decays with a relatively short 2.3×10^15 a half-life to produce 1.9 MeV alpha particles (which, due to alpha quenching in the scintillator, produce approximately the same amount of light as 0.19 MeV electrons). The short half-life makes the rate of 144Nd pileup with 2νββ events relatively high, with 58 "spillover" pileup events/kT·a produced with natural neodymium and 514 events/kT·a produced with enriched neodymium (assuming that the 144Nd concentration is not reduced by the enrichment, which it likely would be). Neodymium, like all rare earth elements, can also contain significant contamination from other rare earth elements which can be difficult to remove chemically. Some potential contaminants which have the potential to contribute significantly to the pileup background have been identified [138], and their pileup potential has been evaluated using the spillover metric [139]. Based on the spillover analysis,³⁰ limits were set on the concentrations of these different elements allowable in the SNO+ neodymium in order that pileup problems are avoided. These concentrations are listed in Table 7.5. As chemically separating and removing these elements is challenging, assays of neodymium from different suppliers are being carried out [140, 141, 142] in the hope of finding neodymium with reduced levels of these contaminants. As seen in Table 7.6, different samples differ markedly in their levels of contamination by other rare earth elements. Although the current measurements are only precise enough to constrain the Lu and La concentrations at 2 and 100 times the required levels, respectively, samples were identified in which no Lu or La activity was seen. Another sample was determined to have 235U chain activity at less than 20 times the target. This suggests that it may yet be possible to identify a source of neodymium satisfying all of the purity targets given in Table 7.5.

²⁹ In calculating the 14C pileup spectrum, as with the other β-decay pileups simulated, the spectrum is simulated by scaling the 210Bi spectral shape to have the appropriate endpoint. This is again a fairly crude approximation, but it should be sufficient to give a first estimate of these pileup effects.
³⁰ The analysis in [139] has been modified slightly here to bring it into agreement with the 150Nd half-lives and loading level assumed in this thesis and to incorporate the technique described earlier of scaling the 210Bi spectrum to represent all beta spectra.

Background      Limit
14C             1×10⁻¹⁷ g/g in LAB
85Kr/39Ar       0.05 Bq/m³ (total)
La              5×10⁻⁷ g/g in Nd
Lu              1×10⁻⁷ g/g in Nd
Sm              5×10⁻⁷ g/g in Nd
Gd              2×10⁻² g/g in Nd
235U/227Ac      5×10⁻¹¹ / 1×10⁻¹⁸ g/g in Nd

Table 7.5: The contamination levels of different isotopes/elements at which pileup would be a negligible effect in SNO+ (at these levels, <10 pileup events are expected to be produced in the spillover window in one kT·a with enriched neodymium). Where the isotope is not specified, natural isotopic abundances are assumed. It should be noted that these are rough limits based on the approximate modelling described in the text and [139], and that more complete simulations of the pileup effect may shift these limits slightly.

Even if it proves impossible to achieve the neodymium purity targets defined in Table 7.5 exactly, if the contaminant concentrations can be reduced to within a factor of 100 or so of the targets (as certainly appears to be possible), SNO+ should still be a viable experiment. In this situation, the pileup background resulting from these contaminants, along with the irreducible 144Nd, 14C, and double 2νββ pileup, would have to be dealt with in the analysis. Some pileup events can likely be rejected by data analysis cuts based on the abnormal distribution of arrival times of the detected photons. This approach is currently being investigated, and although simulations to evaluate its potential are in the very early stages, it is the opinion of the author that this approach could reduce the pileup background by up to, but not much more than, a factor of 10. The remaining irreducible pileup backgrounds will have to be incorporated into the analysis by including them in the PDFs. The actual pileup background in SNO+ can be sampled by randomly triggering the detector throughout data running (similar to the random "PulseGT" triggers which were used in SNO), and these observed pileup backgrounds can then be added to the Monte Carlo events used to create the SNO+ signal extraction PDFs. Treated in this way, pileup background would affect the 0νββ sensitivity statistically rather than systematically (i.e. in a manner more similar to a well-understood degradation in detector energy resolution like that shown in Figure 7.9 than to the energy systematic uncertainties that were described by the spillover metric), and hence would have a much reduced impact on the experiment.

Sample   La (×10⁻⁶ g La/g Nd)   Lu (×10⁻⁷ g Lu/g Nd)   227Ac (×10⁻⁸ g 235U/g Nd)
1        195 ± 19               13 ± 1                 0.86 ± 0.10
2        18.1 ± 2.2             not measured           94.8 ± 0.52
3        0 ± 19                 0 ± 2.3                2.4 ± 0.18
4        0 ± 25                 0 ± 4.6                2.9 ± 0.18
5        175 ± 42               0 ± 17                 52 ± 2.9
6        240 ± 40               0 ± 15                 52 ± 2.9

Table 7.6: The results from germanium detector-based assays of neodymium salts. Sample 1 was NdCl3, with the other samples being Nd2O3 of different nominal purities and from different suppliers. The samples and measurements are described in detail in [140] (Sample 1), [141] (Sample 2), and [142] (Samples 3-6). Where the isotope is not specified, natural isotopic abundances are assumed. Samarium and gadolinium were not included in this study as their decays do not result in the production of detectable γ-rays.

One final complication of pileup backgrounds in SNO+ will be their effect on calibration activities. In order to achieve the very precise understanding of the detector energy response necessary to set interesting 0νββ limits, energy calibrations with statistics larger than the 10^7-10^8 2νββ events expected will be required. In order to collect this number of events in a reasonable time (and also to prevent the 1-10 Hz of 2νββ decays from being a significant background to the calibrations), calibration data rates approaching or exceeding 1000 Hz will be required. At this rate, ∼0.05% of the calibration events (or between 5,000 and 50,000 events) will pile up with another calibration event, producing a pileup distortion in excess of what is expected in the data on the high energy side of the calibration spectra. Thus, interpreting the calibration data will rely on modelling, specifically on modelling the pileup during the high rate calibration periods. This in turn will require random triggering of the detector at a significant rate during calibration running.

In summary, the pileup of events in SNO+ has the potential to provide a significant challenge to the double beta decay portion of the experiment. Although it is reasonable to expect that, provided the radioactive impurities in the neodymium can be kept to a reasonable level, the pileup backgrounds can be accommodated in the analysis with minimal impact on the 0νββ limit, pileup backgrounds will certainly have to be carefully considered by the collaboration as the experiment moves forward.

7.7 Summary of the SNO+ Neutrinoless Double Beta Decay Search

The search for neutrinoless double beta decay through the loading of the 150Nd double beta decay isotope into the SNO+ liquid scintillator presents an exciting opportunity for the double beta decay community. The SNO+ active volume is sufficiently large that, even at a relatively modest 0.1% loading of the scintillator, large masses of isotope are achievable. Using natural neodymium, SNO+ should be able to confirm or refute the controversial claim of a detection of 0νββ with ⟨m_ν⟩ ≳ 270 meV in approximately 1 kT·a of running (or approximately three real years) for all currently calculated values of the 150Nd nuclear matrix element. If neodymium enriched to 50% in 150Nd could be obtained, SNO+ would conclusively test the existing claim in a matter of a few months and, depending on the 150Nd nuclear matrix element, proceed to probe the ⟨m_ν⟩ = 10-40 meV region predicted by the observed neutrino mass splittings in the case of the inverted neutrino mass hierarchy. This makes SNO+ competitive, both in sensitivity and in time, with other current neutrinoless double beta decay experiments.

Ensuring the success of the SNO+ neutrinoless double beta decay experiment will not be simple, as an extremely precise understanding of the background energy spectra, especially the 150Nd 2νββ energy spectrum, will be required in order to make interesting and credible 0νββ measurements. This will require the energy resolution of the SNO+ detector to be characterized and monitored very carefully, and the 2νββ spectral shape underlying the energy resolution to be well understood. It will also be necessary to identify sources of, or to produce, neodymium with very low levels of rare earth element contamination. These challenges are significant but by no means insurmountable, and with a good deal of careful and well-planned effort, the SNO+ collaboration should be able to make a very significant contribution to the global search for neutrinoless double beta decay.

7.8 SNO+ Conclusion

The SNO+ experiment represents an exciting opportunity for SNOLAB, for Canada, and for the particle astrophysics community in general. With relative ease and for relatively little expense it is possible to construct, from the remaining infrastructure of the SNO experiment, a new and extremely capable liquid scintillator neutrino detector. Using this detector, the SNO+ collaboration will be able to study the low energy solar neutrinos, reactor anti-neutrinos, geo-neutrinos, and, with luck, supernova neutrinos. These studies will provide important information to several different physics disciplines including neutrino physics, solar physics, and geophysics. In addition, by loading the liquid scintillator with double beta decay isotope, the SNO+ collaboration will be able to carry out a very competitive, and possibly world-leading, search for neutrinoless double beta decay. Thus, the same experiment which helped to demonstrate that neutrinos have mass will, in a new form, continue to lead the global effort to better understand these particles. I wish the collaboration and the experiment all the best.

References

[1] H. Bethe. Energy production in stars. Phys. Rev., 55:103(L), 1939.

[2] J.N. Bahcall. Solar Models: An Historical Overview. Nucl. Phys. B - Proc. Suppl., 118:77–86, 2003.

[3] J.N. Bahcall, A.M. Serenelli, and S. Basu. New Solar Opacities, Abundances, Helioseismology, and Neutrino Fluxes. Astrophys. J., 621:L85–L88, 2005.

[4] B.T. Cleveland et al. Measurement of the Solar Electron Neutrino Flux with the Homestake Chlorine Detector. Astrophys. J., 496:505–526, 1998.

[5] J.N. Abdurashitov et al. Measurement of the Solar Neutrino Capture Rate by the Russian-American Gallium Solar Neutrino Experiment During One Half of the 22-year Cycle of Solar Activity. J. Exp. Theor. Phys., 95:181–193, 2002.

[6] M. Altmann et al. Complete Results for Five Years of GNO Solar Neutrino Observations. Phys. Lett., B616:174–190, 2005.

[7] W. Hampel et al. GALLEX Solar Neutrino Observations: Results for GALLEX IV. Phys. Lett., B447:127–133, 1999.

[8] Y. Fukuda et al. Solar Neutrino Data Covering Solar Cycle 22. Phys. Rev. Lett., 77:1683–1686, 1996.

[9] J. Hosaka et al. Solar Neutrino Measurements in Super-Kamiokande-I. Phys. Rev., D73:112001, 2006.

[10] J.N. Bahcall and R. Davis. An Account of the Development of the Solar Neutrino Problem. In C.A. Barnes, D.D. Clayton and D. Schramm, editors, Essays in Nuclear Astrophysics, pages 243–285. Cambridge University Press, 1982.

[11] J.N. Bahcall, N. Cabibbo, and A. Yahil. Are Neutrinos Stable Particles? Phys. Rev. Lett., 28:316, 1972.

[12] A. Cisneros. Effect of Neutrino Magnetic Moment on Solar Neutrino Observations. Astrophys. Space Sci., 10:87–92, 1971.

[13] V. Gribov and B. Pontecorvo. Neutrino Astronomy and Lepton Charge. Phys. Lett., 28B:493, 1969.

[14] L. Wolfenstein. Neutrino Oscillations in Matter. Phys. Rev., D17:2369–2374, 1978.

[15] S.P. Mikheyev and A.Yu. Smirnov. Resonant Amplification of ν Oscillations in Matter and Solar-Neutrino Spectroscopy. Nuovo Cimento, 9C:17, 1986.

[16] H.H. Chen. Direct Approach to Resolve the Solar-Neutrino Problem. Phys. Rev. Lett., 55:1534–1536, 1985.

[17] J. Boger et al. The Sudbury Neutrino Observatory. Nucl. Inst. Meth., A449:172, 2000.

[18] Q.R. Ahmad et al. Direct Evidence for Neutrino Flavor Transformation from Neutral-Current Interactions in the Sudbury Neutrino Observatory. Phys. Rev. Lett., 89:011301, 2002.

[19] B. Aharmim et al. Measurement of the νe and Total B-8 Solar Neutrino Fluxes with the Sudbury Neutrino Observatory Phase I Data Set. Phys. Rev., C75:045502, 2007.

[20] B. Aharmim et al. Electron Energy Spectra, Fluxes, and Day-Night Asymmetries of 8B Solar Neutrinos from the 391-Day Salt Phase SNO Data Set. Phys. Rev. C, 72:055502, 2005.

[21] B. Aharmim et al. Independent Measurement of the Total Active 8B Solar Neutrino Flux using an Array of 3He Proportional Counters at the Sudbury Neutrino Observatory. Phys. Rev. Lett., 101:111301, 2008.

[22] C. Arpesella et al. Direct Measurement of the Be-7 Solar Neutrino Flux with 192 Days of Borexino Data. Phys. Rev. Lett., 101:091302, 2008.

[23] S. Abe et al. Precision Measurement of Neutrino Oscillation Parameters with KamLAND. Phys. Rev. Lett., 100:221803, 2008.

[24] T. Araki et al. Measurement of Neutrino Oscillation with KamLAND: Evidence of Spectral Distortion. Phys. Rev. Lett., 94:081801, 2005.

[25] J.N. Bahcall. Neutrino Astrophysics. Cambridge University Press, 1989.

[26] R. Martin. Numerical Methods and Approximations for the Calculation of Solar Neutrino Fluxes in Three Flavours Focused on Results for the Sudbury Neutrino Observatory. Master's thesis, Queen's University, 2006. UMI-MR-15302.

[27] C. Amsler et al. (Particle Data Group). Leptons. Phys. Lett., B667:1. URL: http://pdg.lbl.gov, 2008.

[28] J.F. Amsbaugh et al. An Array of Low-Background 3He Proportional Counters for the Sudbury Neutrino Observatory. Nucl. Inst. Meth., A554:1054–1080, 2005.

[29] H.S. Wan Chan Tseung. Simulation of NCD Alpha Background Energy PDFs. SNO internal document MANN-78S64R, November 2007.

[30] S. McGee. Skimmed NNNA Events. SNO internal document MANN-6ZKS8A, March 2007.

[31] B. Monreal. Stringy NNNA Hunt on Many Pulse-Shape Parameters. SNO internal document MANN-767T5F, August 2007.

[32] B. Monreal. Search for Bursty or Non-Poissonian NNNAs. SNO internal document MANN-76LKR3, August 2007.

[33] B. Monreal. The Prior Probability of Low-Badness NNNAs. SNO internal document MANN-77Q4DM, October 2007.

[34] A. Wright. Update to the String-by-String Rate Analysis. SNO internal document MANN-7DJQ2S, April 2008.

[35] N. Obath et al. NCD MC Readiness Report for the First NCD Paper, v3.1. SNO internal document MANN-79662C, April 2008.

[36] H.S. Wan Chan Tseung. A Simulation of Space Charge Effects in the NCDs. SNO internal document, April 2004.

[37] H.S. Wan Chan Tseung. The Motion of Electrons in NCD Gas. SNO internal document MANN-6VYDCG, November 2007.

[38] B. Jamieson. SNO NCD Phase Signal Extraction on Unblinded Data with Integration over Systematic Nuisance Parameters by Markov-Chain Monte-Carlo - Version 1.1. SNO internal document MANN-7NCU9S, May 2008.

[39] J. Goon et al. Signal Extraction of the Solar Neutrino Neutral-Current Flux with the Sudbury Neutrino Observatory Neutral Current Detectors. J. Phys.: Conf. Ser., 136(4):042010 (1pp), 2008.

[40] A. Wright. Extracting the NCD Neutron Flux From an Unknown Background. SNO internal document MANN-73HQNM, May 2007.

[41] A. Wright. Update on Polynomial and Overlap Fits Under Different PSA Cuts. SNO internal document MANN-7AWRDE, January 2008.

[42] Likelihood Ratio Tests. NIST/SEMATECH e-Handbook of Statistical Methods, Section 8.2.3.3. Available at www.itl.nist.gov/div989/handbook/index.html.

[43] W.-M. Yao et al. Statistics. J. Phys. G, 33:1, 2006.

[44] B. VanDevender. NCLI Parameters, Uncertainties, and Covariance. SNO internal document, April 2007.

[45] E. Guillian. MUX Threshold Efficiency Determination for the NCD. SNO internal document MANN-7D7UJ3, March 2008.

[46] A. Wright. Systematic Uncertainties Associated with the Use of the 24Na Neutron PDF. SNO internal document MANN-7DHHKC, April 2008.

[47] E. Guillian. The Average MUX Live Fraction for the Solar Neutrino Data. SNO internal document MANN-7DLJVS, April 2008.

[48] A. Wright. A Collection of Scope Live Fraction Documents. SNO internal document MANN-7AQL4S, January 2008.

[49] E. Guillian, A. Wright, and K. Boudjemline. The NCD Scope Live Fraction. SNO internal document MANN-7BW3Z7, May 2008.

[50] E. Guillian. Some Correction Factors of Converting Extracted Neutron Numbers to the Solar Neutrino Flux. SNO internal document MANN-7DSE7Y, April 2008.

[51] H.M. O'Keeffe. Low Energy Background in the NCD Phase of the Sudbury Neutrino Observatory. PhD thesis, University of Oxford, 2008.

[52] B. Cai, C. Kraus, and A. Poon. External Neutrons for the NCD Phase. SNO internal document MANN-7E8TX9, May 2008.

[53] C. Kraus. A.V. Alpha Counting - 2007. SNO internal document MANN-7DDNCP, April 2008.

[54] S. McGee et al. Limit on Neutron Production from Activity on NCD Outer Surface. SNO internal document MANN-7A2MHW, January 2008.

[55] J.A. Formaggio. Other Background Sources for the NCD Phase of the SNO Experiment. SNO internal document MANN-7AMQGM, February 2008.

[56] J. Detwiler et al. Livetime for the Full NCD Data Set. SNO internal document MANN-7APU7D, January 2008.

[57] J. Loach. Measurement of the Flux of 8B Solar Neutrinos at the Sudbury Neutrino Observatory. PhD thesis, University of Oxford, 2008.

[58] E. Guillian. The Determination of the Neutral Current NCD Neutron Capture Efficiency with 24Na Data. SNO internal document MANN-7E7LF8, April 2008.

[59] W.T. Winter et al. The B-8 Neutrino Spectrum. Phys. Rev., C73:025503, 2006.

[60] S. Nakamura et al. Neutrino Deuteron Reactions at Solar Neutrino Energies. Nucl. Phys., A707:561–576, 2002.

[61] A. Kurylov, M.J. Ramsey-Musolf, and P. Vogel. Radiative Corrections in Neutrino Deuterium Disintegration. Phys. Rev., C65:055501, 2002.

[62] C. Kraus. Corrections for the FNP. SNO internal document, May 2008.

[63] A. Friedland, C. Lunardini, and C. Peña-Garay. Solar Neutrinos as Probes of Neutrino-Matter Interactions. Phys. Lett., B594:347, 2004.

[64] V. Barger, P. Huber, and D. Marfatia. Solar Mass-Varying Neutrino Oscillations. Phys. Rev. Lett., 95:211802, 2005.

[65] J.N. Bahcall and C. Peña-Garay. A Road Map to Solar Neutrino Fluxes, Neutrino Oscillation Parameters, and Tests for New Physics. JHEP 11, 004, 2003.

[66] S. Quirk. Purification of Liquid Scintillator and Monte Carlo Simulations of Relevant Internal Backgrounds in SNO+. Master's thesis, Queen's University, 2007.

[67] E. O'Sullivan et al. Scintillation Timing Results from Bucket Data and Bench Top Measurements. SNO+ internal presentation doc-308-v1, April 2009.

[68] T. Hagner et al. Muon Induced Production of Radioactive Isotopes in Scintillation Detectors. Astropart. Phys., 14:33–47, 2000.

[69] G. Bellini et al. Measurement of the Solar 8B Neutrino Flux with 246 Live Days of Borexino and Observation of the MSW Vacuum-Matter Transition. 2008.

[70] M. Balata et al. CNO and pep Neutrino Spectroscopy in Borexino: Measurement of the Deep-Underground Production of Cosmogenic 11C in an Organic Liquid Scintillator. Phys. Rev., C74:045805, 2006.

[71] J. Bahcall, M. Gonzalez-Garcia, and C. Peña-Garay. Does the Sun Shine by pp or CNO Fusion Reactions? Phys. Rev. Lett., 90:131301, 2003.

[72] C. Peña-Garay and A. Serenelli. Solar Neutrinos and the Solar Composition Problem. arXiv:0811.2424, 2008.

[73] M. Chen. Geo-neutrinos in SNO+. Earth, Moon, and Planets, 99(1-4):221–228, 2006.

[74] E. Guillian. The Sensitivity of SNO+ to ∆m²₁₂ Using Reactor Anti-neutrino Data. arXiv:0809.1649, 2008.

[75] G. Fiorentini et al. Geo-Neutrinos: A Short Review. Nucl. Phys. Proc. Suppl., 143:53–59, 2005.

[76] T. Araki et al. Experimental Investigation of Geologically Produced Antineutrinos with KamLAND. Nature, 436:499–503, 2005.

[77] C.G. Rothschild, M.C. Chen, and F.P. Calaprice. Antineutrino Geophysics with Liquid Scintillator Detectors. Geophys. Res. Lett., 25:103, 1998.

[78] J. Benziger et al. A Scintillator Purification System for the Borexino Solar Neutrino Detector. Nucl. Inst. Meth. A, 587:277, 2008.

[79] A. Burrows, D. Klein, and R. Gandhi. The Future of Supernova Neutrino Detection. Phys. Rev., D45:3361–3385, 1992.

[80] C.J. Virtue. SNO and Supernovae. Nucl. Phys. B (Proc. Suppl.), 100:24–29, 2001.

[81] C. Lunardini and A.Yu. Smirnov. Neutrinos from SN1987A: Flavor Conversion and Interpretation of Results. Astropart. Phys., 21:703–720, 2004.

[82] P.J. Kernan and L.M. Krauss. Yet Another Paper on SN1987A: Large Angle Oscillations, and the Electron Neutrino Mass. Nucl. Phys., B437:243–256, 1995.

[83] C. Hanhart et al. Extra Dimensions, SN1987A, and Nucleon Nucleon Scattering Data. Nucl. Phys., B595:335–359, 2001.

[84] L. Cadonati, F.P. Calaprice, and M.C. Chen. Supernova Neutrino Detection in BOREXINO. Astropart. Phys., 16:361–372, 2002.

[85] J. Busenitz et al. Proposal for U.S. Participation in KamLAND. 2003. Available online at http://bama.ua.edu/~andreas/ps_files/kamland_prop.pdf.

[86] J.F. Beacom, W.M. Farr, and P. Vogel. Detection of Supernova Neutrinos by Neutrino Proton Elastic Scattering. Phys. Rev., D66:033001, 2002.

[87] M.C. Gonzalez-Garcia and M. Maltoni. Phenomenology with Massive Neutrinos. Phys. Rept., 460:1–129, 2008.

[88] H. Klapdor-Kleingrothaus and I.V. Krivosheina. The Evidence for the Observation of 0νββ Decay: The Identification of 0νββ Events from the Full Spectrum. Mod. Phys. Lett. A, 21(20):1547–1566, 2006.

[89] H.V. Klapdor-Kleingrothaus et al. Evidence for Neutrinoless Double Beta Decay. Mod. Phys. Lett., A16:2409–2420, 2001.

[90] C.E. Aalseth et al. Comment on 'Evidence for Neutrinoless Double Beta Decay'. Mod. Phys. Lett., A17:1475–1478, 2002.

[91] P.S. Vincet et al. Report of the Canadian Subatomic Physics Five-Year Planning Committee. Available at www.ap.smu.ca/~butler/lrp/, June 2001.

[92] A. Piepke. Final Results from the Palo Verde Neutrino Oscillation Experiment. Prog. Part. Nucl. Phys., 48:113–121, 2002.

[93] V. Novikov. Scintillator Development for SNO+. SNO+ internal document doc-319-v1, September 2006.

[94] J.D. Stachiw. Handbook of Acrylics for Submersibles, Hyperbaric Chambers, and Aquaria. Best Publishing Co., 2003.

[95] Designation D 543. Standard Practices for Evaluating the Resistance of Plastics to Chemical Reagents. In Annual Book of ASTM Standards 2005, page 28. ASTM International, 2005.

[96] Designation D 638. Standard Test Method for Tensile Properties of Plastics. In Annual Book of ASTM Standards 2005, page 50. ASTM International, 2005.

[97] D. Rogowsky. Private Communication, April 2009.

[98] A. Wright et al. The Scintillation and Optical Properties of Linear Alkyl Benzene. SNO+ internal document doc-219-v1, September 2008.

[99] B. Brewer. SNO+ Engineering Feasibility Report. SNO+ report by SNC-Lavalin Nuclear Inc., October 2005.

[100] A. Hallin. Nonlinear Buckling Analysis of Acrylic Vessel. Internal SNO+ report, March 2008.

[101] C. Hearty et al. Report of the Engineering Review of the SNO+ Acrylic Vessel Hold-down Design. Report to SNO+ by hold-down review committee, September 2008.

[102] A.S. Barabash. NEMO 3 Double Beta Decay Experiment: Latest Results. J. Phys.: Conf. Ser., 173:012008 (10pp), 2009.

[103] S.R. Elliott and P. Vogel. Double Beta Decay. Ann. Rev. Nucl. Part. Sci., 52:115–151, 2002.

[104] P. Vogel. Nuclear Physics Aspects of Double Beta Decay. arXiv:0807.2457, 2008.

188 [105] F.T. Avignone, S.R. Elliott, and J. Engel. Double Beta Decay, Majorana Neutrinos, and

Neutrino Mass. Rev. Mod. Phys., 80:481–516, 2008.

[106] A. Staudt, K. Muto, and H. V. Klapdor-Kleingrothaus. Calculation of 2nu and 0nu Double-

Beta Decay Rates. Europhys. Lett., 13:31–36, 1990.

[107] A. Faessler and F. Simkˇ ovic. Double Beta Decay. J. Phys. G, 24:2139–2178, 1998.

[108] J. Toivanen and J. Suhonen. Renormalized Proton-Neutron Quasiparticle Random-Phase Approximation and Its Application to Double Beta Decay. Phys. Rev. Lett., 75:410–413, 1995.

[109] V.A. Rodin, A. Faessler, F. Šimkovic, and P. Vogel. Erratum: Assessment of Uncertainties in QRPA 0νββ Nuclear Matrix Elements [Nucl. Phys. A 766, 107 (2006)]. arXiv:0706.4304v1, 2007.

[110] J.G. Hirsch, O. Castaños, and P.O. Hess. Neutrinoless Double Beta Decay in Heavy Deformed Nuclei. Nucl. Phys. A, 582(1–2):124–140, 1995.

[111] N. Fatemighomi. Private Communication, July 2009.

[112] A.S. Barabash et al. Investigation of ββ Decay in 150Nd and 148Nd to the Excited States of Daughter Nuclei. Phys. Rev., C79:045501, 2009.

[113] A. P. Babichev et al. Development of the Laser Isotope Separation Method (AVLIS) for Obtaining Weight Amounts of Highly Enriched 150Nd Isotope. Quantum Electronics, 35(10):879–890, 2005.

[114] R. S. Raghavan. New Approach to the Search for Neutrinoless Double Beta Decay. Phys. Rev. Lett., 72(10):1411–1414, 1994.

[115] B. Caccianiga and M. G. Giammarchi. Neutrinoless Double Beta Decay with Xe-136 in BOREXINO and the BOREXINO Counting Test Facility. Astropart. Phys., 14(1):15–31, 2000.

[116] S. Link and M.A. El-Sayed. Optical Properties and Ultrafast Dynamics of Metallic Nanocrystals. Ann. Rev. Phys. Chem., 54(1):331–366, 2003.

[117] J.D. Jackson. Classical Electrodynamics. John Wiley & Sons, Inc., 1999.

[118] A. Wright. Nanoparticle Loading Report. SNO+ internal document doc-320-v1, March 2005.

[119] C. Buck et al. Metal Loaded Liquid Scintillators, 2004. Available online: www.scientificcommons.org/20562176.

[120] M. Yeh, A. Garnov, and R.L. Hahn. Gadolinium-Loaded Liquid Scintillator for High-Precision Measurements of Antineutrino Oscillations and the Mixing Angle, θ13. Nucl. Inst. Meth. A, 578(1):329–339, 2007.

[121] F. Masetti, F. Elisei, and U. Mazzucato. Optical Study of a Large-Scale Liquid-Scintillator Detector. J. Lum., 68(1):15–25, 1996.

[122] H.M. O’Keeffe et al. Analysis of the Scintillator Bucket Data. SNO+ internal presentation doc-306-v1, April 2009.

[123] F. Boehm and P. Vogel. Physics of Massive Neutrinos. Cambridge University Press, 1987.

[124] J. Maneira. External Background in SNO+. SNO+ internal document doc-43-v1, August 2007.

[125] Y. Kishimoto. Liquid Scintillator Purification. In B. Cleveland, R. Ford, and M. Chen, editors, Topical Workshop on Low Radioactivity Techniques LRT2004, pages 139–198. AIP Conference Proceedings, 2005.

[126] K. Zuber. Nd-related Issues. SNO+ internal presentation doc-310-v1, April 2009.

[127] A. Wright. Nd Cosmogenics: A Preliminary Look. SNO+ internal presentation doc-39-v1, August 2007.

[128] S. Schönert et al. The GERmanium Detector Array (GERDA) for the Search of Neutrinoless Beta Beta Decays of Ge-76 at LNGS. Nucl. Phys. Proc. Suppl., 145:242–245, 2005.

[129] A. Balysh et al. The Heidelberg-Moscow Double Beta Decay Experiment with Enriched 76Ge. First Results. Phys. Lett. B, 283(1–2):32–36, 1992.

[130] C. E. Aalseth et al. The IGEX 76Ge Neutrinoless Double-Beta Decay Experiment: Prospects for Next Generation Experiments. Phys. Rev., D65:092007, 2002.

[131] R. Gaitskell et al. White Paper on the Majorana Zero-Neutrino Double-Beta Decay Experiment, 2003.

[132] R. Ardito et al. CUORE: A Cryogenic Underground Observatory for Rare Events. hep-ex/0501010, 2005.

[133] C. Arnaboldi et al. A New Limit on the Neutrinoless DBD of 130Te. Phys. Rev. Lett., 95:142501, 2005.

[134] R. Neilson. EXO-200. J. Phys.: Conf. Ser., 136(4):042052 (1pp), 2008.

[135] T. Shima et al. MOON for a Next-Generation Neutrino-less Double-Beta Decay Experiment; Present Status and Perspective. J. Phys.: Conf. Ser., 120(5):052055 (3pp), 2008.

[136] A. S. Barabash. NEMO-3 and SuperNEMO Double Beta Decay Experiments. J. Phys.: Conf. Ser., 39:347–349, 2006.

[137] J.C. Maneira. Calibration and Monitoring for the Borexino Solar Neutrino Experiment. PhD thesis, University of Lisbon, 2001.

[138] B. Cleveland. Rare-earth Backgrounds for Nd Phase. SNO+ internal document doc-117-v1, January 2008.

[139] A. Wright. A Study of Event Pileup Effects in SNO+. SNO+ internal document doc-118-v2, January 2008.

[140] B. Cleveland and I. Lawson. Measurement of the γ Activity of a NdCl3 Sample with the SNOLAB Ge Detector. SNO+ internal document doc-120-v1, March 2008.

[141] B. Cleveland and I. Lawson. Measurement of the γ Activity of a Nd2O3 Sample with the SNOLAB Ge Detector. SNO+ internal document doc-122-v1, August 2008.

[142] B. Cleveland and I. Lawson. Measurement of the Activity of Four Nd2O3 Samples from Stanford Materials with the SNOLAB Ge Detector. SNO+ internal document doc-220-v2, November 2008.
