CERN-THESIS-2016-213
30/09/2016

Study of Electroweak Gauge Boson Scattering in the WZ Channel with the ATLAS Detector at the Large Hadron Collider

DISSERTATION
submitted for the academic degree of Doctor rerum naturalium (Dr. rer. nat.)
to the Fakultät Mathematik und Naturwissenschaften of the Technische Universität Dresden

by Diplom-Physiker Felix Socher (né Thomas), born on 01 February 1986 in Riesa

Allons-y!

First referee: Prof. Dr. Michael Kobel
Second referee: Prof. Dr. Chara Petridou

Date of the oral examination: 30.09.2016
Date of submission: 15.07.2016

Abstract

The Standard Model of particle physics is a very well tested theory describing the strong, weak, and electromagnetic interactions between elementary particles through the exchange of force carriers called gauge bosons. Its high predictive power stems from its ability to derive the properties of the interactions it describes from fundamental symmetries of nature. Yet, it is not a final theory, as there are several phenomena it cannot explain. Furthermore, not all of its predictions have been studied with sufficient precision, e.g. the properties of the newly discovered Higgs boson. Therefore, further probing of the Standard Model is necessary and may result in finding possible indications for new physics.

The non-abelian SU(2)_L × U(1)_Y symmetry group determines the properties of the electromagnetic and weak interactions, giving rise to self-couplings between the electroweak gauge bosons, i.e. the massive W and Z bosons and the massless photon, via triple and quartic gauge couplings. Studies carried out over the past 20 years at various particle accelerator experiments have shed light on the structure of the triple gauge couplings, but few results on quartic gauge couplings are available. The electroweak self-couplings are intertwined with the electroweak symmetry breaking, and thus the Higgs boson, through the scattering of massive electroweak gauge bosons: both the W and Z bosons couple to the Higgs boson and may interact with each other by exchanging it. Theory predictions yield physical results at high energies only if either both the self-couplings and the Higgs boson properties are as described by the Standard Model, or if they deviate from its predictions and contributions from new physics are present to render the calculations finite. This makes electroweak gauge boson scattering a powerful tool to probe the Standard Model and search for possible effects of new physics.

The small cross section of massive electroweak gauge boson scattering necessitates high centre-of-mass energies and luminosities to study these processes successfully. The Large Hadron Collider (LHC) at CERN is a circular proton-proton collider equipped to supply a suitable environment for such studies, with the colliding protons being the sources for the scattering of massive electroweak gauge bosons.

The dataset collected in 2012 by the ATLAS detector at the LHC, with a total luminosity of 20.3 fb⁻¹ and a centre-of-mass energy of 8 TeV, is analysed in this work. The elastic scattering process WZ → WZ is studied due to its clean signal properties. It provides a complementary measurement to W±W± → W±W±, which reported the first significant evidence for massive electroweak gauge boson scattering.

Given the current data, WZ → WZ scattering is not observed with large significance. A cross section upper limit of 2.5 fb at 95 % confidence level is measured, compatible with the cross section of 0.54 fb predicted by the Standard Model. In addition, distributions for several observables sensitive to electroweak gauge boson scattering are unfolded, removing effects caused by the measuring process. Physics beyond the Standard Model is probed in the framework of the electroweak chiral Lagrangian, which expresses the size of effects from new physics in terms of strength parameters. The two strength parameters influencing the quartic gauge couplings are constrained to −0.44 < α4 < 0.49 and −0.49 < α5 < 0.47, thus limiting the possible size of new physics contributions.

Kurzdarstellung

The Standard Model of particle physics describes the strong, weak, and electromagnetic interactions between elementary particles through the exchange of force carriers, so-called gauge bosons. It is a well-established theoretical description of nature, since it is able to derive the characteristics of the individual interactions from fundamental symmetries. The predictions made in this way have been successfully verified by a multitude of experiments. Nevertheless, it is not a complete theory, as it cannot describe all phenomena observed in nature. Moreover, its predictions, e.g. the properties of the recently discovered Higgs boson, have not yet been tested with sufficient precision. Therefore, further tests of the Standard Model are necessary.

The properties of the electromagnetic and weak interactions are determined by the non-abelian symmetry group SU(2)_L × U(1)_Y. A direct consequence is the existence of self-interactions between the electroweak gauge bosons, the massive W and Z bosons and the massless photon, which are described by triple and quartic couplings. The structure of the triple couplings has been studied in detail at particle accelerators over the past 20 years. Only recently have new accelerators made precise studies of the quartic couplings possible.

The Higgs boson couples to the W and Z bosons because they are massive. Hence, by studying the scattering of massive electroweak gauge bosons, both the electroweak self-interaction and the electroweak symmetry breaking can be probed. The predictions of the Standard Model are valid at high energies only if the properties of the Higgs boson correspond to those predicted by the Standard Model. If this condition is not fulfilled, contributions from new physics processes are required to avoid unphysical predictions. The scattering of massive electroweak gauge bosons is therefore well suited to test the Standard Model and to search for new physics.

The small cross sections of the processes under study require a high centre-of-mass energy and high luminosities to obtain a sufficiently large dataset. The Large Hadron Collider at CERN is a circular accelerator that meets these requirements by colliding protons, which act as the sources of the scattering electroweak gauge bosons, at a centre-of-mass energy of 8 TeV. This work is based on the dataset recorded in 2012 by the ATLAS detector, corresponding to a luminosity of 20.3 fb⁻¹. The process under study is the elastic scattering WZ → WZ, which is complementary to the process W±W± → W±W±, in which significant evidence for the scattering of massive electroweak gauge bosons was found for the first time.

With the currently available dataset, WZ → WZ cannot be observed with sufficient significance. An upper limit of 2.5 fb at 95 % confidence level is measured for the cross section, compatible with the Standard Model prediction of 0.54 fb.

Contributions of new physics beyond the Standard Model can be described generically in the framework of effective field theories by strength parameters. The observed data allow the strength parameters α4 and α5, which influence the quartic couplings, to be constrained to the ranges −0.44 < α4 < 0.49 and −0.49 < α5 < 0.47.

Contents

1. Introduction

2. Theoretical Foundations
   2.1. Introduction
   2.2. The Standard Model
      2.2.1. Local Gauge Theory
      2.2.2. Quantum Chromodynamics
      2.2.3. Electroweak Theory
      2.2.4. Electroweak Symmetry Breaking
      2.2.5. The Lagrangian of the Standard Model
   2.3. Electroweak gauge boson scattering
      2.3.1. Definition
      2.3.2. Motivation
      2.3.3. VBS Topology
      2.3.4. Choice Of Observation Channel
   2.4. Effective Theories
      2.4.1. Introduction
      2.4.2. Effective Theory of the Decay
      2.4.3. Anomalous Quartic Gauge Couplings
      2.4.4. Electroweak Chiral Lagrangian
      2.4.5. Linear Symmetry Breaking Approach
      2.4.6. K-Matrix Unitarisation

3. Experiment
   3.1. CERN
   3.2. Large Hadron Collider
   3.3. The ATLAS Detector
      3.3.1. ATLAS coordinate system
      3.3.2. Inner Detector
      3.3.3. Electromagnetic Calorimeter
      3.3.4. Hadronic Calorimeter
      3.3.5. Muon Spectrometer
      3.3.6. Trigger System
      3.3.7. Luminosity Monitoring
   3.4. Object Reconstruction
      3.4.1. Electrons
      3.4.2. Muons
      3.4.3. Jets
      3.4.4. Missing Transverse Momentum

4. Datasets
   4.1. Introduction
   4.2. Real Data
   4.3. Simulated Data
      4.3.1. Introduction
      4.3.2. Event Generation
      4.3.3. Event Record
      4.3.4. Detector Simulation
      4.3.5. Data Format for Analysis
      4.3.6. Description of Used Generators
   4.4. Simulated Processes
      4.4.1. W±Zjj-EW
      4.4.2. W±Zjj-QCD
      4.4.3. Background processes
      4.4.4. Scaling Factors

5. Object and Event Selection
   5.1. Object Selection on Detector Level
      5.1.1. Electron Definition
      5.1.2. Muon Definition
      5.1.3. Jet Definition
      5.1.4. Missing Transverse Momentum
   5.2. Event Selection
      5.2.1. Detector Level Event Selection for the Inclusive Phase Space
      5.2.2. Event Selection for the VBS Phase Space
      5.2.3. Event Selection for the aQGC Phase Space
   5.3. Object Selection on Particle Level
      5.3.1. Definition
      5.3.2. Jet Definition
      5.3.3. Definition
   5.4. Event Selection on Particle Level
      5.4.1. Event Selection for the Inclusive Phase Space
      5.4.2. Event Selection for the VBS Phase Space
      5.4.3. Event Selection for the aQGC Phase Space

6. Background Estimation
   6.1. The Matrix Method
      6.1.1. Fake Ratio Estimation
      6.1.2. Application of the Matrix Method
      6.1.3. Matrix Method Results

7. Systematics
   7.1. Experimental Uncertainties
      7.1.1. Muon Uncertainties
      7.1.2. Electron Uncertainties
      7.1.3. Jet Uncertainties
      7.1.4. Missing Transverse Momentum Uncertainties
      7.1.5. Other Uncertainties
   7.2. Theoretical Uncertainties
      7.2.1. Scale Uncertainties
      7.2.2. Parton Distribution Function Uncertainties
      7.2.3. Parton Shower Uncertainties
      7.2.4. Lower Parton Multiplicity Uncertainties
      7.2.5. Theory Uncertainties for Signal Summary
      7.2.6. Theoretical Uncertainties on Backgrounds

8. Measuring the VBS Cross Section
   8.1. Phase Space Optimisation
   8.2. Event Yields in the VBS Phase Space
   8.3. Statistical Evaluation
      8.3.1. Cross Section Formula
      8.3.2. Profile Likelihood Method
      8.3.3. Technical Implementation
      8.3.4. Results

9. Differential Distributions
   9.1. General Approach
   9.2. Introduction to Bayesian Iterative Unfolding
   9.3. Analysis Implementation
      9.3.1. Technical Setup
      9.3.2. Systematic Uncertainties
      9.3.3. Optimising the Number of Iterations
   9.4. Results
      9.4.1. Jet Multiplicity
      9.4.2. Invariant Mass of the Dijet System
      9.4.3. Absolute difference in Rapidity of the Dijet System

10. Setting Limits on Anomalous Quartic Gauge Couplings
   10.1. Simulation
   10.2. Phase Space Optimisation
      10.2.1. Finding Variables for Optimisation
      10.2.2. Optimisation Approach
   10.3. Event Yields and Systematic Uncertainties
   10.4. Results
      10.4.1. Fiducial Cross Section Results
      10.4.2. Limits on Anomalous Quartic Gauge Couplings

11. Summary

A. Particle Level Distributions for the WZ Channel

B. Additional Plots in the VBS Phase Space

C. Event Display for Event in VBS Phase Space

D. Additional Information on Unfolding Results

E. Additional Results on Anomalous Quartic Gauge Couplings
   E.1. Optimisation Studies
   E.2. Results in the alternative aQGC Phase Space

F. Simulated Data Samples

G. Software Framework

List of Figures

List of Tables

Bibliography


1. Introduction

Over the centuries many attempts have been made to create a complete description of the interactions between the most fundamental building blocks of matter. This process culminated in the Standard Model of particle physics¹ [2–5]. Since its inception the Standard Model has been tested in numerous experiments without the observation of large discrepancies between its predictions and measured data, cementing its reputation as currently the best theoretical framework describing the interactions of fundamental particles. Yet, it is also obvious that it cannot be a final theory, for multiple reasons [6], one of them being that it only describes ordinary matter, which makes up only 5 % of the known universe, with dark matter and dark energy expected to provide the remaining majority. Furthermore, not all of its predictions have been studied with sufficient precision, e.g. the properties of the Higgs boson-like particle discovered in 2012 [7, 8]. Therefore, further probing of the Standard Model is necessary and may result in finding possible indications for new physics.

The Standard Model is a quantum field theory describing the electromagnetic, weak, and strong interactions between particles. Thus it describes the fundamental interactions needed for understanding many of the physical phenomena observed in nature, e.g. the binding of atoms (electromagnetic interaction), particle decays such as the β decay (weak interaction), and the binding of atomic nuclei (strong interaction). Interactions between matter particles are mediated through the exchange of force carriers called gauge bosons. Thus, two matter particles may communicate with each other through the emission of a gauge boson by the first particle and the absorption of the gauge boson by the second particle. For example, scattering processes between electrons are described through the exchange of photons between the two particles. In the Standard Model, the force carriers are the photon for the electromagnetic force, the W+, W−, and Z bosons for the weak force, and the gluons for the strong force.

The properties of each interaction are governed by underlying symmetries, which are incorporated in the Standard Model via the theoretical framework of gauge theory. The electroweak theory, the part of the Standard Model describing the electromagnetic and weak interactions, is based on the non-abelian SU(2)_L × U(1)_Y symmetry group. A direct consequence of the symmetry group being non-abelian is the prediction of self-couplings between multiple electroweak gauge bosons, i.e. the massive W and Z bosons and the massless photon. These self-couplings are direct interactions between three (four) electroweak gauge bosons without the exchange of an intermediate particle and are called electroweak triple (quartic) gauge couplings. The triple gauge coupling W±W∓Z and the quartic gauge coupling W±ZW∓Z are of immediate relevance to this work but are not the only triple and quartic gauge vertices in existence.

Experiments at previous accelerators have examined these self-couplings, successfully providing valuable insights into the structure of triple gauge couplings [9–14]. Results on the quartic gauge couplings have become available only recently [15–21] and further

¹ An alternative approach may be found in Ref. [1].

studies are needed to gain a better understanding of their properties.

Another central part of the Standard Model is the electroweak symmetry breaking [22–25], responsible for the masses of the massive particles and the existence of the Higgs boson. The Higgs boson couples to all massive particles, introducing couplings of the form W±W∓H, ZZH, and HHZZ among others. In consequence, two massive particles may interact via the exchange of a Higgs boson. The combination of the self-coupling vertices and Higgs boson exchanges defines electroweak gauge boson scattering. Therefore, electroweak gauge boson scattering is sensitive to the properties of both the electroweak theory and the electroweak symmetry breaking.

The Large Hadron Collider (LHC) at CERN, a circular proton-proton collider with a circumference of 27 km and a design centre-of-mass energy of √s = 14 TeV, provides the environment to study electroweak gauge boson scattering. Here, the colliding protons act as sources for the scattering electroweak gauge bosons. The event rate for a given process is directly proportional to its cross section and the luminosity, with the cross section potentially depending on the centre-of-mass energy. The low cross sections for electroweak gauge boson scattering predicted by the Standard Model necessitate high luminosities and centre-of-mass energies; otherwise a meaningful statistical analysis cannot be carried out due to insufficient event statistics. Thus, the unprecedented centre-of-mass energy and luminosity of the LHC enable the study of electroweak gauge boson scattering for the first time.

A first important goal is to measure the cross section of electroweak gauge boson scattering, as it has not been measured before and represents an untested prediction of the Standard Model. Furthermore, the properties of the Higgs boson may be probed. The Standard Model framework predicts unnaturally high event rates for the scattering of massive electroweak gauge bosons at high centre-of-mass energies if the Higgs boson properties are not those described by the Standard Model [26]. This is due to a residual direct dependence of the cross section on the centre-of-mass energy, which is not present in the Standard Model prediction. In this case physics beyond the Standard Model would be needed to prevent these unnaturally high event rates. However, the contributions by new physics would not necessarily lead to the cross sections for massive electroweak gauge boson scattering predicted in the Standard Model case. Therefore, deviations from the Standard Model predictions for massive electroweak gauge boson scattering may indicate new physics.

A generic way to search for possible effects of new physics is provided by effective field theories [27, 28]. These theories do not represent concrete physical models introducing new particles but aim to encompass all possible effects of new physics. The core assumption of these theories is that the particles associated with new physics are beyond the kinematic reach of the LHC and cannot be produced directly. Therefore, only low energy effects caused by the new physics may be observable. Effects of new physics are modelled by introducing new operators, each with an associated free parameter governing its overall contribution. These operators translate to new, anomalous couplings, e.g. anomalous triple and quartic gauge couplings, which emulate the possible behaviour of new physics beyond the Standard Model.
Constraining the possible parameter space gives valuable insight into the nature of new physics and may help to build more concrete models.

Being embedded in the field of diboson physics, electroweak gauge boson scattering is also an interesting test of perturbative QCD, as higher order corrections may not be covered by the usually applied theory uncertainty prescriptions [29, 30]. Thus it

provides motivation for further improvement of theory predictions.

This work sets out to measure the cross section for the concrete electroweak gauge boson scattering process WZ → WZ and to search for effects caused by physics beyond the Standard Model. To achieve this, the dataset recorded by the ATLAS detector in the year 2012, with a total luminosity of 20.3 fb⁻¹ at √s = 8 TeV, will be analysed. The analysis of this process is complementary to the study of the electroweak W±W± → W±W± scattering, in which the first significant evidence for electroweak gauge boson scattering has been found [18, 19]. The signal signature contains three charged particles (electrons, positrons, muons, or anti-muons) and a flavour-matching neutrino plus two bundles of strongly interacting particles which stem from the quarks that emitted the scattering bosons. This final state may be abbreviated as l+l−l±νjj.

Observables able to separate the electroweak WZ → WZ scattering from all other Standard Model processes mimicking the same final state are identified. A region enriched with events caused by purely electroweak interactions will be defined using these observables and used to measure the cross section of the WZ → WZ scattering. Differential cross sections for variables sensitive to the scattering of electroweak gauge bosons will be provided, and new physics beyond the Standard Model will be probed in the framework of the electroweak chiral Lagrangian [27].

This thesis is organised as follows. Chapter 2 will summarise the theoretical foundations, introducing the Standard Model with a particular focus on electroweak gauge boson scattering and effective field theories. The choice of the W±Zjj final state as the channel of interest will also be discussed here. The experimental setup, realised by the LHC and the ATLAS detector, will be presented in Chapter 3. Chapter 4 will touch on the data acquisition and the generation of simulated data, detailing the samples used in the analysis. The selection strategies for both physical objects and collision events, defining the phase spaces used in the subsequent chapters, will be presented in Chapter 5. The estimation of the contributions of background processes to the selected events will be discussed in Chapter 6, followed by a review of the sources of theoretical and experimental uncertainties in Chapter 7. The results of this work will be discussed starting with Chapter 8, detailing the optimisation of the cross section measurement of the electroweak gauge boson scattering in the W±Zjj final state. The determination of differential distributions, providing unfolded results on multiple variables sensitive to electroweak gauge boson scattering, is detailed in Chapter 9. Chapter 10 will complete the discussion of physics results by constraining the possible effects of physics beyond the Standard Model. A summary of this work will be given in Chapter 11.
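As a rough illustration of the statistics involved (an estimate added here, combining the Standard Model cross section and the integrated luminosity quoted in this work, and ignoring detector acceptance and selection efficiencies), the expected number of signal events is

\[
N_{\mathrm{exp}} \approx \sigma_{\mathrm{SM}} \cdot \int L\, \mathrm{d}t \approx 0.54\ \mathrm{fb} \times 20.3\ \mathrm{fb}^{-1} \approx 11\ \text{events}\,,
\]

which makes clear why the high luminosity of the LHC is a prerequisite for this measurement.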


2. Theoretical Foundations

2.1. Introduction

The ultimate goal of particle physics is the complete description of nature at all energies in one consistent theory. Four fundamental interactions are known:

• the electromagnetic force, describing attractive and repulsive forces between electrically charged particles,
• the weak interaction, responsible for phenomena such as the beta decay,
• the strong interaction, binding together nuclei and hadrons, and
• gravity, describing the attractive force between masses.

Any description of the physics of these interactions has to have certain properties, e.g. conservation of causality, energy, and momentum. As stated by Noether's theorem [31, 32], such conservation principles stem from symmetries of the physical system under consideration. A popular example of this relation is that energy is conserved if the Hamiltonian describing the physical system in question is invariant under translations in time. Therefore, the demanded properties may be expressed as the symmetries that the Hamiltonian or Lagrangian of the theory has to obey. Examples of essential symmetries that a theory describing the interactions of elementary particles has to respect are:

• invariance under Poincaré transformations, which include global rotations, translations, and boosts in Minkowski space,
• invariance under the CPT transformation, being a simultaneous application of the parity, charge conjugation, and time reversal operations,
• internal symmetries such as gauge symmetry.

The theoretical tools of choice for building such a theory are quantum field theories [33–36]. Quantum field theories combine quantum mechanics, describing subatomic systems, with special relativity, needed for describing objects at velocities comparable to the speed of light. Here, particles are described as the quantised excitations of quantum fields. The physics of a given quantum field theory is governed by the properties of its Lagrangian, e.g. its symmetries and field content. Many Lagrangians may be formulated; however, only those returning results that do not contradict the experimental data may be considered a possible description of nature. The Standard Model represents such a quantum field theory and will be introduced in this chapter together with possibilities to search for physics not described by it.

2.2. The Standard Model

There are many pedagogical reviews of the Standard Model giving an excellent introduction to the matter [37, 38]. This work is not intended to add to the body of said


                 1st generation        2nd generation        3rd generation          el. charge   weakly charged   colour charged
  leptons        electron              muon                  tauon                   -1           yes              no
                 me = 0.511 MeV        mµ = 105.65 MeV       mτ = 1776.86 MeV
                 electron neutrino     muon neutrino         tau neutrino            0            yes              no
                 -                     -                     -
  quarks         up                    charm                 top                     +2/3         yes              yes
                 mu ≈ 0.002 GeV        mc ≈ 1.3 GeV          mt = 173.21 GeV
                 down                  strange               bottom                  -1/3         yes              yes
                 md ≈ 0.005 GeV        ms ≈ 0.095 GeV        mb = 4.66 GeV

Table 2.1.: Properties of elementary fermions. All fermions have half-integer spin. Two groups exist: leptons (upper half) and quarks (lower half). Masses are only given where reasonably well measured/defined. No uncertainties on the mass measurements are given. The electromagnetic charge is listed explicitly, while only the possession of a weak or strong charge is indicated. Values are taken from Ref. [39].

reviews, but will only give an overview of the needed theoretical foundations.

Over the past decades experiments in the field of particle physics have succeeded in finding a multitude of particles. These particles may be classified as elementary particles, which do not have a permanent substructure, and composite particles, which are built from elementary particles. Hadrons are one class of composite particles and are made of quarks and anti-quarks which are bound together by the strong force. Hadrons are further subdivided into baryons, consisting of three quarks or three anti-quarks, and mesons, consisting of a quark and an anti-quark. There are literally dozens of different composite particles [39]. In contrast, only 30 elementary particles have been discovered so far. These are the six quarks, six anti-quarks, six leptons, six anti-leptons, the photon, the gluon, the W+ boson, the W− boson, the Z boson, and the Higgs boson.

Quarks and leptons¹ make up all of the ordinary matter. They have spin 1/2, follow Fermi-Dirac statistics, and are thus called fermions. The interactions between fermions are described as exchanges of bosons, elementary particles with integer spin, between the fermions. The couplings of fermions to bosons are described via so-called gauge vertices, which are contact interactions taking place at a given spacetime point. One such interaction is the coupling of electrons to photons, with a possible notation for the vertex being eeγ. The photon, the gluons, and the W± and Z bosons are the mediators of the electromagnetic, strong, and weak forces and are responsible for the interactions between the fermions. The Higgs boson is a by-product of the Higgs mechanism, which introduces masses to the Standard Model in a gauge-invariant way. Their integer spin designates all of these as bosons following Bose-Einstein statistics. Fermion properties are tabulated in Table 2.1 and boson properties can be found in Table 2.2 [39].

¹ Unless explicitly specified, the terms quarks and leptons also encompass the anti-quarks and anti-leptons.


             mass                   el. charge   weakly charged   strongly charged
  Photon     mγ < 1 × 10⁻¹⁹ eV      0            no               no
  W boson    mW = 80.385 GeV        ±1           yes              no
  Z boson    mZ = 91.188 GeV        0            yes              no
  Gluon      mg = 0                 0            no               yes

Table 2.2.: Properties of elementary bosons. All bosons have integer spin. No uncertainties on the mass measurements are given. Electromagnetic charge is given explicitly while only the possession of a weak or strong charge is indicated. Values are taken from Ref. [39].

The particle content of the Standard Model is not derived from first principles but is an experimental input. The nature of the interactions between these particles is likewise not derived from first principles but is rather a description of what is observed. However, it is not the case that new experimental insights are simply integrated into the Standard Model in a purely ad hoc way. In the Standard Model the formalism of local gauge theories is used to relate the properties of a given interaction with its associated symmetry group. The choice of the symmetry group determines many features of a theory at once, giving the theoretical description its predictive power. Which symmetry group is associated with which interaction is not predicted by theory but has to be determined in experiments.

2.2.1. Local Gauge Theory

Maxwell's equations represent a first example of a gauge theory, although they were not conceived in the framework of local gauge theories at that time. The idea of local gauge invariance was first formulated by Hermann Weyl in the late 1920s, attempting to unify gravity and electromagnetism, and was later popularised by Wolfgang Pauli [40]. The formalism of local gauge theory was then used to formulate quantum electrodynamics (QED), which was achieved independently by Feynman [41, 42], Schwinger [43], and Tomonaga [44]. An account of the historical development of local gauge theory can be found in Ref. [45]. In order to illustrate the approach, the Lagrangian of QED will be derived starting from the Lagrangian of the free fermion field, as done in Ref. [37]. The Lagrangian for a fermionic field ψ with half-integer spin, electric charge Qe, and mass m without interactions is

\mathcal{L} = \bar{\psi}(x)\,(i\slashed{\partial} - m)\,\psi(x) \quad \text{with} \quad \slashed{\partial} \equiv \gamma^\mu \partial_\mu \qquad (2.1)

with \bar{\psi} = \psi^\dagger \gamma^0 being the adjoint field and \slashed{\partial} = \gamma^\mu \partial_\mu the derivative in Feynman slash notation. The matrices \gamma^\mu are the Dirac matrices. One may consider subjecting the Lagrangian to a global U(1) transformation, corresponding to a shift in the phase of the field:


\psi \to e^{-iQ\theta}\,\psi,
\bar{\psi} \to \bar{\psi}\,e^{iQ\theta}, \qquad (2.2)
\partial_\mu \psi \to e^{-iQ\theta}\,\partial_\mu \psi

with Q being the generator of the symmetry group and θ the continuous transformation parameter. This global symmetry leads to the conservation of the current J_µ:

J_\mu = eQ\,\bar{\psi}\gamma_\mu\psi \quad \text{with} \quad \partial_\mu J^\mu = 0, \qquad (2.3)

which is invariant under the application of the global gauge transformation. The current may be interpreted as the electromagnetic current.² However, this Lagrangian does not contain interactions. Interactions can be introduced by changing the global transformation into a local one, i.e. by introducing a dependence on the spacetime coordinate x in the continuous parameter θ. Thus the transformation operations change to:

\psi \to e^{-iQ\theta(x)}\,\psi,
\bar{\psi} \to \bar{\psi}\,e^{iQ\theta(x)}, \qquad (2.4)
\partial_\mu\psi \to e^{-iQ\theta(x)}\,\partial_\mu\psi - iQ\,(\partial_\mu\theta(x))\,e^{-iQ\theta(x)}\,\psi.

The free Lagrangian in Equation (2.1) is not invariant under this transformation, necessitating the introduction of a new field A_µ(x). This field is interpreted as a gauge boson interacting with the field ψ and transforming under U(1) via

A_\mu \to A_\mu - \frac{1}{e}\,\partial_\mu\theta(x). \qquad (2.5)

The transformation prescription for A_µ compensates for the terms spoiling the invariance of the Lagrangian under the local gauge transformation. An often chosen way to include the new field is to replace the derivative \partial_\mu with the covariant derivative D_\mu:

D_\mu\psi \equiv (\partial_\mu - ieQ\,A_\mu)\,\psi. \qquad (2.6)

The covariant derivative transforms in the same way as ψ:

D_\mu\psi \to e^{-iQ\theta(x)}\,D_\mu\psi. \qquad (2.7)

The only term left to complete the theory is a kinetic term for the introduced gauge boson, describing its propagation. The field strength tensor for the gauge field, providing the necessary kinetic term, has the form:

F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu. \qquad (2.8)

Thus, the U(1) gauge and Lorentz invariant Lagrangian can be written down:

² The c-number e was added to support this interpretation.


\mathcal{L}_{\mathrm{QED}} = \bar{\psi}(x)\,(i\slashed{D} - m)\,\psi(x) - \frac{1}{4}\,F_{\mu\nu}(x)F^{\mu\nu}(x) \qquad (2.9)

As indicated by the subscript, this is the Lagrangian of quantum electrodynamics, describing the electromagnetic interaction of particles via the exchange of photons. The formulation of all the individual theories making up the Standard Model uses the concept of local gauge theory, as will be seen in Sections 2.2.2 and 2.2.3.
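A one-line check, spelled out here because the text above only states it implicitly: the kinetic term built from Equation (2.8) is gauge invariant by itself, since under the transformation of Equation (2.5)

\[
F_{\mu\nu} \to \partial_\mu\Big(A_\nu - \tfrac{1}{e}\,\partial_\nu\theta\Big) - \partial_\nu\Big(A_\mu - \tfrac{1}{e}\,\partial_\mu\theta\Big) = F_{\mu\nu} - \tfrac{1}{e}\,(\partial_\mu\partial_\nu - \partial_\nu\partial_\mu)\,\theta = F_{\mu\nu}\,,
\]

because partial derivatives commute. The full Lagrangian of Equation (2.9) is therefore invariant under the local U(1) transformation.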

2.2.2. Quantum Chromodynamics

Quantum chromodynamics (QCD) provides a description for the strong interaction, responsible for the formation of hadrons and the stability of nuclei. Between the discovery of the neutron in 1932 by Chadwick [46] and the early 1960s many attempts were made to describe the strong interaction between the then known particles. Many of these strongly interacting particles had been found by the early 1960s, and the idea that an underlying structure has to exist relating these particles became stronger and stronger. Gell-Mann [47] and Ne'eman [48] found that the SU(3) symmetry group would relate the known particles in a scheme they called the "Eightfold Way". This underlying scheme motivated the introduction of new elementary particles, called quarks, with the up-, down-, and strange-quark representing a triplet. A new quantum number, colour, with three possible values, red, green, and blue [49], had to be introduced to explain hadrons with three quarks of the same type, e.g. the ∆++ particle consisting of three up-quarks.³ Simultaneously, the triplet structure between the three quarks made way for the triplet structure of the colour charged quarks, e.g. (u_r, u_g, u_b). Gluons entered the picture as the mediators of the strong interaction between the colour charged quarks [50].⁴

The formulation of QCD follows the same scheme as quantum electrodynamics. However, the underlying symmetry group of the colour transformation, SU(3)_C, is non-abelian, meaning that its elements do not commute. This is in contrast to the abelian U(1) group of QED and leads to self-coupling terms among the gauge bosons of QCD, the gluons. The Lagrangian of QCD reads:

\mathcal{L}_{\mathrm{QCD}} = \sum_q \bar{q}(x)\,(i\slashed{D} - m_q)\,q(x) - \frac{1}{4}\,F^a_{\mu\nu}(x)F_a^{\mu\nu}(x) \qquad (2.10)

with the covariant derivative being

D_\mu q \equiv \left(\partial_\mu - i g_s \frac{\lambda_a}{2}\, G^a_\mu\right) q \qquad (2.11)

and the field strength tensor being

F^a_{\mu\nu}(x) = \partial_\mu G^a_\nu(x) - \partial_\nu G^a_\mu(x) + g_s\, f^{abc}\, G^b_\mu G^c_\nu. \qquad (2.12)

Here, the fields q are vectors of length 3 and dimension 1, containing the three quark fields for the individual colour charges red, blue, and green. The strong coupling

³ Without colour, all three quarks would have the same quantum state, which is prohibited by the Pauli exclusion principle.
⁴ An extended account of the history of QCD can be found in Ref. [51].


constant is denoted with g_s, and the SU(3) generating matrices are the eight Gell-Mann matrices divided by two, λ_a/2. The eight gluon fields are written as G^a_µ and the structure constants as f^{abc} (a, b, c = 1, ..., 8).

Several properties are noteworthy. The gluons themselves are colour charged, leading to self-coupling terms, a direct consequence of the non-abelian nature of the SU(3) symmetry group. This is a general property of so-called Yang-Mills theories [52]; the resulting self-coupling of gluons has also been observed experimentally [53].

Two further properties are not visible from analysing the Lagrangian: confinement and asymptotic freedom. Confinement [54] characterises the behaviour of the strong interaction at long distances, describing the fact that no individual gluons or quarks but only colourless bound states are observed. When trying to separate a quark-anti-quark pair, the energy stored in the strong field between them increases with increasing distance. At some point the stored energy is sufficiently high to produce a new quark-anti-quark pair, with the newly produced particles forming colourless hadrons with the original particles. It has to be noted that it has not yet been proven that confinement is predicted by QCD.⁵

⁵ The proof is in fact one of the Millennium Prize Problems formulated by the Clay Mathematics Institute [55].

The other phenomenon, asymptotic freedom, reflects the fact that the strong coupling decreases at high energies/small distances. This is a consequence of the negative value of the β function of QCD, which describes the dependence of the strong coupling constant on the energy scale. Asymptotic freedom enables calculations of matrix elements in perturbation theory for processes involving the strong interaction and was found independently by Gross and Wilczek [56, 57] and Politzer [58, 59].
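To make the statement about the β function concrete, the standard one-loop result for the running of the strong coupling (textbook material, not written out in the original section) is

\[
\alpha_s(Q^2) = \frac{\alpha_s(\mu^2)}{1 + b_0\, \alpha_s(\mu^2) \ln(Q^2/\mu^2)}\,, \qquad b_0 = \frac{33 - 2 n_f}{12\pi}\,,
\]

with n_f the number of active quark flavours. Since b_0 > 0 for n_f ≤ 16, the coupling decreases logarithmically with rising momentum transfer Q² (asymptotic freedom) and grows towards small Q², consistent with the onset of confinement at long distances.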

2.2.3. Electroweak Theory

One of the great successes of the Standard Model is the unified description of the electromagnetic and weak interactions by the electroweak theory. The underlying symmetry group of the electroweak theory is SU(2)_L × U(1)_Y, which was suggested by Glashow in 1961 [2]. The subscript L refers to the fact that the weak gauge bosons only couple to left-handed particles, while Y denotes the hypercharge, to avoid confusion with the U(1) transformation of QED. The Lagrangian of the electroweak theory reads:

\mathcal{L}_{\mathrm{EW}} = \bar{l}\, i\slashed{D}\, l + \bar{q}\, i\slashed{D}\, q - \frac{1}{4}\, W^a_{\mu\nu} W_a^{\mu\nu} - \frac{1}{4}\, B_{\mu\nu} B^{\mu\nu} \qquad (2.13)

with the covariant derivative

D_\mu f = \left(\partial_\mu - i g_w T_a W^a_\mu - i g_Y \frac{Y}{2}\, B_\mu\right) f \qquad (2.14)

and the field strength tensors

W^i_{\mu\nu} = \partial_\mu W^i_\nu - \partial_\nu W^i_\mu + g_w\, \varepsilon^{ijk}\, W^j_\mu W^k_\nu, \qquad (2.15)
B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu.


  field    T3      Q      Y
  u_L      1/2     2/3    1/3
  d_L      -1/2    -1/3   1/3
  u_R      0       2/3    4/3
  d_R      0       -1/3   -2/3
  ν_L      1/2     0      -1
  e_L      -1/2    -1     -1
  e_R      0       -1     -2

Table 2.3.: Charges of the fermions with respect to the SU(2)_L × U(1)_Y symmetry. Left-handed fermions transform as doublets under SU(2)_L, which is reflected by the symmetric charges with respect to the weak isospin component T_3. Right-handed fermions transform as singlets and have no weak isospin.

The fermionic fields l and q denote the leptons and quarks, which receive different treatment depending on their chirality. Left-handed fermions transform as doublets under SU(2)_L, whereas right-handed fermions transform as singlets, as is reflected in Table 2.3.

The electroweak Lagrangian introduces three gauge fields for the SU(2)_L symmetry (W_{1...3}) and one for the U(1)_Y symmetry (B). The generators of the SU(2)_L symmetry are the three Pauli matrices divided by two. The coupling constants for the weak fields W_{1...3} and the hypercharge field B are free parameters of the theory and are denoted as g_w and g_Y, respectively. As in QCD, the field strength tensor exhibits terms that lead to self-couplings between the gauge bosons, which is of paramount importance to the study of electroweak gauge boson scattering and will be discussed further in a dedicated section. Four charges have been introduced alongside the gauge fields: the weak isospin charges T_i and the weak hypercharge Y. The neutral weak isospin component T_3 and the weak hypercharge Y are connected with the electric charge Q via the Gell-Mann-Nishijima formula [60, 61]:

Q = T_3 + \frac{Y}{2}\,. \qquad (2.16)

The physically observed gauge bosons are linear combinations of the gauge fields present in the Lagrangian:

W^\pm_\mu = \frac{1}{\sqrt{2}}\left(W^1_\mu \mp i\,W^2_\mu\right), \qquad (2.17)
Z_\mu = \cos(\theta_w)\, W^3_\mu - \sin(\theta_w)\, B_\mu,
A_\mu = \sin(\theta_w)\, W^3_\mu + \cos(\theta_w)\, B_\mu,

with θ_w being the weak mixing angle, which governs the rotation of the gauge fields into the physical fields. The physical fields W^\pm_\mu, Z_\mu, and A_\mu are obtained by requiring that the W^\pm act as ladder operators between the components of the left-handed doublets and that the photon does not couple to neutrinos.
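A relation frequently quoted alongside Equation (2.17), added here for completeness (it follows from matching the photon coupling to QED and is standard textbook material rather than part of the original text): the electric charge fixes the two couplings through the weak mixing angle,

\[
e = g_w \sin(\theta_w) = g_Y \cos(\theta_w)\,, \qquad \tan(\theta_w) = \frac{g_Y}{g_w}\,,
\]

so θ_w measures the relative strength of the SU(2)_L and U(1)_Y couplings.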


A phenomenon not immediately visible from the Lagrangian is the CP violation of the weak interaction. This violation is introduced via the CKM matrix, which describes the rotation of the interaction eigenstates of the down-type quarks into the respective mass eigenstates. The idea was first developed by Cabibbo in 1963 [62] for two generations of quarks and later extended to the three-generation case by Kobayashi and Maskawa [63]. Decays of hadrons composed of quarks from different families (e.g. Λ⁰ = uds) are possible due to off-diagonal terms of the CKM matrix. The CP violation observed for some electroweak processes is a result of a non-vanishing complex phase present in the CKM matrix, which is only made possible by the fact that three generations of fermions exist. Measurements of the free parameters of the CKM matrix pose an important test of the Standard Model and are curated by the CKMfitter group [64].

As of now no masses have been introduced in the electroweak theory. This also holds for the weak gauge bosons, whose masses are of the order of 100 GeV. However, a naive introduction of mass terms is not possible, as they would break the SU(2)_L × U(1)_Y symmetry. In the late 1960s Weinberg and Salam published papers combining the electroweak theory with the Higgs mechanism, discussed below, providing a theory that included massive gauge bosons [3, 4] and enabled a gauge-invariant way to introduce fermion masses.

2.2.4. Electroweak Symmetry Breaking

In the early 1960s three groups published proposals addressing the problem of the introduction of masses in the Standard Model within a very short timeframe: Peter Higgs [25], Robert Brout and François Englert [24], and Gerald Guralnik, C. R. Hagen, and Tom Kibble [23].⁶

The proposed mechanism is based on the concept of spontaneous symmetry breaking [22], describing the situation that the physics determining the behaviour of a system has a symmetry that the system's ground state⁷ does not have. Applied to quantum field theory, this means that the Lagrangian of a system is invariant under a certain symmetry transformation but the ground state of the theory is not. In the case of the spontaneous breaking of a global symmetry the Goldstone theorem [65] applies, predicting one massless boson (called Goldstone boson) for each generator that does not annihilate the vacuum. However, the introduction of massless bosons would not remedy the stated problem. Luckily, the Goldstone theorem does not apply in the same way in the case of gauge theories. For these, the Higgs mechanism states that the Goldstone bosons introduced by the spontaneous symmetry breaking combine with the already present massless gauge bosons and become the longitudinal components of said bosons. As will be seen later, the terms introducing the longitudinal components may be interpreted as mass terms for the respective massive electroweak gauge bosons.

⁶ There is some debate how to name the mechanism based on the people who have published papers on it. Often used variants are Brout-Englert-Higgs mechanism or Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism or other variations. In this work the short version Higgs mechanism will be used, which does not intend to diminish the work of the other groups but solely adopts the general usage. In the end, it is rather improbable that the memory of humankind will ever remember things as they really were, and the discovery is important even if the true heroes may remain unsung.
⁷ The ground state is the state that minimises the expectation value ⟨0|Φ|0⟩.


In the case of the Standard Model the SU(2)_L × U(1)_Y group is broken down in the ground state to U(1)_em. The electroweak Lagrangian is extended by the following terms, which introduce the spontaneous symmetry breaking:

\mathcal{L}_{\mathrm{EWSB}} = (D_\mu \Phi)^\dagger (D^\mu \Phi) - V(\Phi) \quad \text{with} \qquad (2.18)
V(\Phi) = -\mu^2\, \Phi^\dagger \Phi + \lambda\, (\Phi^\dagger \Phi)^2\,; \quad \lambda > 0 \qquad (2.19)

with Φ being a complex doublet:

† µ = (DµΦ) (D Φ) V (Φ) with (2.18) LEWSB − V (Φ) = µ2Φ†Φ + λ(Φ†Φ)2; λ > 0 (2.19) − with Φ being a complex doublet:

φ+ 1 φ + iφ  Φ = = 1 2 (2.20) φ0 √2 φ3 + iφ4 and Dµ the covariant derivative of the electroweak theory. It has to be noted that µ2 > 0 and λ > 0 have to hold otherwise the vacuum would be symmetric and no spontaneous symmetry breaking would take place. Due to the U(1)em symmetry of the ground state an infinite number of ground states exists and one is free to choose a specific one for further calculations. The ground state is chosen to be:

\Phi_0 = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v \end{pmatrix} \qquad (2.21)

with v being the so-called vacuum expectation value. By applying the unitary gauge one can write the general form of Equation (2.20) as

\Phi(x) = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ v + H(x) \end{pmatrix} \qquad (2.22)

with H being the newly introduced scalar field dubbed the Higgs boson. The mass terms of the massive electroweak gauge bosons are encoded in the kinetic term of the Higgs field

(D_\mu\Phi)^\dagger (D^\mu\Phi) = \frac{g_w^2 v^2}{4}\, W^+_\mu W^{-\mu} + \frac{1}{2}\,\frac{(g_Y^2 + g_w^2)\, v^2}{4}\, Z_\mu Z^\mu + \dots, \qquad (2.23)

whereas the Higgs mass term itself can be found in the potential:

V(\Phi) = \frac{1}{2}\,(2\mu^2)\, H^2 + \dots\,. \qquad (2.24)

This leads to the following mass predictions:

M_W = \frac{g_w v}{2}\,; \qquad M_Z = \frac{\sqrt{g_w^2 + g_Y^2}\; v}{2}\,; \qquad M_H = \sqrt{2}\,\mu. \qquad (2.25)

Using these relations and measured values for the masses of the gauge bosons and the electroweak couplings, the vacuum expectation value v can be calculated and is found to be ≈ 246 GeV. The masses of the remaining fermions are introduced via Yukawa couplings, which are shown in the full Standard Model Lagrangian in the next section. The numerical values of the Yukawa couplings are free parameters of the Standard Model and have to be determined experimentally.
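Two short numerical cross-checks of Equation (2.25), added here for illustration using the boson masses of Table 2.2 and the Fermi constant from Ref. [39]: the ratio of the gauge boson masses fixes the weak mixing angle,

\[
\cos(\theta_w) = \frac{M_W}{M_Z} = \frac{80.385\ \mathrm{GeV}}{91.188\ \mathrm{GeV}} \approx 0.882 \quad\Rightarrow\quad \sin^2(\theta_w) \approx 0.22\,,
\]

and the vacuum expectation value follows from the Fermi constant via v = (\sqrt{2}\, G_F)^{-1/2}, giving

\[
v = \left(\sqrt{2} \times 1.166 \times 10^{-5}\ \mathrm{GeV}^{-2}\right)^{-1/2} \approx 246\ \mathrm{GeV}\,,
\]

in agreement with the value quoted above.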


2.2.5. The Lagrangian of the Standard Model

All necessary parts of the Standard Model are now in place, and the three interactions can be written as one large Lagrangian which is invariant under the combined SU(3)_C × SU(2)_L × U(1)_Y symmetry group. The individual parts are:

\mathcal{L} = \mathcal{L}^{\mathrm{leptons}}_{\mathrm{kin}} + \mathcal{L}^{\mathrm{quarks}}_{\mathrm{kin}} + \mathcal{L}^{\mathrm{gauge}}_{\mathrm{kin}} + \mathcal{L}^{\mathrm{Higgs}} + \mathcal{L}^{\mathrm{Yukawa}} \qquad (2.26)

with

\mathcal{L}^{\mathrm{leptons}}_{\mathrm{kin}} = \sum_j \left( i\,\bar{L}^j_L \slashed{D} L^j_L + i\,\bar{l}^j_R \slashed{D} l^j_R \right),
\mathcal{L}^{\mathrm{quarks}}_{\mathrm{kin}} = \sum_j \left( i\,\bar{Q}^j_L \slashed{D} Q^j_L + i\,\bar{u}^j_R \slashed{D} u^j_R + i\,\bar{d}^j_R \slashed{D} d^j_R \right),
\mathcal{L}^{\mathrm{gauge}}_{\mathrm{kin}} = -\frac{1}{4}\, B_{\mu\nu} B^{\mu\nu} - \frac{1}{4}\, W_{a,\mu\nu} W_a^{\mu\nu} - \frac{1}{4}\, G_{a,\mu\nu} G_a^{\mu\nu},
\mathcal{L}^{\mathrm{Higgs}} = (D^\mu\Phi)^\dagger (D_\mu\Phi) + \mu^2\, \Phi^\dagger\Phi - \lambda\, (\Phi^\dagger\Phi)^2,
\mathcal{L}^{\mathrm{Yukawa}} = -\sum_j y^j_l\, \bar{L}^j_L \Phi\, l^j_R - \sum_{j,k} \left( Y^{jk}_u\, \bar{Q}^j_L \Phi^c\, u^k_R + Y^{jk}_d\, \bar{Q}^j_L \Phi\, d^k_R \right) + \mathrm{h.c.}

with j ∈ {1, 2, 3} being the summation index of the fermion generations. The non-diagonal Yukawa matrices Y^{jk}_x give rise to the flavour changing currents described by the CKM matrix. The covariant derivative is

D_\mu = \partial_\mu - i g_Y \frac{Y}{2}\, B_\mu - i g_w T_a\, W^a_\mu - i g_s \frac{\lambda_a}{2}\, G^a_\mu. \qquad (2.27)

All terms necessary for theoretical calculations are present: propagation terms for the individual fields, all allowed interaction terms between fermions and bosons as well as among the bosons themselves, the Higgs potential giving rise to the electroweak symmetry breaking, and the Yukawa mass terms that contain the masses of the individual fermion fields.

2.3. Electroweak gauge boson scattering

2.3.1. Definition

Electroweak gauge boson scattering is also referred to as vector boson scattering (VBS), and the terms will be used interchangeably. It is the process VV → VV with V = W±, Z, γ, realised via electroweak triple and quartic gauge vertices and interactions of the massive electroweak gauge bosons with the Higgs boson. In Section 2.2.3 the non-abelian structure of the electroweak theory was presented. This structure gives rise to self-couplings among the electroweak gauge bosons, namely the W± bosons, the Z boson, and the photon [66]:


\mathcal{L}_3 = -\,ie\cot(\theta_w)\left[(\partial^\mu W^\nu - \partial^\nu W^\mu)\, W_\mu^\dagger Z_\nu - (\partial^\mu W^{\nu\dagger} - \partial^\nu W^{\mu\dagger})\, W_\mu Z_\nu\right] \qquad (2.28)
\phantom{\mathcal{L}_3 =}\; -\, ie\cot(\theta_w)\, W_\mu^\dagger W_\nu\, (\partial^\mu Z^\nu - \partial^\nu Z^\mu)
\phantom{\mathcal{L}_3 =}\; -\, ie\left[(\partial^\mu W^\nu - \partial^\nu W^\mu)\, W_\mu^\dagger A_\nu - (\partial^\mu W^{\nu\dagger} - \partial^\nu W^{\mu\dagger})\, W_\mu A_\nu\right]
\phantom{\mathcal{L}_3 =}\; -\, ie\, W_\mu^\dagger W_\nu\, (\partial^\mu A^\nu - \partial^\nu A^\mu),

\mathcal{L}_4 = -\,\frac{e^2}{2\sin^2(\theta_w)}\left[(W_\mu^\dagger W^\mu)^2 - W_\mu^\dagger W^{\mu\dagger} W_\nu W^\nu\right] \qquad (2.29)
\phantom{\mathcal{L}_4 =}\; -\, e^2\cot^2(\theta_w)\left[W_\mu^\dagger W^\mu\, Z_\nu Z^\nu - W_\mu^\dagger Z^\mu\, W_\nu Z^\nu\right]
\phantom{\mathcal{L}_4 =}\; -\, e^2\cot(\theta_w)\left[2\,W_\mu^\dagger W^\mu\, Z_\nu A^\nu - W_\mu^\dagger Z^\mu\, W_\nu A^\nu - W_\mu^\dagger A^\mu\, W_\nu Z^\nu\right]
\phantom{\mathcal{L}_4 =}\; -\, e^2\left[W_\mu^\dagger W^\mu\, A_\nu A^\nu - W_\mu^\dagger A^\mu\, W_\nu A^\nu\right]

Figure 2.1 exhibits the Feynman diagrams that result from these terms as well as the interactions of the Higgs boson with the massive electroweak gauge bosons, which are not contained in the excerpts of the SM Lagrangian shown above. These are the building blocks for the electroweak gauge boson scattering diagrams shown in Figure 2.2, which define the purely electroweak gauge boson scattering processes that will be examined in this work.
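Reading Equation (2.29) term by term makes the quartic gauge vertex content of the Standard Model explicit (a summary added here; it follows directly from the field combinations above): the four tree-level quartic vertices are

\[
W^+W^-W^+W^- \;\Big(\propto \tfrac{e^2}{\sin^2\theta_w}\Big), \quad
W^+W^-ZZ \;\big(\propto e^2\cot^2\theta_w\big), \quad
W^+W^-Z\gamma \;\big(\propto e^2\cot\theta_w\big), \quad
W^+W^-\gamma\gamma \;\big(\propto e^2\big).
\]

Vertices containing only neutral bosons, e.g. ZZZZ, do not occur at tree level, since every quartic term contains a pair of charged W fields.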

2.3.2. Motivation

The study of VBS poses a test of the gauge sector of the electroweak theory. When only subsets of the diagrams in Figure 2.2 are considered, the theory predictions for VBS show an unbounded rise of the cross section with rising centre-of-mass energy. Finite results are obtained only when all diagrams are taken into account and the Higgs boson is the one described by the Standard Model (see Figure 2.3).

A first test of the Standard Model is the observation of VBS itself, as it has not been observed significantly before. First evidence of massive electroweak gauge boson

[Figure 2.1.: Feynman diagrams for the self-couplings of the electroweak gauge bosons (top row) and the interactions of the electroweak gauge bosons with the Higgs boson (bottom row).]


[Figure 2.2.: Possible Feynman diagrams for VBS with final states containing massive electroweak gauge bosons. The primes on the quarks indicate possible changes in flavour.]

scattering was found in the process W±W± → W±W± [18, 68], but a definitive statement is not possible with the data available as of now. Thus further studies are necessary to test this central prediction of the Standard Model.

Testing the Standard Model goes hand in hand with the search for new physics. To quantify the effects of new physics one may opt for a model independent approach that simply parameterises the deviations from the Standard Model. These deviations would then be interpreted as the low energy effects of new physics whose associated particles are outside the kinematic reach of the experiment but whose effects are observable at lower energies. In general, such deviations are expressed in terms of anomalous triple or quartic gauge couplings. In addition, one may search for new heavy resonances that couple to the electroweak gauge bosons or for an expanded Higgs sector.
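Looking ahead to the parameterisation used later in this work, the electroweak chiral Lagrangian: in a frequently used convention (normalisations differ between references, so the following is indicative only), the two operators that modify the quartic, but not the triple, gauge couplings read

\[
\mathcal{L}_4 = \alpha_4\, \mathrm{tr}\big[V_\mu V_\nu\big]\, \mathrm{tr}\big[V^\mu V^\nu\big]\,, \qquad
\mathcal{L}_5 = \alpha_5\, \mathrm{tr}\big[V_\mu V^\mu\big]\, \mathrm{tr}\big[V_\nu V^\nu\big]\,,
\]

where V_µ is a building block containing the Goldstone and gauge fields, and α_4, α_5 are the dimensionless strength parameters constrained in Chapter 10.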

2.3.3. VBS Topology

Vector boson scattering cannot be studied directly as the process VV → VV but rather as an intermediate step in an embedding process of the form pp → VVjj, with j denoting jets.⁸ Figure 2.2 shows the embedding process, where two quarks from the colliding protons radiate the interacting vector bosons. The diagrams are purely electroweak; no strong coupling or gluons are present in them. A typical VBS event is shown in Figure 2.4.

The phenomenology of this scattering process is quite unique, facilitating its analysis. The most striking feature is the presence of two jets with high transverse momentum, dubbed "tagging jets". The tagging jets exhibit a large spatial separation in rapidity (see Section 3.3.1 for the definition) and a high invariant mass of the dijet system. These properties are caused by the fact that the two jets originate from the incoming quarks which emit the scattering vector bosons, thus carrying a fraction of the momentum of the incoming protons. In addition, no or only small jet activity (occurrence of jets) between the two tagging jets is expected due to the lack of colour flow between the incoming quarks.

⁸ Jet is a shorthand term for closely spaced showers of hadrons.
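The two hallmark tagging-jet properties mentioned above are directly related, which can be made explicit with a standard kinematic identity (added here for illustration): for jets treated as massless, the invariant mass of the dijet system is

\[
m_{jj}^2 = 2\, p_{\mathrm{T},j1}\, p_{\mathrm{T},j2}\, \big(\cosh(\Delta y_{jj}) - \cos(\Delta\phi_{jj})\big)\,,
\]

so at fixed transverse momenta m_jj grows roughly exponentially with the rapidity separation Δy_jj of the jets. A large rapidity gap and a large dijet invariant mass are therefore two views of the same topology.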


[Figure 2.3.: Cross section evolution with respect to rising centre-of-mass energy. The compared scenarios are a SM with and without a Higgs boson, with the scattering of the longitudinal modes of various combinations of massive electroweak gauge bosons considered. Unitarity is only respected in the longitudinal modes if a SM Higgs boson is realised. The transverse modes yield finite results for both scenarios and are not shown. Axes: σ in pb versus E_cm in GeV. Plot taken from Ref. [67].]

The vector bosons emerging from the scattering are typically located in between the tagging jets. It is useful to quantify this "in-between-ness" of the vector bosons by defining a new variable called centrality:

\zeta = \min(\Delta\eta_-, \Delta\eta_+) \quad \text{with} \qquad (2.30)
\Delta\eta_- = \min(\eta_{\mathrm{particles}}) - \min(\eta_{\mathrm{jet1}}, \eta_{\mathrm{jet2}}),
\Delta\eta_+ = \max(\eta_{\mathrm{jet1}}, \eta_{\mathrm{jet2}}) - \max(\eta_{\mathrm{particles}}).

Here, η denotes the pseudo-rapidity defined in Section 3.3.1. This variable is positive as long as all considered particles lie between the tagging jets; increasingly positive values indicate both a large separation of the tagging jets from each other and a large separation of the fermions/bosons from the two jets.

The properties of the diboson system yield additional variables suitable for analysis. New particles may be produced as intermediate resonances when two electroweak gauge bosons fuse. The properties of the observed diboson system are then influenced by the properties of the intermediate resonance. In the case of resonances with large masses, one would expect a high invariant mass of the diboson system, a large separation in φ of the outgoing bosons, and high transverse momenta. Figures 2.5 and 2.6 show distributions of the mentioned variables in the WZ channel for illustrative purposes. The phase space definition for these plots is given in Section 5.3.
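To make Equation (2.30) concrete, the following is a minimal sketch in Python of how the centrality could be computed (an illustration, not the analysis code used in this thesis):

```python
def centrality(eta_jet1, eta_jet2, eta_particles):
    """Centrality zeta of Equation (2.30).

    Positive if and only if all considered particles lie between the two
    tagging jets in pseudo-rapidity; larger values mean the particles sit
    further inside the rapidity gap spanned by the jets.
    """
    d_eta_minus = min(eta_particles) - min(eta_jet1, eta_jet2)
    d_eta_plus = max(eta_jet1, eta_jet2) - max(eta_particles)
    return min(d_eta_minus, d_eta_plus)

# Example: two forward/backward tagging jets enclosing three central leptons.
print(centrality(-3.1, 2.8, [-0.4, 0.2, 1.0]))   # -> 1.8  (leptons enclosed)
# A lepton outside the tagging jets gives a negative centrality:
print(centrality(-3.1, 2.8, [-3.5, 0.2, 1.0]))   # -> -0.4
```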


[Figure 2.4.: General topology of a VBS event. VBS events are characterised by the two tagging jets (J1 and J2) that have a large separation in rapidity Δη and enclose the decay products of the diboson system (B1 and B2). To quantify the spatial relation between the diboson system, be it the bosons themselves or their decay products, and the tagging jets, the centrality ζ is introduced and defined in Equation 2.30.]

2.3.4. Choice Of Observation Channel

As of now, the VBS process has only been defined as pp → VVjj. For analysis purposes the final state to be studied has to be specified. Several vector boson final states are possible: W±W∓, W±W±, W±Z, ZZ, W±γ, Zγ, and γγ. In the following, the final states containing photons will be omitted.⁹ This leaves the final states with massive electroweak gauge bosons: W±W∓, W±W±, W±Z, and ZZ.

Each of the bosons in these final states may decay either into leptons (called leptonic decay) or into hadrons (called hadronic decay). Of the leptonic decays only those that result in either electrons/positrons or muons/anti-muons are considered. Tau leptons are notoriously hard to measure, resulting in less favourable identification and kinematic resolution performance, and are thus not included in the signal definition. The branching ratio of the W boson to leptons (e or µ) is about 21 %, and about 68 % to hadrons. Similarly, the Z boson decays in about 7 % of the cases to either e+e− or µ+µ− and in roughly 70 % to hadrons [39]. Considering these possible decays one can define fully-leptonic final states (both bosons decay to leptons), semi-leptonic final states (one boson decays to leptons, the other to hadrons), and fully-hadronic final states (both bosons decay to hadrons).

Hadronic final states promise the highest statistics given the favourable branching ratios. At the same time these final states introduce large backgrounds from pure QCD processes. Although it is possible to impose certain requirements, e.g. that jets

⁹ The reasoning here is that processes with photons in the final state are not sensitive to the nature of the electroweak symmetry breaking, as there is no coupling between the photon and the Higgs boson at tree level. Therefore, one of the key motivations for studying VBS is missing.


[Figure 2.5.: Jet dependent variables for the W±Zjj-EW and the W±Zjj-QCD process. The shown distributions were obtained on particle level after the inclusive selection and requiring at least two jets (see Section 5.3 for details). The shown variables are the boson centrality ζ_V (upper left), the lepton centrality ζ_l (upper right), the absolute difference in pseudo-rapidity between the two tagging jets |Δη(j1, j2)| (lower left), and the invariant mass of the dijet system m(j1, j2) (lower right). All of these variables show different behaviour for the electroweak and strong processes and may be used for selection purposes.]

associated with a certain massive gauge boson have to match its invariant mass, it is unlikely to obtain a clean sample, as the cross section of the background QCD processes is several orders of magnitude larger than that of the VBS signal process. In addition, the fully-hadronic final states lack leptons, making triggering on these final states harder. Thus, they are not considered further. Semi-leptonic final states initially appear to be a good middle-of-the-road option, promising good statistics and a relatively clean signal due to the presence of leptons. However, the measurement of the combined WW/WZ cross section at √s = 7 TeV suggests that the semi-leptonic channels would suffer from large



Figure 2.6.: Bosonic variables for the W±Zjj-EW and the W±Zjj-QCD process. The shown distributions were obtained on particle level after the inclusive selection and requiring at least two jets (see Section 5.3 for details). The shown variables are the invariant mass of the diboson system (upper left), the opening angle between the two bosons (upper right), the scalar sum of the transverse momenta of all charged leptons associated with the bosons (lower left), and the transverse mass of the diboson system (lower right). Though the variables show little separating power between the electroweak and strong processes, they are expected to be of value in the search for new physics.

backgrounds from W and Z boson production in association with jets [69]. Furthermore, the two processes are not separable but have to be measured together, which does not allow for a specific probing of either the WW or WZ final state. An additional complication is that the background modelling is rather demanding due to the high multiplicity of additional jets that have to be simulated. Therefore, semi-leptonic final states are also not considered further. This leaves the fully-leptonic final states for consideration. These final states show very clean signals and excellent kinematic and spatial resolution thanks to the well


V V jj final state          σ(VVjj-EW)/fb    σ(VVjj-QCD)/fb    EW/QCD
W±W± → l±l±ννjj              4.28 ± 0.01      1.69 ± 0.02       2.53
W±W∓ → l±l∓ννjj             15.57 ± 0.08     35.24 ± 0.13       0.44
W±Z → l±νl±l∓jj              2.36 ± 0.01      7.19 ± 0.01       0.33
ZZ → l±l∓l±l∓jj              0.12 ± 0.01      0.21 ± 0.01       0.57
ZZ → l±l∓ννjj                0.39 ± 0.01      0.55 ± 0.01       0.71

Table 2.4.: Cross sections of several VBS final states calculated with Sherpa in a VBS-favouring phase space. The cross sections were calculated by Philipp Anger and can be found in Ref. [70].

measurable signatures of electrons and muons, but also the lowest statistics due to the small branching ratios. A clear relationship between the individual lepton final states and the corresponding boson final states can be drawn, as can be seen in Table 2.4. An important point of consideration is the ratio between the electroweak and strong contributions to the final state in question. Besides the VBS-defining diagrams, there are also purely electroweak diagrams which are not VBS, as well as diagrams which include strong contributions. The purely electroweak diagrams define the process VVjj-EW, and those containing QCD vertices define the process VVjj-QCD. Measuring VBS thus includes suppressing the contributions from the strong production processes. Here, a high ratio between electroweak and strong contributions is favourable, as it lessens the need to artificially suppress the strong processes. The cross sections in Table 2.4 were taken from [70], which also includes an extrapolation of the cross sections to 13 TeV centre-of-mass energy. The reference also contains the generation details and the phase space definition, which is repeated here for convenience:

• lepton selection: pT > 15 GeV, |η| < 2.5
• jets: at least two anti-kt (R = 0.4) clustered jets satisfying pT > 30 GeV, |η| < 4.5 (see Section 3.4.3 for the anti-kt clustering algorithm)

• invariant mass of the tagging-jet (dijet) system: Mjj > 500 GeV
• if a Z boson is in the bosonic final state: |Mll − MZ| < 25 GeV, with Mll being the mass of the best Z boson candidate.

(A minimal implementation sketch of this selection is given at the end of this discussion.) The considered channels have individual advantages and disadvantages. The ZZ bosonic final states suffer from low statistics, as can be seen from the low cross section values. In the case of ZZ → l±l∓ννjj, no benefit from a clean signature with four charged leptons is gained, and high backgrounds of Z+jets, dileptonic tt̄, and W±Z are present [71]. Therefore, this channel will probably not be studied in the foreseeable future. In contrast, ZZ → l±l∓l±l∓jj has a very clean signature and would most likely not suffer from issues like misidentified leptons, given that measurements of ZZ exhibit virtually no background [71]. This channel promises good prospects in a high-luminosity scenario where the statistical limitations are removed by a large amount of data. It promises very good kinematic resolution due to the absence of neutrinos, enabling a full reconstruction of the invariant mass of the diboson system. Furthermore, the ratio between the electroweak and strong production of ZZ → l±l∓l±l∓jj is quite favourable.
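As referenced above, a minimal sketch of the VBS-favouring phase-space selection could look as follows, assuming events are given as simple records of lepton and jet kinematics. All field and function names here are hypothetical illustrations, not those of the actual analysis code.

```python
import math

# Sketch of the VBS-favouring phase-space cuts listed above.
# The event representation (dicts with 'pt' [GeV], 'eta', 'phi') is assumed.
M_Z = 91.1876  # Z boson mass [GeV]

def dijet_mass(j1, j2):
    # Invariant mass of two (approximately massless) jets:
    # m^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi))
    return math.sqrt(2.0 * j1["pt"] * j2["pt"] *
                     (math.cosh(j1["eta"] - j2["eta"]) -
                      math.cos(j1["phi"] - j2["phi"])))

def passes_vbs_phase_space(leptons, jets, z_candidate_mass=None):
    # Lepton selection: pT > 15 GeV, |eta| < 2.5 (all decay leptons must pass)
    if any(l["pt"] <= 15 or abs(l["eta"]) >= 2.5 for l in leptons):
        return False
    # Jet selection: pT > 30 GeV, |eta| < 4.5 (anti-kt, R = 0.4, assumed upstream)
    good_jets = sorted((j for j in jets if j["pt"] > 30 and abs(j["eta"]) < 4.5),
                       key=lambda j: j["pt"], reverse=True)
    if len(good_jets) < 2:
        return False
    # Tagging jets: the two hardest jets must satisfy m_jj > 500 GeV
    if dijet_mass(good_jets[0], good_jets[1]) <= 500.0:
        return False
    # Z window, only applied if a Z boson is part of the bosonic final state
    if z_candidate_mass is not None and abs(z_candidate_mass - M_Z) >= 25.0:
        return False
    return True
```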

The properties of the final-state leptons can be measured very precisely, making this channel interesting for searches for new resonances.
The W±Z → l±νl±l∓jj channel promises higher statistics than the ZZ channel while having a relatively clean signature. Background contributions from Z+jets and tt̄ can be suppressed relatively well. Contributions from ZZ may be countered by applying a veto on a fourth charged lepton. At the same time, the presence of one neutrino does not impair the reconstruction of the diboson system too much, making it a viable search channel for singly charged resonances.
The W±W∓ → l±l∓ννjj channel shows the highest cross section in the VBS phase space and is therefore of initial interest. However, the final-state signature resembles that of ZZ → l±l∓ννjj, and high backgrounds from Z+jets, dileptonic tt̄, and W±Z are to be expected. A proof-of-concept study using multivariate analysis techniques was conducted, which showed that no significant measurement is possible with Run 1 data [72].
W±W± → l±l±ννjj is a very special channel, as it is the only one where the electroweak contribution is larger than the strong contribution. This is caused by the lack of gg and qg initial states at leading order for the QCD contributions, which are present in all other considered channels. The same-sign charged leptons provide a unique signature, helping to suppress direct backgrounds from Z+jets and tt̄. However, these backgrounds still contribute through jets being misidentified as leptons, for which the charge determination is essentially random. Additional charge-flip backgrounds come from Wγ final states where the photon undergoes conversion, resulting in an electron-positron pair that introduces additional leptons which may populate the signal region. The major background, however, is W±Z where one lepton fails the object selection.10
This discussion shows that the most promising channel for the extraction of electroweak gauge boson scattering is the W±W± final state. This channel has been studied in ATLAS, with results published in [18, 73, 74].11 Therefore, the W±Z channel was chosen to provide a complementary analysis to W±W±.
The W±Z channel contains all possible interactions shown in Figure 2.2 bar the s-channel12 Higgs contribution. The fully-leptonic channel where both bosons decay to leptons is chosen due to the clean and relatively background-free signature.

10 It can be expected that a possible Run 2 analysis of this channel will suffer from an enhanced WZ background, as the cross section of WZ grows more quickly with the centre-of-mass energy than that of W±W±.
11 Results by CMS can be found in [19].
12 A given diagram implementing the process AB → CD may be classified by the Mandelstam variables s, t, and u, depending on which one represents the squared four-momentum of the intermediate particle. The variables are defined as:

s = (pA + pB)² = (pC + pD)²,
t = (pA − pC)² = (pB − pD)²,
u = (pA − pD)² = (pB − pC)².
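To illustrate the footnote's definitions, the following sketch computes s, t, and u from four-momenta (the example momenta are made up) and checks the identity s + t + u = Σᵢ mᵢ², which holds for any 2 → 2 process:

```python
# Sketch: Mandelstam variables for AB -> CD from four-momenta (E, px, py, pz).
def minkowski_sq(p):
    e, px, py, pz = p
    return e * e - px * px - py * py - pz * pz

def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

# Made-up example momenta [GeV], not real kinematics
pA = (100.0, 0.0, 0.0, 59.9)   # incoming A
pB = (100.0, 0.0, 0.0, -41.6)  # incoming B
pC = (100.0, 30.0, 0.0, 10.0)  # outgoing C
pD = add(add(pA, pB), tuple(-x for x in pC))  # outgoing D via momentum conservation

s = minkowski_sq(add(pA, pB))
t = minkowski_sq(sub(pA, pC))
u = minkowski_sq(sub(pA, pD))
masses_sq = sum(minkowski_sq(p) for p in (pA, pB, pC, pD))
assert abs((s + t + u) - masses_sq) < 1e-6  # s + t + u = sum of squared masses
```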


2.4. Effective Field Theories

2.4.1. Introduction

Though it is a highly successful theory, tested over forty years in numerous experiments, the Standard Model is not believed to be the ultimate theory describing nature. It has several shortcomings, some physically motivated, some more concerned with philosophical points of view, that motivate the search for physics beyond the Standard Model. Some of the issues are [6]:

• The Standard Model does not include a description of the gravitational force. A true theory describing all of nature should do so.
• No explanation regarding the relative strengths of the individual forces or the mass hierarchy of the elementary particles is given by the Standard Model.
• The Standard Model only describes ordinary matter, which contributes only about 5 % of the matter and energy content of the universe, the rest being dark matter and dark energy. No suitable candidate for dark matter is provided by the Standard Model.
• The desirable property of a further unification of the forces at some scale is absent in the Standard Model. Physics beyond the Standard Model might remedy this.

With the Standard Model not being the ultimate theory of particle physics, one has to find evidence for physics beyond the Standard Model. Searches for new physics may be conducted in two ways. One may develop a specific model and optimise an analysis for testing the predictions of said model. In doing so, one has a narrow field of view but may be highly sensitive. Examples of such searches are Z′ analyses or SUSY searches [75]. However, these approaches may fail if the new physics is beyond the kinematic reach of the LHC, as they search explicitly for new particles whose production relies on achieving certain energy thresholds. The other approach is to parameterise the effects of new physics on Standard Model processes via effective field theories. These theories add new operators to the Standard Model Lagrangian, with the coupling strengths of these operators being free parameters. By constraining these free parameters, one sets limits on the effects of new physics. This more general approach may not point to a concrete model, but it may uncover effects which provide input for new theories explaining the observed deviations from the Standard Model. Conceptually, these effective field theories represent an expansion of the S-matrix in a series in E/Λ, with Λ being the scale of the new physics. At each order of the expansion, new operators are added to the Lagrangian, suppressed by increasing powers of E/Λ. Usually, the series is only expanded to next-to-leading order, assuming that all operators arising from the rest of the expansion are sufficiently suppressed by E/Λ. This defines a region of validity in terms of energy where the theoretical description is sensible and a comparison to experimental results is possible.


2.4.2. Effective Theory of the Muon Decay

An early example of an effective field theory is the Fermi theory of beta decay [76]. At the time of its inception (1933), the Standard Model had not yet been formulated. Fermi attempted to give a theoretical explanation for the β decay. In modern notation, it can be written as a quartic vertex of the form

L_Fermi = (G_F/√2) (ν̄_µ γ^α (1 − γ⁵) µ)(ē γ_α (1 − γ⁵) ν_e).      (2.31)

The Fermi constant G_F is a free parameter of this Lagrangian and can be determined via measurements of the muon lifetime and the muon mass. It evaluates to

G_F = 1.1663787(6) × 10⁻⁵ GeV⁻².      (2.32)

Fermi's theory is flawed insofar as its coupling constant has inverse mass dimension, which leads to divergent behaviour in the perturbative expansion at high energies. Comparing Fermi's theory with the Standard Model explanation of the muon decay via the emission of a W boson shows that the effective theory describes the low-energy behaviour very well but fails at describing the physics at higher energies.
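The reach of this effective description can be illustrated numerically: with the well-known leading-order relation Γ = G_F² m_µ⁵ / (192π³), which neglects the electron mass and radiative corrections, the measured value of G_F reproduces the muon lifetime of about 2.2 µs.

```python
import math

G_F = 1.1663787e-5      # Fermi constant [GeV^-2]
m_mu = 0.1056584        # muon mass [GeV]
hbar = 6.582119569e-25  # reduced Planck constant [GeV s]

# Leading-order muon decay width in Fermi theory (electron mass and
# radiative corrections neglected): Gamma = G_F^2 m_mu^5 / (192 pi^3)
gamma = G_F**2 * m_mu**5 / (192 * math.pi**3)
tau = hbar / gamma
print(f"tau_mu ~ {tau:.3e} s")  # ~2.19e-06 s, close to the measured 2.197 us
```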

2.4.3. Anomalous Quartic Gauge Couplings

A similar situation is considered at the LHC today. The underlying assumption of effective field theories is that new physics, e.g. new particles, is out of the kinematic reach of the experiment and cannot be observed directly. However, the low-energy effects of new physics may be detectable through deviations from Standard Model predictions. These effects are implemented via the introduction of a set of new operators O_i^(d) which ideally form a complete basis. Each operator O_i^(d) is accompanied by a coupling constant c_i^(d) which parameterises the strength of its effect:

L_EFT = L_SM + Σ_{d>4} Σ_i (c_i^(d) / Λ^(d−4)) O_i^(d).      (2.33)

Here, Λ is the scale of the new physics, and the factor Λ^(d−4) suppresses the effects of the newly introduced operators. The Standard Model is the asymptotic case Λ → ∞. Several approaches for constructing a complete set of operators are available, and a thorough review can be found in [77]. The two approaches considered in this work are the electroweak chiral Lagrangian and the linear symmetry breaking ansatz.
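The truncation of the expansion can be made tangible by evaluating the suppression factor (E/Λ)^(d−4) for a few operator dimensions; the energy and scale below are illustrative choices, not results of this analysis.

```python
# Sketch: suppression of higher-dimensional operators by powers of E/Lambda.
# E and LAMBDA are illustrative values, not measured quantities.
E = 0.5        # typical hard-scattering energy [TeV]
LAMBDA = 2.0   # assumed new-physics scale [TeV]

for d in (6, 8, 10):
    suppression = (E / LAMBDA) ** (d - 4)
    print(f"dimension-{d} operator: (E/Lambda)^{d - 4} = {suppression:.2e}")
# Each additional order of the expansion is suppressed by another factor of
# (E/Lambda)^2 = 1/16 here, motivating the truncation at next-to-leading order.
```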

2.4.4. Electroweak Chiral Lagrangian

The Higgs mechanism implementing the electroweak symmetry breaking in the Standard Model was devised in the 1960s. Experimental searches were unsuccessful for a long time, until a Standard Model Higgs-like particle was found in 2012 [7, 8]. During this period, theories had been proposed to offer alternative ways of achieving the electroweak symmetry breaking, e.g. [78, 79] or strongly interacting Higgs bosons [80]. Again, it is desirable to have a theory description that

encompasses the effects of all possible concrete theories for the electroweak symmetry breaking. At the same time, the terms introduced to model the possible new physics have to honour established rules such as SU(2)_L × U(1)_Y and CP invariance. This description is achieved via the electroweak chiral Lagrangian [27], which applies the idea of chiral Lagrangians from QCD [81] to the electroweak case. The underlying assumption is that whatever causes the electroweak symmetry breaking will be found at energies higher than the invariant masses of the massive electroweak gauge bosons. The Lagrangian is thus written down as an effective field theory modelling the low-energy regime of the unknown, more complete theory. As in Fermi's theory, the Lagrangian is non-renormalisable and yields divergent behaviour at high energies. The complete set of newly introduced operators of the electroweak chiral Lagrangian can be found in [82]. The subset of operators which are not suppressed due to violations of the custodial symmetry is:

L₁ = α₁ (i g g′ / 2) B_µν Tr(T W^µν)      (2.34)
L₂ = α₂ (i g′ / 2) B_µν Tr(T [V^µ, V^ν])
L₃ = α₃ g Tr(W_µν [V^µ, V^ν])
L₄ = α₄ [Tr(V_µ V_ν)]²
L₅ = α₅ [Tr(V_µ V^µ)]²

with T ≡ U τ³ U†, V_µ ≡ (D_µ U) U†. The field U denotes a non-linear parameterisation of the would-be Goldstone bosons; for details refer to [82]. All operators are of dimension four, with the operators L₂/₃ and L₄/₅ introducing contributions to triple and quartic gauge vertices, respectively. The operators L₁/₂/₃ have been constrained by LEP data and will be further constrained by LHC measurements of weak-boson pair production. L₄/₅ are currently much less constrained, as they are only observable in electroweak gauge boson scattering, which had not been studied at accelerators prior to the LHC. In addition to these operators, one can also introduce terms to the Lagrangian modelling new particles. For the electroweak chiral Lagrangian, these include scalar fields σ and φ, a vector field ρ_µ, and tensor fields f_µν and a_µν [83]. The discovery of a light SM Higgs boson necessitates adapting the effective field theory in order for it to stay valid. Therefore, it may be preferable to build a theory that assumes a light SM Higgs boson from the start.

2.4.5. Linear Symmetry Breaking Approach

A complete introduction to the linear symmetry breaking approach can be found in Ref. [28]. The linear symmetry breaking approach assumes the existence of a light SM Higgs boson. This approach similarly introduces new operators, with only the least suppressed ones being considered. The lowest-order operators are of dimension 6 and 8. Dimension-6 operators give new contributions to triple and quartic gauge couplings. The three CP-conserving dimension-6 operators are:


O_WWW = Tr[W_µν W^νρ W_ρ^µ],      (2.35)
O_W = (D_µ Φ)† W^µν (D_ν Φ),
O_B = (D_µ Φ)† B^µν (D_ν Φ).

These operators usually make up the set of operators to be constrained in anomalous triple gauge coupling studies. Multiple analyses by CMS [68, 84–86] and ATLAS [15, 16, 71, 87] are available, complementing the results obtained by LEP [9] and Tevatron experiments [10–14]. It is desirable to construct operators affecting only quartic gauge couplings. The lowest-dimensional operators to do so are of dimension 8. There are three classes of such dimension-8 operators, denoted by the subscripts S, M, and T.13 Of these operators, O_S,0 and O_S,1 are of special interest, as they are the only ones purely affecting the longitudinal electroweak gauge boson scattering. The two operators are defined as:

O_S,0 = [(D_µ Φ)† D_ν Φ] × [(D^µ Φ)† D^ν Φ],      (2.36)
O_S,1 = [(D_µ Φ)† D^µ Φ] × [(D_ν Φ)† D^ν Φ].

A conversion rule exists between these two operators and the operators of the electroweak chiral Lagrangian introducing anomalous quartic gauge couplings. The conversion rule depends on the quartic gauge vertices involved in the diagrams defining the observed process. In the case of W±Zjj, only the WWZZ vertex contributes, for which the conversion rule is as follows:

α₄ = (f_S,0 / Λ⁴) (v⁴ / 16),    α₅ = (f_S,1 / Λ⁴) (v⁴ / 16).      (2.37)

Here, f_S,0 and f_S,1 denote the coupling parameters of the operators (compare Equation (2.33)), Λ the scale of new physics, and v the vacuum expectation value of the Higgs field. It can be shown that, under certain assumptions, this conversion rule also holds when applying the unitarisation schemes needed to suppress divergent behaviour. A complete list of all introduced operators and the vertices they affect can be found in Refs. [28, 88].
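For orientation, Equation (2.37) is easily evaluated numerically. In the sketch below, v ≈ 246 GeV is the only physical input; the chosen value of f_S,0/Λ⁴ is arbitrary and purely illustrative.

```python
# Sketch: converting f_S,0 / Lambda^4 into alpha_4 via Eq. (2.37).
v = 0.246  # Higgs vacuum expectation value [TeV]

def alpha4_from_fs0(fs0_over_lambda4):
    """fs0_over_lambda4 in TeV^-4; returns the dimensionless alpha_4."""
    return fs0_over_lambda4 * v**4 / 16.0

# Arbitrary illustrative value, not a measured limit:
print(alpha4_from_fs0(100.0))  # ~0.023
```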

2.4.6. K-Matrix Unitarisation

All information regarding the evolution of an initial state into a final state is encapsulated in the scattering matrix S. The unitarity of this matrix corresponds to the conservation of probability. Therefore, any violation of the unitarity of the S-matrix is unphysical and has to be remedied. A possible approach is the K-matrix unitarisation, which was introduced by Wigner [89, 90] and is described in the following.

13 In total there are 18 linearly independent operators: two which only contain derivatives of the Higgs field and are denoted with an s (for "scalar"); seven which contain electroweak field strength tensors and two covariant derivatives of the Higgs field, marked with an m (for "mixed"); and nine which only contain electroweak field strength tensors and are denoted with a t (for "tensor").


When considering the electroweak chiral Lagrangian, one can compute a master amplitude at tree level at next-to-leading order in the E/Λ expansion:

A^tree(s, t, u) = s/v² + 4α₄ (t² + u²)/v⁴ + 8α₅ s²/v⁴.      (2.38)

Using this master amplitude, one can express the individual scattering amplitudes of the longitudinal polarisations of the massive electroweak gauge bosons. In the case of W⁺Z → W⁺Z one gets:14

A(W⁺Z → W⁺Z) = A(t, s, u) = t/v² + 4α₄ (s² + u²)/v⁴ + 8α₅ t²/v⁴.      (2.39)

Analysing this amplitude regarding unitarity requires the spin-isospin eigenamplitudes of the process. These can be written in terms of the isospin eigenamplitudes:

A₀(s, t, u) = 3A(s, t, u) + A(t, s, u) + A(u, s, t),      (2.40)
A₁(s, t, u) = A(t, s, u) − A(u, s, t),
A₂(s, t, u) = A(t, s, u) + A(u, s, t).

Therefore, the amplitude for W⁺Z → W⁺Z can be written as:

A(W⁺Z → W⁺Z) = ½ (A₁(s, t, u) + A₂(s, t, u)).      (2.41)

The isospin eigenamplitudes can be further decomposed into partial waves via Legendre polynomials:

A_I(s, t, u) = Σ_{J=0}^∞ A_IJ(s) (2J + 1) P_J(s, t, u).      (2.42)

The individual polynomials only contribute if I − J is even. The A_IJ(s) are dubbed the spin-isospin eigenamplitudes. According to the optical theorem, an eigenamplitude respects unitarity if it lies inside the Argand circle:

|A_IJ(s)/32 − i/2| ≤ 1/2.      (2.43)

This translates into a projection rule for real-valued spin-isospin eigenamplitudes:

Â_IJ(s) = A_IJ(s) / (1 − (i/32) A_IJ(s)).      (2.44)

Applying this projection rule unitarises the individual spin-isospin eigenamplitudes, which in turn unitarises the complete amplitude of the process in question. Thus, the unphysical property of unitarity violation is avoided and more sensible results for the simulated process are obtained.
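The geometric content of this projection can be verified numerically: for any real-valued input amplitude, the projected eigenamplitude of Equation (2.44) lies exactly on the boundary of the Argand circle of Equation (2.43). A small sketch with toy amplitudes:

```python
# Sketch: K-matrix projection of real spin-isospin eigenamplitudes, Eq. (2.44),
# and a check that the result lies on the Argand circle of Eq. (2.43).
def k_matrix_project(a_ij):
    """Project a real eigenamplitude A_IJ(s) onto the Argand circle."""
    return a_ij / (1 - 1j * a_ij / 32.0)

for a in (0.1, 10.0, 1e3, 1e6):  # toy amplitudes of increasing size
    a_hat = k_matrix_project(a)
    # |A_hat/32 - i/2| equals 1/2 exactly (on the circle boundary)
    assert abs(abs(a_hat / 32.0 - 0.5j) - 0.5) < 1e-9
print("all projected amplitudes lie on the Argand circle")
```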

14These relations also hold for master amplitudes that incorporate higher order effects.


Noteworthy properties of the K-matrix formalism are that it has no adjustable parameters and that it may be less intrusive than other unitarisation methods, because it considers the contributing eigenamplitudes individually and not the amplitude as a whole. An alternative unitarisation method is the form factor method, where the ununitarised amplitude is multiplied by a function of the form

Â = A (1 + s/Λ_FF²)^(−n)  with n ≥ 2.      (2.45)

This approach introduces a global soft cutoff at the scale Λ_FF, with n defining the steepness of the cutoff. Here, two parameters are introduced which have to be harmonised between analyses when trying to compare results. In addition, this approach does not adapt to the different behaviour of the individual spin-isospin eigenamplitudes, which may lead to an overzealous behaviour of this unitarisation scheme.

3. Experiment

The predictions of the Standard Model and its various extensions have to be tested in real-world experiments. In order to study the predictions at high energies, one needs to find ways to obtain particles with large kinetic energies. Several designs for such high-energy experiments exist. One may use the natural supply of accelerated particles coming from extraterrestrial sources (e.g. neutrinos from supernovae) and forego the problem of particle acceleration entirely. While enabling the detection of high-energy particles, this approach usually comes at the price of having to build very large detectors (e.g. HESS [91], ICECUBE [92]). Another approach is to artificially accelerate particles and form beams that may be brought to collision with either stationary targets or other particle beams. The LHC is a circular accelerator at CERN that enables scientists to accelerate proton or lead-ion beams which are brought to collision at several interaction points. Particle detectors record the particle collisions occurring at these interaction points, providing the data for subsequent analysis. This work relies on the data taken by the ATLAS detector at the LHC. Therefore, a brief introduction to both machines will be given, and the software algorithms used to translate the recorded signals into analysable data will be discussed.

3.1. CERN

CERN (Conseil Européen pour la Recherche Nucléaire) is a multinational collaboration that was founded in 1954 [93]. The founding countries Belgium, Denmark, France, the Federal Republic of Germany, Greece, Italy, the Netherlands, Norway, Sweden, Switzerland, the United Kingdom, and Yugoslavia declared their will to explore peaceful applications of nuclear research and to further the basic knowledge of humanity. Since then, additional states have become observer states or otherwise affiliated with CERN, making it a world-spanning endeavour and an example of cooperation on an international level. The CERN site, located at the Franco-Swiss border near Geneva, hosts several experiments covering the fields of neutrino physics, nuclear physics, and particle physics, with the Large Hadron Collider complex being the largest.

3.2. Large Hadron Collider

The Large Hadron Collider (LHC) [94] is the latest in a series of ever more complex and powerful accelerators built at CERN. Its purpose is to accelerate the proton or lead-ion beams for the four main experiments ATLAS, CMS, LHCb, and ALICE. To this end, the LHC was installed in an underground tunnel with a circumference of 27 km. A total of 1232 superconducting dipole magnets, cooled down to 1.9 K, bend the beams


Figure 3.1.: The CERN accelerator complex in schematic view. The LHC is provided with proton beams by an accelerator chain consisting of the LINAC2, Booster, Proton-Synchrotron (PS), and Super-Proton-Synchrotron (SPS). Depiction taken from Ref. [96].

via an 8.33 T magnetic field onto a circular track. Additional magnets with higher multipole moments are used to control the beam profile, ensuring focused and stable beams. An accelerating section employing cavity resonators provides the needed acceleration and compensates the energy losses of the proton beams. The beams are made of about 1350 discrete packages of protons called bunches, separated by a 50 ns gap, with each bunch consisting of about 10¹⁰ protons.1 Figure 3.1 shows the accelerator complex of CERN. The LHC is supplied with proton beams by a chain of accelerators starting with the linear accelerator LINAC2. The protons are accelerated to ever higher energies by the Booster, the Proton-Synchrotron (PS), and the Super-Proton-Synchrotron (SPS) before entering the LHC with an energy of about 450 GeV, where the final step to the desired beam energy is made. The two most important properties of the LHC are its centre-of-mass energy √s and the delivered luminosity L. The centre-of-mass energy is the collision energy of the two protons and signifies the maximum energy that is available in a proton-proton collision. Therefore, a high centre-of-mass energy is desirable, especially in proton-

1 These values should be taken with a grain of salt, as they are configurable parameters that are subject to change during a data-taking period. The numbers quoted here were taken from the run summary for 2012 data [95].

proton collisions, where not all of the beam energy is available in the collision.2 Although the LHC is designed to provide a √s of 14 TeV, the collision data used in this thesis was taken at √s = 8 TeV for technical reasons. The second important property, the delivered luminosity, has a direct impact on the available statistics for a given analysis. Using the formula

Ṅ = L σ_process,      (3.1)

one can calculate the rate Ṅ at which a given process may be observed as a function of the luminosity L provided by the accelerator and the cross section σ of the process. Therefore, a high luminosity is desirable. The luminosity of circular accelerators such as the LHC can be calculated via the formula

L = (N₁ N₂ f N_b) / (4π σ_x σ_y),      (3.2)

with N₁ and N₂ being the numbers of protons per bunch, f the revolution frequency, N_b the number of bunches per beam, and σ_x and σ_y the standard deviations in the x and y directions of the two beams' profiles, which are assumed to be Gaussian. Figure 3.2 shows the expected cross sections and event rates as a function of √s for several processes. As can be seen, the dominating processes are jet production as well as W and Z boson production, followed by top-quark production. An additional property important to the analysis of collision data is the amount of pile-up in a given event. Pile-up denotes additional proton-proton interactions which may take place in the same bunch crossing (in-time pile-up) and is a direct consequence of the large number of protons in each bunch and the tight focusing of the proton beams, which is needed to achieve high luminosities. In addition, a second source called out-of-time pile-up is present, which stems from the sensitivity of detector components to bunch crossings prior to or after the bunch crossing of interest. Pile-up affects the measurement of particle properties by introducing additional energy deposits and tracks into the event. It also causes additional objects in the event when a particle from an additional proton-proton interaction passes the object selection. Therefore, its influence has to be taken into account when measuring object properties and selecting events. The distribution of pile-up in real data and simulated data is not necessarily the same. So-called pile-up corrections are applied, which harmonise the distributions of the average number of interactions per bunch crossing by reweighting the simulated data to reproduce the distribution found in real data.
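To get a feeling for the orders of magnitude, Equations (3.1) and (3.2) can be evaluated with 2012-like machine parameters; the values below are illustrative assumptions, not official machine settings.

```python
import math

# Illustrative, 2012-like LHC parameters (not official machine values)
N1 = N2 = 1.5e11           # protons per bunch
f = 11245.0                # revolution frequency [Hz]
n_b = 1380                 # bunches per beam
sigma_x = sigma_y = 19e-6  # transverse beam sizes at the IP [m]

# Instantaneous luminosity, Eq. (3.2)
lumi = N1 * N2 * f * n_b / (4 * math.pi * sigma_x * sigma_y)  # [m^-2 s^-1]
lumi_cm2 = lumi * 1e-4  # [cm^-2 s^-1]
print(f"L ~ {lumi_cm2:.1e} cm^-2 s^-1")  # order 10^33

# Event rate for a process with cross section sigma, Eq. (3.1)
sigma_wz_fb = 0.54                  # example: the WZjj-EW cross section [fb]
sigma_wz_cm2 = sigma_wz_fb * 1e-39  # 1 fb = 1e-39 cm^2
rate = lumi_cm2 * sigma_wz_cm2      # [events / s]
print(f"rate ~ {rate:.1e} events/s")  # only a few tens of events per year
```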

3.3. The ATLAS Detector

The ATLAS detector [98] is a multi-purpose detector situated at interaction point 1 of the LHC in a large underground cavern. It is of cylindrical shape with a width and

2 Protons have a composite substructure with three valence quarks, temporary quark-anti-quark pairs, and gluons. All of these so-called partons carry a fraction of the total momentum of the proton. The observed proton-proton collision is in fact an interaction of partons of the colliding protons, and therefore not the whole proton momentum is available in such a collision.



Figure 3.2.: Cross sections at the LHC as a function of √s. The gap at √s = 4 TeV stems from the switch from the pp̄ collisions conducted at the Tevatron to the pp collisions conducted at the LHC. Smaller gaps indicate that gluons and/or sea quarks are the dominating source for the initial-state particles. Depiction taken from Ref. [97].

height of 25 m and a length of 44 m. A schematic depiction of the detector and its components is shown in Figure 3.3.

Figure 3.3.: Cutaway view of the ATLAS detector. Shown are the different sub-detectors (inner detector, electromagnetic and hadronic calorimeter, and muon spectrometer) and the components they are made of. Depiction taken from Ref. [99].

Being a general-purpose detector, it was designed to have a set of properties enabling it to meet the requirements of the demanding physics programme of the LHC. Its design aims at detector hermeticity (the whole solid angle is equipped with detecting elements); precision tracking for advanced reconstruction techniques such as b-tagging and precise momentum measurement; low energy leakage, so that particles primarily measured in the calorimeters are measured correctly and do not deposit energy in the outer detector systems (so-called punch-through); fine granularity, helping with the reconstruction of physical objects; and high resolution, translating to low uncertainties on the energy information. ATLAS consists of several sub-detector systems, each emphasising a subset of the stated design aspects. These subsystems are:
• the inner detector, responsible for the spatial measurement of tracks of charged particles and therefore integral to the momentum measurement, particle detection, and particle identification,
• the electromagnetic calorimeter, providing information on energy and direction of flight for electrons, positrons, and photons, as well as measuring a fraction of the hadron energies,
• the hadronic calorimeter, delivering energy measurements and spatial information for hadrons and preventing them from reaching the outer detector systems, and
• the muon spectrometer, providing an additional measurement of the muon momentum and spatial information.
The analysis described in this thesis makes use of all mentioned sub-detector systems, which will be described in Sections 3.3.2 through 3.3.5.


3.3.1. ATLAS coordinate system

ATLAS uses a right-handed coordinate system. The origin is at the nominal interaction point. The x-axis points towards the centre of the LHC ring, the y-axis points upwards, and the z-axis completes the right-handed coordinate system. In practice, these cartesian coordinates are less commonly used than the ones described in the following, owing to the cylindrical shape of the detector. Spatial information is usually described via two variables, the azimuthal angle φ and the pseudo-rapidity η. Using the energy E, the particle momentum p, and its components p_i, i ∈ {x, y, z}, the azimuthal angle is defined as:

φ = arctan(p_y / p_x)      (3.3)

and the pseudo-rapidity as:

η = arctanh(p_z / |p|).      (3.4)

The pseudo-rapidity is the high-energy limit of a more general quantity called the rapidity y:

y = (1/2) ln((E + p_z) / (E − p_z)).      (3.5)

Here, the mass of the particle in question is taken into account via the energy E, whereas the pseudo-rapidity assumes the particle to be massless. An important property of the rapidity is the invariance of differences in rapidity under Lorentz boosts. This property is highly useful in the context of hadron colliders, where each event has a different overall Lorentz boost due to the momenta of the partons of the colliding protons differing from event to event. Another important variable reflecting this lack of knowledge regarding the momenta along the beam axis is the transverse momentum p_T, defined as:

p_T = √(p_x² + p_y²).      (3.6)

Spatial distances between two particles are usually described by the variable ∆R:

∆R = √((∆φ)² + (∆η)²).      (3.7)
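These coordinate definitions translate directly into code; the following self-contained sketch implements Equations (3.3) through (3.7) and checks that pseudo-rapidity and rapidity coincide for a massless particle (the example momenta are arbitrary):

```python
import math

def pt(px, py):
    """Transverse momentum, Eq. (3.6)."""
    return math.hypot(px, py)

def phi(px, py):
    """Azimuthal angle, Eq. (3.3); atan2 handles all quadrants."""
    return math.atan2(py, px)

def eta(px, py, pz):
    """Pseudo-rapidity, Eq. (3.4)."""
    p = math.sqrt(px * px + py * py + pz * pz)
    return math.atanh(pz / p)

def rapidity(e, pz):
    """Rapidity, Eq. (3.5); uses the energy, i.e. includes the mass."""
    return 0.5 * math.log((e + pz) / (e - pz))

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance, Eq. (3.7), with the phi difference wrapped."""
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

# Example with arbitrary momenta [GeV]: for a massless particle, eta == y
px, py, pz = 40.0, 30.0, 120.0
e_massless = math.sqrt(px**2 + py**2 + pz**2)
assert abs(eta(px, py, pz) - rapidity(e_massless, pz)) < 1e-12
```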

All these variables describe the kinematics of particles as they traverse the detector. Information on the kinematics in the vicinity of the interaction point is of interest, too. Therefore, two variables are defined describing the spatial relation between a given particle and the interaction point.

The transverse impact parameter, d0, is defined as the smallest distance between a track and the primary vertex in the transverse plane. This requirement also yields the point of closest approach (pca) necessary for defining the longitudinal impact parameter z0. Figure 3.4 illustrates the definition of both parameters.

3.3.2. Inner Detector

The inner detector itself consists of three subsystems: the pixel detector, the Semi-Conductor Tracker (SCT), and the Transition Radiation Tracker (TRT). It is immersed in a 2 T magnetic field provided by a superconducting solenoid enclosing


Figure 3.4.: Schematic representation of the impact parameters d0 (left) and z0 (right). Depiction taken from Ref. [100].

the inner detector. A measurement of the reconstructed track's curvature can be translated into a value for the transverse momentum of the assumed particle, while the sign of the curvature is linked to the particle's charge. All subsystems may be divided into a barrel region, where modules are aligned on the surface of an imaginary cylinder with a given radius, and two end-cap regions, where modules are arranged perpendicular to the beam axis (see Figure 3.5). The innermost subsystem is the pixel detector, which uses silicon-based diodes in reverse bias as sensors. A traversing charged particle ionises the material, causing charge carriers to be released which are accelerated by the bias voltage. The resulting charge avalanche provides the signal for the individual pixel. The sub-detector consists of three cylindrical layers in the barrel region with radial distances from the interaction point between 50.5 mm and 122.5 mm. Each end-cap hosts three discs of the same technology, enabling the pixel detector to cover the range |η| < 2.5 and the whole azimuthal angle. The individual pixels have a size of 50 µm × 400 µm, resulting in a resolution of ≈ 14 µm. A total of about 140 million channels are read out. The SCT represents the next subsystem of the inner detector. It is comprised of four layers in the barrel region and nine discs in each end-cap region, using a similar technology as the pixel detector but with strips instead of pixels. Each layer of the SCT is made of two layers of silicon strips laid side by side, which run at an angle of 40 mrad to each other. This stereo-strip principle allows for a precision of 17 µm in the R-φ plane and 580 µm in the z-direction. The layout of the SCT ensures coverage of the whole azimuthal angle and of |η| < 2.5. The third and outermost subsystem is the Transition Radiation Tracker (TRT). Covering a smaller region of |η| < 2.0 and the whole azimuthal angle, it is comprised of long drift tubes filled with a gas mixture of 70 % Xe, 27 % CO2, and 3 % O2. The straw tubes measure 4 mm in diameter and 144 cm in length and are oriented parallel to the beam axis. A total of 73 straw planes in the barrel region and 160 straw planes in the end-caps results in an average number of 30 hits per track. The accuracy of the TRT is 130 µm in the R-φ plane. In addition, the TRT is designed to cause passing charged particles to emit transition radiation. The amount of radiation is proportional

to the γ-factor3 of the charged particle, leading to higher emission for lighter particles, e.g. electrons, with respect to heavier particles such as pions.


Figure 3.5.: Schematic representation of a quadrant of the inner detector geometry, illustrating the coverage in η. Depiction taken from Ref. [99].

Figure 3.5 shows a detailed layout of the inner detector. As can be seen, the layout is optimised towards having a high number of hits for any given particle trajectory. High tracking performance delivering very good spatial resolution, robust pattern recognition, high track separation, and low dead time are the benefits of this detector design. Tracks are reconstructed with very high efficiency down to a minimum transverse momentum of 500 MeV and within a pseudo-rapidity of |η| < 2.5. Thus, the inner detector enables advanced analysis techniques such as b-tagging, jet-vertex-fraction calculations, and electron-conversion detection. The inner detector adds between 0.2 and 0.7 interaction lengths of material, distorting the energy measurement of the calorimeters lying further out.

3.3.3. Electromagnetic Calorimeter

The electromagnetic calorimeter is designed to deliver precise energy information on electrons, positrons, and photons, as well as to prevent these particles from entering the outlying hadronic calorimeter. A key advantage of a dedicated calorimeter for these particles is their distinction from hadrons, which, in contrast, are not stopped by the electromagnetic calorimeter. The electromagnetic calorimeter covers a range of |η| < 3.2, divided into the barrel part (|η| < 1.475) and the end-cap part (1.375 < |η| < 3.2), with a notable gap region of reduced efficiency at 1.37 < |η| < 1.52. Figure 3.6 depicts the general layout of the electromagnetic calorimeter. It is divided into three longitudinal layers (samplings 1, 2, and 3) with differing granularities and priorities. Layer 1 has an average cell size of ∆η × ∆φ = 0.025/8 × 0.1, leading to a very precise position measurement in η of an incoming particle. Its thickness of ≈ 4.3 radiation lengths (X0) makes it a likely starting point for electromagnetic showers, but it is not thick enough to prevent the electrons and photons from escaping this layer. Therefore, layers 2 and 3 are installed behind layer 1, adding a combined thickness of ≈ 18 radiation lengths and causing the electrons and photons to deposit all of their kinetic energy in the electromagnetic calorimeter. The

3 The γ-factor, also known as the Lorentz factor, is defined as γ = 1/√(1 − v²/c²).


Figure 3.6.: Schematic representation of the general layout of the electromagnetic calorimeter. Picture taken from Ref. [99].

granularities of layers 2 and 3 are ∆η × ∆φ = 0.025 × 0.025 and ∆η × ∆φ = 0.05 × 0.025, respectively. The energy measurement is realised by inducing electromagnetic showers through pair production processes, which are the dominating matter interaction processes for electrons and photons in the GeV range. Interlacing lead as a passive absorbing layer and liquid argon as an active medium in an accordion-like geometry results in a measuring setup with several key advantages:
• The accordion-like structure prevents cracks in the geometry, homogenising the detector geometry in φ, and allows for fast signal extraction as well as linearity and resolution performance independent of variations in φ.
• Liquid argon possesses high radiation tolerance, an intrinsic linear behaviour, and a stable time response.
• Lead has a high atomic number, leading to a high mass attenuation and contributing to the total thickness of the electromagnetic calorimeter of X0 > 22, ensuring low leakage of electrons, positrons, and photons into the hadronic calorimeter.

3.3.4. Hadronic Calorimeter

The hadronic calorimeter's purpose is to measure the energy and spatial information of hadrons, as well as to shield the muon spectrometer from everything but muons and neutrinos. The different conditions (e.g. radiation levels) of the respective η ranges motivate employing several technologies in the hadronic calorimeter. The sub-detectors comprising the calorimeter are:


• the barrel tile calorimeter, covering |η| < 1.7,
• the liquid-argon hadronic end-cap calorimeter, covering 1.5 < |η| < 3.2, and
• the liquid-argon forward calorimeter, covering 3.1 < |η| < 4.9.
The barrel tile calorimeter uses steel as an absorber and scintillating material as the active medium, thus implementing a sampling calorimeter. Several layers of steel and scintillator form a module (wedge) with a depth of ≈ 7.4 interaction lengths, divided into three sampling layers with depths of approximately 1.5, 4.1, and 1.8 interaction lengths. Cells in the first two layers have a size of ∆φ × ∆η ≈ 0.1 × 0.1, and of ∆φ × ∆η ≈ 0.1 × 0.2 in the third layer. Showers are induced by the absorbing material, as in the electromagnetic calorimeter. The particles forming the shower traverse the scintillating material, causing it to emit photons which are collected via wavelength-shifting fibres and transmitted to photomultiplier tubes on top of each module. A projective geometry4 in η is achieved by grouping the readout fibres of individual scintillator blocks. The expected light loss, measured in irradiation experiments with a dose equivalent to ten years of operation at design luminosity, is smaller than 10 %. The hadronic end-cap calorimeter is mounted on two wheels in the end-cap region and has a geometry similar to that of the barrel tile calorimeter. However, the passive medium here is copper, and liquid argon acts as the active medium. The individual copper plates are 25 to 50 mm thick, with 8.5 mm wide liquid-argon gaps in between. Said gaps are divided by three equally spaced electrodes, with the middle electrode serving as the readout. Pads are etched onto this readout electrode, defining the cell spacing with ∆φ × ∆η < 0.1 × 0.1 in the region |η| < 2.5 and 0.2 × 0.2 for larger η. The third sub-detector is the forward calorimeter. High particle fluxes favour a design optimised for heat removal and resolution. The forward calorimeter has three layers: FCAL1, FCAL2, and FCAL3. FCAL1 constitutes the electromagnetic calorimeter in the forward region, using copper as an absorber. In FCAL2 and FCAL3, tungsten is employed to this end. Each of the calorimeters is made of 45 cm thick plates into which closely spaced holes have been drilled. These holes house rods serving as electrodes and are filled with liquid argon as the active medium. This design provides high density and avoids ion build-up. Another advantage is a low drift time of only 60 ns, as opposed to roughly 400 ns in the barrel tile and hadronic end-cap calorimeters.

3.3.5. Muon Spectrometer

The outermost sub-detector system is the muon spectrometer. It is solely dedicated to providing information on muons. The presence of these particles in an event is often a sign that an interesting process has occurred. In addition, they provide a very clean signal, motivating the dedication of a whole sub-detector system to them. A schematic depiction can be found in Figure 3.7. Several requirements are imposed: muons have to be detected and identified with high efficiency, their kinematics have to be precisely measured up to the TeV range (10 % resolution at 1 TeV), and their charge has to be determined reliably. The design of the muon spectrometer allows for the detection of muons with momentum p > 3 GeV. It covers the range |η| < 2.7; however, its peak performance lies in the

4 The term projective denotes that the object in question is facing towards a common point of interest (here the interaction point). Slices of pizza exhibit a projective geometry.


Figure 3.7.: Schematic representation of a quadrant of the Muon Spectrometer, illustrating the individual technologies and the coverage in pseudo-rapidity. MDT chambers, shown in light blue and green, are not explicitly labeled. Depiction taken from Ref. [99].

range |η| < 2.5, where additional inner detector information may be available. A toroidal magnetic field provided by superconducting coils allows for momentum and charge measurements. The field strength itself is inhomogeneous, but the bending strength is relatively constant across the muon spectrometer. The general layout of the muon spectrometer consists of three stations with increasing distance from the interaction point, called the inner, middle, and outer stations. These stations are arranged in an octagonal structure with overlapping chambers, allowing for in-situ alignment measurements and gap minimisation. Each station houses at least one precision chamber for the η measurement and one trigger chamber, providing the trigger signal and supporting the work of the precision chamber. Four technologies are employed in the muon spectrometer, two dedicated to precise spatial measurements and the other two to triggering purposes. Monitored Drift Tubes (MDTs) are the default choice for precision measurements in the ATLAS muon spectrometer. They unite high measurement accuracy, predictability of mechanical deformations, and simplicity of construction. Arranged in a projective layout, they cover the range |η| < 2.7. The drift tubes are filled with gas (93 % Ar, 7 % CO2) at three bar and are oriented perpendicular to the beam axis. Layers are constructed by arranging individual drift tubes side by side; three to four layers make up an individual chamber. The resolution of the MDT chambers is 50 µm. Cathode-Strip Chambers (CSCs) are used in the innermost layer of the end-cap wheel (2.0 < |η| < 2.7) instead of MDTs, due to the proximity of the inner layer to the interaction point and the resulting high particle fluxes. These are multi-wire proportional chambers with wires oriented in the radial direction, dividing the chamber into two sub-chambers. Two cathodes provide the readout. One cathode is segmented with strips perpendicular to the wires, while the other one is segmented parallel to the

wires, providing a two-dimensional spatial measurement. The advantages of the CSCs are good two-track resolution, quick response, and low neutron sensitivity. However, the resolution is not as good as the one observed for the MDTs, with 5 mm in φ and 21 mm in η. One disadvantage of the MDTs is a relatively long drift time, prohibiting their use for triggering purposes. Therefore, each MDT chamber is accompanied by at least one RPC chamber in the barrel region (|η| < 1.05) or at least one TGC chamber in the end-cap region (1.05 < |η| < 2.4). Resistive Plate Chambers (RPCs) consist of two independent detector layers of the same construction, each measuring either the η or the φ coordinate. Each layer is made of two resistive plates arranged parallel to each other with a gas-filled gap of 2 mm in between. An electric field is established between the plates, causing charge avalanches when a charged particle passes. The readout is achieved via metallic strips, oriented either longitudinally or transversely, which are capacitively coupled to the outer faces of the resistive plates. The RPCs provide a 5 ns wide signal used for triggering and are able to yield track information within a few tens of nanoseconds after a particle passes through the chamber. In the end-cap region, four layers of Thin Gap Chambers (TGCs) are used. They operate on the principle of multi-wire proportional chambers. Their advantages are good time resolution, high rate capability, reliability, and robustness. A gas-filled chamber with a gap of 2.8 mm is formed by two insulating plates, with wires dividing the gap in two. The plates are coated with graphite on the inner surface and copper on the outer surface. One of the copper coats is uniform, while the other one is cut into strips serving as the readout electrode. The TGCs deliver a signal about 25 ns after particle passage. Though the resolution depends on the technology employed, more general sources of distortion can be identified for different pT ranges. In the low-momentum range below 30 GeV, fluctuations in the energy loss caused by the material in front of the spectrometer dominate. In the intermediate range of (30–200) GeV, multiple scattering caused by the material of the magnets and chambers themselves becomes the main driving factor. At momenta larger than 200 GeV, the single-hit resolution of the individual MDT tubes and alignment/calibration issues are most important.

3.3.6. Trigger System

The trigger system of the ATLAS detector provides pre-filtering of events prior to dedicated physics analyses. Proton-proton collisions take place at a rate of 40 MHz. Depending on the process of interest, a suitable event signature may only occur once in a million or more collisions. In addition, the bandwidth for storing event information is limited and can only manage about 200 events per second. Therefore, a multi-stage triggering system was devised, consisting of the level 1 trigger (L1) and the high level trigger (HLT), which is subdivided into the L2 trigger and the event filter. The purpose of the level 1 trigger (L1) is to monitor the detector signals caused during the proton-proton collisions and to watch out for interesting signatures. These signatures include leptons, photons, and jets with high transverse momenta, high missing transverse momentum, and high total transverse energy. It uses information from the RPCs, TGCs, and the calorimeters to provide a decision and builds regions-of-interest (ROIs) that are later used by the L2 trigger. Information on the individual bunch

crossings is stored in pipeline memories while the custom electronics making up the L1 compute a decision. Processing is allowed to take 2.5 µs, with a target time of 2 µs. The maximum event rate passing the L1 trigger is 75 kHz. Event building, event filtering, storage, controlling the configuration of the detector, and event monitoring are the tasks of the high level trigger. The L2 trigger and the event filter are implemented on large farms of 500 and 1800 computing nodes, respectively, using commodity hardware. The L2 trigger uses information at full granularity from the calorimeters and the muon spectrometer in the ROIs defined by the L1 trigger. It reduces the event rate to 3.5 kHz, with the decision-making process taking about 40 ms. The subsequent event filter uses the full information provided by the detector, enhancing measurement precision and particle identification. Offline analysis procedures are applied to the fully-built events provided by the event builder (implemented on about 100 nodes). Event rates are reduced to 200 events per second with a latency of about four seconds. Accepted events are stored and subjected to further processing, resulting in data formats of manageable size for offline analysis.
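The rejection power of this chain follows directly from the quoted rates, as the following short sketch illustrates (the rates are the approximate values given above):

```python
# Sketch: rejection factors of the ATLAS Run 1 trigger chain,
# using the approximate rates quoted in the text.
rates_hz = {
    "collisions":   40e6,   # bunch-crossing rate
    "L1":           75e3,   # after the level 1 trigger
    "L2":           3.5e3,  # after the level 2 trigger
    "event filter": 200,    # written to storage
}

stages = list(rates_hz.items())
for (prev_name, prev_rate), (name, rate) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: rejection factor ~{prev_rate / rate:,.0f}")
# Overall: 40 MHz / 200 Hz, i.e. a 200,000-fold reduction
```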

3.3.7. Luminosity Monitoring

The luminosity is an important machine parameter, as it is the link between the cross sections and observed event rates (see Equation (3.1)). Therefore, a precise luminosity measurement is paramount for precision physics at ATLAS. Two detectors in the very forward region are dedicated to luminosity measurements:
• LUCID (Luminosity measurement Using Cerenkov Integrating Detector), which is the main relative luminosity monitor for ATLAS, and
• ALFA (Absolute Luminosity For ATLAS), providing an absolute luminosity measurement.
LUCID consists of two detectors located 17 m from the interaction point in opposite hemispheres, corresponding to |η| ≈ 5.8. It is the only detector primarily dedicated to online luminosity monitoring. Online monitoring of the instantaneous luminosity and the measurement of the integrated luminosity are done via the detection of inelastic proton-proton collisions. The detector uses Cerenkov tubes which emit light when a passing particle exceeds the speed of light in the medium filling the tube. The luminosity measurement requires a good acceptance for minimum-bias events5, a high time resolution to separate signals from individual bunch crossings, and the spatial resolution to detect individual particles. Due to its placement, the detector has to be very resistant against very high radiation levels. ALFA also consists of two detectors, located 240 m from the interaction point in opposite hemispheres. It measures the absolute luminosity via the detection of particles from elastic scattering at small angles. For this purpose, the optical theorem is employed, relating the measured elastic scattering amplitude to the total cross section. A specialised setup, called Roman pots, is used. These pots are stations where the beam pipe is interrupted and the beam itself traverses a larger volume. Detectors may be

5Minimum bias denotes that one tries not to introduce a bias towards any specific event topology when triggering.

driven from above and below into this volume until they are only 1 mm away from the beam, while maintaining the vacuum conditions inside the beam pipe. These tracking detectors are required to have a spatial resolution of about 30 µm and have to be insensitive to the radio-frequency noise emitted by the beams. More information on the forward detectors may be found in Ref. [99]. The relative uncertainty on the luminosity of the 2012 dataset was preliminarily measured to be 2.8 % (OflLumi-8TeV-003). A smaller final value of 1.9 % (OflLumi-8TeV-004) was determined later but is not used in this work, as the bulk of the results had already been finalised.

3.4. Object Reconstruction

The object reconstruction is the link between the recorded detector signals and the usable analysis objects representing particles such as muons or electrons. It is therefore a very important step that determines the quality and availability of information in subsequent analysis steps. Ideally, a reconstruction algorithm is highly efficient, capturing as many physical objects as possible. Additional variables may be calculated during reconstruction which enhance the available information for a given measured object. Some of these variables are utilised as inputs for algorithms evaluating the conclusiveness of the reconstruction results with respect to a certain particle hypothesis. Combining information from multiple sub-detectors typically improves the precision with which variables are measured. Therefore, most reconstruction algorithms incorporate information from more than one sub-detector. Only the strategies for dealing with muons, electrons, jets, and neutrinos will be detailed, as these are the objects of importance in this analysis. For these objects, information on the reconstruction, calibration, and identification will be given.

3.4.1. Muons

Reconstruction

Muons provide a very favourable signature for analyses as they are the only charged particles traversing the whole detector. Therefore, information from the inner detector, calorimeters, and muon spectrometer may be used in the reconstruction process. The highest resolution and best identification are obtained when all three sub-detectors are available. However, this is not always the case and consequently a range of muon types exists:

• Combined muons are the muon type of the highest purity and the default type used in analyses. Information from both the inner detector and the muon spectrometer is used for measuring the spatial information of the muon candidate. This combination strategy improves the resolution by incorporating inner detector information for muons with pT < 100 GeV. Due to the coverage of the inner detector these muons are only available for |η| < 2.5.

• Standalone muons use information from the muon spectrometer only. Ideally, at least two chambers in the muon spectrometer have been traversed by the muon,


otherwise the spatial information may be unreliable. The parameters of the muon at the interaction point are obtained by back extrapolation of the muon track. Standalone muons may be used to expand the acceptance range for muons as they are available for |η| < 2.7.

• Segment tagged muons are used to recover reconstruction inefficiencies in the low transverse momentum range or where the instrumentation of the muon spectrometer is sparse. Both an inner detector track and a matching muon segment from an inner station of the MDTs or CSCs have to be present for the reconstruction.

• Calorimeter tagged muons provide the least pure sample of muons and are intended to recover acceptance in regions where muon spectrometer information is not available (e.g. for |η| < 0.1). Here an inner detector track is matched to calorimeter deposits satisfying the criteria for a muon-induced deposit.

The muon reconstruction consists of four steps. First, the registered hits in the muon spectrometer are translated into either drift circles in the case of MDTs or clusters for TGCs, RPCs, and CSCs. This refined hit information is fed into a pattern finding algorithm (e.g. a Hough transform). During segment making the found patterns are combined into segments in the individual stations. These segments are combined into muon tracks, starting from the outer chambers and proceeding through the middle to the inner chambers. The resulting muon tracks are subjected to a track fitting algorithm looking for suitable tracks in the inner detector to form combined muons.

Energy lost while traversing the calorimeters is usually not measured but estimated. Using the detector description the most probable energy loss is calculated and used. The measurement obtained from the calorimeters is only used if it is significantly larger than the estimated energy loss and the muon is isolated.

For Run 1, two competing reconstruction packages were available, called STACO [101] and Muid [102]. Both implement the general steps of the muon reconstruction but employ different strategies for the respective tasks. For instance, STACO combines the track parameters from the inner detector and muon spectrometer measurements statistically using the corresponding covariance matrices, while Muid attempts a global refit of the track. It has to be noted that both packages perform similarly and that a third package (“3rd chain”) has been formed using, for each task, the parts of the former two packages with the best performance. In Run 2, only these 3rd chain muons will be available.

Calibration

The kinematic information of muons is corrected both for biases in the momentum measurement, by shifting the momentum (scaling), and for mismatches between the momentum resolution observed in real and simulated data (smearing). Correction factors are derived as a function of η and φ with the binning of the map adjusted to keep the expected corrections small. The transverse momentum measurements in both the inner detector and muon spectrometer are evaluated in simulated and real data. By adjusting the transverse momentum information for the tracks measured in the inner detector and muon spectrometer, the invariant mass distribution of the Z boson obtained in simulated data is harmonised with that observed in real data.
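A minimal sketch of how such a scaling and smearing correction could be applied to simulated muons is given below; the (η, φ)-binned correction maps are represented by hypothetical callables, standing in for the official correction tools:

# Sketch: apply momentum scale and resolution corrections to a simulated
# muon; scale and smear are placeholder callables (eta, phi) -> float,
# standing in for maps derived from Z -> mumu data/MC comparisons.
import random

def correct_muon_pt(pt, eta, phi, scale, smear):
    s = scale(eta, phi)                  # multiplicative momentum-scale shift
    sigma = smear(eta, phi)              # additional relative resolution
    return s * pt * (1.0 + sigma * random.gauss(0.0, 1.0))

# e.g. correct_muon_pt(45.0, 0.3, 1.2, lambda e, p: 1.001, lambda e, p: 0.02)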


Identification

Due to their distinct signature, muon identification is rather straightforward and highly efficient. Three levels have been defined via a cut-based algorithm: loose, medium, and tight. These levels are based on cuts on quantities derived during reconstruction. Results on the performance and correction factors can be found in Ref. [103].

3.4.2. Electrons

Reconstruction

Electron reconstruction at the LHC is a particularly difficult endeavour due to the unfavourable ratio of electrons to jets (e.g. ≈ 10−5 at pT = 40 GeV). The reconstruction process starts with the search for electromagnetic clusters. A grid of so-called towers with a size of ∆η × ∆φ = 0.025 × 0.025 is defined, with the sum of the energy deposits along all three longitudinal layers of the electromagnetic calorimeter being the entry for the given grid point.6 Towers with a total transverse energy above 2.5 GeV act as seeds for electromagnetic clusters. These clusters are located using a sliding window algorithm [104] with a window size of 3 × 5 towers. Clusters are marked for further use if a reconstructed inner detector track can be matched to them. After ruling out that the track originated from a photon conversion, the electron cluster is formed from the seed cluster by expanding the window size to 3 × 7 cells in the barrel region and 5 × 5 cells in the end cap region. Window sizes have been optimised with respect to pile-up and noise minimisation and adapted to the energy distributions in the respective detector regions. Additional input variables for identification purposes are calculated by measuring the shower profile of the electron cluster. These variables include the lateral and longitudinal shower profiles, the ratio of the energy measured in the calorimeter to the momentum obtained from the inner detector track, the difference in the spatial information yielded by the electron cluster and the matched track, and the number of high-threshold transition radiation hits compared to low-threshold hits on the track in the TRT (see Section 3.3.2). In general the spatial information on the electrons is more accurately measured by the inner detector track while the energy information is provided by the electron cluster.
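The sliding window seeding can be sketched as follows; the 200 × 256 tower grid follows footnote 6, while the array handling and the absence of a local-maximum requirement are simplifications of the actual algorithm:

# Sketch: find 3x5 sliding-window seeds on a grid of tower transverse
# energies (shape 200 x 256, eta x phi); phi wraps around. Threshold and
# window size follow the text, everything else is illustrative.
import numpy as np

def find_seeds(et_tower, window=(3, 5), threshold=2.5):
    n_eta, n_phi = et_tower.shape
    deta, dphi = window[0] // 2, window[1] // 2
    seeds = []
    for ieta in range(deta, n_eta - deta):
        for iphi in range(n_phi):
            cols = [(iphi + k) % n_phi for k in range(-dphi, dphi + 1)]
            window_sum = et_tower[ieta - deta:ieta + deta + 1, cols].sum()
            if window_sum > threshold:
                seeds.append((ieta, iphi, window_sum))
    return seeds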

Calibration

Calibration algorithms are used to correct for several effects impeding the resolution of the energy measurement. The cluster energy response is corrected using a map obtained from simulated data via multivariate methods. In real data an additional uniformity factor is applied to take into account possible inhomogeneous performance of the detector (e.g. due to non-optimal high voltage regions). A second scaling and smearing is applied to the electron objects in simulated data using correction factors obtained by studying Z → ee events. This correction aims at harmonising the observed resolution in simulated and real data. Recent results regarding the energy resolution and calibration can be found in Ref. [105].

6This defines a grid of 200 bins in η and 256 bins in φ.


Identification

Multivariate as well as simple cut-based identification algorithms are available for electron identification, with the cut-based identification being the most widely used. Shower shape variables, the fraction of energy deposited in the hadronic calorimeter, as well as track quality and track-cluster matching information are used to define three levels: loose, medium, and tight. These levels are defined by their identification efficiency, with loose being the most efficient and tight the least efficient option. The advantage of tighter quality levels is a sample of electrons with less contamination from other physical objects, e.g. jets. Recent results on efficiency measurements and correction factors for electron reconstruction can be found in Ref. [106].

3.4.3. Jets

Reconstruction

Jet reconstruction is similar to that of electrons. Towers are built using the information of both the electromagnetic and hadronic calorimeter, yielding a two-dimensional grid with a cell size of ∆η × ∆φ = 0.1 × 0.1. The individual cells are grouped into clusters which in turn form the basis for the subsequent jet clustering algorithm. The first step of the cell clustering is a search for towers with a high signal-to-noise ratio. A given cell is marked as a cluster seed if it exceeds a large threshold of tseed > 4. Cells neighbouring these seed cells are added if they exceed a threshold tcell > 0. In case they satisfy a threshold tneighbour > 2, their neighbouring cells are also tested and added using the same thresholds. Thus the cluster is expanded iteratively until no further cells suitable for addition are found along the perimeter of the cluster. If a cell may be claimed by two clusters, an algorithm is applied to decide whether the two clusters shall be merged or stay separate with only one cluster incorporating the cell. After the clusters have been built, a cluster splitter step is performed looking for local maxima in large clusters, aiming at splitting these into smaller clusters. At this stage it may be decided whether the clusters are used as is (resulting in jets at the electromagnetic scale) or subjected to a calibration scheme correcting the energy information (local cell weighting (LCW)). In the latter case the clusters are categorised into electromagnetic and hadronic clusters and calibrated using results from single charged and neutral pion simulations. In both cases the resulting clusters are fed into a jet builder. Multiple algorithmic choices are available for jet building, the anti-kt algorithm described in Ref. [107] being the most used in ATLAS due to its robustness against soft radiation. This algorithm uses the metric:

d_ij = min(k_ti^{2p}, k_tj^{2p}) · ∆_ij^2 / R^2 ,   (3.8)

d_iB = k_ti^{2p} ,   (3.9)

with ∆_ij^2 = (y_i − y_j)^2 + (φ_i − φ_j)^2 being the distance in the rapidity-azimuth plane,


k_ti the transverse momentum of the ith entity7, and j the index of the other entity in the pair. The subscript B stands for beam. The parameter p governs the behaviour of the clustering8 and R the size of the resulting jet. Jets are built iteratively by calculating the values of d_iB and d_ij for all entities and sorting them from lowest to highest. In each step the smallest d_ij is found and entity i is combined with entity j. If instead a d_iB has the smallest value, entity i is marked as a jet and removed from the list of entities. The general behaviour of the algorithm for p < 0 is that particles with high momentum cluster low momentum particles around them before proto clusters are merged. Figure 3.8 illustrates the jet shapes yielded by different choices for p. The case of p = −1 yields arguably the most regularly shaped jets.
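A compact sketch of this iterative clustering is given below; entities are (kt, y, φ) triples, and the kt-weighted recombination is a toy scheme, not the four-momentum addition used by production tools such as FastJet:

# Sketch: iterative clustering with the metric of Equations (3.8)/(3.9);
# p = -1 corresponds to anti-kt. Entities are [kt, y, phi] lists.
import math

def delta2(a, b):
    dphi = abs(a[2] - b[2])
    dphi = min(dphi, 2 * math.pi - dphi)      # wrap azimuthal difference
    return (a[1] - b[1]) ** 2 + dphi ** 2

def cluster(entities, R=0.4, p=-1):
    ents, jets = [list(e) for e in entities], []
    while ents:
        ib = min(range(len(ents)), key=lambda i: ents[i][0] ** (2 * p))
        d_beam = ents[ib][0] ** (2 * p)       # smallest d_iB
        d_pair, pair = float("inf"), None
        for i in range(len(ents)):            # find the smallest d_ij
            for j in range(i + 1, len(ents)):
                d = (min(ents[i][0] ** (2 * p), ents[j][0] ** (2 * p))
                     * delta2(ents[i], ents[j]) / R ** 2)
                if d < d_pair:
                    d_pair, pair = d, (i, j)
        if pair is None or d_beam <= d_pair:
            jets.append(ents.pop(ib))         # declare entity a jet
        else:
            i, j = pair                       # merge the pair (toy recombination)
            kt = ents[i][0] + ents[j][0]
            y = (ents[i][0] * ents[i][1] + ents[j][0] * ents[j][1]) / kt
            phi = (ents[i][0] * ents[i][2] + ents[j][0] * ents[j][2]) / kt
            ents[j] = [kt, y, phi]
            ents.pop(i)
    return jets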

Figure 3.8.: Jet shapes obtained by applying several jet clustering algorithms to a sample parton-level event. Top left shows the kt algorithm, top right the Cambridge/Aachen algorithm, lower left the SISCone algorithm [108], and lower right the anti-kt algorithm. The anti-kt algorithm reconstructs the most regularly shaped jets and favours high momentum jets (figure taken from Ref. [107]).

Calibration

The reconstructed jets are not yet ready to be used as several corrections have to be applied first (a sketch of the full chain follows the list):

• The pile-up offset correction, which compensates the energy offset caused by pile-up. Correction factors are derived from simulated data and are given as maps

7Here entity is used to refer to both particles and proto clusters. Proto clusters denote clusters of particles which have been obtained in previous iteration steps but have not been flagged as standalone jets yet. 8 Popular choices for p are 1 (inclusive kt algorithm), 0 (Cambridge/Aachen algorithm), and −1 (anti- kt algorithm).


in the average number of interactions per bunch crossing µ and the number of vertices in the event NPV.

• An origin correction, which recalculates the jet direction, making it point towards the primary vertex instead of the nominal interaction point.

• The energy and η calibration, which attempts to remove detector effects by scaling the measured energy and pseudo-rapidity. Said scaling is derived from simulated data where reconstructed jets may be compared to jets built from simulated particles prior to the detector simulation.

• Residual in situ corrections, which introduce further kinematic corrections using data driven methods and are applied to jets in real data only.

Recent calibration results can be found in Ref. [109].
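A minimal sketch of applying these corrections in sequence is shown below; all four correction functions are identity placeholders for the binned corrections described above, not the official ATLAS implementations:

# Placeholder corrections (identity stubs); real versions are binned maps.
def pileup_offset(pt, mu, n_pv): return pt
def origin_correction(pt): return pt
def energy_eta_calibration(pt): return pt
def residual_insitu(pt): return pt

def calibrate_jet(pt, mu, n_pv, is_data):
    pt = pileup_offset(pt, mu, n_pv)      # subtract average pile-up energy
    pt = origin_correction(pt)            # redirect jet to the primary vertex
    pt = energy_eta_calibration(pt)       # MC-based energy/eta response
    if is_data:
        pt = residual_insitu(pt)          # data-driven residual correction
    return pt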

Identification

Jet identification primarily addresses the rejection of jets from background sources (e.g. beam halo events or calorimeter malfunctions) and problems in the energy measurement. Three categories are defined: good (jets ready for physics analyses), ugly (jets with a bad energy measurement due to detector effects), and bad (jets stemming from background sources). For bad jets, a set of quality levels has been introduced, called looser, loose, medium, and tight, with looser being the default recommendation for physics analyses. Ugly jets are identified by a set of requirements regarding the number of dead cells in a jet and the proximity to gap regions. Good jets are jets that are neither bad nor ugly.

3.4.4. Missing Transverse Momentum

Reconstruction

Particles which interact only weakly, such as neutrinos or certain particles from physics beyond the Standard Model, escape the detector unnoticed. Therefore, direct observation of these particles is not possible. In general, the vector sum of the momenta of these invisible particles may be calculated as the negative of the vector sum of all visible particles. Since the LHC is a hadron collider, only the overall momenta in x and y prior to the collision are known. Consequently, the missing transverse momentum is defined as the negative vector sum of the transverse momenta of all visible particles. The general formula used for calculating the missing transverse momentum is [110]:

E_x(y)^{miss} = E_x(y)^{miss,e} + E_x(y)^{miss,γ} + E_x(y)^{miss,τ} + E_x(y)^{miss,jets} + E_x(y)^{miss,SoftTerms} + E_x(y)^{miss,µ}   (3.10)

with each contribution representing the negative sum of the calibrated reconstructed objects in the x or y direction. Individual calorimeter energy deposits are associated with physical objects in a specific order, namely electrons (e), photons (γ), τ-leptons decaying to hadrons, jets, and muons (µ). This association is done to determine the calibration scheme that should be applied to the individual deposits. In addition, deposits in topological clusters not associated to high-pT objects are taken into account (E_x(y)^{miss,SoftTerms}). Muons receive a special

treatment to avoid double counting of energy by subtracting the parametrised muon energy loss in the calorimeters from their contribution. Details on the requirements for the high-pT objects and the calibration schemes may be found in Ref. [110].
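A minimal sketch of Equation (3.10) in code; the per-category lists of calibrated (px, py) pairs are assumed inputs, and the category names are illustrative:

# Sketch: missing transverse momentum as the negative vector sum of the
# calibrated objects; objects_by_category maps e.g. "e", "gamma", "jets",
# "soft", "mu" to lists of (px, py) pairs.
import math

def missing_et(objects_by_category):
    mex = -sum(px for objs in objects_by_category.values() for px, _ in objs)
    mey = -sum(py for objs in objects_by_category.values() for _, py in objs)
    return math.hypot(mex, mey), mex, mey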

4. Datasets

4.1. Introduction

Experimental tests of theoretical predictions require a period of data taking prior to the analysis. In this work, the ATLAS detector at the LHC has been employed to collect proton-proton collision data. Additionally, simulated data has been generated to develop and test algorithms as well as to optimise the data analysis and compare the results to the collision data. After describing the set of real data used for the presented analysis, the generation of simulated data will be discussed.

4.2. Real Data

This analysis is based on the data obtained by the ATLAS detector during the 2012 data taking of the LHC Run 1. ATLAS has recorded data in 10 periods (A-L) from April 4, 2012 to December 6, 2012 [111]. After running with √s = 7 TeV in 2011, the centre-of-mass energy was increased to √s = 8 TeV for the 2012 data taking. Events recorded during data taking are grouped into units of luminosity of increasing magnitude. Luminosity blocks (lumi blocks) are the smallest unit and mark an interval of stable detector conditions of approximately two minutes length. The next larger unit are data runs, which take place over a full interval with stable proton beams delivered by the LHC and may span several hours. Several runs form a data sub period where detector and accelerator conditions are relatively stable. These sub periods are summarised into periods, each denoted by a letter (see Table 4.1). Recording events with complex final states containing leptons and jets calls for the availability of the whole detector. The application of a so-called Good Run List (GRL) ensures that only data taken during lumi blocks where the whole detector operated sufficiently well is used. Events occurring during lumi blocks where the detector was not fully operational are vetoed by the GRL and not considered further in the analysis. The GRL used in the analysis is: data12_8TeV.periodAllYear_DetStatus-v61-pro14-02_DQDefects-00-01-00_PHYS_StandardGRL_All_Good. The total luminosity used in the analysis evaluates to (20.28 ± 0.56) fb−1 and was calculated with the official ATLAS luminosity calculation tool [112]. Table 4.1 shows the individual luminosities and run conditions obtained during the individual data taking periods. It contains the run ranges which comprise the individual sub periods, the luminosity delivered by the LHC during these runs, the Livefraction luminosity representing the amount of recorded luminosity accepted by the GRL, the live fraction which measures the full availability of the detector, as well as the average number of interactions per bunch crossing. As can be seen, the detector showed remarkable performance with an average live fraction of 97.05 % for the whole data taking period. The pile-up conditions were rather stable, ranging from 29.7 to 36.2


Period | Run Range       | Runs | Del. Lumi. [pb−1] | Livefraction Lumi. [pb−1] | Live Fraction | Average Interactions
A      | 200804 − 201556 | 29   | 811.60   | 786.14   | 96.86 % | 29.7
B      | 202660 − 205113 | 72   | 5194.64  | 5060.82  | 97.42 % | 31.2
C      | 206248 − 207397 | 34   | 1431.92  | 1399.80  | 97.76 % | 34.1
D      | 207447 − 209025 | 48   | 3371.48  | 3282.82  | 97.37 % | 34.5
E      | 209074 − 210308 | 29   | 2614.73  | 2530.73  | 96.79 % | 35.9
G      | 211522 − 212272 | 18   | 1334.74  | 1280.71  | 95.95 % | 34.2
H      | 212619 − 213359 | 22   | 1497.96  | 1454.89  | 97.12 % | 35.4
I      | 213431 − 213819 | 14   | 1062.76  | 1024.57  | 96.41 % | 34.3
J      | 213900 − 215091 | 29   | 2705.52  | 2616.87  | 96.72 % | 35.0
L      | 215414 − 215643 | 10   | 876.05   | 848.70   | 96.73 % | 36.2
total  | 200804 − 215643 | 305  | 20901.30 | 20286.00 | 97.06 % | 33.8

Table 4.1.: Breakdown of sub periods detailing run ranges, acquired luminosity, live fraction, and average interactions. All information was gathered from the COMA Period Documentation [95] and the ATLAS LumiCalc tool [112]. Luminosities have been calculated using the GRL mentioned in Section 4.2. Runs not in the GRL are omitted, leading to luminosity results that differ from those depicted in Figure 4.1. While this table was created using the official tools, possible discrepancies in the configuration of said tools may lead to slightly different results.

average interactions per bunch crossing. Figure 4.1 illustrates the evolution of the total integrated luminosity over time for the year 2012 and is taken from Ref. [113].
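The GRL-based event veto described in Section 4.2 amounts to a simple lookup per event; the sketch below assumes a hypothetical in-memory GRL representation rather than the official XML format, and the interval shown for run 200804 is invented for illustration:

# Sketch: veto events whose (run, lumi block) is not covered by the GRL.
GRL = {
    200804: [(1, 250)],   # run number -> list of (first, last) lumi blocks
}

def passes_grl(run, lumi_block, grl=GRL):
    return any(lo <= lumi_block <= hi for lo, hi in grl.get(run, []))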

4.3. Simulated Data

4.3.1. Introduction

Data simulation is a very involved task. Inelastic collisions of two protons at LHC energies usually result in hundreds of particles whose momenta range over several orders of magnitude. Theories beyond the Standard Model may even add new particle flavours, further increasing the number of possible particles and Feynman diagrams that have to be considered in the event generation. It comes as no surprise that the simulation of particle physics events cannot be done using one single approach but that event generation is a series of contributing sub steps. This division of labour is exemplified by the factorisation theorem which is embodied in Equation (4.1).

σ^{AB→X} = Σ_{a,b} ∫ dx_a ∫ dx_b f_{a/A}(x_a, µ_F) f_{b/B}(x_b, µ_F) σ̂^{ab→x}(µ_F, µ_R) D^{x→X}   (4.1)

The cross section for a given process AB → X (e.g. pp → (ab → x) → X) can be split into three steps:

• the extraction of two partons a and b with momentum fractions x_a and x_b from the


[Figure 4.1: total integrated luminosity versus day in 2012 at √s = 8 TeV — LHC Delivered: 22.8 fb−1, ATLAS Recorded: 21.3 fb−1, Good for Physics: 20.3 fb−1.]

Figure 4.1.: Depiction of accumulated luminosity versus time. The total luminosity delivered by the LHC is shown in green with the luminosity recorded by ATLAS overlaid in yellow. The remaining luminosity suitable for physics analysis is shown in blue. The difference between the green and yellow histograms stems from time periods where the LHC ring was already delivering stable beams but the ATLAS detector had not started recording yet. Inefficiencies in the start-up phase as well as data acquisition problems cause the differences between the yellow and blue histograms [113].

hadrons A and B, described by the parton distribution functions f_{a/A}(x_a, µ_F) and f_{b/B}(x_b, µ_F),

• the hard scattering of partons a and b resulting in the final state x, encapsulated in σ̂^{ab→x}(µ_F, µ_R), and

• the evolution of x to the hadronised final state X, reflected by D^{x→X}.

To arrive at the final result one has to sum over all possible parton flavours that may be extracted from the hadrons in question (Σ_{a,b}) and integrate over the possible momentum fractions. A factorisation scale µ_F is introduced which corresponds to an infrared cutoff moving collinear divergences outside the matrix element calculation. The calculation to all orders will be independent of this scale but the available finite series expansions depend on it and introduce a theoretical uncertainty. In addition, a renormalisation scale µ_R is present which is necessary for the regularisation of ultraviolet divergences in the matrix element of the hard scattering.
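The structure of Equation (4.1) can be illustrated with a toy Monte Carlo integration for a single parton flavour; both the PDF shape and the partonic cross section below are invented placeholders, not measured distributions:

# Toy Monte Carlo estimate of a factorised cross section: sample x_a, x_b,
# weight by invented PDFs and an invented partonic cross section.
import math, random

def toy_pdf(x):
    return (1 - x) ** 3 / x ** 0.5            # invented shape, rises at low x

def sigma_hat(s_hat):
    return 1.0 / s_hat                        # invented partonic cross section

def toy_cross_section(n=100_000, s=8000.0 ** 2, x_min=1e-4):
    total = 0.0
    for _ in range(n):
        xa = x_min ** random.random()         # log-uniform sampling in x
        xb = x_min ** random.random()
        jac = xa * xb * math.log(x_min) ** 2  # Jacobian of the transformation
        total += toy_pdf(xa) * toy_pdf(xb) * sigma_hat(xa * xb * s) * jac
    return total / n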

4.3.2. Event Generation

Actual calculation and event generation is based on Monte Carlo methods that imple- ment the stated sub steps. A more detailed account of the techniques involved can be found in Ref. [114] which was taken as the basis for the introduction given here. In practice, the user of an event generator will specify the type of hard interaction


Figure 4.2.: Depiction of a simulated event. The incoming protons are shown as dark green ellipses. The hard interaction with the matrix element final state particles is depicted in red. Initial state radiation taking place between the parton extraction and the hard scattering, as well as the evolution of the matrix element final state particles after the hard interaction by the parton shower, are shown in blue. Clusters form during the fragmentation process (light green) and are the basis for the hadron formation (dark green blobs). The subsequent decay of said hadrons is also shown in dark green. In addition, QED final state radiation (yellow), beam remnants (teal), and the underlying event from additional interactions between the protons (purple) are depicted. Picture taken from Ref. [115].

that is to be simulated and will set the input parameters for the remaining sub steps accordingly. Figure 4.2 depicts the different sub steps.

The distribution of the partons in the proton used in the matrix element (ME) calculation cannot be determined analytically but is taken from parton distribution functions (PDFs)1 f_a(x, Q^2) which have to be measured in experiments. These PDFs encode the probability to obtain a parton of flavour a and momentum fraction x given the scale of the hard interaction Q^2. Multiple collaborations provide such PDFs and the choice represents an important ingredient of the simulation. Theoretical uncertainties regarding the PDFs may be evaluated by varying the PDFs within their uncertainties as well as by comparing the PDF of choice to alternatives. Figure 4.3 shows an example

1Each hadron type has a unique parton distribution function. Here, it is assumed implicitly that a proton PDF is considered.


[Figure 4.3: MSTW 2008 NLO PDFs (68 % C.L.) — xf(x, Q^2) versus x at Q^2 = 10 GeV^2 and Q^2 = 10^4 GeV^2, showing the gluon (scaled as g/10), valence quark, and sea quark distributions.]

Figure 4.3.: Two examples of parton distribution functions which may be used to describe the momentum fractions of partons of different flavours in protons colliding at the LHC. As can be seen, the gluon contribution is dominant at low x (low momentum fractions) while at high x the valence quarks of the proton are most important. Picture taken from Ref. [116].

for such a PDF.

Matrix element (ME) calculation simulating the hard scattering is done in perturbation theory as the couplings involved between particles are small at high energies. This is especially important for the strong coupling as it becomes very large at low energies. Given the possible initial and final states, the contributing Feynman diagrams up to a specified order are determined and calculated. The complexity rises when adding additional jets to the final state (so-called multi-leg calculations) and when incorporating higher order corrections (e.g. loops). A recent development is the possibility to calculate multi-leg final states at next-to-leading order which lowers the uncertainties of the theory prediction [117]. However, one has to bear in mind that increasing complexity and high dimensionality2 make the phase space integration computationally expensive.

Parton shower algorithms simulate the successive emission of additional particles, describing the shower evolution of strongly interacting particles and bridging the gap between the ME calculation, which happens at high energy scales, and the hadronisation processes taking place at energies around 1 GeV. They are necessary because a matrix element calculation of arbitrarily high final state parton multiplicities is not feasible. Though the kinematics of the outgoing partons3 of the ME calculation may describe the transverse momenta of hadronic jets observed in real collisions well,

2The phase space integration of a final state with n particles has 3n−4 dimensions (three components of momentum per produced particle minus constraints for overall energy momentum conservation) plus flavour and spin labels. 3It has to be noted that these partons are not observable and therefore a comparison between their properties and a real world hadronic jet is not well-defined but would depend on details such as the used clustering algorithm.

important features such as the internal structure of jets would not be modelled. By simulating the emission of additional particles these features can be described.

Computationally, these algorithms are Markov chains that simulate a step-wise emission of new particles. Instead of time, the evolution may be ordered in the transverse momentum of the newly emitted particle (as done in Pythia8 [118]) or its angle with respect to the emitting particle (Herwig++ [119,120]). In a transverse momentum ordered algorithm the hardest emission is simulated first. All subsequent emissions taking place in the next steps of the Markov chain have a lower transverse momentum. This is repeated until a lower threshold is reached and the event generation enters the hadronisation step.

The matrix element calculation and the parton shower are quite complementary. While the matrix element step excels at calculating hard, wide-angle emissions, it describes soft, collinear emissions rather poorly. For the parton shower the situation is completely reversed. Therefore, both steps work together to yield an accurate description. Their combination, however, is non-trivial. One problem in the combination process is to avoid double-counting or undercounting of phase space regions. The general techniques applied are called matching and merging. Matching describes the correct treatment of the first hard emission, ensuring the correct balancing of contributions from the matrix element calculation and the parton shower. Multi-jet topologies may be described better via merging, where a matrix element calculation for each jet multiplicity is made at leading order with the parton shower applied afterwards. A merging scale has to be defined above which the matrix element is responsible for the jet production whereas below it the parton shower populates the phase space.

The hadronisation stage describes the formation of hadrons from the partons generated thus far. Here, perturbative techniques cannot be applied due to the strong coupling constant α_s becoming large. Phenomenological models which are based on QCD are used to describe the hadronisation process. These models offer many free parameters which may be tuned to describe the observed collision data well. Two general approaches are typically used: string fragmentation and cluster fragmentation. In string fragmentation, the quarks are connected via colourless strings. These strings carry tension; when the partons spanning a string become too widely separated, the tension grows until a new quark-anti-quark pair is created. The final hadrons are then formed from the quark-anti-quark pairs produced. In cluster fragmentation, gluons split into quark-anti-quark pairs which form colourless clusters. Hadrons are then formed depending on the cluster energy. Lastly, the decay of short lived particles such as mesons and τ-leptons is simulated via packages such as Tauola [121] and Evtgen [122].

The steps outlined lead to a description of a physical process from the colliding protons to the hadronic final state. For the sake of simplicity, some details were omitted which will be addressed in the following. As described above, parton shower algorithms are used to evolve the matrix element final state. Likewise, an evolution of the initial state takes place, leading to initial-state

showers (also called initial-state radiation). Here, the incoming partons radiate off other partons before taking part in the hard interaction calculated in the matrix element step. Technically, this radiation is implemented by taking the partons partaking in the hard interaction and dressing them with additional radiation using a backward parton shower algorithm. The changes in the momentum fraction x and flavour a of the resulting dressed parton are taken into account and propagated to the parton distribution function. In addition to the hard interaction, the underlying event, consisting of soft QCD interactions between the colliding protons as well as multiple parton interactions adding further hard interactions to the event, is simulated to give an accurate and complete picture.
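A transverse momentum ordered emission chain of the kind described above can be caricatured in a few lines; the log-uniform step used here stands in for the proper Sudakov form factor sampling of a real shower:

# Toy pT-ordered shower: each Markov step produces an emission softer than
# the previous one, stopping at a ~1 GeV cutoff where hadronisation begins.
import random

def toy_shower(kt_start, kt_cut=1.0, alpha=0.3):
    emissions, kt = [], kt_start
    while True:
        kt = kt * random.random() ** (1.0 / alpha)  # always softer than before
        if kt <= kt_cut:
            return emissions                        # ordered hardest to softest
        emissions.append(kt)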

4.3.3. Event Record

The result of the event generation is a sample of simulated proton-proton collision events. Typical formats for the event record are HepMC [123] and LHEF [124]. Both formats aim to give a more or less traceable genealogy of the event, starting from the initial state particles and following the evolution of the process through the generation and decay of intermediary particles to the final stable particles. The event record includes the kinematics of each particle as well as additional information useful for studies beyond particle level.4 The type of each particle is given via a so-called PDG ID.5 Particles involved in the different stages of the event generation (matrix element, hadronisation, and so on) are labelled with a status flag. In principle this status flag is uniform across all generators, with status 1 indicating final state particles and status 3 indicating matrix element particles. However, generator authors may also define additional statuses (see Table 5.5). One commonly used feature of the event record is to trace a given particle back to its origin, e.g. checking whether a certain lepton originated from the decay of a certain boson.
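Such an ancestry check is a simple upward walk through the record; the minimal Particle class below is an assumed stand-in, not the actual HepMC interface:

# Sketch: does a particle descend from an ancestor with a given PDG ID?
class Particle:
    def __init__(self, pdg_id, status, parents=()):
        self.pdg_id, self.status, self.parents = pdg_id, status, list(parents)

def comes_from(particle, ancestor_pdg_id):
    stack = list(particle.parents)
    while stack:
        p = stack.pop()
        if abs(p.pdg_id) == ancestor_pdg_id:
            return True
        stack.extend(p.parents)      # keep walking towards the initial state
    return False

# e.g. comes_from(muon, 24) tests for a W boson (PDG ID 24) ancestor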

4.3.4. Detector Simulation

In order to compare real and simulated data, a detector simulation has to be applied to the output of the generated events. The ATLAS detector simulation [125] implements the step from the simulations done in event generators to the inputs fed to reconstruction algorithms by simulating the detector response. It utilises GEANT4 [126,127] for simulating the interactions of the generated stable particles with the material of the detector. The simulated energy depositions from particles interacting with the material are saved in HITS files which form the input for the digitisation step. Here the response of the detector electronics is simulated, translating the deposited energies into electronic signals which are further processed into the simulated output of the detector electronics. At this stage the trigger response is also simulated and possible pile-up conditions are added to the events. Great care is taken to create an exact replication of the real detector, even taking into account time dependent conditions such as dead modules. The resulting Raw Data Object (RDO) files are similar in structure to the

4Particle level denotes the event information prior to the application of a detector simulation. 5The concrete encoding can be found in Ref. [39].

byte-stream format coming from the real detector. From here on the reconstruction chain for simulated and real data is identical.

4.3.5. Data Format for Analysis

The results of the reconstruction process are Analysis Object Data (AOD) files containing the objects of each event produced by the reconstruction software. These files are only analysable within ATLAS' own software, the Athena framework, which incurs significant overhead due to many dependencies. Therefore, the preferred analysis format in Run 1 is the D3PD, which is a flat ntuple readable via the ROOT framework [128]. The format can roughly be thought of as a very large table with the rows being individual events and the columns being information like the transverse momentum vectors of all electrons. An adapter package called D3PDObjects is used to emulate the object oriented feel of the AODs, thus facilitating the writing of object oriented code. All of the samples used in this work are so-called SMWZ D3PDs, which are D3PDs adapted to the needs of the Standard Model working group.

4.3.6. Description of Used Generators

The simulated data used for comparison with the real data has been generated using a variety of generators.

Sherpa

Sherpa (Simulation of High-Energy Reactions of PArticles) [129] is a Monte Carlo event generator employing a generic ansatz for process generation. Whereas many other generators use precalculated matrix elements for a given process, Sherpa uses the inbuilt matrix element generators AMEGIC++ and COMIX to determine and calculate all Feynman diagrams for a user defined set of initial and final states up to a specified order. This approach allows for the calculation of final states with high multiplicities, ensures gauge invariance, and yields consistent process definitions. In contrast to many other event generators, Sherpa is equipped with its own parton shower implementation. By incorporating both matrix element generation and parton shower, Sherpa is able to disentangle corrections coming from a real emission of a hard parton (matrix element effect) from those of the soft radiation evolution of a parton (e.g. splitting; parton shower effects). Correctly combining these effects improves the multi-jet description, leading to a higher accuracy than a simple leading order calculation. Sherpa can be used to generate both Standard Model processes and a variety of BSM physics processes and is able to write out standard file formats like HepMC.

Whizard

Whizard [130,131] provides next-to-leading order cross sections and event generation. It employs a matrix element generator called O'mega that, similar to Sherpa, calculates all possible tree-level Feynman diagrams for a given set of initial and final states. Although the program includes a parton shower, in practice its matrix element results are often showered using programs such as Pythia or Herwig. Whizard is able to calculate Standard Model processes as well as a range of beyond the Standard Model

physics models. The ability to generate events in the framework of the electroweak chiral Lagrangian (see Section 2.4.4) and the availability of the K-matrix unitarisation (see Section 2.4.6) are key features for exploring aQGCs. Whizard has therefore been the generator of choice for producing the aQGC signal samples for this analysis. Both HepMC and LHEF are possible output formats of Whizard.

VBFNLO

VBFNLO [132–134] is a MC event generator specialised in calculating results for vector boson fusion and vector boson scattering processes. The available processes are represented by matrix elements that are embedded in the generator's code. Omitting the need to generate the necessary Feynman diagrams and exploiting analytic cancellations enables VBFNLO to perform very fast integration, at the cost of possibly missing diagrams in a given process definition. The program offers the calculation of cross sections at next-to-leading order (NLO) QCD accuracy and event generation at leading order (LO), both without further treatment by a parton shower. To obtain usable results one therefore has to interface VBFNLO with a suitable parton shower program, e.g. Pythia or Herwig. VBFNLO is able to supply results for effective field theory using a non-linear parameterisation with unitarisation being ensured by a form factor approach. Starting with version 3.0, VBFNLO also supports the K-matrix unitarisation approach.

PowHeg-Box

The PowHeg-Box [135–137] states as its main objective the consistent combination of NLO matrix element calculations and parton shower evolution. It is not a generator per se but a general framework that allows theorists to supply NLO calculations which are then interfaced to shower Monte Carlo programs via the PowHeg method. Processes are generated from precomputed matrix elements and are then interfaced to showering algorithms such as the ones implemented in Pythia or Herwig.

4.4. Simulated Processes

This section lists the simulated data samples used for signal and background modelling in the analysis of the WZ channel. In parallel, the mechanisms through which the individual backgrounds contribute to the total observed data yield will be discussed. Only official ATLAS samples are considered in this section. Each sample has a unique identifier, the DSID (dataset identifier). These identifiers may be used to obtain more information on the generator settings used for producing the sample at hand. All run cards encapsulating the generator settings of the official ATLAS samples are stored in the SVN area of the ATLAS experiment [138]. Using the DSID one can find and examine the run card used to simulate the sample. In the inclusive phase space (see Section 5.4.1) all WZ events are defined as signal. In the VBS phase space only the W±Zjj-EW process is regarded as such.


Figure 4.4.: Examples of Feynman diagrams contributing to the W±Zjj-EW final state. The upper row shows, from left to right, the VBS diagrams with the dashed circle being a placeholder for the possible triple and quartic gauge coupling vertices and Higgs interactions, the emission of two massive bosons without scattering between the bosons, and the emission of two vector bosons via a triple gauge vertex. The lower row shows contributions that strictly speaking are VVV background where one boson decays to hadrons. However, the contribution of these diagrams is negligible once a sufficiently high requirement on the invariant mass of the dijet system is applied, because that mass peaks around the masses of the massive gauge bosons.

4.4.1. W ±Zjj-EW

The W±Zjj-EW sample defines the signal prediction used in this analysis. The sample is generated using Sherpa 1.4.5 by requesting the simulation of all diagrams for jj6 → lllνjj of order α_EW^6. Therefore, not only the VBS signature is included but also other purely electroweak contributions which cannot be separated in a gauge invariant way. Some of these contributions are depicted in Figure 4.4. The main sample used for producing the results shown in this work is sample 185396, which includes contributions from the Higgs boson, treats c and b quarks as massive, and uses CT10 [139] as the PDF set. Generator level selection cuts are applied to avoid loss of statistics through the unnecessary generation of low mass resonances which will not be considered in the analysis. Three requirements are introduced on generator level. Firstly, same flavour opposite charge lepton pairs have to have an invariant mass of at least 0.1 GeV, removing the singularity at m_ll = 0 while keeping most of the γ* contribution. Secondly, only events with at least two leptons with a transverse momentum larger than 5 GeV are considered. Thirdly, all events have to contain at least two anti-kt jets with a transverse momentum greater than 15 GeV and a maximum mass of 10 GeV. The run card used to generate the sample can be found in the SVN area of the ATLAS experiment [138]. The resulting total cross section of this sample is 82.1 fb with a total statistics of 500000 events. The generic approach of Sherpa, where only the desired initial and final states are given, stands in contrast to other generators which may implement concrete physics processes such as top pair production based on a predefined set of Feynman diagrams. In the

6In Sherpa j defines a container that includes quarks and gluons.


Figure 4.5.: Selected diagrams contributing to the tZj background.

                    | Full Sample | W±Zjj-EW (VBS only) | W±Zjj-EW (tZj)
sample events       | 500000      | 267960              | 230240
σ in sample PS / fb | 82.1        | 44.3                | 37.8

Table 4.2.: Sample statistics for the full generated W ±Zjj-EW sample as well as the subsamples defined by the splitting procedure.

sample at hand, a process called tZj is contained, which is the production of a single top quark in association with a Z boson and a jet. Though the process is also purely electroweak, it is not realised via diagrams with quartic gauge couplings. Figure 4.5 shows some of the contributing diagrams for tZj. Therefore, the tZj process may be regarded as an irreducible background in the search for evidence for the existence of quartic gauge couplings. It was therefore decided to exclude tZj from the W±Zjj-EW signal definition and examine the two processes separately by splitting sample 185396. The implemented splitting prescription states that all events where the matrix element level initial or final state contains a b quark are labelled as tZj. Here, matrix element level denotes the simulation of the hard interaction prior to hadronisation. The result of the splitting procedure is shown in Table 4.2. It has to be noted that this simple criterion does not provide a 100 % clean separation. A small signal contamination is present in the tZj part, stemming from events with b quarks in the initial state where the b quark emits a Z boson. However, this contribution is estimated to be small, as a prior analysis has shown that the ratio of the cross section for Z boson production from an initial state containing a b quark to the total Z production cross section is rather small. The concrete numerical value was measured in [140] to be

R = σ_{Z+b}^{incl} / σ_{Z+jets}^{incl} = [4.6 +1.4 −1.2 (stat.) ± 0.5 (syst.)] %.

It is also expected that the overall cross section of the tZj-like part is only marginally affected by the inclusion of the small VBS contribution, as the main tZj production diagram is a resonant s-channel in contrast to the t-channel of the VBS-like diagrams. The impact of the splitting procedure on specific results will be discussed in this work wherever it applies.
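The splitting prescription itself reduces to a one-line test per event; the event interface below reuses the hypothetical Particle sketch from Section 4.3.3 and is not the actual analysis code:

# Sketch: label an event as tZj if a b quark (PDG ID +-5) appears among the
# matrix-element particles (status 3, following the event-record convention).
def is_tzj(event):
    return any(abs(p.pdg_id) == 5 and p.status == 3 for p in event.particles)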

4.4.2. W ±Zjj-QCD

The sample used for simulating the W±Zjj-QCD process is defined via the final state WZ → lllν with all contributing diagrams being of order α_EW^4 and at least α_QCD^2, with up to three jets from the matrix element. The sample used mainly in


Figure 4.6.: Selected diagrams contributing to the W±Zjj-QCD final state. The diagrams contributing to W±Zjj-QCD and W±Zjj-EW show multiple similarities.

this work is sample 185397, which incorporates contributions from the Higgs boson, treats c and b quarks as massive, and uses CT10 as the PDF set; it was generated with Sherpa 1.4.1. Only events where both the W and Z boson decay to leptons are included. In addition, up to one additional jet is considered in the matrix element generation. As in the case of the electroweak sample, a generator filter has been applied. It requires that electron-positron pairs have an invariant mass of at least 0.1 GeV and that the leading two leptons have a transverse momentum of at least 5 GeV. The generated events are filtered further by requiring that at least two leptons with a transverse momentum pT > 10 GeV and a pseudo-rapidity |η| < 2.8 be present. The inclusive WZ production may also be generated using PowHeg or MC@NLO [141]. Both generators define the process via 18 samples which are split with respect to the charge of the W boson and the flavours of the leptons (electron, muon, and τ-lepton). It has to be noted that these generators only provide W±Zjj-QCD events through additional jets from the parton shower and not from the matrix element. They are therefore not used for any VBS related results but serve as cross checks in the unfolding.

4.4.3. Background processes

Several background processes contribute to the measurement of the W±Z signal. These backgrounds can be categorised into prompt and non-prompt backgrounds. Prompt backgrounds originate from processes which either have the same final state as the signal or appear signal-like due to objects from the hard scattering being lost. Particles may be missed because their trajectory lies outside the active volume of ATLAS or because a physical object coming from the hard interaction in question is discarded by the object requirements applied in the analysis. Non-prompt backgrounds originate from either misidentification of particles or selection of particles from secondary sources. In the first case, a physical object is categorised incorrectly by the identification algorithms used. In the second case, the physical object may stem from processes such as decays inside of jets or photon conversion. The various background processes will be discussed in the following.


Prompt Backgrounds

The main prompt backgrounds in this analysis are ZZ, tt̄+W/Z, and tZj. Contributions from VVV and double parton scattering (DPS) are small to negligible in the considered phase spaces.

The ZZ → llll background is a prompt background that contributes due to the loss of one of the four leptons. The loss may either be caused by the kinematic requirements on the leptons or by identification and reconstruction inefficiencies. It is estimated using Sherpa 1.4.0, with the purely electroweak contributions of the ZZ → lllljj background being present in sample 147196 and the inclusive ZZ → llll production represented by sample 126894. Both samples are generated with the CT10 PDF set.

The tt̄+W/Z background is a prompt background due to additional leptons from the vector boson accompanying the top-antitop pair. In the case of tt̄+W all W bosons in the event decay to leptons, whereas in tt̄+Z the Z boson supplies a lepton pair and one W boson from the top system decays to a lepton and a neutrino while the other decays to jets. The contribution from tt̄+W/Z is primarily important for higher jet multiplicities due to the jets from the top pair system. tt̄+W/Z is estimated via simulated data generated with MadGraph [142] interfaced with Pythia8, using the CTEQ6L1 [143] PDF set and the AUET2B tune for the underlying event. The relevant samples are 119353-6.

The tZj background is of roughly the same size as tt̄+W/Z, with its generation being described in Section 4.4.1.

Though having some importance in the inclusive phase space, the VVV background is of minor importance in the VBS phase space. It can be suppressed quite effectively by requiring an invariant mass of the dijet system beyond the pole masses of the W and Z boson. The samples 167006-8 representing VVV are made using MadGraph interfaced with Pythia, using the CTEQ6L1 PDF set and the AUET2B tune for the underlying event.

The DPS background is caused by the coincidental occurrence of a W+jets and a Z+jets event during the same proton-proton collision. The rate at which DPS happens can be calculated via

σ_{W+Z}^{DPS} = σ_W σ_Z / σ_eff .   (4.2)

Here, σ_V (V = W, Z) denotes the cross sections of the individual processes and σ_eff = (15 ± 3 (stat.) +5/−3 (sys.)) mb the effective cross section of the proton in proton-proton collisions, which has been measured by ATLAS [144]. The contribution is simulated via Pythia8 using the CTEQ6L1 PDF set and the AU2 tune for the underlying event. Only sample 147282, simulating WZ DPS, shows a sizeable contribution and is thus considered. It has to be noted, however, that contributions from DPS decline quickly with increasing jet multiplicity and are negligible in the VBS phase space.
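The arithmetic of Equation (4.2) is straightforward; in the snippet below only σ_eff = 15 mb is taken from the text, while the W and Z cross sections are invented round numbers for illustration:

# Sketch: DPS cross section from Equation (4.2); sigma_w and sigma_z are
# hypothetical, sigma_eff = 15 mb is converted to fb (1 b = 10^15 fb).
sigma_w = 1.0e8             # hypothetical sigma(W) in fb
sigma_z = 3.0e7             # hypothetical sigma(Z) in fb
sigma_eff = 15e-3 * 1e15    # 15 mb in fb
sigma_dps = sigma_w * sigma_z / sigma_eff
print(f"sigma_DPS = {sigma_dps:.1f} fb")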

Non-Prompt Backgrounds

The backgrounds contributing to the analysis via non-prompt sources are W +jets, Z+jets, tt¯, single top, WW , W ±γ, and Zγ. The main contributions stem from Z+jets, dileptonic tt¯, and Zγ while W +jets, WW , single top, and W ±γ contributions are

rather small due to a combination of low cross sections and the low probability for the occurrence of more than one non-prompt lepton or misidentification of a physical object in a given event. Estimating the contributions from non-prompt backgrounds is typically done via data driven methods as the description of these backgrounds by simulated data is often not ideal. In this work the so-called matrix method will be employed, estimating both shape and normalisation using data (see Section 6). Therefore no simulated data is used for the estimate of the mentioned backgrounds.

4.4.4. Scaling Factors

Scaling factors, often called k-factors, are used to apply the results of calculations made at higher order in perturbation theory to predictions made at lower order. A typical scenario is that a given generator may provide events at leading order in QCD whereas a dedicated program may produce a cross section at higher order in QCD but is not able to generate events. Normalising the lower order prediction to the higher order prediction corrects for changes in normalisation introduced by the higher order effects. In the W±Z analysis MCFM [145–147] provides the next-to-leading order prediction for the inclusive W±Z process and a k-factor of 1.12 was derived for the Sherpa sample describing W±Zjj-QCD in the inclusive phase space. However, this approach does not translate as well to W±Zjj-EW. The phase space in which the k-factor is applied should not depend on requirements on observables overly sensitive to QCD. The inclusive phase space is defined solely through the lepton kinematics (see Section 5.2.1). For W±Zjj-EW this is not possible. The only available simulation program offering next-to-leading order accuracy for the purely electroweak VBS processes is VBFNLO. Unfortunately, it does not consider certain Feynman diagrams in the region of low invariant mass of the tagging jets. A common phase space between Sherpa and VBFNLO can thus only be found by introducing a requirement on the invariant mass of the tagging jets. However, this compromises the integrity of the general k-factor application approach. Therefore it was decided not to apply a k-factor based on the VBFNLO prediction to either W±Zjj-EW or W±Zjj-QCD.
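The mechanics of applying such a k-factor to a generated sample is a simple reweighting of per-event weights; the sketch below uses the 1.12 quoted above purely as an example value:

# Sketch: rescale leading-order event weights so the sample integrates to
# the higher-order cross section, e.g. k = sigma_NLO / sigma_LO = 1.12.
def apply_k_factor(weights, sigma_higher, sigma_lower):
    k = sigma_higher / sigma_lower
    return [w * k for w in weights]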

5. Object and Event Selection

Compared to the total inelastic cross section of the LHC of about 60 mb [148], the total predicted cross section of the W±Z process of ≈ 23 pb [146] appears insignificant. Extracting a signal nine orders of magnitude smaller than the total cross section of the LHC requires a well optimised object and event selection. This chapter describes the object selection chosen in this work as well as the event selection prescriptions that define the phase spaces in which the subsequent cross section measurements will be performed. In the following, an object may be an elementary particle or a jet. It may also be two jets that are very close together and are thus identified as one so-called fat jet. Objects are definable entities and a given event can contain multiple objects of the same kind. In this work, a distinction is made between physical objects, describing objects that have actually existed (e.g., an electron traversing the detector), and measured objects, describing objects that have been reconstructed by the detector algorithms. A physical object may be a jet which has been misidentified as an electron (the measured object).

5.1. Object Selection on Detector Level

The purpose of the object selection is to ensure that an elementary particle such as an electron is correctly and efficiently identified. There are many figures of merit which may be used to evaluate the performance of an object definition. Commonly used are efficiency and purity. The efficiency ε is defined as the probability that a physical object is correctly reconstructed and identified:

ε = N_{true}^{rec} / N_{true} .

The purity p is defined as the fraction of reconstructed objects that correspond to correctly identified physical objects:

p = N_{true}^{rec} / N^{rec} .

In general, the two quantities counterbalance each other. Highly efficient object definitions tend to accept objects of different origin, thus leading to lower purity. Object definitions with strict criteria result in very pure samples but may endanger subsequent analysis steps due to inefficient use of the available statistics. A compromise between high efficiency and high purity is necessary to arrive at an optimal object selection. Once the objects are defined, corrections have to be applied accounting for differences in detector performance between simulated and real data as well as for known biases of the measurement process. These differences may be categorised into kinematic corrections and algorithmic corrections. Kinematic corrections change the object information. Typically, this translates to altering the transverse momentum of the object by either scaling or smearing. Scaling

corrects for shifts in the energy response. Known biases between the measured and true momentum of an object are corrected for in both simulated and real data. Differences in the detector performance between simulated and real data can be determined by comparing sensitive variables, e.g., the invariant mass of the Z boson. Scaling factors derived in these studies are applied to simulated data to retrieve the same results found in real data. Smearing corrects for differences in momentum resolution. Similar to the scaling correction, the mass peak of the Z boson may be used to obtain the smearing correction. Here, the corrections applied on simulated data ensure that the width of the mass peak is the same in real and simulated data. Algorithmic corrections target the differing performance, in real and simulated data, of algorithms used to enhance the information content of the measured objects. An example of such an algorithm is an identification algorithm. Identification algorithms use multiple measured input variables to compute a single quality variable. This variable is a proxy for the degree of confidence with which the measured object resembles the object hypothesis. The input variables used by these algorithms show differences between real and simulated data, resulting in differing identification efficiencies. A correction factor, called scale factor, is introduced which rescales the efficiency observed in simulated data to the one observed in real data, thus correcting for the imperfections of the simulation of the input variables. Depending on the algorithm, such a scale factor may be defined on a per object or on a per event basis. Typical algorithms that are subject to such corrections are identification algorithms, reconstruction algorithms, isolation calculations, tracking related variables, and flavour tagging algorithms.
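In practice such scale factors enter the analysis as multiplicative event weights; the callables below are hypothetical stand-ins for the official correction packages:

# Sketch: build a simulated-event weight from per-object scale factors
# (data/MC efficiency ratios); reco_sf and id_sf are placeholder callables.
def event_weight(leptons, reco_sf, id_sf, base_weight=1.0):
    w = base_weight
    for lep in leptons:
        w *= reco_sf(lep) * id_sf(lep)
    return w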

5.1.1. Electron Definition

Selection criteria

The reconstructed electron candidates (see Section 3.4.2) are subjected to a number of requirements ensuring a low misidentification rate. Table 5.1 lists the requirements placed on the prospective electrons at the different stages of the object selection. In the following, the applied selection steps are explained in more detail.
Author: Several algorithms are available for the reconstruction of electrons. The chosen algorithm requires a track of sufficient transverse momentum that is matched to a suitable topocluster in the electromagnetic calorimeter (denoted in Table 5.1 as author = 1 | 3, which represents the OR between two electron reconstruction algorithms).
Quality: The Egamma group provides an algorithm calculating the quality variable, providing a measure for the confidence that the measured object is an electron. The quality is based on a number of variables that show different behaviour for electrons and other physical objects. Multiple quality levels called Loose++ (≈ 90 %), Medium++ (≈ 80 %), and Tight++ (≈ 70 %) are available.1 In the preselection the Loose++ quality requirement is applied. This requirement is later tightened to Medium++ (Tight++) for electrons associated to the Z (W) boson. The application of the Tight++ level for the electrons originating from the W boson is done to ensure lower

1 The efficiencies stated are only rough estimates neglecting the pT dependence of the efficiency.

contamination from misidentified objects. More information on the individual quality levels can be found in Ref. [106].
GoodOQ: The GoodOQ requirement ensures that the topocluster associated to the electron is well measured. A given electron is discarded if its associated cluster is affected by faulty electronics, noisy cells, or problems in the high voltage supply.
pT: In the preselection step the transverse momentum is required to exceed 5 GeV. Depending on whether the electron is associated to the Z or W boson, the requirement is tightened to pT > 15 GeV or pT > 20 GeV, respectively.
η: The requirement on the pseudo-rapidity ensures that the electron lies inside the volume covered by the tracking detector. It is defined as |η| < 2.47, excluding 1.37 < |η| < 1.52. The latter condition ensures that the electron does not lie within the non-optimally instrumented transition region between the barrel and endcap calorimeters.

Z0 sin(θ): The variable Z0 sin(θ) measures the distance along the z-axis between the electron track and the primary vertex at the point of closest approach. It is required that Z0 sin(θ) < 0.5 mm.
d0,sig: The significance of the transverse impact parameter d0 (see Section 3.3.1) is computed by taking the ratio of the absolute value of the measured d0 and the uncertainty associated with the measurement and is required to be less than 6. This requirement ensures that the measured object is likely to originate from the primary vertex by restricting its displacement in the transverse plane.
Track isolation: The track isolation is computed by summing the transverse momenta of all tracks in a cone of radius R around the electron track and dividing by the transverse momentum of the electron track. Effects from pile-up are corrected using an official package issued by the Egamma group. Depending on the association of the electron, the cut on the track isolation changes. For electrons associated to the Z boson, the relative track isolation in a cone of ∆R = 0.2 is required to be less than 0.13. In case the electron is associated to the W boson, the relative track isolation in a cone of ∆R = 0.3 is required to be less than 0.1. The stricter requirement on the isolation for electrons associated with the W boson is motivated by the large contribution of Z+jets and tt̄ to the non-prompt background. Both processes supply a good Z boson candidate, making the lepton associated with the W boson likely to be the misidentified lepton in the event.

Variable         baseline           Z-candidate               W-candidate
Author           1 | 3              1 | 3                     1 | 3
Quality          Loose++            Medium++                  Tight++
GoodOQ           ✓                  ✓                         ✓
pT               > 5 GeV            > 15 GeV                  > 20 GeV
|η|              < 2.47 w/o crack   < 2.47 w/o crack          < 2.47 w/o crack
Z0 sin(θ)        < 0.5 mm           < 0.5 mm                  < 0.5 mm
d0,sig           < 6                < 6                       < 6
Track isolation  -                  pT,rel^cone(20) < 0.13    pT,rel^cone(30) < 0.1
Calo isolation   -                  ET,rel^cone(20) < 0.14    ET,rel^cone(30) < 0.14

Table 5.1.: Reconstruction level object selection requirements for electrons for the different stages of the W±Z analysis.

Calorimeter isolation: The calorimeter isolation is computed similarly to the track isolation. Here, the transverse energy ET of all clusters in a cone of radius R around the electron cluster is summed and divided by the ET of the electron cluster itself. Effects from pile-up as well as reconstruction specific leakage artefacts are corrected for by using an official package issued by the Egamma group. Electrons coming from the Z boson are required to have a value of less than 0.14 for ∆R = 0.2. The constraint is tightened for electrons from the W boson, demanding less than 0.14 for ∆R = 0.3, for the same reasons as stated for the track isolation. A sketch of the relative isolation computation is given below.
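The following Python fragment is a simplified stand-in for the official isolation machinery: the pile-up and leakage corrections are omitted, and the dictionary based event format is hypothetical. The same logic applies to the calorimeter isolation with clusters instead of tracks.

import math

def delta_r(eta1, phi1, eta2, phi2):
    # Angular distance in the eta-phi plane, with phi wrapped into [-pi, pi].
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def relative_isolation(lepton, objects, cone):
    # Scalar sum of the pT of all surrounding objects (tracks or clusters)
    # inside the cone, divided by the lepton pT. The lepton's own track or
    # cluster (at Delta R = 0) is excluded from the sum.
    sum_pt = sum(o["pt"] for o in objects
                 if 0.0 < delta_r(lepton["eta"], lepton["phi"],
                                  o["eta"], o["phi"]) < cone)
    return sum_pt / lepton["pt"]

electron = {"pt": 35.0, "eta": 0.4, "phi": 1.2}
tracks = [electron,  # the electron's own track is skipped automatically
          {"pt": 1.5, "eta": 0.5, "phi": 1.3},
          {"pt": 4.0, "eta": 2.0, "phi": -2.0}]
# Z-candidate electrons must satisfy a relative track isolation below 0.13
# in a cone of Delta R = 0.2:
print(relative_isolation(electron, tracks, cone=0.2) < 0.13)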

Applied corrections

Electrons are subject to both scaling and smearing corrections. Both are applied using the EnergyRescalerUpgrade tool supplied by the Egamma group. Recent results on the size of the calibration corrections can be found in Ref. [105]. The algorithmic corrections include reconstruction, identification, and isolation scale factors. They are applied via the TElectronEfficiencyCorrectionTool supplied by the Egamma group, which also provides the configuration files needed for initialising said tools. Recent results on the size of the efficiency corrections can be found in Ref. [106].

5.1.2. Muon Definition

Selection criteria

After being reconstructed (see Section 3.4.1) the muon candidates are required to fulfil a list of constraints. Table 5.2 shows the requirements placed on the prospective muons at the different stages of the object selection.
Author: There are three algorithms that can be chosen as the author for a given muon: STACO, Muid, and 3rdChain. STACO and Muid are original implementations which perform the same reconstruction tasks but use different solution strategies. The 3rdChain is

Variable         baseline   Z-candidate               W-candidate
Author           STACO      STACO                     STACO
Quality          loose      loose                     tight
Track requ.      ✓          ✓                         ✓
pT               > 5 GeV    > 15 GeV                  > 20 GeV
|η|              < 2.5      < 2.5                     < 2.5
Z0 sin(θ)        < 0.5 mm   < 0.5 mm                  < 0.5 mm
d0,sig           < 3        < 3                       < 3
Track isolation  -          pT,rel^cone(20) < 0.15    pT,rel^cone(30) < 0.1

Table 5.2.: Reconstruction level object selection requirements for muons for the different stages of the W±Z analysis.

an attempt to unify both STACO and Muid by identifying which algorithm performs better for a given subtask in the reconstruction. Therefore, the 3rdChain has been made the default choice for the muon author. However, studies in the W±Z analysis group have shown that STACO is the more suitable choice.
Quality: Three quality levels are available for muons: loose, medium, and tight. In a preselection step all muons are required to satisfy the loose quality requirement. Muons associated to the W boson have to be classified as tight, whereas no further quality requirement is imposed on the muons associated with the Z boson.
Track requirements: The Muon Combined Performance (MCP) group has issued a set of recommendations that ensure a well measured inner detector track for the muon. These recommendations consist of a set of cuts on the number of hits in the sub-detectors of the inner detector and are detailed in Ref. [103].
pT: In the preselection a cut of pT > 5 GeV is required. For muons associated to the Z boson this cut is tightened to pT > 15 GeV, and to pT > 20 GeV for muons stemming from the W decay.
η: The requirement on the pseudo-rapidity of |η| < 2.5 ensures that the muons lie inside the volume where both an inner detector track and muon spectrometer segments are available for muon reconstruction.

Z0 sin(θ): The variable Z0 sin(θ) measures the distance along the z-axis between the inner detector track of the muon and the primary vertex at the point of closest approach. It is required that Z0 sin(θ) < 0.5 mm.
d0,sig: The significance of the d0 (see Section 3.3.1) measurement is computed by taking the ratio of the absolute value of the measured d0 and the uncertainty associated with this measurement and is required to be less than 3. This requirement ensures that the measured object is likely to originate from the primary vertex by restricting its displacement in the transverse plane.
Track isolation: The track isolation is computed analogously to the electron case. Classes issued by the MCP group apply corrections to counterbalance the effects of pile-up. For muons associated to the Z boson the relative track isolation in a cone of ∆R = 0.2 is required to be less than 0.15. In case the muon is associated to the W boson, the relative track isolation in a cone of ∆R = 0.3 is required to be less than 0.1.

Applied corrections

Muons are subject to both scaling and smearing corrections, which are applied simultaneously using the SmearingClass tool provided by the MCP group. The scale factors used to harmonise the identification algorithm output between simulated and real data are taken from the AnalysisMuonConfigurableScaleFactors tool maintained by the MCP group. A private study in the WZ analysis group determined the scale factors needed for the isolation of the muons, as no official values were provided. Results on the corrections for the efficiency and calibration can be found in Ref. [103].


5.1.3. Jet Definition

Selection criteria

The selection criteria applied to reconstructed jets (see Section 3.4.3) are given in the following and are summarised in Table 5.3.
Author: Multiple reconstruction prescriptions are available for jets. The one used in this analysis is called “AntiKt4LCTopo”. Thus, the jets are clustered with the anti-kt algorithm using local cell weighted topoclusters as inputs (see also Section 3.4.3).
pT: The transverse momentum of the jet candidates is required to be larger than 30 GeV. The cut value is a compromise between high efficiency and a high rejection of pile-up jets in the forward region, where no handle other than a high transverse momentum criterion is available to suppress pile-up jets.
η: Each jet candidate has to exhibit an absolute pseudo-rapidity smaller than 4.5. The allowed range in pseudo-rapidity is larger than the usual requirement of |η| < 2.5 for jets, which is used to ensure that tracking information is available for the application of certain jet algorithms. This smaller range would severely limit the signal acceptance of the W±Zjj process, which exhibits a substantial amount of jets in the forward region (|η| > 2.5).
JetCleaning: The JetCleaning step ensures that the jet candidates used in the analysis are well measured. A jet cleaning tool provided by the Jet/EtMiss group is used to discard jets that are from background events, are caused by detector effects, or lie in defective calorimeter regions. Only jets that are deemed good for physics analysis use are considered for later analysis.
JVF: The jet-vertex-fraction (JVF) is a handle that allows to distinguish between jets stemming from the primary vertex and jets originating from pile-up. All tracks associated with the jet are tested whether they stem from the primary vertex or not. The jet-vertex-fraction is then defined as the ratio of the scalar sum of the transverse momenta of the tracks associated with the primary vertex to the scalar sum of the transverse momenta of all tracks. Further details on this variable can be found in Ref. [149]. Each jet is required to have a JVF value of more than 0.5 if it has pT < 50 GeV and |η| < 2.4 (a sketch of this calculation is given after Table 5.3). All jets outside the quoted kinematic region are accepted by default, as no reliable JVF value can be calculated there. Here, other measures such as higher pT cuts have to be used to suppress contributions from pile-up jets.
Overlap removal: It is required that each jet has a minimum distance in the η-φ plane of at least ∆R > 0.3 to the nearest selected baseline lepton.

Variable      baseline
pT            > 30 GeV
|η|           < 4.5
JetCleaning   level good
JVF           > 0.5 if pT < 50 GeV and |η| < 2.4
OLR           ∆R(j, l) > 0.3 for l = e, µ

Table 5.3.: Reconstruction level object selection requirements for jets in the W±Z analysis.

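The JVF definition and its kinematically restricted application can be summarised in a short sketch, assuming a hypothetical track format of (pT, from-primary-vertex) pairs; the official implementation is documented in Ref. [149].

def jet_vertex_fraction(jet_tracks):
    # Scalar pT sum of the jet tracks from the primary vertex divided by
    # the scalar pT sum of all jet tracks.
    total = sum(pt for pt, _ in jet_tracks)
    if total == 0.0:
        return -1.0  # no associated tracks, e.g. forward jets: undefined
    from_pv = sum(pt for pt, is_pv in jet_tracks if is_pv)
    return from_pv / total

def passes_jvf(jet_pt, jet_eta, jvf):
    # Apply the JVF cut only where it is defined (pT < 50 GeV, |eta| < 2.4),
    # following the selection described above; all other jets are accepted.
    if jet_pt < 50.0 and abs(jet_eta) < 2.4:
        return jvf > 0.5
    return True

print(passes_jvf(35.0, 1.0, jet_vertex_fraction([(10.0, True), (3.0, False)])))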

Applied corrections

The transverse momentum of each jet is scaled and smeared to harmonise the simulated performance with the one encountered in data. The Jet/EtMiss group provides an official tool for this task and recent results on the calibration can be found in Ref. [109].

5.1.4. Missing Transverse Momentum

The missing transverse momentum on detector level is obtained via the METMakerTool. All reconstructed electrons, muons, and jets with corrections applied, as well as vertex information, are provided as input to the METMakerTool. In addition, terms representing topoclusters not associated with any reconstructed objects and information on τ-leptons are taken directly from the sample as input. Multiple missing transverse momentum definitions are available, depending on the way the individual contributing objects are calibrated and considered. The definition chosen here is “MetRefFinal”.

5.2. Event Selection

This section outlines the flow of the analysis, detailing the individual steps taken to define the phase spaces and to ensure that all relevant quantities are well measured. Event selection requirements will be given for events on detector level (real data and simulated data after application of a detector simulation) and for events on particle level (simulated data without any detector simulation). The selection on detector level enables the analysis of real data and their comparison to simulations, whereas the particle level selection defines fiducial volumes for measuring cross sections. Some explanation regarding the implemented cuts will be given; the specifics of the optimisation of the phase spaces for the VBS and anomalous quartic gauge coupling (aQGC) measurements, however, will be discussed in the respective dedicated chapters.

5.2.1. Detector Level Event Selection for the Inclusive Phase Space

The inclusive phase space definition selects a relatively clean sample of WZ events without further requirements on jet related properties. In this work it is used to unfold the differential distribution of the jet multiplicity (see Section 9). It also represents the phase space in which the inclusive cross section of WZ production as well as limits on anomalous triple gauge couplings were derived in the analysis this work is embedded in (see Ref. [16]).

Event Cleaning

The first step in the analysis flow is applying correction factors to deal with pile-up and the primary vertex z-distribution. Generating simulated data samples for a given data taking period is a computing-intensive task and spans several weeks. Sample

production usually starts before the actual data is taken, and assumptions have to be made regarding the amount of pile-up that will be encountered. As is to be expected, the distribution of the average number of interactions per bunch crossing µ differs between real and simulated data. Utilising the per-run luminosity information and the µ distribution of the real data, the µ distribution of the simulated data sample is made to match the one encountered in real data. Details on this approach, called pile-up reweighting, can be found in Ref. [150].
Also, the distribution of the position of the primary vertex on the z-axis differs between real and simulated data. The distribution encountered in the simulated data is altered to match the one in real data using an approach similar to the one employed for pile-up reweighting.
Further event cleaning cuts are made to veto events where the electromagnetic or hadronic calorimeter exhibits noisy cells or so-called noise bursts, where a larger portion of the detector gives anomalous readings. Additionally, a cut is made on the core flags variable, which ensures that the event was recorded correctly.
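The core idea of pile-up reweighting can be sketched as follows: per-event weights are built from the ratio of the normalised µ distributions in real and simulated data. The official procedure of Ref. [150] additionally folds in the per-run luminosity; the inputs shown here are hypothetical.

from collections import Counter

def pileup_weights(mu_data, mu_mc):
    # For each mu value present in simulation, the weight is the ratio of
    # the normalised data and simulation frequencies; events with mu values
    # that are rare in real data are down-weighted accordingly.
    n_data, n_mc = float(len(mu_data)), float(len(mu_mc))
    h_data, h_mc = Counter(mu_data), Counter(mu_mc)
    return {mu: (h_data[mu] / n_data) / (h_mc[mu] / n_mc) for mu in h_mc}

# Hypothetical per-event average-interaction counts:
weights = pileup_weights(mu_data=[20, 21, 21, 22], mu_mc=[18, 20, 21, 22])
print(weights)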

Jet and MET cleaning

The jet cleaning step ensures that no badly measured jets are present in the event. For this, all jets above a momentum threshold of pT > 20 GeV and with a minimum distance to any selected lepton of ∆R > 0.3 are considered. The event is discarded if any of these jets has the quality “isBadLooseMinus”, as indicated in the jet information obtained from the input D3PD.

Good Run List

The provided samples of real data contain all recorded events for a given run. However, there may be data periods (measured in lumi blocks) where the detector does not meet the requirements of a given analysis. Each analysis chooses a GoodRunList (GRL) consisting of a list of run numbers and lumi blocks where the detector performance meets the demanded needs. Events that are not in a lumi block denoted in the list are vetoed and not considered for further analysis. The GRL used throughout this work is: data12_8TeV.periodAllYear_DetStatus-v61-pro14-02_DQDefects-00-01-00_PHYS_StandardGRL_All_Good.

Trigger

The ATLAS trigger system is described in Section 3.3.6. The triggers used for event selection look for either muons or electrons above a certain transverse momentum threshold. For muons it is required that either the EF_mu24i_tight or the EF_mu36_tight trigger has fired. The EF_mu24i_tight trigger requires the muon to have pT > 24 GeV and imposes mild isolation and tight quality criteria,2 whereas the EF_mu36_tight trigger operates at a higher threshold of pT > 36 GeV in conjunction with a tight quality criterion.

2 The requirements besides the transverse momentum are imposed to avoid event rates that exceed the capabilities of the readout electronics or the storage system. Triggers without these additional requirements would have a much higher transverse momentum threshold, which would limit the number of leptons suitable for W and Z boson reconstruction.


The former trigger ensures that unprescaled3 events with muons having a pT down to 24 GeV are available for the analysis, while the latter trigger recovers inefficiencies of the former. Results on the muon trigger performance can be found in Ref. [151]. The same strategy is employed for electrons. Here, EF_e24vhi_medium1 is the lower-threshold trigger, requiring pT > 24 GeV and imposing isolation and medium++ quality requirements. Inefficiencies at higher transverse momenta are recovered by the EF_e60_medium1 trigger, which demands pT > 60 GeV and medium++ quality.

Primary Vertex

It has to be ensured that the primary vertex is actually caused by a proton-proton interaction and is not due to a stray particle traversing the detector. Such a particle may cause a primary vertex with two associated tracks. This background is suppressed by requiring that at least three tracks stem from the primary vertex. The primary vertex is defined by extrapolating all tracks in the event inwards. Regions where several extrapolations are in close vicinity may indicate the existence of an interaction vertex, and thus vertex candidates are formed. Out of all these candidates, the one with the highest sum of transverse momenta is chosen to be the primary vertex.

ZZ-Veto

Since the ZZ background is one of the dominant backgrounds in the analysis, a dedicated cut is introduced to suppress its contribution. The aim of the cut is to detect a fourth lepton in the event and, if present, veto the event. A set of leptons is defined with all object cuts of the preselection applied but with a modified transverse momentum requirement of pT > 7 GeV. If the size of this set is equal to or larger than four, the event is discarded.

Boson properties

The leptons satisfying the baseline selection are handed to an algorithm determining which lepton stems from which boson. Two lists of leptons are generated, one for leptons coming from Z bosons and one for leptons coming from W bosons, with the object selection requirements detailed in Tables 5.1 and 5.2. Using these lists, possible WZ candidates are built, with Z boson candidates being made from same flavour opposite sign pairs. The WZ candidate with a Z mass closest to mZ = 91.1876 GeV is chosen as the best candidate and used for further consideration. If no such candidate can be found, the event is discarded.
After the successful assignment, a requirement on the invariant mass of the Z boson candidate is imposed. Events are only kept if |mcand − mZ| < 10 GeV, thus suppressing backgrounds without an intermediary Z boson, such as tt̄, at the expense of the W±γ∗ contribution to the signal.
In addition, a cut on the transverse mass of the W± boson of mT,W > 30 GeV is implemented. The transverse mass is defined as

3 Prescales denote the fact that not each triggered event is recorded but only a fraction of them, limiting the available statistics.


m_{T,W} = \sqrt{2\, p_T^{l}\, p_T^{\mathrm{miss}} \left(1 - \cos\left(\Delta\phi(\phi^{l}, \phi^{\mathrm{miss}})\right)\right)}   (5.1)

with the superscript l denoting the lepton assigned to the W boson and “miss” denoting the missing transverse momentum vector.
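Equation (5.1) translates directly into code; the kinematic values in GeV below are hypothetical.

import math

def transverse_mass_w(pt_lep, phi_lep, met, phi_met):
    # Transverse mass of the W boson candidate following Equation (5.1).
    dphi = phi_lep - phi_met
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

print(transverse_mass_w(32.0, 0.8, 45.0, -2.1) > 30.0)  # the mT,W cut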

Triggermatch

It is ensured that the leptons associated to the W± and Z bosons have caused the trigger to fire by comparing the direction of flight of said leptons with the regions of interest (ROIs) of the used trigger algorithms. If no match is found, the event is discarded. This requirement ensures that the event was triggered by a well measured lepton that passes the object selection criteria.

5.2.2. Event Selection for the VBS Phase Space

Two additional cuts are added to the requirements imposed by the inclusive phase space definition.

Number of Jets

It is required that at least two jets with pT > 30 GeV are present in the event. Tagging jet related observables become available after this cut providing the means for the subsequent criteria applied for the VBS phase space.

Invariant Mass of the Tagging Jets

The two jets with the highest transverse momentum are chosen as the tagging jets. The invariant mass of the tagging jets is one of the variables with the highest separation power for suppressing W±Zjj-QCD while retaining W±Zjj-EW. An initial optimisation study based only on statistical uncertainties showed an ideal cut value of mjj > 500 GeV. Later studies incorporating both systematic and statistical uncertainties yielded a higher ideal cut value. However, it was observed that the initial requirement proved most beneficial for the further optimisation in the aQGC phase space. Thus, it is still required that mjj > 500 GeV. More details on the optimisation can be found in Section 8.1.

5.2.3. Event Selection for the aQGC Phase Space

The aQGC phase space is based on the VBS phase space, optimising the impact of the contributions added by aQGCs. Its definition was the subject of a dedicated study optimising the performance of the limit setting, which is described in Section 10.2. The resulting event selection criteria are shown here.

|∆φWZ|

This variable is calculated from the angular information of the observed charged leptons and the missing transverse momentum vector. The four-vectors of the W and Z boson are reconstructed using the association of the leptons to the intermediary massive gauge bosons. The absolute value of the difference in φ of these two four-vectors is then used as the selection criterion. Events where |∆φWZ| > 2.0 are accepted.

Σ|pT|

The final requirement of the selection is implemented using the variable Σ|pT|. It represents the scalar sum of the transverse momenta of the three charged leptons associated to the Z and W boson. Only events where Σ|pT| > 250 GeV are kept.


Variable   baseline   Z-candidate   W-candidate
Status     1          1             1
Barcode    < 100000   < 100000      < 100000
pT         -          > 15 GeV      > 20 GeV
|η|        -          < 2.5         < 2.5

Table 5.4.: Particle level object selection requirements for leptons for the different stages of the W±Z analysis.

Level            Default   Sherpa   PowHeg   MC@NLO       Whizard
Matrix Element   3         11       2 || 3   123 || 124   23
Final State      1         1        1        1            1

Table 5.5.: Status codes of different event generators used to denote particles at different stages in the event generation. The default denotes the commonly agreed convention of the Les Houches Accord [152]. The values for the individual generators are taken from their respective manuals.


5.3. Object Selection on Particle Level

On particle level the object selection is simplified by the fact that generator based information is available, rendering particle identification trivial. The selection criteria regarding the kinematics of the leptons, listed in Table 5.4, are modelled after the detector level selection.

5.3.1. Lepton Definition

The criteria applied to select leptons on particle level are detailed in Table 5.4.
Status: As described in Section 4.3.3, particles present at different steps in the event generation have different status codes. Table 5.5 shows the status codes used in the generators relevant to this analysis. Stable final state particles are always denoted by a 1, and it is the common recommendation to only use these particles.
Barcode: Each particle in an event has a unique barcode. A barcode > 200000 indicates

that the particle in question was added to the event record by GEANT4 during the simulation of bremsstrahlung. Barcodes > 100000 are reserved for SUSY particles. Therefore, a requirement is imposed on the generated particles to avoid using particles added during the detector simulation or non Standard Model particles.
PDG ID: The PDG ID encodes the type of a given particle, e.g., electrons are denoted with an 11, positrons with a -11. Comprehensive information on the numbering scheme can be found in Ref. [39]. In this analysis only electrons (11), positrons (-11), muons (13), and anti-muons (-13) are considered as leptons.
pT: The requirements on the transverse momentum are the same as those for the detector level selection. For the preselection no requirement is imposed, whereas leptons stemming from a Z (W) boson have to satisfy pT > 15 GeV (pT > 20 GeV). The effect of bremsstrahlung on the transverse momentum of each lepton is corrected for by adding the four-vectors of all final state photons4 with ∆R(l, γ) < 0.1 to the four-vector of the lepton in question. In cases where two leptons are in the vicinity of a given photon, the lepton closest to the photon is chosen. This procedure is referred to as “dressing” (a sketch of the dressing procedure follows this list).
η: The requirements on the pseudo-rapidity are the same as those for the detector level selection, with the difference that the crack region is not excluded for electrons. For the preselection no requirement is imposed, whereas leptons assigned to a boson have to satisfy |η| < 2.5.
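The dressing procedure can be sketched as follows; the dictionary based four-vector format is hypothetical, and the photon quality requirements of footnote 4 are assumed to have been applied beforehand.

import math

def delta_r(a, b):
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(a["eta"] - b["eta"], dphi)

def dress_leptons(leptons, photons, cone=0.1):
    # Add each final state photon's four-momentum (px, py, pz, E) to the
    # closest lepton if it lies within the dressing cone of Delta R < 0.1.
    dressed = [dict(l) for l in leptons]
    for gamma in photons:
        nearest = min(dressed, key=lambda l: delta_r(l, gamma))
        if delta_r(nearest, gamma) < cone:
            nearest["p4"] = tuple(a + b for a, b in zip(nearest["p4"], gamma["p4"]))
    return dressed

lepton = {"eta": 0.50, "phi": 1.00, "p4": (10.0, 15.0, 5.0, 19.3)}
photon = {"eta": 0.55, "phi": 1.02, "p4": (0.5, 0.8, 0.3, 1.0)}
print(dress_leptons([lepton], [photon])[0]["p4"])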

5.3.2. Jet Definition

Jets are clustered using the FastJet package [153], implementing the anti-kt algorithm described in Section 3.4.3. Two collections are available: AntiKt4TruthJets and AntiKt4TruthJetsWZ. The former uses all stable particles for clustering, including the leptons stemming from the decays of the W and Z bosons, whereas the latter omits these particles. In this analysis, AntiKt4TruthJets is used because a feature of Sherpa prevented the AntiKt4TruthJetsWZ collection from being filled correctly.5

Only jets satisfying pT > 30 GeV and |η| < 4.5 are considered for further use. In the case of the jet multiplicity unfolding, the requirement on the transverse momentum is lowered to 25 GeV.

5.3.3. Neutrino Definition

Neutrinos are selected similarly to the leptons. The selected neutrinos have to have status = 1, a barcode < 100000, and a PDG ID indicating that they are either an electron-neutrino (12), electron-antineutrino (-12), muon-neutrino (14), or muon-antineutrino (-14). The neutrino with the highest transverse momentum is chosen to represent the missing transverse momentum.

4 The photons used in this approach have to have status = 1, a barcode < 100000, PDG ID = 22, and have to stem from either an electron, positron, muon, anti-muon, W boson, or Z boson.
5 The problem originates in the MCTruthClassifier code that decides which particles are considered for the jet finding. Sherpa does not provide a classical genealogy tree like other generators; instead, the matrix element appears as a cloud with the final state matrix element particles originating from it. This different event record is not handled correctly by the MCTruthClassifier, causing incorrect input to the jet finding algorithm.


5.4. Event Selection on Particle Level

5.4.1. Event Selection for the Inclusive Phase Space

The inclusive phase space on particle level is designated to be the target phase space for the unfolding of the differential jet multiplicity distribution. It mimics the selection on detector level.

Matrix Element Tau Veto

Only electrons, positrons, muons, and anti-muons are allowed as charged leptons in the final state. Thus, a τ-veto on matrix element objects is implemented, removing events with τ-leptons in the final state. To this end, truth particles with a status code corresponding to the matrix element (see Table 5.5) and a PDG ID of either 15 (τ-lepton) or -15 (anti-τ-lepton) are selected. The event is discarded if such particles are found. This criterion is applied on the Sherpa signal sample, where only electrons, positrons, muons, or anti-muons are allowed as charged leptons in the final state.

Event Reweighting

The pile-up and primary vertex z-coordinate distributions are corrected in this step using the same method as on detector level.

Matrix Element Matching

In this step, final state truth particles (status = 1) are matched to matrix element truth particles via a ∆R criterion. Matrix element truth electrons, positrons, muons, and anti-muons are selected by checking their status code and PDG ID. The selected truth particles are ordered by descending transverse momentum and are matched to the three matrix element truth particles highest in pT. The best matching truth particle with respect to a matrix element truth particle is found by minimising the ∆R between them. Only matched final state truth particles are considered further in the selection. No maximum ∆R requirement is imposed. Therefore, this step determines which leptons are to be considered further rather than discarding events.
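A minimal sketch of this matching step is given below, assuming a hypothetical dictionary based particle format and a one-to-one matching; as described above, no maximum ∆R is imposed.

import math

def delta_r(a, b):
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(a["eta"] - b["eta"], dphi)

def match_to_matrix_element(final_state_leptons, me_leptons):
    # Consider only the three matrix element leptons highest in pT.
    hardest_me = sorted(me_leptons, key=lambda p: p["pt"], reverse=True)[:3]
    matched, remaining = [], list(final_state_leptons)
    for me in hardest_me:
        if not remaining:
            break
        # Best match: the final state lepton minimising Delta R; since no
        # maximum Delta R is imposed, a match is always found.
        best = min(remaining, key=lambda l: delta_r(l, me))
        matched.append(best)
        remaining.remove(best)  # each lepton is matched at most once
    return matched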

WZ Assignment

On particle level the Z-mass-only algorithm employed on detector level cannot be used, as it introduces a channel dependent bias. For the eµµ and µµe channels the acceptance of the total inclusive selection is about 40 %, whereas for the eee and µµµ channels much lower acceptances are seen due to an incorrect treatment of events containing a γ∗ rather than an on-shell Z boson. This issue leads to a skewed mass distribution for the W boson and large imbalances in the event yields between the channels. The general strategy in this case is to use the genealogy tree of the event record to determine the correct association of the final state leptons to the W and Z boson.


However, Sherpa does not allow for this strategy as it represents the matrix element as one blob with the matrix element leptons originating from it. This motivated the introduction of a specialised algorithm based on a probability calculation. For each WZ candidate, consisting of a same flavour opposite sign pair and the remaining charged lepton and neutrino pair, the probability

P_\mathrm{tot} = P_Z \times P_W = \frac{1}{\left(m^2_{l_1 l_2} - m^2_Z\right)^2 + m^2_Z \Gamma^2_Z} \times \frac{1}{\left(m^2_{l_3 \nu} - m^2_W\right)^2 + m^2_W \Gamma^2_W}   (5.2)

is computed using the constants

mZ = 91.19 GeV, ΓZ = 2.5 GeV,

mW = 80.4 GeV, ΓW = 2.14 GeV.

In case the neutrino flavour is not compatible with the charged lepton flavour of the W candidate, PW is set to zero, effectively vetoing the candidate.

The candidate which maximises Ptot is chosen and determines the lepton-boson assignment for the event at hand. If no suitable candidate can be found, the event is discarded.
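A compact sketch of this probability based assignment is given below; the candidate representation is hypothetical, and the Breit-Wigner-like weight corresponds to Equation (5.2).

M_Z, GAMMA_Z = 91.19, 2.5   # GeV, the constants quoted above
M_W, GAMMA_W = 80.4, 2.14

def breit_wigner_weight(m2, mass, width):
    # The weight entering P_Z and P_W in Equation (5.2), evaluated on the
    # squared invariant mass m2 of the lepton pair in question.
    return 1.0 / ((m2 - mass**2) ** 2 + (mass * width) ** 2)

def best_wz_candidate(candidates):
    # Each candidate carries the squared invariant mass of the same flavour
    # opposite sign pair (m2_ll), of the lepton-neutrino pair (m2_lnu), and
    # a flavour compatibility flag (hypothetical representation).
    def p_tot(c):
        if not c["flavour_ok"]:
            return 0.0  # incompatible neutrino flavour vetoes the candidate
        return (breit_wigner_weight(c["m2_ll"], M_Z, GAMMA_Z)
                * breit_wigner_weight(c["m2_lnu"], M_W, GAMMA_W))
    best = max(candidates, key=p_tot, default=None)
    # If no suitable candidate exists, the event is discarded (None).
    return best if best is not None and p_tot(best) > 0.0 else None

candidate = {"m2_ll": 91.0**2, "m2_lnu": 80.0**2, "flavour_ok": True}
print(best_wz_candidate([candidate]) is candidate)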

Jet Overlap Removal

In this step all selected truth jets with a ∆R < 0.3 to any of the three selected charged leptons from the W and Z boson decays are removed.

WZ system cuts

Additional cuts on the boson and lepton kinematics are imposed after the assignment of the leptons to the W and Z boson. The invariant mass of the Z candidate has to satisfy |mll − mZ| < 10 GeV with mZ = 91.1876 GeV. Likewise, the transverse mass of the W boson has to satisfy mT,W > 30 GeV.

The leptons associated with the Z (W) boson are required to have pT > 15 GeV (pT > 20 GeV) and |η| < 2.5. The detector level isolation requirements are mimicked by requiring that the leptons from the Z boson decay are separated by ∆R > 0.2 and that both have a minimum distance of ∆R > 0.3 to the lepton originating from the W boson decay.

5.4.2. Event Selection for the VBS Phase Space

The VBS phase space on particle level is used as the fiducial phase space for the measurement of the W±Zjj-EW cross section.

Number of Jets

It is required that at least two jets with pT > 30 GeV are present in the event, thus enabling a proper definition of the tagging jets. Differential distributions for the invariant

mass of the tagging jets and the absolute separation in pseudo-rapidity of the tagging jets are unfolded to the phase space defined by this requirement.

Invariant Mass of the Tagging Jets

The two jets with the highest transverse momentum are chosen as the tagging jets. The chosen cut is mjj > 500 GeV, with details on the optimisation of the requirement given in Section 8.1.

5.4.3. Event Selection for the aQGC Phase Space

The aQGC phase space is based on the VBS phase space, optimising the impact of the contributions added by aQGCs. Its definition was the subject of a dedicated study optimising the performance of the limit setting, which is described in Section 10.2. On particle level this phase space is used to obtain a map of cross sections in dependence on the anomalous gauge couplings. This map is used to derive limits on said aQGCs.

|∆φWZ|

This variable is calculated from the angular information of the W and Z candidates. The four-vectors of the W and Z boson are reconstructed using the association of the leptons to the intermediary massive gauge bosons. The absolute value of the difference in φ of these two four-vectors is then used as the selection criterion. Events with |∆φWZ| > 2.0 are accepted.

Σ|pT|

The final requirement of the selection is implemented using the variable Σ|pT|. It represents the scalar sum of the transverse momenta of the three charged leptons associated to the Z and W boson. Only events where Σ|pT| > 250 GeV are kept.


6. Background Estimation

In a particle physics analysis the process of interest is defined as the signal, and all other processes are categorised as background. The selection strategy is then optimised towards the expected final state that is chosen to study the signal. In general, a high rejection rate for backgrounds is desirable, resulting in a clean sample dominated by signal. However, it may not be possible to attain a clean sample and retain sufficient amounts of real data at the same time. Therefore, the eventually selected sample of events will contain signal events as well as background events. Understanding the sources and reliably estimating the amount of background is crucial for a robust signal extraction.
The overall background can be divided into two types: prompt and non-prompt backgrounds. In prompt backgrounds, the observed final state only contains objects from the hard scattering. These processes may enter the selection because the process in question simply shares the same final state (e.g. tZj) or because one of the final state particles escapes detection and the event appears signal-like (e.g. ZZ). The prompt backgrounds for W±Zjj considered in this work are:1
• VVV, where two vector bosons mimic the WZ system and the third boson decays to hadrons, providing two jets,
• tt̄ + W/Z, where the additional boson is a Z boson and two W bosons and two heavy flavour jets originate from the di-top system,
• double parton scattering (DPS), where a W boson and a Z boson are produced which decay to leptons and additional jets are introduced through real emissions,
• tZj, due to the top quark decaying to a W boson and a heavy flavour jet, and
• ZZ, due to the failed identification of one of the four leptons originating from the Z boson decays in diagrams similar to W±Zjj-QCD.
Suppressing backgrounds having the same final state as the signal is only possible by choosing a kinematic region that is enriched in signal events. In this analysis, W±Zjj-QCD and W±Zjj-EW share the same final state, but W±Zjj-EW tends to exhibit higher invariant masses of the tagging jets. Selecting events with a high invariant mass of the tagging jets enhances the relative contribution of W±Zjj-EW in the selected phase space. Backgrounds where final state particles escape detection may be suppressed by employing vetoing strategies that enhance the acceptance and efficiency of the detector. The ZZ background is treated this way by selecting baseline leptons with a lowered transverse momentum cut, recovering leptons from ZZ that fail the nominal baseline lepton selection. Through this increased acceptance more ZZ events can be identified and rejected, increasing the purity of the selected sample.
Purely data driven estimation techniques cannot be applied for these backgrounds due

1The stated mechanisms only reflect the way these processes mainly contribute but are not intended to be exhaustive.

to the statistical limitations caused by the processes' low cross sections. A commonly used approach is to use simulated data to estimate the contributions from these backgrounds and to check the validity of the estimate in background enriched control regions. If necessary, a normalisation factor may be derived in such a control region, harmonising the simulated prediction and the real data. The description of the used samples can be found in Section 4.4.3.
Non-prompt backgrounds contribute to the selection by mimicking the signal final state through leptons from secondary sources, e.g. the misidentification of physical objects, photon conversions, or hadron decays to leptons inside jets. The non-prompt backgrounds for W±Zjj considered in this work are:
• Z+jets, tt̄, and WW, where a jet is either misidentified as a lepton or a lepton originates from a hadron decay inside the jet,
• Zγ, where an additional lepton stems from a photon conversion inside the detector,
• W+jets and single top, where two jets are either misidentified or give rise to leptons from hadron decays,
• W±γ, where an additional lepton is caused by a photon conversion inside the detector and a jet is either misidentified or gives rise to a lepton from a hadron decay.
Here, the application of data driven background estimation techniques is feasible as the dominant sources (Z+jets and tt̄) exhibit sufficient statistics. Multiple techniques have been evaluated in the ATLAS WZ analysis, with the matrix method being the most sophisticated. The matrix method was adapted to provide the background estimate for this work and will be described in the following.

6.1. The Matrix Method

Similar to the categorisation of backgrounds into prompt and non-prompt, the source from which an individual lepton originates may be called either “real” or “fake”. Real leptons stem from the hard scattering, whereas fake leptons are either misidentified jets or come from photon conversions or hadron decays inside jets. In the following, the nominal object selection criteria will be labelled as “tight”. The number of events in the nominal event selection is therefore denoted as NTTT. The first index denotes the charged lepton from the W boson, the second index the leading lepton from the Z boson decay, and the third index its subleading partner. The contributions to NTTT can be categorised as follows.

• NRRR is the event yield coming from sources with three prompt leptons (either signal or prompt background).

• NFRR is the number of events where the lepton associated with the W boson is in reality a fake lepton. The dominant background sources are Z+jets, tt̄, WW, and Zγ.

• NRFR and NRRF denote the contributions where one lepton associated with the Z boson is a fake lepton. Main contributions stem from Z+jets and Zγ, where a mis-pairing occurs when selecting the lepton pair associated with the Z boson. Additional contributions are introduced by tt̄ and WW events.


• NFFR, NFRF, and NRFF denote contributions where two fake leptons are present. Contributing processes are W+jets, W±γ, Z+jets, and tt̄.

• NFFF is the event yield coming from sources with three fake leptons, e.g. pure QCD processes. However, the contribution from this event category is negligibly small and thus not taken into account.
Efficiency factors e (f), reflecting the probability of a real (fake) lepton being classified as “tight”, are introduced to describe the relation between the mentioned categories and NTTT. Using these, the following relation can be written down:

N_\mathrm{Fake} = N_\mathrm{TTT} - e_1 e_2 e_3 N_\mathrm{RRR} = f_1 e_2 e_3 N_\mathrm{FRR} + e_1 f_2 e_3 N_\mathrm{RFR} + e_1 e_2 f_3 N_\mathrm{RRF} + f_1 f_2 e_3 N_\mathrm{FFR} + f_1 e_2 f_3 N_\mathrm{FRF} + e_1 f_2 f_3 N_\mathrm{RFF}.   (6.1)

Estimating the fake contribution NFake by subtracting the simulated contribution e1e2e3NRRR from NTTT is not feasible because NRRR contains the signal contribution to be measured, leading to circular logic. Therefore, the contributions from the individual fake afflicted yields have to be taken. The efficiencies may not be well described in simulated data, motivating the use of data driven techniques to estimate them. In order to obtain a data driven estimate, a second selection category called “loose” is introduced. Loose leptons are baseline leptons which fail certain identification or isolation cuts. This splits the sample of baseline leptons into two orthogonal parts: “tight” leptons and “loose” leptons. In this analysis, electrons are labelled “loose” if they fail the quality or isolation requirement with respect to their assigned boson. Loose muons fail the isolation requirements imposed on tight muons with respect to their assigned boson. Consequently, the probability that a real (fake) lepton is categorised as “loose” is ē = 1 − e (f̄ = 1 − f). Using the same convention introduced for the fake and real leptons one can define NLTT, NTLT, NTTL, NLLT, NLTL, and NTLL. The category NLLL is also possible but omitted due to mainly being linked to NFFF. Thus, a 7×7 matrix equation can be written down:

\begin{pmatrix} N_\mathrm{TTT} \\ N_\mathrm{TTL} \\ N_\mathrm{TLT} \\ N_\mathrm{LTT} \\ N_\mathrm{TLL} \\ N_\mathrm{LTL} \\ N_\mathrm{LLT} \end{pmatrix} =
\begin{pmatrix}
e_1 e_2 e_3 & e_1 e_2 f_3 & e_1 f_2 e_3 & f_1 e_2 e_3 & e_1 f_2 f_3 & f_1 e_2 f_3 & f_1 f_2 e_3 \\
e_1 e_2 \bar{e}_3 & e_1 e_2 \bar{f}_3 & e_1 f_2 \bar{e}_3 & f_1 e_2 \bar{e}_3 & e_1 f_2 \bar{f}_3 & f_1 e_2 \bar{f}_3 & f_1 f_2 \bar{e}_3 \\
e_1 \bar{e}_2 e_3 & e_1 \bar{e}_2 f_3 & e_1 \bar{f}_2 e_3 & f_1 \bar{e}_2 e_3 & e_1 \bar{f}_2 f_3 & f_1 \bar{e}_2 f_3 & f_1 \bar{f}_2 e_3 \\
\bar{e}_1 e_2 e_3 & \bar{e}_1 e_2 f_3 & \bar{e}_1 f_2 e_3 & \bar{f}_1 e_2 e_3 & \bar{e}_1 f_2 f_3 & \bar{f}_1 e_2 f_3 & \bar{f}_1 f_2 e_3 \\
e_1 \bar{e}_2 \bar{e}_3 & e_1 \bar{e}_2 \bar{f}_3 & e_1 \bar{f}_2 \bar{e}_3 & f_1 \bar{e}_2 \bar{e}_3 & e_1 \bar{f}_2 \bar{f}_3 & f_1 \bar{e}_2 \bar{f}_3 & f_1 \bar{f}_2 \bar{e}_3 \\
\bar{e}_1 e_2 \bar{e}_3 & \bar{e}_1 e_2 \bar{f}_3 & \bar{e}_1 f_2 \bar{e}_3 & \bar{f}_1 e_2 \bar{e}_3 & \bar{e}_1 f_2 \bar{f}_3 & \bar{f}_1 e_2 \bar{f}_3 & \bar{f}_1 f_2 \bar{e}_3 \\
\bar{e}_1 \bar{e}_2 e_3 & \bar{e}_1 \bar{e}_2 f_3 & \bar{e}_1 \bar{f}_2 e_3 & \bar{f}_1 \bar{e}_2 e_3 & \bar{e}_1 \bar{f}_2 f_3 & \bar{f}_1 \bar{e}_2 f_3 & \bar{f}_1 \bar{f}_2 e_3
\end{pmatrix}
\begin{pmatrix} N_\mathrm{RRR} \\ N_\mathrm{RRF} \\ N_\mathrm{RFR} \\ N_\mathrm{FRR} \\ N_\mathrm{RFF} \\ N_\mathrm{FRF} \\ N_\mathrm{FFR} \end{pmatrix}   (6.2)

By calculating the inner product of the vectors on both sides of the equation with the vector

\begin{pmatrix} 1 & -\dfrac{f_3}{\bar{f}_3} & -\dfrac{f_2}{\bar{f}_2} & -\dfrac{f_1}{\bar{f}_1} & \dfrac{f_2 f_3}{\bar{f}_2 \bar{f}_3} & \dfrac{f_1 f_3}{\bar{f}_1 \bar{f}_3} & \dfrac{f_1 f_2}{\bar{f}_1 \bar{f}_2} \end{pmatrix}   (6.3)

the expression


N_\mathrm{TTT} - e_1 e_2 e_3 N_\mathrm{RRR} = \frac{f_1}{\bar{f}_1}\left(N_\mathrm{LTT} - \bar{e}_1 e_2 e_3 N_\mathrm{RRR}\right) + \frac{f_2}{\bar{f}_2}\left(N_\mathrm{TLT} - e_1 \bar{e}_2 e_3 N_\mathrm{RRR}\right) + \frac{f_3}{\bar{f}_3}\left(N_\mathrm{TTL} - e_1 e_2 \bar{e}_3 N_\mathrm{RRR}\right) - \frac{f_2 f_3}{\bar{f}_2 \bar{f}_3}\left(N_\mathrm{TLL} - e_1 \bar{e}_2 \bar{e}_3 N_\mathrm{RRR}\right) - \frac{f_1 f_3}{\bar{f}_1 \bar{f}_3}\left(N_\mathrm{LTL} - \bar{e}_1 e_2 \bar{e}_3 N_\mathrm{RRR}\right) - \frac{f_1 f_2}{\bar{f}_1 \bar{f}_2}\left(N_\mathrm{LLT} - \bar{e}_1 \bar{e}_2 e_3 N_\mathrm{RRR}\right)   (6.4)

is obtained. As can be seen, the individual contributions to the total fake estimate are obtained by performing the nominal event selection with either one or two loose leptons, subtracting the contributions from prompt leptons, and multiplying the result by a fake ratio F_i = f_i/\bar{f}_i = f_i/(1 - f_i). The contributions from prompt leptons are estimated using simulated data of the signal and prompt backgrounds. Denoting these contributions as, e.g., N^\mathrm{prompt}_\mathrm{TTL}, the final formula can be written as:

N_\mathrm{fake} = \left(N_\mathrm{LTT} - N^\mathrm{prompt}_\mathrm{LTT}\right) F_1 + \left(N_\mathrm{TLT} - N^\mathrm{prompt}_\mathrm{TLT}\right) F_2 + \left(N_\mathrm{TTL} - N^\mathrm{prompt}_\mathrm{TTL}\right) F_3 - \left(N_\mathrm{LLT} - N^\mathrm{prompt}_\mathrm{LLT}\right) F_1 F_2 - \left(N_\mathrm{LTL} - N^\mathrm{prompt}_\mathrm{LTL}\right) F_1 F_3 - \left(N_\mathrm{TLL} - N^\mathrm{prompt}_\mathrm{TLL}\right) F_2 F_3.   (6.5)

Using this formula, the data driven estimate for the non-prompt backgrounds is obtained. The fake ratios are determined in dependence of the transverse momentum of the leptons to account for differences in the transverse momentum distribution in the signal region and the control regions used to determine the fake ratios.
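Equation (6.5) reduces to a few lines of code once the category yields and the simulated prompt contributions are known. The following Python sketch uses hypothetical yields and fake ratios.

def fake_estimate(yields, prompt, F1, F2, F3):
    # Data-driven non-prompt estimate following Equation (6.5). `yields`
    # maps each loose/tight category (e.g. "LTT") to the observed yield,
    # `prompt` to the simulated prompt contribution in that category.
    def corrected(cat):
        return yields[cat] - prompt[cat]
    return (corrected("LTT") * F1 + corrected("TLT") * F2
            + corrected("TTL") * F3
            - corrected("LLT") * F1 * F2
            - corrected("LTL") * F1 * F3
            - corrected("TLL") * F2 * F3)

yields = {"LTT": 40.0, "TLT": 25.0, "TTL": 30.0,
          "LLT": 5.0, "LTL": 4.0, "TLL": 6.0}
prompt = {"LTT": 8.0, "TLT": 5.0, "TTL": 6.0,
          "LLT": 1.0, "LTL": 1.0, "TLL": 1.0}
print(fake_estimate(yields, prompt, F1=0.02, F2=0.1, F3=0.1))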

6.1.1. Fake Ratio Estimation

The fake efficiencies fi determining the fake ratios depend on the type of the non-prompt background (photon conversion, misidentification, heavy flavour decay) and on the type (electron, muon) of the non-prompt lepton itself. Dedicated fake ratios F1 are determined for the leptons associated to the W boson due to the different requirements for being labelled “tight”. The fake ratios F2 and F3 are taken to be equal, given that both describe fake leptons from Z bosons. Dedicated control regions are defined to determine F1 and F2 separately to avoid statistical correlations.

Determination of Fake Ratio for Leptons from Z Bosons

The strategy for determining F2 is to select a good Z boson candidate using two tight leptons and to scan for an additional baseline lepton. The dominant process contributing to the selected events is Z+jets; thus the additional lepton is assumed to be a fake lepton. The additional lepton is tested to see whether it satisfies the tight object selection requirements imposed on leptons coming from Z bosons used in the analysis. Dividing the event yield for all events with a good Z boson and an additional tight lepton by the event yield for all events with a good Z boson candidate returns the fake efficiency needed for calculating the fake ratio F2. This calculation is affected by

contributions from signal and prompt backgrounds, which are subtracted accordingly.
The fake efficiency for muons is measured by selecting a good Z → ee event using the analysis' tight object selection criteria and requiring that the Z boson candidate's mass lies in a 15 GeV window around the Z boson PDG mass. To enhance the contributions from fake muons, the d0 significance requirement applied in the baseline selection is reverted.
For electrons, a good Z → µµ event is selected by finding a Z boson candidate satisfying 81 GeV < mZ < 106 GeV. Prompt contributions are suppressed by inverting the cut on the transverse mass of the W boson (M_T^W < 30 GeV) and requiring E_T^miss < 30 GeV. In addition, the contributions from fake electrons are enhanced by omitting the d0 cut applied in the baseline selection.
The contributions from signal and prompt backgrounds are subtracted prior to measuring the fake efficiency and amount to less than 5 %. In both cases the leptons forming the Z boson and the additional lepton are required to be of different type to avoid complexities when selecting the leptons for the Z boson candidate.

Determination of Fake Ratio for Leptons from W Bosons

The fake ratio F1 associated with the object selection criteria for the leptons associated with the W boson is measured in a W+jets control region. A region with a good W boson candidate is selected using the tight object selection criteria required for leptons associated with a W boson. Further cuts are applied: E_T^miss > 25 GeV and M_T^W > 40 GeV. Additionally, it is required that only one additional baseline lepton, whose charge is the same as that of the lepton of the W boson candidate, is present. The same charge requirement is necessary to minimise contributions from Z+jets, which would introduce unwanted correlations between the derived fake ratios.
For electrons the tight control region exhibits only a small contamination of ≈ 11 % from prompt contributions. In the case of muons the tight control region is contaminated with ≈ 21 % prompt leptons. These contributions are subtracted before determining the fake ratio.

Fake Ratio Results

The observed fake ratios and their uncertainties can be found in Table 6.1. They were derived in the course of the WZ production measurement in the inclusive phase space and implemented in the analysis code for this work to provide the estimate for the non-prompt backgrounds in the VBS and aQGC phase spaces. The validity of the implementation was checked with the original developers, and perfect agreement with respect to the estimated non-prompt background yields was found. Slight differences between the yields reported here and in the WZ analysis originate from the different simulated data samples used to subtract the prompt contributions. This is explained by the use of Sherpa instead of PowHeg to model the W±Zjj signal processes.

Systematic Uncertainties

Multiple sources of potential systematic effects are considered.


pT range         [15-35] GeV      [35-50] GeV     [50-∞] GeV
e from W boson   0.018 ± 0.0014   0.009 ± 0.002   0.023 ± 0.003
µ from W boson   0.063 ± 0.005    0.061 ± 0.010   0.136 ± 0.023
e from Z boson   0.099 ± 0.005    0.152 ± 0.024   0.212 ± 0.035
µ from Z boson   0.200 ± 0.013    0.095 ± 0.026   0.134 ± 0.048

Table 6.1.: Fake ratios used to estimate the non-prompt backgrounds. The fake ratios are determined in dependence of the transverse momentum of the fake lepton, its type, and the boson it is associated to. The uncertainties are calculated as described in Section 6.1.1 and reflect the experimental uncertainties of the matrix method, omitting the uncertainty on the choice of the data driven method.

pT range         [15-35] GeV   [35-50] GeV   [50-∞] GeV
e from W boson   0.0175        0.045         0.019

pT range         [15-45] GeV   [45-∞] GeV
µ from W boson   0.067         0.136

Table 6.2.: Alternative fake ratios used for the systematic uncertainty on the choice of the data driven method.

One systematic effect is the potential bias introduced through the composition of the non-prompt background. Non-prompt leptons may originate from photon conversions, misidentification of jets, or hadron decays inside jets. The fake ratios associated with each of these origins are not necessarily the same. Therefore, differences in the composition of the non-prompt background between the control region and the signal region may introduce a bias in the background estimate. In order to study the composition, the simulated non-prompt background predictions in both the control region and the signal region were studied and compared. Reconstruction level objects were matched to particle level objects via a ∆R requirement. Subsequently, the MCTruthClassifier was used to gather information on the origin of the particle level object. Overall, the composition of both regions is fairly similar and a possible bias is small.
An attempt was made to find control regions that are pure with respect to a certain non-prompt background source. By measuring these pure fake ratios, the overall estimate in the signal region would be expressible as a linear combination using the background composition of the signal region as an input. However, the control regions found were not entirely pure, and the extrapolation of the fake ratios measured in the specialised control regions to the signal region introduced large uncertainties. It was decided to use the fake ratios as obtained in the control regions described above and to use the results of the specialised control regions to estimate a background composition uncertainty.
To estimate the dependence on the selection criteria imposed in the control regions, the


requirements on E_T^miss and M_T^W are varied by 10 GeV. The resulting changes in the observed fake ratios are taken as an uncertainty.

In the Z+jets region used to estimate the muon fake ratio, the d0 significance cut is varied between 2.4 and 3.6 and the observed variation is taken as an uncertainty.
An additional uncertainty is added using an alternative data driven background estimation approach providing fake ratios for the leptons associated to the W boson. This alternative approach takes into account the different source composition of the region where the fake ratios are determined and the region where they are applied, but was not feasible to use because of statistical limitations. The resulting fake background estimates are compared to the nominal data driven background estimate, and the difference is taken as an uncertainty. The fake ratios for the alternative method are listed in Table 6.2.
Together with the statistical uncertainties, these systematic uncertainties were combined to yield an overall uncertainty taking into account correlations between the individual channels.

6.1.2. Application of the Matrix Method

The application of the fake ratios is complex due to the many possibilities to categorise an event. Possible categories are LTT, TLT, TTL, LLT, LTL, and TLL. An event is put into a specific category using the following algorithm (see the sketch after this list):
• Four lists of objects are built for each combination of being either tight or loose and coming from either the W or Z boson. The resulting lists will be called tight Z leptons, loose Z leptons, tight W leptons, and loose W leptons.
• The list of tight Z leptons is used to search for a Z boson candidate satisfying |mll − mZ| < 10 GeV.
• In case a Z boson candidate from two tight Z leptons is found, it is tested whether a tight W lepton is present in the event. If such a lepton is present, the event is categorised as TTT and is discarded. If a loose W lepton is found, the event is categorised as LTT and the appropriate fake ratio is applied.
• If no such Z boson candidate is found, the Z boson candidate closest to the Z boson mass is determined by testing:
  – all candidates made of two tight Z leptons,
  – all candidates made of a leading loose Z lepton and a subleading tight Z lepton,
  – all candidates made of a leading tight Z lepton and a subleading loose Z lepton,
  – all candidates made of two loose Z leptons.
  The best candidate decides whether the fake category is XTT, XLT, XTL, or XLL, with X being determined in the following by the lepton associated to the W boson. It is noteworthy that no requirement on the Z boson candidate mass is applied here.
• If a tight W lepton is present and the Z boson candidate is either LT, TL, or LL, the resulting category is TLT, TTL, or TLL. The event is discarded if it falls into the TTT category. If no tight W lepton is present but a loose W lepton can


be found, the event is categorised as either LTT, LLT, or LTL. The event is discarded if it is categorised as LLL or if neither a loose nor a tight W lepton is found. The final weight is the product of the fake ratios for the loose leptons of the WZ candidate.
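The categorisation logic above can be condensed into a small sketch. The inputs are a simplified stand-in for the full candidate building: the loose/tight classification of the chosen Z lepton pair and of the W lepton are assumed to be known already.

def categorise_event(w_lepton_quality, z_pair_quality):
    # w_lepton_quality: "T", "L", or None if no W lepton was found.
    # z_pair_quality:   two characters for the leading and subleading
    #                   Z lepton of the chosen Z candidate, e.g. "LT".
    if w_lepton_quality is None:
        return None  # neither a loose nor a tight W lepton: discard
    category = w_lepton_quality + z_pair_quality
    if category in ("TTT", "LLL"):
        return None  # TTT is the nominal selection, LLL is omitted
    return category

print(categorise_event("T", "LT"))  # -> "TLT"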

6.1.3. Matrix Method Results

Table 6.3 lists a detailed breakdown of the event yields from the individual categories in the inclusive phase space. The appropriate fake ratios and the inversion of the yields for the TLL, LTL, and LLT categories (see Equation (6.5)) have been applied. The total estimate for the non-prompt background is the sum over all categories. Estimates from simulated data for the three main contributing backgrounds Z+jets, tt̄, and Zγ agree well with the data driven estimate within uncertainties. Results for the VBS phase space are listed in Table 6.4. A good agreement between the data driven estimate and the simulation of the non-prompt backgrounds within uncertainties is observed. Judging from the uncertainties observed in the VBS phase space, the data driven estimate begins to suffer from low statistics. The simulated data for the non-prompt backgrounds exhibit even greater statistical limitations.

[Table 6.3 (the numerical content is not recoverable from the extraction): Estimates for the non-prompt backgrounds in the inclusive phase space scaled to luminosity, broken down into the LTT, TLT, TTL, TLL, LTL, and LLT categories for the eee, µee, eµµ, and µµµ channels. The estimates derived from the data-driven matrix method are compared to the non-prompt background estimate obtained from simulated data (Z+jets, tt̄, Zγ). Contributions from the TLL, LTL, and LLT categories are inverted so that the total sum of the yields from the individual categories gives the total estimate. Only statistical uncertainties are quoted for the simulated data; for the data driven estimate both statistical and systematic uncertainties are included. A good agreement is observed and sufficient statistics is available for a reliable data-driven non-prompt background estimate in each channel.]

[Table 6.4: the numerical content of this table could not be recovered from the source. Rows: LTT, TLT, TTL, TLL, LTL, LLT, all fakes, Z+jets, tt̄, Zγ, all mc; columns: eee, µee, eµµ, µµµ, all.]

Table 6.4.: Estimates for the non-prompt backgrounds in the VBS phase space scaled to luminosity. The estimates shown are derived from the data-driven matrix method and are compared to the non-prompt background estimate obtained from simulated data. Contributions from the TLL, LTL, and LLT categories are inverted so that the total sum of the yields from the individual categories gives the total estimate. Only statistical uncertainties are quoted for the simulated data. For the data-driven estimate both the statistical and systematic uncertainties are included. A good agreement is observed but the statistics are becoming insufficient for a reliable channel dependent estimate.

7. Systematics

Measurements of high-level observables such as cross sections are complex and depend on a large number of algorithms and measured inputs. Each of these inputs may introduce systematic uncertainties affecting the result of the analysis and has to be studied. A systematic uncertainty source is relevant if it affects observables that are either explicitly studied or used in the object or event selection. The impact of a systematic uncertainty source is evaluated by varying it by one standard deviation in either direction and observing the changes in the observables of the analysis. The propagated changes in the observables are then quoted as the uncertainty originating from the systematic uncertainty source in question. Both experimental and theoretical uncertainties exist.
Experimental uncertainties include the finite resolution of the measured particle kinematics, calibration uncertainties, and uncertainties on the outputs of algorithms such as identification and reconstruction algorithms.
Theoretical uncertainties are introduced by parameter choices that have to be made during calculations. These parameters are varied according to given prescriptions and the calculation is repeated. The change in the prediction is then taken as the uncertainty originating from the variation of the parameter in question.
In this work, the experimental and theoretical uncertainties are bundled into categories to facilitate the statistical evaluation. The individual categories and the uncertainty sources contributing to them will be discussed in the following. Results on the actual impact of these uncertainties will be shown in Sections 8 to 10.
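The one-standard-deviation variation procedure can be phrased generically as follows; observable and vary are hypothetical callables standing in for the analysis chain and the variation tools, not the API of any named package.

def systematic_impact(observable, inputs, vary):
    """Evaluate one systematic source: shift it by +/- one standard
    deviation and record the change in the observable. 'observable' maps
    the analysis inputs to a number; 'vary(inputs, n_sigma)' returns the
    inputs with the source shifted by n_sigma standard deviations."""
    nominal = observable(inputs)
    up = observable(vary(inputs, +1.0)) - nominal
    down = observable(vary(inputs, -1.0)) - nominal
    return up, down

# Toy example: a yield that depends linearly on an energy-scale parameter.
impact = systematic_impact(lambda x: 100.0 * x['scale'],
                           {'scale': 1.0},
                           lambda x, n: {'scale': x['scale'] + 0.02 * n})
print(impact)  # (+2.0, -2.0)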

7.1. Experimental Uncertainties

Experimental uncertainties are usually evaluated using official tools provided by the combined performance groups. Typical sources of uncertainty include the scale of the momentum or energy measurement, the resolution on the momentum or energy measurement, and uncertainties on the scale factors harmonising the output of algorithms between real and simulated data. The uncertainties comprising the individual categories are summed in quadrature if not stated otherwise.

7.1.1. Muon Uncertainties

Three categories are present for muons:
• Muon momentum uncertainties comprise the uncertainties on the total muon momentum scale and the resolution of the muon spectrometer and inner detector. Both uncertainties are evaluated using the SmearingClass from the MuonMomentumCorrections package.


• The uncertainties on the muon identification scale factors stem from the uncertainties of the tag-and-probe analyses used to determine them. The total uncertainties from these sources are taken as the uncertainty on the muon identification scale factors and are propagated using the methods provided by the AnalysisMuonConfigurableScaleFactors class. The class and configuration files can be found in the MuonEfficiencyCorrections package.
• Muon trigger uncertainties are placed on the scale factor harmonising the trigger performance with respect to muons in real and simulated data. The uncertainty is provided by the LeptonTriggerSF tool from the TrigMuonEfficiency package.
The prescriptions for applying these uncertainties were taken from Ref. [154].

7.1.2. Electron Uncertainties

Six categories are introduced for electrons:
• The electron resolution uncertainty reflects the finite resolution of the energy measurement of the electron. The EnergyRescalerUpgrade tool from the egammaAnalysisUtils package provides the estimate for this uncertainty.
• The electron scale uncertainty is the set of uncertainties on the tag-and-probe analyses used to determine the correction factors for harmonising the energy scale in real and simulated data. The EnergyRescalerUpgrade tool is used to apply it.
• The electron identification uncertainty, electron reconstruction uncertainty, electron isolation uncertainty, and electron trigger uncertainty represent distinct categories. All of these uncertainties originate from the uncertainty on the scale factor determination methods. Estimates are obtained via the TElectronEfficiencyCorrectionTool in the ElectronEfficiencyCorrection package, implementing a common tool for evaluating all four categories.
Reference [155] lists the prescriptions for evaluating the stated energy calibrations and Ref. [156] was consulted for the efficiency related uncertainties.

7.1.3. Jet Uncertainties

The jet uncertainties comprise two categories:
• The jet energy scale uncertainty is evaluated by applying a set of 15 individual uncertainties isolating individual uncertainty sources in the jet energy measurement. These sources include uncertainties from the in-situ analyses used to determine the jet energy scale, eta inter-calibration uncertainties, flavour dependent uncertainties, and pile-up related uncertainties, among others. All of these uncertainties are applied using the MultijetJESUncertaintyProvider tool from the JetUncertainties package. The guidelines stated in Ref. [157] are used.
• The jet energy resolution is evaluated by smearing the transverse momentum of each jet using a Gaussian with an ηjet-dependent width (see the sketch below). The technical implementation uses the JetSmearingTool [158] from the ApplyJetResolutionSmearing package. The derived uncertainty is taken as symmetric.
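The smearing step might look as sketched below; the η-dependent width function and its values are invented for the example, the real widths being provided by the JetSmearingTool.

import random

def smear_jet_pt(pt, eta, width, rng=random.Random(42)):
    """Smear a jet's transverse momentum with a Gaussian whose relative
    width depends on the jet pseudo-rapidity. width(eta) is a stand-in
    for the eta-dependent resolution width used by the analysis tools."""
    return pt * rng.gauss(1.0, width(eta))

# Illustrative width: coarser relative resolution in the forward region.
width = lambda eta: 0.10 if abs(eta) < 2.5 else 0.15
print(smear_jet_pt(60.0, 3.1, width))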


7.1.4. Missing Transverse Momentum Uncertainties

The missing transverse momentum is calculated from the momenta of the objects observed in the event and is thus affected by their uncertainties. Therefore, the effects of these uncertainties are propagated accordingly. One additional category is introduced, evaluating the uncertainty on the soft terms of the missing transverse momentum calculation (see Equation (3.10)). The prescription for this uncertainty is stated in Ref. [159] and is carried out using the METUtility tool from the MissingETUtility package.

7.1.5. Other Uncertainties

• Due to the large computational effort involved, simulated data is generated for a given data period before the actual data taking. Therefore, the pile-up conditions for said data period have to be estimated beforehand and corrected later in case of mismatches. The PileupReweightingTool is used to calculate event weights to reweight the simulated data so that the distribution of the average number of interactions per bunch crossing resembles the one found in real data. A residual uncertainty on the event weights is present and is taken into account by varying the event weights within their uncertainties.
• The value of the integrated luminosity is determined using an array of dedicated detectors such as ALFA and LUCID as well as in-situ techniques. The uncertainty on the integrated luminosity is ±2.8 % using the methodology detailed in Ref. [160] and data obtained from beam-separation scans carried out in November 2012.

7.2. Theoretical Uncertainties

Theoretical uncertainties reflect the limited accuracy of current particle physics calculations. They may stem from the uncertainties on the inputs to these calculations, such as the uncertainties on parton distribution functions (PDFs) or the measured values of coupling strengths and particle masses. In addition, the dependence of the theoretical predictions on certain choices, such as the factorisation and renormalisation scale used in the calculation, has to be taken into account.
The theoretical uncertainties quoted in this work were derived during the course of another work [70] studying W±Zjj. The same VBS phase space is examined in both works, making the theoretical uncertainties applicable. Here, a summary of the methods used to estimate the uncertainties will be given. Results relevant to the objectives of this work are reported in Table 7.1.

7.2.1. Scale Uncertainties

Technical and mathematical issues in theoretical calculations necessitate the use of specialised prescriptions to obtain sensible results. The factorisation scale governing the energies at which either the matrix element calculation or the parton shower is used was introduced in Section 4.3.1. The choice of the concrete factorisation scale depends on the simulated process and its influence on the predictions has to be evaluated.

91 7. Systematics

Another type of scale introduced to address issues in calculations is the renormalisation scale. Calculations beyond leading order in either electroweak theory or QCD introduce loop corrections which lead to divergent integrals resulting in infinities. In a calculation to all orders in perturbation theory, divergent terms obtained at some order would be cancelled by divergent terms at other orders. However, a calculation to all orders is not feasible as of yet and the divergent terms have to be treated. The ill-behaved nature of the series expansion is remedied by redefining the parameters of the theory, turning the expansion into a well-behaved one which yields finite results at all orders. This technique is called renormalisation and introduces a renormalisation scale which balances the influences of the finite and infinite results from the original calculation. The resulting predictions thus depend on the choice of the renormalisation scale and their sensitivity to the scale has to be studied.
Uncertainties on both scales are typically derived by varying both scales independently by factors of 1/2, 1, and 2. Out of the eight variations, the one yielding the largest deviation from the nominal point in a given direction is chosen as the uncertainty. For this analysis a dynamical scale was chosen, drawing from the findings in Ref. [161]. The dependence reported in Ref. [70] was measured in the µ±e+e− channel in the VBS phase space using VBFNLO by varying both scales in the aforementioned way. The results on the scale uncertainties can be found in Table 7.1.
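The eight-point scale envelope can be written out compactly; cross_section(mu_r, mu_f) is a hypothetical hook for the generator prediction at the given scale factors.

import itertools
from math import log

def scale_envelope(cross_section):
    """Envelope of the independent mu_R/mu_F variations by factors of 1/2,
    1, and 2 (eight variations, the nominal pair excluded), as described
    above. Returns the largest upward and downward deviation."""
    nominal = cross_section(1.0, 1.0)
    deviations = [cross_section(r, f) - nominal
                  for r, f in itertools.product((0.5, 1.0, 2.0), repeat=2)
                  if (r, f) != (1.0, 1.0)]
    return max(deviations + [0.0]), min(deviations + [0.0])

# Toy prediction with a mild logarithmic scale dependence.
up, down = scale_envelope(lambda r, f: 1.0 + 0.05 * log(r) + 0.03 * log(f))
print(up, down)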

7.2.2. Parton Distribution Function Uncertainties

Several PDF sets are available, with the individual PDFs differing in terms of the experimental inputs they are derived from and the theoretical assumptions they make. For the signal processes the CT10 [139] PDF set was chosen.
The general prescription to determine the intrinsic uncertainties of a PDF is to vary the set of eigenvectors one obtains from diagonalising the Hessian of the global PDF model fit. The overall systematic uncertainty is determined as the envelope of all variations as calculated via:

\Delta\sigma^{+}_{\max} = \frac{1}{1.645} \sqrt{\sum_{i=1}^{26} \left[\max\left(\sigma^{+}_i - \sigma_0,\ \sigma^{-}_i - \sigma_0,\ 0\right)\right]^2} \qquad (7.1)

\Delta\sigma^{-}_{\max} = \frac{1}{1.645} \sqrt{\sum_{i=1}^{26} \left[\max\left(\sigma_0 - \sigma^{+}_i,\ \sigma_0 - \sigma^{-}_i,\ 0\right)\right]^2} \qquad (7.2)

Uncertainties on the PDFs are given at 90 % confidence level by the collaborations measuring them. The factor 1.645 translates them to the one standard deviation variations commonly used in experiments. The variation of the eigenvectors was done using the LHAPDF [162] package. The results described in Ref. [163] were measured in the µ±e+e− channel in the VBS phase space using VBFNLO. Table 7.1 contains the PDF uncertainties relevant to this work.
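Equations (7.1) and (7.2) translate directly into code; the sketch below assumes the 26 eigenvector variations are available as pairs of up/down cross sections.

from math import sqrt

def pdf_uncertainty(sigma0, eigen_pairs):
    """Asymmetric Hessian PDF uncertainty of Equations (7.1) and (7.2):
    eigen_pairs holds (sigma_i_plus, sigma_i_minus) for the 26 eigenvector
    variations; the factor 1.645 rescales the 90% CL variations to one
    standard deviation."""
    up = sqrt(sum(max(sp - sigma0, sm - sigma0, 0.0) ** 2
                  for sp, sm in eigen_pairs)) / 1.645
    down = sqrt(sum(max(sigma0 - sp, sigma0 - sm, 0.0) ** 2
                    for sp, sm in eigen_pairs)) / 1.645
    return up, down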


7.2.3. Parton Shower Uncertainties

Parton showers are used to simulate soft QCD effects in energy regions where matrix element calculations are afflicted by increasingly large corrections from higher orders in the strong coupling αs. The uncertainties associated with the parton shower implementation used in Sherpa are estimated by varying its steering parameters.
The CKKW merging scale governing the energy threshold separating the phase spaces populated by the parton shower and the matrix element calculation is varied by ±25 % around the nominal value of Qcut = 20 GeV. Relative changes in the fiducial cross section on particle level in the VBS phase space of −7.4 % and +5.4 % are observed.
In addition, the parton shower initial scale is varied by a factor of two and one-half, leading to deviations of −1.3 % and +7.9 % for W±Zjj-EW in the VBS phase space. The W±Zjj-QCD process is similarly affected, with the resulting relative uncertainty being −3.4 % and +15.0 %.
No uncertainty regarding the choice of the parton shower algorithm is considered, in contrast to Ref. [70]. The methodology applied there compared the results of VBFNLO interfaced with Pythia8 and Herwig++ with those of Sherpa. It was found that this approach does not isolate the influence of the parton shower and inflates the parton shower uncertainty arbitrarily. Consequently, the contribution of the choice of the parton shower algorithm was dropped from the total parton shower uncertainty. The results on the parton shower uncertainties can be found in Table 7.1.

7.2.4. Lower Parton Multiplicity Uncertainties

The uncertainties discussed so far were derived using dedicated samples simulating W±Zjj-EW and W±Zjj-QCD with at least two quarks/gluons from the hard scattering process. No Feynman diagrams with fewer than two quarks from the hard scattering exist for W±Zjj-EW. For W±Zjj-QCD, contributions from events with fewer than two quarks/gluons from the hard scattering are possible due to additional jets from parton showering effects, pile-up, or multiple parton interactions. All these effects are accounted for in the Sherpa calculation and thus contributions to W±Zjj-QCD from events with fewer than two jets from the hard scattering are possible.
Using the Sherpa sample 185397, the fraction of such lower parton multiplicity events was determined. The fraction of events with zero (one) quarks or gluons in the final state in the VBS phase space is 6.2 % (4.9 %).
Uncertainties introduced by these events are estimated by comparing predictions from Sherpa and MadGraph simulating WZ restricted to events with fewer than two jets from the hard scattering. Both Pythia8 AU2 and Herwig++ provided showering for MadGraph. Comparing the results of these samples shows maximum deviations of −21.3 % and +29.6 % (−31.3 % and +9.6 %) for the zero (one) quark/gluon case.
Combining these results with the fractions of events contributing to the W±Zjj-QCD process leads to a total uncertainty from lower parton multiplicity bins of +2.3 % and −2.9 % assuming full correlation.
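The fully correlated combination is a simple weighted sum; the following lines reproduce the quoted totals from the fractions and deviations given above.

# Fully correlated (linear) combination of the per-multiplicity deviations,
# weighted by the lower-multiplicity event fractions quoted in the text.
fractions = {0: 0.062, 1: 0.049}                 # zero / one hard partons
dev_up    = {0: +0.296, 1: +0.096}
dev_down  = {0: -0.213, 1: -0.313}

up   = sum(fractions[k] * dev_up[k] for k in fractions)    # ~ +0.023 (+2.3 %)
down = sum(fractions[k] * dev_down[k] for k in fractions)  # ~ -0.029 (-2.9 %)
print(up, down)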


process      scale variation    PDF              parton shower     lower parton     total
W±Zjj-QCD    +6.7 % / −10.9 %   +4.6 % / −4.1 %  +15.9 % / −8.1 %  +2.3 % / −2.9 %  +17.9 % / −14.5 %
W±Zjj-EW     +2.1 % / −1.0 %    +6.5 % / −9.7 %  +9.6 % / −7.5 %   –                +11.8 % / −12.3 %

Table 7.1.: Breakdown of the relative theoretical uncertainties on the fiducial cross section in the VBS phase space for Sherpa.

7.2.5. Theory Uncertainties for Signal Summary

The theory uncertainties taken into account in this work are the uncertainties on the renormalisation and factorisation scale, parton shower uncertainties estimated via variation of the merging scale and the initial showering scale, PDF uncertainties of the CT10 PDF set, and uncertainties arising from contributions of lower parton multiplicity bins. The individual uncertainties are added in quadrature, assuming no correlations, and are summarised in Table 7.1.
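The combination without correlations amounts to a quadratic sum, which can be checked against Table 7.1, e.g. for the upward W±Zjj-EW components:

from math import sqrt

def quadratic_sum(*uncertainties):
    """Quadratic sum of uncorrelated relative uncertainties (in percent)."""
    return sqrt(sum(u ** 2 for u in uncertainties))

# Upward W±Zjj-EW components from Table 7.1: scale, PDF, parton shower.
print(quadratic_sum(2.1, 6.5, 9.6))  # ~ 11.8, the quoted total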

7.2.6. Theoretical Uncertainties on Backgrounds

The theoretical uncertainties on the cross sections of the backgrounds are derived for the tt̄+W/Z, ZZ, VVV, and tZj backgrounds and are taken from Ref. [70]. A summary of the methods used is given here.
For tt̄+W/Z the theoretical uncertainties are taken from the literature [164, 165] and evaluate to 22 %. The stated uncertainties are derived in the context of next-to-leading-order calculations and comprise factorisation and renormalisation scale uncertainties and PDF uncertainties. There is some difference between the phase spaces in which these uncertainties are determined and the phase space considered in this work. Therefore, the uncertainty is expanded to 30 %.
Uncertainties on the ZZ background in the VBS phase space are estimated by considering the uncertainties on ZZ/γ∗jj production, as contributions from lower parton bins are expected to be small following the argumentation for W±Zjj-QCD. Samples are simulated using VBFNLO interfaced with either Pythia8 AU2 or Herwig++ and are compared to the Sherpa prediction. In addition, scale uncertainties, PDF uncertainties as well as parton shower uncertainties are considered, leading to a total uncertainty of +15.1 % and −22.3 %.
Due to its low contribution, the influence of the theoretical uncertainties of VVV on the cross section is virtually negligible. A conservative total uncertainty of 10 % is assumed after evaluating scale and PDF variations.
The uncertainty on the tZj production resulting from PDF and scale uncertainties is taken from Ref. [166] and evaluates to ±11.7 % at next-to-leading order in perturbative QCD. This uncertainty is derived in a more inclusive phase space than the VBS phase space. No studies of the uncertainty in the VBS phase space are available and it was decided to increase the theoretical uncertainty to 20 %.


An additional parton shower uncertainty is estimated by comparing events generated with Whizard and showered with Pythia8 and Herwig++, respectively. The resulting uncertainty is 27.5 % in the VBS phase space, leading to a total theoretical uncertainty of 34 % on the tZj cross section.


8. Measuring the VBS Cross Section

This chapter will summarise the results of the VBS cross section measurement in the VBS phase space defined in Section 5.2.2. The method used for optimising the phase space definition and the methodology for extracting the cross section results will be discussed. Following this, fiducial cross sections on particle level for the W±Zjj-EW process will be reported. A description of the W±Zjj-EW process can be found in Section 2.3. The concrete implementation of the process in event generators is summarised in Section 4.4.1.

8.1. Phase Space Optimisation

The VBS phase space is based on the definition of the inclusive phase space used in the inclusive WZ production cross section measurement. Reusing the object and event selection definitions enables the sharing and cross checking of methods and results in the inclusive phase space. These cross checks are important tests ensuring the reliability of the results obtained in the VBS phase space.
The final state definition of W±Zjj-EW demands at least two jets. The selection requirements for these jets are detailed in Section 5.1.3. The two jets with the highest transverse momentum are chosen as the tagging jets. Several new observables are made available by the definition of the tagging jets, such as the centrality defined in Section 2.3.
The ideal phase space selection is determined by searching for observables showing good separation power between the W±Zjj-EW signal and the background processes. Rectangular cuts on these observables are then optimised towards a given figure of merit. Out of all observables, the invariant mass of the tagging jets depicted in Figure 8.1 shows the best overall performance with respect to the expected significance.1 It exploits the differences between W±Zjj-EW and W±Zjj-QCD in the transverse momentum and

1 The significance shown in plots in this work is based on a likelihood approach. The likelihood function is defined as:

L = \mathrm{Pois}(N \,|\, S + B) \cdot \mathrm{Gauss}(B \,|\, B_0, d_B).

The total expected event yield in simulated data (signal + backgrounds) is denoted with N. B0 is the event yield for the background processes with dB being the quadratic sum of the statistical and systematic uncertainties. The total expected yield for the background-only hypothesis is calculated via the conditional maximum likelihood estimator

\hat{B}(S = 0) = 0.5\,(B_0 - d_B^2) + \sqrt{0.25\,(B_0 - d_B^2)^2 + N\,d_B^2}.

Using the test statistic

q_0 = 2\left[N\,(\ln N - 1 - \ln\hat{B}) + \hat{B}\right] + \left[(B_0 - \hat{B})/d_B\right]^2

the expected significance is calculated as Z = \sqrt{q_0}.
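Written out in code, the footnote's prescription for a single counting bin reads as follows; the yields in the example call are rough placeholders close to the mjj > 800 GeV column of Table 8.1, not analysis results.

from math import log, sqrt

def expected_significance(n, b0, db):
    """Expected significance Z = sqrt(q0) for a counting experiment with a
    Gaussian-constrained background, following the footnote's formulas.
    n: total expected yield (signal + background), b0: expected background,
    db: its total (statistical + systematic) uncertainty."""
    # Conditional maximum likelihood estimate of the background for S = 0.
    b_hat = 0.5 * (b0 - db**2) + sqrt(0.25 * (b0 - db**2)**2 + n * db**2)
    q0 = (2.0 * (n * (log(n) - 1.0 - log(b_hat)) + b_hat)
          + ((b0 - b_hat) / db)**2)
    return sqrt(q0)

print(expected_significance(n=12.7, b0=8.4, db=1.8))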


[Figure 8.1: distribution of the invariant mass of the tagging jets mjj (Events / 100 GeV, all channels, √s = 8 TeV, L = 20.3 fb−1) for data and the Non-Prompt, Prompt, WZjj-QCD, and WZjj-EW contributions, with data/simulation ratio and expected-significance panels; the numerical content of the plot is not recoverable from the source.]

Figure 8.1.: Optimisation of the invariant mass of the tagging jets requirement on reconstruction level. All systematic uncertainties and the data-driven background estimation based on the matrix method are shown. The baseline expected significance obtained when applying no cut is ≈ 0.5 and is raised to ≈ 1.3 at the ideal cut of mjj = 800 GeV. The expected significance “Z” is calculated taking into account all available systematics.

pseudo-rapidity distributions of both tagging jets (see Figure A.3). The characteristics of these variables result in an invariant mass distribution that leans towards higher invariant masses for W±Zjj-EW than for W±Zjj-QCD. A high invariant mass favours VBS diagrams which contribute to W±Zjj-EW but not to W±Zjj-QCD, leading to a suppression of the latter process.
Table 8.1 lists the figure of merit for multiple thresholds for the invariant mass of the tagging jets. The expected signal and background event yields obtained from simulated data are stated together with their statistical and systematic uncertainties. These numbers form the inputs for the figures of merit, which are from top to bottom the discovery significance with and without the incorporation of systematics, the Gaussian significance of a Poisson distribution’s p value for the signal plus background hypothesis, and the inverse of the total relative uncertainty on the expected signal with and without the incorporation of systematics.
The inverse of the total relative uncertainty on the expected signal and the Gaussian significance of a Poisson distribution’s p value for the signal plus background hypothesis favour very strict cuts, but the influence of systematics is omitted in their calculation. When including systematics, a cut in the vicinity of 1000 GeV is more likely to be optimal, which is also observed in Figure 8.1 where the expected significance Z is calculated including systematics.
It still has to be concluded that a statistically significant observation of W±Zjj-EW is not to be expected using the cut-and-count approach.2 Therefore, the optimisation in the VBS phase space is carried out with an eye towards the measurement of anomalous

2 In the cut-and-count approach the cross section of the studied process is determined by selecting a suitable measuring phase space and calculating the cross section from the observed number of events in said phase space. No shape information regarding sensitive observables is utilised in this approach.

quartic gauge couplings. The optimisation study for the aQGC phase space (see Section 10.2) showed that a requirement of mjj > 500 GeV forms the optimal basis for subsequent cuts. As can be seen in Figure 8.1, the penalty incurred on the expected significance by moving away from the optimal cut of mjj = 800 GeV is tolerable. Consequently, mjj > 500 GeV was chosen as the requirement for defining the VBS phase space. Choosing this looser requirement enhances the statistics observed in the VBS phase space, leading to more robust results for the systematic uncertainties and the data-driven background estimate.
The observables listed in Figure 8.2 have also been considered but show significantly less separation power than the invariant mass of the tagging jets. The absolute difference in pseudo-rapidity of the tagging jets |∆ηjj|, the lepton centrality ζlep, and the boson centrality ζbos only exploit the spatial characteristics of the final state but do not consider the transverse momenta. This loss of information most likely leads to the diminished separating power of these observables.

The transverse mass of the WZ system MT,WZ shows almost no ability to differentiate between W±Zjj-EW and W±Zjj-QCD. Interestingly, it is a viable observable for separating W±Zjj-EW and the contributions from aQGCs, as will be shown in Section 10.
The application of additional cuts after the requirement on the invariant mass of the tagging jets was considered. Several observables were tested but no significant improvement was observed. Figure 8.3 depicts the alternative observables considered after requiring mjj > 500 GeV. The significance plots do not exhibit sizeable local maxima. Therefore, any additional cut would only cause a decrease in statistics and would be detrimental to the goal of measuring the fiducial W±Zjj-EW cross section.

mjj cut [GeV]              500            800            1000           1500
expected signal events     7.40 ± 0.95    4.37 ± 0.58    3.10 ± 0.43    1.30 ± 0.20
background events          29.82 ± 4.75   8.37 ± 1.79    4.03 ± 0.91    0.79 ± 0.38
S/√(B + ∆²syst,B)          1.02           1.28           1.41           1.34
S/√B                       1.36           1.51           1.54           1.46
Poissonian                 1.38           1.38           1.62           1.68
S/√(S + B + ∆²syst,B)      0.96           1.09           1.10           0.87
S/√(S + B)                 1.21           1.22           1.16           0.90

Table 8.1.: Figures of merit for the optimisation of the tagging jet invariant mass cut. Shown are from top to bottom the discovery significance with and without the incorporation of systematics, the Gaussian significance of a Poisson distribution’s p value for the signal plus background hypothesis, and the inverse of the total uncertainty on the expected signal with and without the incorporation of systematics. The quoted uncertainties give the total uncertainties on the expected signal and background yields and have been symmetrised by taking the larger value of the up and down variations.



Figure 8.2.: Alternative variables for the optimisation of the VBS selection. Shown are from top left to bottom right the absolute separation in pseudo-rapidity between the tagging jets |∆ηjj|, the lepton centrality ζl, the transverse mass of the WZ system MT,WZ, and the boson centrality ζV. The lower plots show the likelihood based significance for various selection strategies (window cut, lower bound, upper bound). Gains in significance may be obtained by using these variables but not of the same magnitude as with a cut on the invariant mass of the tagging jets.

8.2. Event Yields in the VBS Phase Space

With the VBS phase space selection criteria in place the event yields and uncertainties can be determined. The expected yields for the signal and background processes as well as data are detailed in Table 8.2. Observed and expected yields agree reasonably well in general with the eee channel being a notable exception.



Figure 8.3.: Variables considered for additional requirements to isolate VBS after the requirement on the invariant mass of the tagging jets. Shown are from top left to bottom right the absolute separation in pseudo-rapidity between the tagging jets |∆ηjj|, the lepton centrality ζl, the transverse mass of the WZ system MT,WZ, and the boson centrality ζV. The lower plots state the likelihood based significance for various selection strategies (window cut, lower bound, upper bound). No significant gains in significance are expected by introducing requirements on the shown variables.

[Table 8.2: the numerical content of this table could not be recovered from the source. Rows: Data, Total Exp, W±Zjj-EW, Total Bkg, W±Zjj-QCD, tZj, VVV, ZZ, tt̄+W/Z, non-prompt; columns: eee, eeµ, µµe, µµµ, all.]

Table 8.2.: Event yields with total uncertainties (statistical and systematic) after the VBS selection with mjj > 500 GeV scaled to a luminosity of 20.3 fb−1. The format is yield ± statistical uncertainty ± systematic uncertainty. Systematic uncertainties have been symmetrised using the maximum value of the up and down variation.


             W±Zjj-EW  W±Zjj-QCD   tZj      VVV      ZZ     tt̄+W/Z  non-prompt
stat           ±2.3      ±3.8      ±3.6    ±28.2    ±17.5    ±5.4     ±23.0
ElEScale       ±0.6      ±0.6      ±0.6     ±0.0     ±3.5    ±0.6      ±0.0
ElESmear       ±0.1      ±0.1      ±0.6     ±0.1     ±0.3    ±0.1      ±0.0
ElID           ±1.2      ±1.1      ±1.1     ±0.8     ±1.6    ±1.1      ±0.0
ElIso          ±0.3      ±0.3      ±0.3     ±0.2     ±0.3    ±0.2      ±0.0
ElReco         ±0.5      ±0.4      ±0.4     ±0.3     ±0.6    ±0.4      ±0.0
ElTrigger      ±0.0      ±0.0      ±0.0     ±0.0     ±0.0    ±0.0      ±0.0
Fakes          ±0.0      ±0.0      ±0.0     ±0.0     ±0.0    ±0.0     ±24.8
JER            ±0.3      ±3.5      ±0.5     ±1.5     ±7.4    ±0.4      ±0.0
JES            ±2.9     ±10.0      ±7.3    ±26.     ±15.3    ±3.9      ±0.0
MET            ±0.3      ±0.1      ±0.3     ±0.0     ±0.3    ±0.3      ±0.0
MuID           ±0.7      ±0.7      ±0.7     ±0.7     ±0.6    ±0.7      ±0.0
MuPt           ±0.3      ±0.5      ±0.1     ±0.0     ±0.2    ±0.8      ±0.0
MuTrigger      ±0.1      ±0.1      ±0.1     ±0.1     ±0.0    ±0.1      ±0.0
Pileup         ±0.1      ±0.5      ±0.1     ±1.3     ±2.8    ±1.0      ±0.0
Theory        ±12.3     ±17.9     ±34.0    ±10.0    ±22.3   ±30.0      ±0.8
total syst    ±12.5     ±20.8     ±34.8    ±26.2    ±27.5   ±30.3     ±24.8

Table 8.3.: Breakdown of the relative systematic uncertainties on the event yield in the VBS phase space for all channels combined. The uncertainty values per category are symmetrised by taking the larger value of the observed up and down variations. Uncertainties are given in percent.

A breakdown of the individual uncertainty categories (see Section 7.1) is given in Table 8.3 for the combination of all channels. Driving systematics are the theoretical uncertainties afflicting the simulation based predictions and the uncertainties on the non-prompt background estimate, followed by the uncertainties on the jet energy scale and the jet energy resolution. Statistical limitations of the simulated data are becoming sizeable, with the VVV and ZZ backgrounds being affected the most.
Multiple control distributions can be found in Appendix B. Overall agreement between simulated and real data is good, though the low statistics do not permit strict statements.

8.3. Statistical Evaluation

8.3.1. Cross Section Formula

The cross section of a given process is linked to the observed number of signal events S via the formula:

\sigma = \frac{S}{\mathcal{L}} \qquad (8.1)

with \mathcal{L} being the integrated luminosity.
This formula is not immediately applicable. Firstly, the event yield observed in real data is comprised of both signal and background events. The number of observed signal events is calculated by subtracting the expected number of background events


                 eee        eeµ        µµe        µµµ        all
N^part_VBS PS    877.42     771.41     800.44     883.75     3333.03
N^reco_VBS PS    454.12     504.94     532.57     730.28     2221.91
ε in %           52 ± 2.4   65 ± 3.6   67 ± 3.0   83 ± 3.6   67 ± 2.2

Table 8.4.: Unscaled event yields in the VBS phase space for events on detector and particle level. Pile-up and vertex position corrections are applied on particle and detector level events. All further detector related corrections are applied on the detector level events. The yields were obtained using the W±Zjj-EW sample with the DSID 185396. The quoted uncertainties represent the effects of experimental uncertainties on the efficiency.

from the number of events seen in real data: S = Nobs − Bexp. Secondly, a comparison between real and simulated data can only be made on the so-called “detector level” which includes the detector effects. These effects are introduced by either the real detector when recording collision data or by the detector simulation when simulating data. Applying Equation (8.1) one would obtain a cross section on detector level which contains the detector effects. In order to be of use for theorists a correction factor has to be introduced: the efficiency ε. This correction factor translates the number of events observed on detector level to the number of events on “particle level”. Here, particle level denotes the particle properties prior to the detector simulation. The value of the efficiency depends on the definitions of the fiducial phase spaces on detector and particle level. Ideally, these definitions are well aligned so that only detector related effects are considered. The fiducial phase spaces used here are defined in Section 5.2.2 for the detector level and Section 5.4.2 for the particle level. Table 8.4 lists the efficiencies in the different channels using these phase space definitions. Now, Equation (8.1) can be extended to:

\sigma^{\mathrm{part}}_{\mathrm{fid,\,obs}} = \frac{N^{\mathrm{reco}}_{\mathrm{fid,\,obs}} - B^{\mathrm{reco}}_{\mathrm{fid,\,exp}}}{\varepsilon \times \mathcal{L}} \qquad (8.2)

An additional factor called acceptance may be added to the formula to translate the measured cross section from the fiducial phase space on particle level to a more general phase space on particle level. However, such an extrapolation is not done in this work.
Equation (8.2) and the yields obtained in the combined channel may be used to calculate the cross section for W±Zjj-EW. Unfortunately, this number on its own does not convey the full information on the measurement as it does not incorporate the uncertainty of the measurement. The desirable result of the statistical evaluation is a so-called confidence belt which describes the uncertainty of the measurement. If one were to repeat the given experiment over and over again, the outcomes of these experiments would form a distribution. The confidence belt is the interval which contains a given percentage of these outcomes. The actual interval depends on the chosen percentage, also called coverage, the chosen construction method, and the statistical and systematic uncertainties encountered. Typical values for the coverage are 68.3 % and 95 % which correspond approximately to intervals of one and two standard deviations around the mean of a normal distribution.
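A minimal numerical illustration of Equation (8.2) follows; the yields in the example call are placeholders, while the efficiency and luminosity match the combined-channel values quoted in this chapter.

def fiducial_cross_section(n_obs, b_exp, efficiency, luminosity):
    """Equation (8.2): particle-level fiducial cross section from the
    background-subtracted detector-level yield."""
    return (n_obs - b_exp) / (efficiency * luminosity)

# Placeholder yields; efficiency 0.67 (Table 8.4, combined channel) and an
# integrated luminosity of 20.3 fb^-1 give a cross section in fb.
print(fiducial_cross_section(n_obs=45.0, b_exp=30.0,
                             efficiency=0.67, luminosity=20.3))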


Construction prescriptions for the confidence belt depend on the interpretation of the measurement. A one-sided interval which starts at either infinity or negative infinity and extends to a certain value is interpreted as a limit, whereas an interval between two finite values may be seen as a measurement interval. These intervals may be chosen to be symmetric around a central value such as the median or mean, or to cover the same percentage on both sides of the central value. The unified approach of Cousins and Feldman [167] removes these ambiguities and provides a clear prescription based on profile likelihoods, leaving only the coverage as a free parameter. It is used in this work due to these desirable properties.

8.3.2. Profile Likelihood Method

A likelihood is a function that is defined by a statistical model and its parameters. Different hypotheses can be the basis for such statistical models, e.g. the background-only hypothesis where the signal contribution is assumed to be zero. When confronted with the observed data, the likelihood function representing the hypothesis yields the probability to obtain the data given the statistical model and the parameters. A likelihood fit aims to maximise the obtained probability by tuning the model parameters, thus fitting the model to the observed data. Some of the model parameters, e.g. the influence of systematic sources, may not be of interest but have to be incorporated for the model to be complete. In these cases the profile likelihood method is employed, which eliminates the parameters that are not of interest. In order to apply the method a statistical model is built:

\mathcal{L}(\sigma_{\mathrm{fid}}, \vec{\theta}) = \prod_{c \,\in\, \mathrm{channels}} \mathrm{Poiss}\!\left(N_c \,\middle|\, S_c(\sigma_{\mathrm{fid}}, \vec{\theta}) + B_c(\vec{\theta})\right) \prod_{j \,\in\, \mathrm{syst}_c} \mathrm{Gauss}(\theta^0_j \,|\, \theta_j, 1). \qquad (8.3)

The model assumes that the event yield follows Poisson statistics given by:

\mathrm{Poiss}\!\left(N_c \,\middle|\, S_c(\sigma_{\mathrm{fid}}, \vec{\theta}) + B_c(\vec{\theta})\right) = \frac{\left[S_c(\sigma_{\mathrm{fid}}, \vec{\theta}) + B_c(\vec{\theta})\right]^{N_c} \times e^{-[S_c(\sigma_{\mathrm{fid}}, \vec{\theta}) + B_c(\vec{\theta})]}}{N_c!} \qquad (8.4)

with Nc being the observed events in a given channel and Sc and Bc the expected signal and background yields in said channel. In this case σfid is the parameter of interest which enters through the expected signal. The Poisson terms for the channels are simply multiplied with each other as they are statistically independent.

The signal Sc on detector level is linked to the fiducial cross section on particle level as follows:

S_c(\sigma_{\mathrm{fid}}, \vec{\theta}) = \sigma_{\mathrm{fid}} \times \mathcal{L} \times \varepsilon_c \times f_c(\vec{\theta}) \qquad (8.5)

with \mathcal{L} denoting the luminosity and εc the efficiency observed in the given channel. The influence of systematic uncertainties is expressed via the vector of nuisance parameters θ:

f_c(\vec{\theta}) = \prod_j \begin{cases} 1 + \theta_j \Delta_j^{+} & \text{if } \theta_j \geq 0 \\ 1 + \theta_j \Delta_j^{-} & \text{if } \theta_j < 0 \end{cases} \qquad (8.6)


where ∆j+ and ∆j− are the relative uncertainties in upward and downward direction introduced by the systematic source j. The same definition regarding uncertainties is applied analogously to Bc. The size of ∆j+ and ∆j− is determined by applying the uncertainty prescriptions in Section 7.1. Each nuisance parameter is assigned a normal distribution which implements a penalty when it deviates from zero. The total penalty impact is the product of all normal distributions

\prod_{j \,\in\, \mathrm{syst}_c} \mathrm{Gauss}(\theta^0_j \,|\, \theta_j, 1).

Using the model defined in Equation (8.3) one can write down the profile likelihood ratio as a test statistic:

t(\sigma_{\mathrm{fid}}) = -2 \ln \frac{\mathcal{L}(\sigma_{\mathrm{fid}}, \hat{\hat{\vec{\theta}}})}{\mathcal{L}(\hat{\sigma}_{\mathrm{fid}}, \hat{\vec{\theta}})}. \qquad (8.7)

The denominator is the maximised unconditional likelihood where both the parameter of interest σfid and the vector of nuisance parameters θ are free to float. Here, the parameters obtained in the unconditional fit are denoted with a single hat. The numerator is the maximised conditional likelihood with only the nuisance parameters free to float and the parameter of interest set to a certain value. The vector of nuisance parameters obtained in the conditional fit is denoted with a double hat.
The test statistic expresses the level of disagreement between the observed data and the model prediction for a given parameter of interest. It is zero when the tested parameter of interest coincides with the one obtained from the unconditional fit.
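The following toy sketches the construction: a single channel with one nuisance parameter acting on the background, the conditional minimum obtained by a grid scan, and the asymptotic 95 % threshold of Equation (8.9) applied to t(σ). All numbers are invented and the simplifications (one channel, nuisance on the background only, downward uncertainty stored as a positive magnitude) are not those of the analysis model.

from math import lgamma, log

LUMI, EFF = 20.3, 0.67      # integrated luminosity [fb^-1] and efficiency
B_EXP, N_OBS = 5.0, 9       # illustrative background expectation and count
D_UP, D_DN = 0.25, 0.20     # relative background uncertainty, up/down

def nll(sigma_fid, theta):
    """Negative log-likelihood in the spirit of Equations (8.3)-(8.6)."""
    f = max(1.0 + theta * (D_UP if theta >= 0.0 else D_DN), 1e-9)
    mu = sigma_fid * LUMI * EFF + B_EXP * f
    # Poisson term plus the standard-normal penalty on the nuisance.
    return -(N_OBS * log(mu) - mu - lgamma(N_OBS + 1)) + 0.5 * theta**2

def profiled_nll(sigma_fid):
    """Conditional minimum over the nuisance parameter via a grid scan."""
    return min(nll(sigma_fid, t / 100.0) for t in range(-300, 301))

# Profile likelihood ratio t(sigma) of Equation (8.7) on a grid of sigma.
sigmas = [s / 100.0 for s in range(0, 301)]          # only sigma >= 0
nll_min = min(profiled_nll(s) for s in sigmas)
t = {s: 2.0 * (profiled_nll(s) - nll_min) for s in sigmas}
# Asymptotic 95% CL interval, Equation (8.9): all sigma with t <= 3.84.
allowed = [s for s in sigmas if t[s] <= 3.84]
print(min(allowed), max(allowed))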

The confidence belt for a certain hypothesised fiducial cross section σfid is built by determining all values of the parameter of interest σ′fid which satisfy

F(t(\sigma'_{\mathrm{fid}}) \,|\, \sigma_{\mathrm{fid}}) = \int_{0}^{t(\sigma'_{\mathrm{fid}})} f(t \,|\, \sigma_{\mathrm{fid}})\, \mathrm{d}t < \alpha \qquad (8.8)

with α being the desired coverage. The exact form of f(t(σ′fid) | σfid) depends on the used statistical model. Its shape can be determined by carrying out pseudo experiments. In each experiment the size of the nuisance parameters and the number of observed events are generated randomly following the probability density function defined by the statistical model and the hypothesised fiducial cross section σfid. The likelihood ratio is determined and a value for the test statistic t is obtained. Large numbers of pseudo experiments have to be done to get a stable estimate for the distribution of the test statistic, as high coverages require good statistics in the tails of the test statistic distribution.
This procedure is repeated for many σfid and the σ′fid for these points are determined. The actual confidence interval on σfid^obs is then constructed by finding all σfid whose intervals, constructed in the previous step, contain the observed fiducial cross section.
The computational cost for generating many pseudo experiments may become too large for complex models with many parameters. It has been shown in Ref. [168] that for large yields the test statistic considered here follows a χ² distribution with n degrees of

freedom with n being equal to the number of parameters of interest. This asymptotic behaviour of the test statistic simplifies the calculation of the confidence interval to:

t(\sigma'_{\mathrm{fid}}) = 2\,\Delta \ln \mathcal{L}(\sigma'_{\mathrm{fid}}) = -2\left(\ln \mathcal{L}(\sigma'_{\mathrm{fid}}) - \ln \mathcal{L}(\hat{\sigma}_{\mathrm{fid}})\right) \leq (\chi^2)^{-1}(\alpha, n) \qquad (8.9)

with σ̂fid being the best fit, which coincides with the hypothesised σfid in the case of the expected intervals and the best fit fiducial cross section for the observed intervals. For the one-dimensional case this leads to (χ²)⁻¹(0.683, 1) = 1.00 and (χ²)⁻¹(0.95, 1) = 3.84 for confidence levels of 68.3 % and 95 %.
It has to be noted that this asymptotic approach may break down in the case of low statistics where the prerequisites for Wilks’ theorem [168] may not be satisfied. Another point of failure are degeneracies, e.g. when the calculated area intersects with unphysical parameter ranges. This may be of concern in the present case as the expected significance is only 0.98 (see Table 8.5), which may lead to issues when calculating intervals with 95 % coverage. Therefore, the nominal results will be calculated using pseudo experiments. Results using the asymptotic formulae are determined for comparison.

8.3.3. Technical Implementation

The statistical evaluation is implemented using the RooStats framework [169]. Configurable calculator classes for both the asymptotic and toy-based approaches are available. The asymptotic formulae found in Ref. [170] are implemented in RooStats via the AsymptoticCalculator [171]. The pseudo experiments are carried out using the FrequentistCalculator [172] provided by RooStats using the profile likelihood stated in Equation (8.7) as the test statistic. Both calculators require the user to define a background and a signal plus background model. For the VBS the signal plus background model is defined by the W±Zjj-EW signal and the prompt and non-prompt background processes and the uncertainties assigned to them. The background model sets the fiducial cross section for the W±Zjj-EW signal to zero. Only positive cross sections are allowed.
The FrequentistCalculator is set to generate 10000 pseudo experiments for both the background and the signal plus background model. Parallelisation facilities offered by RooStats are used to speed up the computation. No additional configuration is needed for the AsymptoticCalculator.
The confidence belt is constructed by scanning the possible values of the parameter of interest from zero to ten times the best fit value of the parameter of interest. The range is sampled in 200 steps for the AsymptoticCalculator and in 50 steps for the FrequentistCalculator. The number of steps for the FrequentistCalculator is limited by the time it takes to generate the 10000 pseudo experiments for the background and signal plus background model for each step. An additional constraint is imposed by a memory leak in the RooStats code that causes a total memory consumption of about 15 GB for the whole 50 step grid scan. Since the confidence belt is interpolated between the steps, it can still be constructed despite the low number of steps. The impact of the number of steps on the end result has been tested using the AsymptoticCalculator and it was found that 50 steps are sufficient for obtaining stable end results independent of the number of used steps.


8.3.4. Results

Table 8.5 lists the results obtained from the FrequentistCalculator. Two hypotheses are listed: W±Zjj-EW without tZj, dubbed VBS only, and W±Zjj-EW combined with tZj. The expected values for the observables are calculated using the asimov3 dataset which is obtained by replacing the observed real data event yield with the total expected event yield from simulation. The cross section measurement is not significant, therefore the upper limits at 95 % confidence level of the measured interval are also given. The values computed via pseudo-experiments and the asymptotic formulae agree completely in all cases.
The observed fiducial cross sections on particle level are about 0.9 σ larger than the expected ones for both signal hypotheses. Thus, some tension between expectation and observation is present but no striking contradiction is found. A possible explanation for the observed discrepancy may be mere chance, given that the expected and observed event yields are not overly large. Another possibility is that the cross section for the main contributing background W±Zjj-QCD is assumed too low. Recent results publishing NNLO cross sections for diboson processes show an overall increase in the diboson cross sections for a multitude of processes [29, 30]. For the inclusive W±Z measurement the increase in cross section is about 17 % and explained the discrepancy between expectation and observation in the Run 1 analysis. A simple extrapolation to W±Zjj-QCD is of course not accurate, but the possibility that higher order effects may have a sizeable impact on the cross section of this important background should not be discarded. A third possibility is that yet unknown processes that have not been taken into account in the cross section measurement contribute to the observed data. In this case the tension between the expected and observed results may hint at new physics.
Cross sections for W±Zjj-EW or other unknown processes above 2.5 fb are excluded at 95 % confidence level, assuming that the VBS only part of W±Zjj-EW production is the signal.

3The term “asimov” is derived from a short story written by Isaac Asimov in which the most average person of the population is selected to vote on behalf of the entire population.


signal definition                    W±Zjj-EW (VBS only)   W±Zjj-EW (VBS + tZj)
expected σfiducial / fb              0.55 +0.63 −0.43      0.76 +0.62 −0.57
observed σfiducial / fb              1.12 +0.68 −0.61      1.34 +0.67 −0.62
expected Zfiducial                   0.98                  1.36
observed Zfiducial                   1.90                  2.3
expected upper limitfiducial / fb    1.8                   2.0
observed upper limitfiducial / fb    2.5                   2.7

Table 8.5.: Fiducial cross section results in the VBS phase space on particle level for the VBS only signal definition and the combination of VBS and tZj contributions to W±Zjj-EW. The uncertainties given are the quadratic sum of the statistical and systematic uncertainties. The results were obtained using pseudo-experiments and cross checked with the asymptotic formulae. No discrepancies between the two approaches were found.


9. Differential Distributions

A key aspect of experiments is to provide results that may be compared to theoretical predictions. To achieve this, the measuring effects introduced by the experimental setup have to be found and corrected for. These effects include inefficiencies regarding the reconstruction and identification of physical objects, scaling and resolution effects on the measured kinematics as well as limitations of the experimental setup with respect to spatial coverage. All these effects necessitate the introduction of normalisation and distortion corrections to go from a measured distribution to one that may be compared to a theoretical prediction. The overall technique to achieve this is called unfolding.1

9.1. General Approach

Some nomenclature has to be defined to discuss the topic of unfolding efficiently. From here on, distributions that contain detector effects are called “reconstruction level” distributions.2 These may be the distributions as they are measured using the real detector or simulated data which have been subjected to a detector simulation. In contrast, distributions without detector effects will be called “particle level” distributions. These are distributions which stem from either simulated data without the application of a detector simulation or are measured distributions which have been treated to remove the detector effects.
Multiple steps are needed to get from reconstruction-level distributions to particle-level distributions. Figure 9.1 tries to capture these individual steps. The starting point is a measured distribution containing contributions from both signal and background processes in the fiducial region on reconstruction level (X^reco_{S,reco} + X^reco_{B,reco}). Here, the superscript denotes whether the distribution is obtained on reconstruction level or on particle level whereas the subscript denotes the fiducial phase space in which the distribution is measured (either reco, joint, or part). The term “reco” denotes all events that pass the selection on reconstruction level whereas “part” summarises all events passing the particle level selection. Finally, events in the “joint” phase space pass both the selection on reconstruction and particle level. The background contribution is estimated using simulated data and data driven techniques (see Section 6) and is subtracted from the measured distribution, resulting in a background free distribution on reconstruction level (X^reco_{S,reco}). The next step addresses migration

1 It should be noted that it may also be possible to provide a “folding matrix” for each variable which encapsulates the detector effects of the measurement. In doing so, one would be able to compare the theoretical predictions and the observations obtained in an experiment. This approach would be stable in contrast to the determination of the unfolding matrix. However, the provision of such a matrix for each variable and each experiment would be cumbersome and comparisons between experiments would not be possible using this approach.
2 The term “reconstruction level” is for all intents and purposes identical to the “detector level” used in the last and following chapter. However, the term is used in this chapter to be consistent with the typically used nomenclature in the literature.


[Figure 9.1 schematic: X^reco_{S+B,reco} → (bkg. subtraction) → X^reco_{S,reco} → (fiducial correction) → X^reco_{S,joint} → (unfolding matrix) → X^part_{S,joint} → (efficiency correction) → X^part_{S,part}]

Figure 9.1.: General methodology for unfolding. The necessary steps to arrive from a measured distribution (X^reco_{S+B,reco}) at the fully unfolded one (X^part_{S,part}) are background subtraction, fiducial correction, application of the unfolding matrix, and efficiency correction.

effects which cause events to be in the fiducial phase space on reconstruction level but not on particle level. These so-called fiducial corrections result in distributions on reconstruction level in the joint phase space where events lie in both the fiducial phase space on reconstruction level and on particle level (X^reco_{S,joint}). In this joint phase space one can construct a response matrix that relates the value of an observable on reconstruction level to its value on particle level.3 By inverting this matrix one can correct for detector effects and arrive at the distribution on particle level in the joint phase space (X^part_{S,joint}). To obtain the fully unfolded results an extrapolation from the joint phase space to the full fiducial phase space on particle level is needed. This is achieved by applying a so-called efficiency correction, resulting in the fully unfolded distribution on particle level (X^part_{S,part}).
The background subtraction, fiducial correction, and efficiency correction are relatively straightforward to implement as they merely rely on bin-wise histogram operations. In general, the necessary inversion of the response matrix is not trivial. Depending on the structure of the response matrix an inversion may either be impossible or may result in unstable behaviour. Instabilities arise both from the statistical nature of the matrix itself and from its determination from simulations with a finite sample size. Therefore, a regularisation procedure becomes necessary and multiple approaches are available to do this. Some popular approaches are:
• Bin-By-Bin Unfolding where the unfolding is simply done by using the ratio of the reconstruction-level distribution and particle level distribution in the joint phase space. This represents a very simple and straightforward way of removing the detector effects and relies heavily on the quality of the simulated data and

3 A response matrix does not necessarily have to be constructed in the joint fiducial phase space but may relate the information in the reco fiducial phase space on reconstruction level with the particle level information in the joint phase space thus incorporating the fiducial corrections.


detector simulation. Therefore, it should be used with caution but provides an easy to calculate cross check for more sophisticated approaches.
• Singular Value Decomposition (SVD) [173] attempts to remedy the problems that arise due to the inversion of the response matrix. The problem is approached by decomposing the response matrix into a set of three matrices R = SVD where S and D are chosen such that V only has diagonal elements (the singular values). This enables various techniques that eliminate statistically problematic behaviour by only considering singular values above a certain threshold. Regularisation techniques may be applied that suppress oscillations in the unfolded results due to the statistical limitations of the response matrix.
• Bayesian Iterative Unfolding (BIU) [174, 175] attempts to circumvent the stated problems via bypassing the inversion of the response matrix. It employs Bayes’ Theorem to relate the migration effects that transform the true distribution to the measured distribution.

In this work Bayesian Iterative Unfolding is used as it represents the best compromise between computational speed and performance.

9.2. Introduction to Bayesian Iterative Unfolding

Bayesian iterative unfolding [174, 175] was re-invented4 in 1994 as an attempt to address the problems typically found when trying to invert the response matrix (or smearing matrix, as the paper calls it). The argument is that the inversion of the response matrix is an ill-defined problem due to its statistical nature, which may even prevent the existence of an inverted matrix altogether. The reasoning is continued by stating that unfolding is a statistical problem and should therefore be solved with statistical tools, hence the application of Bayes’ Theorem.
The inputs needed for the application of BIU are the measured binned distribution with n_e bins and an n_e × n_c response matrix that encapsulates the relations between the bins of the true distribution and the measured distribution. The subscripts e and c stand for effect and cause, respectively. The goal is to reconstruct the true distribution using these inputs.
In the original publication the individual bins of the true distribution are called “cause cells” and the bins of the measured distribution “effect cells”. The response matrix is seen as a set of probabilities governing how likely it is that an event from a cause cell C_i ends up in an effect cell E_j and is therefore denoted as P(E_j | C_i). Using Bayes’ Theorem the probability that an event belongs to cause cell C_i given that it is observed in effect cell E_j is calculated as:

\[
  P(C_i \mid E_j) \;=\; \frac{P(E_j \mid C_i)\, P_0(C_i)}{\sum_{l=1}^{n_c} P(E_j \mid C_l)\, P_0(C_l)} \tag{9.1}
\]

The best estimate regarding the number of events in the individual cause cells is calculated as:

⁴ A similar method was used before in astrophysics but was unknown to the author of BIU.


\[
  \hat{n}(C_i) \;=\; \frac{1}{\epsilon_i} \sum_{j=1}^{n_e} n(E_j)\, P(C_i \mid E_j) \tag{9.2}
\]

with \epsilon_i denoting the probability that an event of cause Ci will end up in any effect bin, implementing the efficiency correction stated earlier. The equations may be written in matrix form

\[
  \hat{n}(C_i) \;=\; \sum_{j=1}^{n_e} M_{ij}\, n(E_j) \tag{9.3}
\]

with

\[
  M_{ij} \;=\; \frac{P(E_j \mid C_i)\, P_0(C_i)}{\left[\sum_{l=1}^{n_e} P(E_l \mid C_i)\right] \left[\sum_{l=1}^{n_c} P(E_j \mid C_l)\, P_0(C_l)\right]} \tag{9.4}
\]

by combining Equation (9.1) and Equation (9.2).

One important issue of the method is the presence of the prior P0(Ci), which models the previous knowledge (or prejudice) regarding the shape of the true distribution. The dependence on the prior is alleviated through an iterative approach. In the first iteration some prior P0(C) is chosen; usual choices are either a uniform distribution or the particle level distribution obtained from simulation. Then, the unfolded distribution is calculated using the formulas stated above. In the next iteration, the unfolded distribution is used as the prior and the process is repeated. This lessens the impact of the prior with a rising number of iterations, but each iteration increases the observed statistical uncertainties. Therefore, the ideal number of iterations is a compromise between minimising the impact of the prior and minimising the statistical uncertainties.
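To make the procedure concrete, the following is a minimal numpy sketch of the iteration defined by Equations (9.1) and (9.2). The function name and interface are illustrative and do not correspond to the RooUnfold API used later in this chapter.

    import numpy as np

    def bayes_unfold(n_measured, response, n_iterations=3, prior=None):
        """Minimal sketch of Bayesian iterative unfolding.

        n_measured : observed counts per effect bin, shape (n_e,)
        response   : response[j, i] = P(E_j | C_i); the column sums give
                     the efficiencies eps_i <= 1 of Equation (9.2)
        prior      : initial guess P_0(C); defaults to a flat prior
        """
        n_e, n_c = response.shape
        eps = response.sum(axis=0)
        p_cause = np.full(n_c, 1.0 / n_c) if prior is None else prior / prior.sum()

        for _ in range(n_iterations):
            # Bayes' theorem, Equation (9.1): P(C_i | E_j).
            joint = response * p_cause                    # shape (n_e, n_c)
            norm = joint.sum(axis=1, keepdims=True)
            p_c_given_e = np.divide(joint, norm, out=np.zeros_like(joint),
                                    where=norm > 0)
            # Best estimate of the cause yields, Equation (9.2).
            n_cause = (n_measured[:, None] * p_c_given_e).sum(axis=0) / eps
            # The unfolded result becomes the prior of the next iteration.
            p_cause = n_cause / n_cause.sum()

        return n_cause

With the particle level distribution from simulation as the prior and three iterations, this reproduces the configuration used later in this analysis.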

9.3. Analysis Implementation

9.3.1. Technical Setup

The RooUnfold [176, 177] package was chosen to carry out the unfolding, with EWUnfolding [178] being a second package that acts as a wrapper around RooUnfold, providing facilities for configuration and plotting. RooUnfold implements several unfolding approaches, making comparisons between alternative unfolding strategies convenient. The general workflow consists of several steps. Firstly, small and flat TTrees [179] are produced via a dedicated analysis code that selects events in both the reconstruction level and particle level fiducial regions. A flag is saved for each event in the signal samples, indicating whether it lies in the fiducial region on particle level, in the fiducial region on reconstruction level, or in both. All other events are discarded. The values of the observable in question on particle or reconstruction level are also written out, depending on their availability. For background and data events, only the reconstruction level value in the reconstruction level fiducial region is saved. In a second step, these TTrees are processed further to obtain the necessary inputs for the application of the EWUnfolding package. Due to this two-step process,

it is possible to define which samples make up the signal in question and which are considered as background when applying the second step. The inputs required for the unfolding are:

• The data distribution, obtained on reconstruction level in the reconstruction level fiducial phase space.

• The background distribution, estimated using data driven techniques for non-prompt backgrounds and simulated data for prompt backgrounds, on reconstruction level in the reconstruction level fiducial phase space.

• The numerator for the fiducial correction, being the reconstruction level distribution of the variable in question in the joint fiducial phase space.

• The denominator for the fiducial correction, defined as the reconstruction level distribution in the fiducial phase space on reconstruction level.

• The response matrix, whose columns (rows) contain the reconstruction (particle) level values of the variable in question in the joint fiducial phase space.

• The numerator for the efficiency correction, which is the particle level distribution in the joint phase space.

• The denominator for the efficiency correction, which is the particle level distribution in the particle level fiducial phase space.

The third step is simply the application of the EWUnfolding package, with the final figures and tables presented in this work being the output. A schematic sketch of how these inputs combine is given below.
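Under the assumption that the response matrix is built within the joint phase space, the surrounding corrections reduce to simple bin-wise operations around the unfolding step. The following schematic sketch mirrors the input list above; the function and argument names are hypothetical.

    import numpy as np

    def unfold_observable(data, background, fid_num, fid_den,
                          eff_num, eff_den, response, n_iterations=3):
        """Schematic chain of bin-wise corrections around the unfolding.

        All arguments are binned distributions (numpy arrays) matching
        the input list above; response holds P(E_j | C_i) built in the
        joint phase space.
        """
        # Background subtraction on reconstruction level.
        signal_reco = data - background
        # Fiducial correction: fraction of reconstruction level events
        # that also lie in the particle level fiducial region.
        signal_joint = signal_reco * (fid_num / fid_den)
        # Detector correction, e.g. via the bayes_unfold sketch of
        # Section 9.2.
        unfolded_joint = bayes_unfold(signal_joint, response, n_iterations)
        # Efficiency correction: extrapolation from the joint phase
        # space to the full particle level fiducial phase space.
        efficiency = eff_num / eff_den
        return unfolded_joint / efficiency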

9.3.2. Systematic Uncertainties

The EWUnfolding package provides facilities for a user-friendly implementation of systematics. The input for each systematic effect is prepared by rerunning the first two steps of the three-step process stated above with the systematic source in question varied up or down. Then, the systematic effects are registered with EWUnfolding via a configuration file. Here, the user may specify which systematics are correlated and whether correlations between the background and the signal are present. In this work, the signal and background for each systematic effect have been treated as correlated, meaning that the systematic effect afflicts the signal and background simultaneously. Internally, the unfolding procedure is executed for the nominal case as well as for each systematic. The resulting uncertainty is the deviation of the unfolded result for the systematic effect with respect to the nominal unfolded result. In addition, several statistical effects are considered. The statistical uncertainty on the real data distribution is evaluated via pseudo-experiments and propagated to the unfolded result. For each pseudo-experiment, the individual bins are varied under the assumption of a Poisson distribution before executing the nominal unfolding procedure. This results in a distribution of possible yields in each bin of the unfolded distribution. The quoted statistical uncertainty for each bin is the root-mean-square of the unfolded distributions obtained from 2000 pseudo-experiments. The effect on the unfolded distribution introduced by the statistical uncertainty on the background yield is estimated similarly. Here, a Gaussian instead of a Poissonian is used for varying the yields in the individual bins, with the width of the Gaussian being the statistical error on the background. The use of a Gaussian is motivated by the use

of weighted simulated data for the background estimation. The rest of the procedure is the same as for the data distribution. Additionally, an uncertainty from the possible mis-modelling of the data by the signal simulation is estimated via a multi-step algorithm. Firstly, the nominal unfolding is performed on data using the nominal response matrix. A reweighting histogram is obtained by dividing the unfolded data bin-wise by the simulated prediction at particle level. The reweighting factors obtained in this way are multiplied with the original event weights, resulting in reweighted events. A second unfolding is done using a second response matrix made with the reweighted events and the simulated prediction as inputs. The obtained unfolded distribution is compared to the simulated prediction on particle level and the deviations are taken as the unfolding systematic uncertainty.
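The pseudo-experiment procedure for the statistical uncertainty of the measured distribution can be summarised in a few lines. In this sketch, unfold_fn stands for the full nominal unfolding chain; for the background variation, the Poisson fluctuation would be replaced by a Gaussian with the background's statistical error as its width, as described above.

    import numpy as np

    def statistical_uncertainty(data, unfold_fn, n_toys=2000, seed=None):
        """Propagate the data's statistical uncertainty through the
        unfolding via pseudo-experiments (a sketch)."""
        rng = np.random.default_rng(seed)
        # Fluctuate each bin under a Poisson assumption and unfold.
        toys = np.array([unfold_fn(rng.poisson(data)) for _ in range(n_toys)])
        # Quoted uncertainty: root-mean-square spread per unfolded bin.
        return toys.std(axis=0)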

9.3.3. Optimising the Number of Iterations

In BIU the only free parameter is the number of iterations used to unfold the data. Two competing influences are at work: the uncertainties obtained rise with the number of iterations whereas the dependency on the chosen prior decreases. Therefore, the ideal number of iterations should be a good compromise between both effects. However, the evaluation of the dependency on the prior is an ill-defined problem, which hinders a fine-tuned optimisation. Preliminary studies in the WZ group have shown that the convergence of BIU to stable values depends on the chosen prior. Convergence may be achieved after only one iteration if the inputs for the unfolding matrix are chosen as the prior and the measured distribution. Tests unfolding the transverse mass of the WZ system, altering the prior in a continuous way between the predicted particle level distribution and a flat prior, show that an increasing number of iterations is needed before convergence is achieved when moving away from the particle level distribution. About eight iterations are needed to arrive at stable values in the case of the flat prior. Hence, increasing the number of iterations decreases the dependence on the prior, but at the cost of increasing uncertainties. This dependence links the optimisation of the number of iterations to the expectation one has regarding the underlying particle level distribution. Small variations of the prior, simulated by unfolding the measured distribution of Sherpa using the response matrix from Sherpa and the prior from PowHeg, showed convergence after two to three iterations. As a result, three iterations were chosen for the unfolding, in accordance with the rest of the analysis group.
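A simple way to probe the convergence behaviour described above is to raise the iteration count until successive unfolded results agree within a tolerance. The sketch below reuses the bayes_unfold function from Section 9.2; the tolerance is an illustrative choice, not the criterion used by the analysis group.

    import numpy as np

    def iterations_until_stable(data, response, prior, tol=0.01, max_iter=20):
        """Return the number of iterations after which the unfolded
        result changes by less than tol (relative) between iterations."""
        previous = bayes_unfold(data, response, n_iterations=1, prior=prior)
        for n in range(2, max_iter + 1):
            current = bayes_unfold(data, response, n_iterations=n, prior=prior)
            if np.allclose(current, previous, rtol=tol):
                return n
            previous = current
        return max_iter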

9.4. Results

The variables unfolded in this work are:

• The jet multiplicity Njets, measured in the inclusive fiducial phase space.

• The invariant mass of the dijet system mjj, measured after requiring at least two jets.

• The absolute difference in rapidity of the tagging jets ∆yjj, measured after requiring at least two jets.



Figure 9.2.: Response matrices for the jet multiplicity measurement for Sherpa (left) and PowHeg (right).

The combination of the W±Zjj-EW and W±Z-QCD processes with electrons or muons in the final state on particle level is taken as the signal. The background is composed of the data driven estimates for tt̄, Z+jets, W+jets, Zγ, and W±γ production, the simulated estimates for ZZ, tt̄+W/Z, and VVV, and the contributions to the W±Z process via final states containing τ-leptons, also estimated via simulation.⁵ The set of real data is the full dataset taken in 2012. A thorough description of the samples is given in Section 4.4.

9.4.1. Jet Multiplicity

The jet multiplicity is of particular interest as it enables a comparison between PowHeg and Sherpa in events with increasing numbers of jets. The prediction from Sherpa includes up to three jets in the matrix element, offering LO* accuracy, whereas the prediction from PowHeg provides NLO accuracy for the inclusive WZ process, populating the jet multiplicity bins larger than one via the parton shower. In contrast to the selection criteria for the VBS phase space, the requirement on the transverse momentum of the jets is 25 GeV to be consistent with other ATLAS analyses sensitive to the jet selection [87]. The response matrices obtained from both PowHeg and Sherpa are shown in Figure 9.2. The response matrices show increasingly large off-diagonal elements with rising jet multiplicity. A fairly high asymmetry can be seen, suggesting that a particle level jet failing the acceptance is less likely than, e.g., pile-up producing additional jets or an overestimated jet energy leading to more accepted jets. The purity of the measurement (expressed by the diagonal elements) decreases with rising jet multiplicity, indicating that the correct reconstruction of a high multiplicity event is increasingly challenging. The obtained response matrices are fairly similar. A technicality has to be noted. Two jet collections are available on particle level, dubbed AntiKt4TruthJets and AntiKt4TruthJetsWZ. The difference between the two collections is that AntiKt4TruthJetsWZ does not include the decay products from W and

⁵ Separate samples for each combination of leptonic final states exist for PowHeg and MC@NLO, making this separation trivial. For Sherpa, a particle level algorithm was devised that vetoes events with matrix element level τ-leptons.



Figure 9.3.: Comparison of the unfolding results in a closure test using the inputs from PowHeg for the AntiKt4TruthJets and AntiKt4TruthJetsWZ collections. The difference between the two collections, shown in the lower part, is taken as an uncertainty and propagated to the final unfolding results.

Z boson decays in the jet clustering process whereas AntiKt4TruthJets does. Therefore, the AntiKt4TruthJetsWZ collection is somewhat favourable as it is more aligned with the reconstruction level case. Due to the technical implementation of Sherpa, the association of the decay products to the W and Z bosons is not straightforward. This led to problems in the code responsible for the jet collections on particle level, rendering the AntiKt4TruthJetsWZ collection unusable in Sherpa. Therefore, the AntiKt4TruthJets collection is used in this work, although it was not the initial choice. The impact of the truth jet collection can be seen in Figure 9.3, where the unfolding results of a closure test from PowHeg for both collections are compared. The systematic uncertainty introduced by the choice of truth jet collection is propagated to the final unfolding results for the jet multiplicity by taking the difference between both collections shown in Figure 9.3 as the uncertainty. The efficiency for the jet multiplicity unfolding is shown in Figure 9.4. It increases with the jet multiplicity, leading to smaller corrections in the higher bins. The overall value of the efficiency is rather high, indicating that the extrapolation from the joint phase space to the particle level phase space is not too large. This indicates that the algorithms for the reconstruction of objects are fairly efficient and do not lead to a large loss of statistics. The resulting unfolded distributions are shown in Figure 9.5. As can be seen, Sherpa describes the data more accurately than PowHeg, especially in the higher jet multiplicity bins. The differing behaviour is expected as both generators populate the multiplicity bins differently. Sherpa is able to provide up to three jets from the matrix element calculation. These jets are more likely to have a higher transverse momentum and subsequently a higher probability to satisfy the imposed jet requirements. Additionally, hard emissions may be better modelled by these matrix element calculations, which also incorporate diagrams not considered in PowHeg at the matrix element level (e.g. those for W±Zjj-QCD). In contrast, additional jets in the PowHeg



Figure 9.4.: Efficiency as a function of jet multiplicity for the Sherpa (left) and PowHeg (right) simulated data.

simulation stem purely from the parton shower. These jets tend to have a softer transverse momentum spectrum, leading to a lower number of accepted jets and therefore to a jet multiplicity distribution with a higher emphasis on the lower bins. The normalised distribution using the Sherpa response matrix shows very good agreement between data and the Sherpa prediction, which suggests that higher order QCD corrections may only affect the overall normalisation but not the shape. The composition of the uncertainties afflicting the unfolding process can be found in Figure 9.6. Overall, the systematic and statistical uncertainties are balanced in this phase space. The relative uncertainties rise with increasing jet multiplicity, reaching ≈ 88 % in the ≥ 5 bin. A more detailed breakdown of the estimated unfolding uncertainties for the normalised and unnormalised distributions for both Sherpa and PowHeg can be found in Appendix D. Driving uncertainties are the jet energy scale and jet energy resolution as well as the uncertainties on the background processes.

9.4.2. Invariant Mass of the Dijet System

The invariant mass of the dijet system is measured in the phase space that adds a requirement of at least two jets with a transverse momentum of pT ≥ 30 GeV to the inclusive phase space. Figure 9.7 shows the response matrices for both Sherpa and PowHeg. The response matrices exhibit diagonal elements close to one, suggesting relatively small migrations between the individual bins, and agree well between the two generators. This suggests that detector effects on the kinematics of the jets dominate. A distinctive asymmetry can be observed, leading to a migration from a bin of lower invariant mass on particle level to the next higher bin on reconstruction level. This may hint at additional contributions to the measured jet energies via pile-up. Migration effects spanning larger distances (e.g. a migration from the 0–150 GeV bin on particle level to the 300–500 GeV bin on reconstruction level) may be caused by a mismatch between the selected tagging jets on particle and reconstruction level. The efficiency correction shown in Figure 9.8 is rather stable, suggesting that no bias in the shape of the invariant mass of the dijet system is introduced in the joint phase space. The efficiencies obtained from Sherpa and PowHeg agree well. Figure 9.9 shows both the normalised and unnormalised differential cross section as a



Figure 9.5.: Unfolding results for the jet multiplicity measurement. The upper row shows the unfolded results using the response matrix obtained with Sherpa while the lower row results use the PowHeg response matrix. The left-hand side results are normalised whereas the right-hand side results are non-normalised.

function of the invariant mass of the dijet system. The invariant mass spectrum seen in PowHeg is softer than the one obtained from Sherpa. This again may be explained by the different treatment of higher jet multiplicities. Sherpa genuinely includes matrix element contributions in the two jet bin and thus contributions from VBS and VBS-like processes, which are expected to exhibit larger invariant dijet masses. In contrast, all jets beyond the first jet bin stem from the parton shower in the PowHeg simulation. In addition, PowHeg predicts a lower fiducial cross section, which may be attributed to a smaller efficiency of the jet multiplicity requirement. The overall agreement between simulated and real data is good. However, one has to note the rather high uncertainties stemming both from the lower statistics in this more restricted phase space and from a higher dependence on jet measurement uncertainties. Consequently, the driving uncertainties are the jet energy scale and jet energy resolution uncertainties, as well as the uncertainty due to background statistics and the uncertainty on the background estimates, both data driven and taken from simulation. The relative uncertainty compositions for both generators are shown in Figure 9.10. As in the jet multiplicity measurement, the statistical and systematic uncertainties are relatively balanced against each other. The influence of the background uncertainty increases. A detailed breakdown of the uncertainties can be found in Appendix D.



Figure 9.6.: Relative uncertainty composition for Sherpa (left) and PowHeg (right) as a function of jet multiplicity. The uncertainty compositions in the non-normalised cases are very similar and are tabulated in Appendix D.


Figure 9.7.: Inputs to the unfolding procedure for the measurement of the invariant mass of the dijet system using data simulated with Sherpa (left) and PowHeg (right).

9.4.3. Absolute Difference in Rapidity of the Dijet System

The absolute difference in rapidity of the dijet system is unfolded due to its importance in the W±W±jj analysis, where it exhibits good separation power between W±W±jj-EW and W±W±jj-QCD. In W±Zjj, larger values of the absolute difference in rapidity of the dijet system are expected for W±Zjj-EW than for W±Zjj-QCD. Figure 9.11 shows the response matrices of both generators for this variable, which is measured in the same phase space as the invariant mass of the dijet system. The response matrices exhibit only very small off-diagonal elements and show a small bias towards higher ∆yjj values on reconstruction level compared to particle level. Entries in bins farther from the diagonal stem from events where the particle level tagging jets do not match the reconstruction level ones. The efficiency corrections depicted in Figure 9.12 show that both generators exhibit a slight slope towards higher ∆yjj values, suggesting a slight bias introduced by the



Figure 9.8.: Efficiency as a function of the invariant mass of the dijet system for the Sherpa (left) and PowHeg (right) simulated data.


Figure 9.9.: Normalised (left) and non-normalised (right) unfolding results for the invariant mass of the dijet system using the Sherpa inputs. Results obtained from PowHeg are shown for comparison.

reconstruction level selection. The resulting unfolded distributions are shown in Figure 9.13. Here, the overall shape of the distribution is described equally well by Sherpa and PowHeg. Larger discrepancies can be seen in the unnormalised result, where Sherpa shows a favourable description of the real data. The uncertainties are dominated by the background estimation, both data and simulation driven, as well as the uncertainty on the jet energy scale. The overall composition of the statistical and systematic uncertainties is shown in Figure 9.14. A similar composition as in the case of the invariant mass of the dijet system is encountered, with the statistical and systematic uncertainties being of roughly the same size. The overall error increases with rising ∆yjj due to lower statistics in the high ∆yjj bins.



Figure 9.10.: Relative error composition as a function of the invariant mass of the dijet system for Sherpa (left) and PowHeg (right) for the normalised unfolded distribution. The uncertainty compositions in the non-normalised cases are tabulated in Appendix D.


Figure 9.11.: Inputs to the unfolding procedure for the measurement of the absolute difference in rapidity of the dijet system using data simulated with Sherpa (left) and PowHeg (right).


Figure 9.12.: Efficiency as a function of the absolute difference in rapidity of the dijet system for the Sherpa (left) and PowHeg (right) simulated data.



Figure 9.13.: Normalised (left) and non-normalised (right) unfolding results for the absolute difference in rapidity of the dijet system using the Sherpa inputs. Results obtained from PowHeg are shown for comparison.


Figure 9.14.: Relative error composition as a function of the absolute difference in rapidity of the dijet system for Sherpa (left) and PowHeg (right) for the normalised unfolded distribution. The uncertainty compositions in the non-normalised cases are tabulated in Appendix D.

10. Setting Limits on Anomalous Quartic Gauge Couplings

This chapter is dedicated to setting limits on the aQGC parameters α4 and α5, which describe the impact of physics beyond the Standard Model in the framework of the electroweak chiral Lagrangian. It describes the optimisation of the selection criteria defining the aQGC phase space, presents the statistical method used to extract limits on the aQGC parameters, and compares the obtained results to the W±W±jj-EW analysis [18]. The theoretical groundwork was laid in Section 2.4. The fiducial phase spaces on reconstruction level and particle level are documented in Sections 5.2.3 and 5.4.3, respectively.

10.1. Simulation

The general strategy for determining the expected (observed) aQGC limits is to find all points in the α4-α5 plane for which the hypothesised fiducial cross section at the given point is incompatible with the expected (observed) fiducial cross section. However, no analytical expression is available that links the hypothesised fiducial cross section to a given pair of α4 and α5 values. Therefore, samples for a set of aQGC parameter pairs have to be generated to provide the necessary translation between the fiducial cross section and the aQGC parameters. The Whizard generator was chosen for this task as it provides the K-matrix unitarisation necessary to obtain physical predictions (see Section 2.4.6). Version 2.1.1 of the generator was used in the generation process.

In total, 57 samples have been generated, spanning a grid in α4 × α5 with α4, α5 ∈ {−2.0, −1.0, −0.8, −0.4, −0.1, 0, 0.1, 0.4, 0.8, 1.0, 2.0}. The samples are listed in Table 10.1. Each sample was generated with a target sample size of 50000 events. After generation, the samples were subjected to the splitting procedure described in Section 4.4.1 to separate the tZj contributions. This splitting procedure is not entirely clean. A certain dependence of the tZj prediction on aQGCs remains, which leads to a loss of sensitivity as not all contributions from the aQGCs are included in the signal definition. The effect was studied and the result is summarised in Figure 10.1.¹ The ratio of the fiducial cross section on particle level at the point α4 = 1 to the Standard Model value is 1.59 for tZj but 25.13 for W±Zjj-EW. Therefore, the loss of sensitivity to aQGCs is small and tZj is not included in the signal definition. Originally, it was intended to determine the acceptance and efficiencies for each aQGC point using the Whizard samples. The low initial statistics of 50000 events coupled

¹ The shown plots have a reduced number of cells compared to the grid used in the limit setting. There, a grid of 1000 × 1000 cells in the α4-α5 plane is used, which would result in very large PDF files that may prevent printing.


DSID     α4     α5     VBS PS xsec [fb]   aQGC PS xsec [fb]
185631    0.0    0.0   0.398              0.063
185648    0.0    0.1   0.584              0.177
185632    0.0    0.4   1.365              0.646
185633    0.0    0.8   2.631              1.325
185664    0.0    1.0   3.355              1.668
185665    0.0    2.0   7.423              3.276
185647    0.0   -0.1   0.560              0.154
185630    0.0   -0.4   1.354              0.611
185629    0.0   -0.8   2.873              1.323
185663    0.0   -1.0   3.477              1.580
185662    0.0   -2.0   7.566              3.245
185650    0.1    0.0   0.570              0.172
185651    0.1    0.1   0.660              0.270
185649    0.1   -0.1   0.565              0.172
185636    0.4    0.0   1.316              0.630
185637    0.4    0.4   1.832              0.876
185638    0.4    0.8   3.271              1.632
185635    0.4   -0.4   1.468              0.737
185634    0.4   -0.8   2.255              1.024
185641    0.8    0.0   2.284              1.198
185642    0.8    0.4   2.469              1.136
185643    0.8    0.8   3.674              1.729
185640    0.8   -0.4   2.631              1.407
185639    0.8   -0.8   2.991              1.536
185668    1.0    0.0   3.072              1.583
185669    1.0    1.0   4.613              2.168
185670    1.0    2.0   8.371              3.557
185667    1.0   -1.0   3.965              1.989
185666    1.0   -2.0   6.237              2.627
185673    2.0    0.0   5.641              2.582
185674    2.0    1.0   5.870              2.236
185675    2.0    2.0   9.620              3.584
185672    2.0   -1.0   6.293              2.918
185671    2.0   -2.0   8.200              3.623
185645   -0.1    0.0   0.611              0.209
185646   -0.1    0.1   0.612              0.180
185644   -0.1   -0.1   0.742              0.269
185626   -0.4    0.0   1.438              0.710
185627   -0.4    0.4   1.592              0.824
185628   -0.4    0.8   2.211              1.084
185625   -0.4   -0.4   2.115              1.006
185624   -0.4   -0.8   3.310              1.519
185621   -0.8    0.0   2.550              1.291
185622   -0.8    0.4   2.618              1.391
185623   -0.8    0.8   3.359              1.783
185620   -0.8   -0.4   2.766              1.161
185619   -0.8   -0.8   3.815              1.665
185659   -1.0    0.0   3.131              1.536
185660   -1.0    1.0   4.098              2.041
185661   -1.0    2.0   5.991              2.596
185658   -1.0   -1.0   4.808              2.119
185657   -1.0   -2.0   8.848              3.498
185654   -2.0    0.0   5.817              2.378
185655   -2.0    1.0   6.419              3.037
185656   -2.0    2.0   8.562              3.909
185653   -2.0   -1.0   5.911              2.033
185652   -2.0   -2.0   9.785              3.641

Table 10.1.: The list of Whizard samples providing the supporting points in the α4-α5 plane. Given are the dataset IDs (DSID), the aQGC parameters, and the fiducial cross sections on particle level in the VBS phase space and in the optimised aQGC phase space. All samples have been subjected to a splitting procedure filtering out tZj; the stated cross sections only contain the VBS-like part of W±Zjj-EW.

with the relatively strict cuts introduced in the aQGC phase space limit the available simulated data statistics. Therefore, it was decided to use the efficiency obtained from Sherpa and treat it as independent of the aQGC parameters. This assumption was validated by studying the efficiencies of all generated Whizard samples. The efficiencies were found to be independent of the aQGC parameters within statistical uncertainties. In order to account for possible effects introduced by a residual dependence on the aQGC parameters, the envelope of the statistical uncertainties on the efficiency, evaluating to 7 %, was taken as an uncertainty. In addition, all Whizard samples are scaled by the ratio k of the SM predictions from Sherpa and Whizard to harmonise the VBS fiducial cross section measurement and the aQGC limit setting:



Figure 10.1.: Fiducial cross sections on particle level for W±Zjj-EW (left) and tZj (right) in the aQGC phase space, generated with Whizard. Points for which a sample was generated are marked by the corresponding fiducial cross section. A bilinear interpolation is applied between the samples. The number of cells shown in the α4-α5 plane as well as the number of shown supporting points was reduced for viewing purposes.

\[
  k \;=\; \frac{\sigma^{\text{Sherpa, part}}_{W^\pm Zjj\text{-EW, aQGC-fid}}}{\sigma^{\text{Whizard, part}}_{W^\pm Zjj\text{-EW, aQGC-fid}}} \;=\; \frac{0.084\ \text{fb}}{0.064\ \text{fb}} \;=\; 1.31. \tag{10.1}
\]

Overall, only the relative change in cross section depicted in Figure 10.2 is taken from Whizard. The low statistics of the Whizard sample at the Standard Model point introduce a non-negligible uncertainty of 4 % on the cross section ratios, which is taken into account in the limit setting procedure. Figure 10.2 shows the grid of fiducial cross sections on particle level used in the limit setting, with a reduced resolution.
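The translation from an arbitrary (α4, α5) point to a hypothesised fiducial cross section is a bilinear interpolation between the generated supporting points. The sketch below applies it to a 3 × 3 subset of the aQGC phase space cross sections of Table 10.1, scaled by k = 1.31; the full analysis uses all 57 samples and a 1000 × 1000 cell grid.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # 3x3 subset of the aQGC phase space cross sections of Table 10.1
    # [fb], scaled by k = 1.31 (Equation (10.1)); rows index alpha4 and
    # columns index alpha5.
    alpha4_pts = [-0.4, 0.0, 0.4]
    alpha5_pts = [-0.4, 0.0, 0.4]
    xsec = 1.31 * np.array([[1.006, 0.710, 0.824],
                            [0.611, 0.063, 0.646],
                            [0.737, 0.630, 0.876]])

    # Bilinear interpolation between the supporting points.
    hypothesised_xsec = RegularGridInterpolator((alpha4_pts, alpha5_pts), xsec)
    print(hypothesised_xsec([0.2, -0.3]))  # cross section in fb at this cell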

10.2. Phase Space Optimisation

10.2.1. Finding Variables for Optimisation

The VBS phase space definition is optimised towards measuring the W±Zjj-EW cross section without considering the effects of aQGCs. However, it is a good starting point for applying further cuts that enhance the sensitivity towards aQGCs. Contributions from Feynman diagrams containing the quartic gauge vertex are enhanced in the VBS phase space. Anomalous quartic gauge couplings add to these Feynman diagrams but with possibly different final state particle kinematics. It is therefore assumed that observables describing the WZ system are sensitive to the effects of new physics. In the following, the definition of the signal will change from the Standard Model W±Zjj-EW process to W±Zjj-EW including aQGCs. The signal shown in the plots in this chapter is simulated with Whizard with α4 = 0.4, scaled with the global factor derived in Equation (10.1). A simple approach to optimise the aQGC phase space would have been to tighten the requirements on the invariant mass of the tagging jets. Figure 10.3 illustrates the



Figure 10.2.: The fiducial cross sections on particle level for W±Zjj-EW scaled to the Sherpa prediction (left) and the cross section ratios observed in Whizard (right) in the aQGC phase space. Points for which a sample was generated are marked by the corresponding fiducial cross section. A bilinear interpolation is applied between the samples. The number of cells shown in the α4-α5 plane as well as the number of shown supporting points was reduced for viewing purposes.

evolution of the significance for an aQGC signal. After reaching a plateau of ≈ 1.7 for mjj > 500 GeV, the significance changes only marginally up to its maximum of 1.8 and decreases for cuts above mjj > 1000 GeV. Optimising the cut on the invariant mass of the tagging jets alone does not increase the sensitivity towards aQGCs, and alternative observables have to be found. Similar behaviour was already observed for aQGCs in the W±W±jj-EW analysis, which motivated the investigation of additional variables and led to improved exclusion limits [180]. Preliminary optimisation work done for W±Z is documented in Ref. [181]. Figure 10.4 presents the observables used in the subsequent optimisation whereas Figure 10.5 presents some alternative observables that were also considered. The backgrounds are estimated as is done for the VBS phase space, using data driven techniques for non-prompt backgrounds and simulated data for prompt backgrounds. A global scaling factor of 1.37 is applied to the Whizard samples, which scales the SM prediction of Whizard to that of Sherpa in the VBS phase space on particle level following Equation (10.1). Overall agreement between real and simulated data is good within uncertainties. Of all presented observables, the transverse mass of the WZ system appears to be the best variable for setting limits on aQGCs as it exhibits the highest significance once optimised. The suggested cut of mT,WZ > 500 GeV would result in a phase space with only three observed events in data. This strict criterion reduces the available statistics for the data driven and simulation based background estimates as well as the statistics in the signal samples. It was therefore decided to try to achieve the same significance using observables with a smaller penalty on the event yield. However, with larger samples available this cut may prove promising for future analyses.

The centrality observables ζl and ζV exhibit only small separating power, enhancing the significance only from 1.7 to about 2. This unfavourable result is only reached by



Figure 10.3.: Naive optimisation attempt using the invariant mass of the tagging jets. The lower plots state the likelihood based significance for various selection strategies (lower bound, upper bound, window cut).


Figure 10.4.: Variables used for the aQGC phase space optimisation in the VBS phase space, summed over all channels. The contributions of the W±Zjj-EW SM and aQGC signal with α4 = 0.4 are estimated using Whizard. Shown are the absolute opening angle between the vector bosons |∆φWZ| (left) and the scalar sum of the transverse momenta of the three charged leptons Σ|pT| (right). The lower plots state the likelihood based significance for various selection strategies (window cut, lower bound, upper bound).



Figure 10.5.: Alternative variables for the aQGC phase space optimisation in the VBS phase space, summed over all channels. The contributions of the W±Zjj-EW SM and aQGC signal with α4 = 0.4 are estimated using Whizard. Shown are, from top left to bottom right, the transverse mass of the WZ system mT,WZ, the lepton centrality ζl, the boson centrality ζV, and the absolute separation in pseudo-rapidity between the tagging jets ∆yjj. The lower plots state the likelihood based significance for various selection strategies (window cut, lower bound, upper bound).

introducing very harsh cuts discarding almost all real and simulated data. Therefore, both observables are not considered from here on. Three observables remain: ∆yjj, |∆φWZ|, and Σ|pT|. Though not as discriminating as mT,WZ, they exhibit good separation power while conserving statistics. Of these three observables, |∆φWZ| and Σ|pT| perform best and are chosen for further optimisation.


10.2.2. Optimisation Approach

The goal of the optimisation is to find the combination of cuts on mjj, |∆φWZ|, and Σ|pT| best suited for setting limits on aQGCs. It was decided to include the cut on the invariant mass of the tagging jets in the optimisation as it was not self-evident that the optimisation of the VBS phase space yielded the best conditions for the extraction of aQGCs. Several figures of merit may be used to optimise the selection criteria. The expected limit on α4, the expected limit on α5, and the total area of allowed aQGC parameter combinations in the α4-α5 plane were used in this work to evaluate the performance of each given set of cuts. Optimisation was done by testing all combinations of cuts on the three observables in a grid search. The grid is defined via the ranges and steps of the variables, which are:

• [300, 1000] GeV in steps of 100 GeV for mjj,

• [0, 500] GeV in steps of 50 GeV for Σ|pT|, and

• [0, 2.4] in steps of 0.2 for |∆φWZ|.

For each set of cuts, the exclusion limit contour for the aQGC parameters is calculated by performing a grid scan in the α4-α5 plane. The plane is divided into 1000000 cells with a size of 0.002 × 0.002, ranging from −1.0 to 1.0 in each parameter. The compatibility of the expected fiducial cross section on particle level for the Standard Model case and the hypothesised aQGC cross section is tested for each cell of the grid. Here, a toy based approach is computationally not feasible and the ∆log-likelihood method is applied. The profile likelihood defined in Equation (8.7) is determined and the cross section interval at the 95 % confidence level is calculated for the hypothesised cross section using the criterion stated in Equation (8.9). The loop structure of this optimisation is sketched below.
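Schematically, the optimisation is an exhaustive scan over the cut grid, keeping the cut set with the best figure of merit. In the sketch below, expected_alpha4_limit is a dummy stand-in for the full profile-likelihood limit evaluation; only the loop structure reflects the procedure described above.

    import itertools
    import numpy as np

    def expected_alpha4_limit(mjj_cut, sum_pt_cut, dphi_cut):
        # Dummy placeholder so the sketch runs end to end; the analysis
        # evaluates the expected 95 % CL limit on alpha4 for each cut
        # combination via the grid scan in the alpha4-alpha5 plane.
        return 1.0

    cut_grid = itertools.product(
        np.arange(300, 1001, 100),   # mjj cuts [GeV]
        np.arange(0, 501, 50),       # sum |pT| cuts [GeV]
        np.arange(0.0, 2.5, 0.2),    # |dphi_WZ| cuts
    )
    best_cuts = min(cut_grid, key=lambda cuts: expected_alpha4_limit(*cuts))
    print("best cuts (mjj, sum|pT|, |dphi_WZ|):", best_cuts)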

A cell in the α4-α5 plane is marked as excluded if the expected fiducial cross section on particle level for the Standard Model case is incompatible with the hypothesised aQGC cross section. For this scan, the minimal allowed cross section is set to the expected Standard Model cross section in the aQGC phase space on particle level, as it is assumed that the Standard Model process exists and that contributions from the aQGCs are positive.

The figure of merit used for the evaluation is the limit on α4 of the aQGC limits in the α4-α5 plane at 95 % confidence level. The results are depicted in Figure E.1 in Appendix E. The set of optimal cuts is:

mjj > 500 GeV
|∆φWZ| > 2
Σ|pT| > 250 GeV

The optimisation study was redone during the writing of this work and its results are summarised in Figure E.2 and Figure E.3 in Appendix E. Interestingly, the set of optimal cuts does not coincide with the ones stated in Ref. [16] but favours a more restricted phase space. The original optimisation of the aQGC phase space differs in

two key points from the one described in this work. The efficiencies at the different aQGC points as well as the overall normalisation were taken from Whizard and not from Sherpa as done here. Furthermore, the original optimisation was done using simulated data for the estimation of the non-prompt backgrounds. This leads to a different working point in the original optimisation. The stated changes were made after the definition of the aQGC phase space had already been frozen for a publication; thus, the optimisation study was not repeated at that time. The set of cuts favoured by the redone optimisation study suggests a quite restricted phase space defined by mjj > 800 GeV, |∆φWZ| > 2.0, and Σ|pT| > 200 GeV. Both simulated and real data statistics are low in this phase space, and the estimate for the non-prompt background becomes negative and is not reliable. Furthermore, neither the uncertainties on the acceptance and efficiency nor theoretical uncertainties are available for this phase space. Therefore, the results of the original optimisation are used in the following. Results for an alternative optimisation are shown in Appendix E.

10.3. Event Yields and Systematic Uncertainties

Applying the optimal set of cuts to real and simulated data yields the results of Table 10.2. A total Standard Model prediction of 4.89 events is obtained whereas 9 events are observed in real data. All processes besides W±Zjj have been significantly reduced and contribute only 0.94 events. The single largest contribution comes from W±Zjj-QCD with 2.8 expected events, followed by W±Zjj-EW with 1.15 expected events. Due to the low yields in both the simulated data and the data-driven background estimate, the measurement is done in the combined channel. A combined fit in multiple channels would suffer from the fact that the non-prompt background is estimated to be negative in the eee and eeµ channels, and the overall low statistics of the simulated data would not predict the actual ratios between the channels in a reliable way.

Table 10.2.: Event yields on detector level with uncertainties (statistical and systematic) after the aQGC selection, scaled to luminosity, for the channels eee, eeµ, µµe, µµµ and their combination. The listed processes are Data, Total Exp, W±Zjj-EW, W±Zjj-QCD, Total Bkg, tZj, VVV, ZZ, tt̄+W/Z, and non-prompt. The format is yield ± statistical uncertainty ± systematic uncertainty; systematic uncertainties have been symmetrised using the maximum value of the up and down variation. (The numerical entries of the table are not recoverable from the extracted text.)


                     eee        eeµ        µµe        µµµ        all
N^part (aQGC PS)     140.38     111.17     111.20     149.11     511.85
N^reco (aQGC PS)     69.65      79.51      79.94      115.87     344.98
ε in %               50 ± 2.3   72 ± 4     72 ± 3.2   78 ± 3.3   67 ± 2.5

Table 10.3.: Unscaled event yields on particle and detector level with pile-up and vertex position corrections applied in the aQGC phase space. The yields were obtained using the W±Zjj-EW sample with the DSID 185396.

Table 10.3 details the efficiencies predicted by the Sherpa sample. The efficiencies are comparable to the ones observed in the VBS phase space and show the expected behaviour of increasing with the number of muons due to the higher efficiency of the muon selection. As can also be seen in the table, the available number of simulated events is rather small, even in the combined channel. It was therefore decided to evaluate the systematic uncertainties in the VBS phase space and assume that the leading systematic uncertainties are not affected by the additional requirements of the aQGC phase space. The leading systematics are the jet energy scale and the theoretical uncertainties. No studies are available for the theoretical uncertainties in the aQGC phase space. However, both applied criteria target the WZ system, and it is assumed that the lepton kinematics are not overly sensitive to QCD effects, which are the main source of the theoretical uncertainties. It is also assumed that the jet energy scale uncertainty is not affected by the cuts on the lepton kinematics. The experimental systematic uncertainties were evaluated in the aQGC phase space as a cross check, and the agreement between the uncertainties observed in the VBS phase space and the aQGC phase space was overall good. Table 10.4 summarises the uncertainties encountered in the aQGC phase space. Statistical uncertainties on the simulated data are determined in the aQGC phase space; all other uncertainties are as observed in the VBS phase space. The driving uncertainties are the theory uncertainties and the jet related uncertainties, followed by the limited statistics of the simulated data.

10.4. Results

10.4.1. Fiducial Cross Section Results

A statistical model is built using the inputs of Table 10.2 and Table 10.4. The evaluation procedure is analogous to the one used in Section 8.3, with the difference that the determined cross section is not allowed to be lower than the expected Standard Model W±Zjj-EW cross section of 0.084 fb determined with Sherpa. Table 10.5 summarises the obtained results in the combined channel. All results were determined via pseudo-experiments. The asymptotic formulae were also evaluated and showed deviations from the results from pseudo-experiments, indicating that the prerequisites for their application are no longer valid in this phase space.


             W±Zjj-EW  W±Zjj-QCD  tZj     VVV     ZZ      tt̄+W/Z  non-prompt
stat         ±5.8      ±10.3      ±12.0   ±65.3   ±43.0   ±12.4    ±137.1
ElEScale     ±0.6      ±0.6       ±0.6    ±0.0    ±3.5    ±0.6     ±0.0
ElESmear     ±0.1      ±0.1       ±0.6    ±0.1    ±0.3    ±0.1     ±0.0
ElID         ±1.2      ±1.1       ±1.1    ±0.8    ±1.6    ±1.1     ±0.0
ElIso        ±0.3      ±0.3       ±0.3    ±0.2    ±0.3    ±0.2     ±0.0
ElReco       ±0.5      ±0.4       ±0.4    ±0.3    ±0.6    ±0.4     ±0.0
ElTrigger    ±0.0      ±0.0       ±0.0    ±0.0    ±0.0    ±0.0     ±0.0
Fakes        ±0.0      ±0.0       ±0.0    ±0.0    ±0.0    ±0.0     ±24.8
JER          ±0.3      ±3.5       ±0.5    ±1.5    ±7.4    ±0.4     ±0.0
JES          ±2.9      ±10.0      ±7.3    ±26.0   ±15.3   ±3.9     ±0.0
MET          ±0.3      ±0.1       ±0.3    ±0.0    ±0.3    ±0.3     ±0.0
MuID         ±0.7      ±0.7       ±0.7    ±0.7    ±0.6    ±0.7     ±0.0
MuPt         ±0.3      ±0.5       ±0.1    ±0.0    ±0.2    ±0.8     ±0.0
MuTrigger    ±0.1      ±0.1       ±0.1    ±0.1    ±0.0    ±0.1     ±0.0
Pileup       ±0.1      ±0.5       ±0.1    ±1.3    ±2.8    ±1.0     ±0.0
Theory       ±12.3     ±17.9      ±34.0   ±10.0   ±22.3   ±30.0    ±0.8
total syst   ±12.5     ±20.8      ±34.8   ±26.2   ±27.5   ±30.3    ±24.8

Table 10.4.: Breakdown of the relative uncertainties in percent for the cross section measurement in the aQGC phase space. The systematic uncertainties are estimated after applying the mjj cut. The quoted statistical uncertainties are evaluated in the aQGC phase space.

σ_fiducial (expected) / fb:             0.08 +0.20 −0.08
σ_fiducial (observed) / fb:             0.39 +0.27 −0.21

expected upper limit (fiducial) / fb:   0.54
observed upper limit (fiducial) / fb:   0.98

Table 10.5.: Fiducial cross section on particle level and upper limits on particle level in the optimised aQGC phase space for the W±Zjj-EW process. Expected limits were derived using the expected signal plus background yield as the observed yield.



Figure 10.6.: Observed and expected fiducial particle level cross section limits at 95 % confidence level in the α4-α5 plane. The expected limit is shown with the 1σ (green) and 2σ (yellow) bands. Superimposed is the expected upper limit for W ±W ±jj-EW taken from Ref. [18].

The observed fiducial cross section on particle level differs by 1.5σ from the theoretical prediction of 0.08 ± 0.01 fb provided by Sherpa. This represents an increased tension with respect to the result observed in the VBS phase space. A possible explanation may be that the tension is caused by a statistical fluctuation and that repeating the experiment would present a different picture. However, it has to be noted that the W±W±jj-EW analysis also observed a similar result. In addition, the tension may be alleviated by the incorporation of higher order calculation effects, which may be sizeable. The most interesting interpretation, however, is that some yet unknown physics is present which contributes to the observed fiducial cross section. In this case, cross sections for the combination of W±Zjj-EW plus other unknown processes above 0.98 fb are excluded at 95 % confidence level.

In order to calculate the upper cross section limits in the α4-α5 plane, the limit setting procedure is repeated with the acceptance and efficiency uncertainties described in Section 10.1 and the theoretical uncertainties on W±Zjj-EW taken into account. Comparing these newly obtained fiducial cross section limits on particle level with the aQGC predictions shown in Figure 10.2 yields the expected and observed limits in Figure 10.6. The figure includes the sigma bands around the expected fiducial cross section limit for W±Zjj-EW. These bands are useful for evaluating the expected sensitivity of the limit setting procedure if the experiment were to be repeated with the same experimental parameters. A white area around the Standard Model point is present as it is explicitly prohibited to exclude the Standard Model point. The tension between the expectation and observation is also visible in the plot as the observed contour lies outside the green

band, which indicates the points where the cross sections deviate by at most 1σ from the expectation. The jagged shape of the contour is caused by the low number of generated samples in the α4-α5 plane.

10.4.2. Limits on Anomalous Quartic Gauge Couplings

Though Figure 10.6 shows the fiducial cross sections on particle level in the aQGC phase space in the α4-α5 plane, these values cannot be interpreted as the actual exclusion limits on the aQGC parameters. In order to derive these limits, a grid scan has to be performed in the α4-α5 plane. At each point it needs to be decided whether the observed or expected fiducial cross section limit is compatible with the assumed cross section at the given point in the plane. The question is therefore not whether the aQGC cross section is excluded given the observed or expected data, but whether the observed or expected data is in agreement with the assumed cross section for W±Zjj-EW consisting of the Standard Model and aQGC contributions.

A grid with a cell size of 0.002 × 0.002 in the α4-α5 plane, ranging from −1.0 to 1.0 in each parameter with a total of 1000000 cells, is defined and each individual cell is tested. Here, a toy based approach is computationally not feasible and the ∆log-likelihood method is applied. For each point, the profile likelihood defined in Equation (8.7) is determined and the confidence interval at 95 % confidence level is calculated by finding the cross sections for which the test statistic exceeds 3.84. For this scan, the minimal allowed cross section is set to the expected Sherpa Standard Model cross section in the aQGC phase space on particle level.
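The per-cell test can be illustrated with a single-bin Poisson counting model. The yields below follow Section 10.3 (9 observed events, 3.74 expected events from processes other than W±Zjj-EW), and the conversion of roughly 1.15/0.084 ≈ 13.7 events per fb is inferred from the SM expectation; the nuisance parameters of the full likelihood of Equation (8.7) are omitted, so this is a sketch rather than the actual statistical model.

    import numpy as np
    from scipy.stats import poisson

    def excluded_at_95cl(sigma_hyp, n_obs=9, bkg=3.74, events_per_fb=13.7):
        """Delta-log-likelihood exclusion test for one alpha4-alpha5
        cell (single-bin sketch without systematic uncertainties)."""
        def nll(sigma):
            return -poisson.logpmf(n_obs, bkg + sigma * events_per_fb)

        # Best-fit cross section, clamped from below at the SM value of
        # 0.084 fb, mirroring the minimal allowed cross section above.
        scan = np.linspace(0.084, 5.0, 2000)
        sigma_hat = scan[np.argmin([nll(s) for s in scan])]

        # Profile-likelihood-ratio test statistic; 3.84 is the 95 %
        # quantile of a chi-square distribution with one degree of freedom.
        q = 2.0 * (nll(sigma_hyp) - nll(sigma_hat))
        return q > 3.84

A cell is then marked as excluded when this test rejects its interpolated cross section.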

The resulting exclusion limits on α4 and α5 are shown in Figure 10.7. The one-dimensional aQGC limits are:

expected: −0.23 < α4 < 0.26, −0.27 < α5 < 0.25
observed: −0.44 < α4 < 0.49, −0.49 < α5 < 0.47

The observed limits are considerably looser than the expected ones. Several causes are possible. One explanation would be that this is merely chance and an upward fluctuation is encountered, which may happen given the low observed event yield in the aQGC phase space. It may also hint at new physics which would enhance the cross section observed in the aQGC phase space. Another reason may be that the Standard Model expectation estimate is too low. In this case all values in the α4-α5 plane would be smaller than they should be and thus the observed contour too large. Recent theory calculations for the WZ process incorporating QCD effects at NNLO have resulted in a ≈ 17 % increase of the total predicted cross section for WZ. This effect may not translate directly to W±Zjj but should also not be ignored. A definitive answer regarding the reasons for the looser observed aQGC limits cannot be given with the currently available data. Repeating the analysis at the higher centre-of-mass energies and luminosities of Run 2 is therefore desirable to obtain further results.

The limits in the α4-α5 plane can also be expressed in terms of the fS,0 and fS,1 parameters of the effective field theory ansatz. The conversion formula, taken from Ref. [28], is:

α4 = (fS,0/Λ⁴) · (v⁴/16)   and   α5 = (fS,1/Λ⁴) · (v⁴/16).   (10.2)


[Figure 10.7: exclusion contours in the α4-α5 plane, with secondary axes showing the equivalent fS,0 and fS,1 values in units of 1000 TeV⁻⁴; √s = 8 TeV, 20.3 fb⁻¹, K-matrix unitarisation, pp → W±Zjj.]

Figure 10.7.: Two-dimensional expected and observed exclusion contours for the aQGC parameters α4 and α5. Observed exclusion contours are obtained for either 68.3 % or 95 % confidence levels whereas only the 95 % confidence level exclusion contour is shown for the expected limit. The limits may also be expressed in the fS,0-fS,1 parameterisation using the conversion prescription in Equation (10.2).

The numerical value of the conversion factor is determined by the vacuum expectation value v = 246 GeV and the assumed scale of new physics Λ = 1 TeV. The conversion rule depends on the contributing quartic vertices and is only derived for non-unitarised samples, though preliminary studies suggest its validity also for unitarised samples. No shape-altering effects are introduced by the conversion rule as α4 and α5 have a direct and equal correspondence to fS,0 and fS,1. Thus the one-dimensional limits in this parameterisation are:

expected: −1005 < fS,0 < 1136, −1180 < fS,1 < 1092 (in TeV⁻⁴ for Λ = 1 TeV)
observed: −1922 < fS,0 < 2141, −2141 < fS,1 < 2053
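As a cross-check, Equation (10.2) can be inverted numerically. The following minimal sketch, using v = 246 GeV and Λ = 1 TeV as stated above, reproduces the quoted intervals up to rounding.

def alpha_to_fs(alpha, v=0.246, lam=1.0):
    """Invert Eq. (10.2): f_S = 16 * alpha * Lambda^4 / v^4, in TeV units,
    i.e. the quoted f_S values in TeV^-4 for Lambda = 1 TeV."""
    return 16.0 * alpha * lam**4 / v**4

print(f"{alpha_to_fs(-0.44):.0f} < fS,0 < {alpha_to_fs(0.49):.0f}")   # -1922 < fS,0 < 2141
print(f"{alpha_to_fs(-0.49):.0f} < fS,1 < {alpha_to_fs(0.47):.0f}")   # -2141 < fS,1 < 2053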

Figure 10.8 compares the expected exclusion contours for W±Zjj-EW in the α4-α5 plane with the W±W±jj-EW contour obtained in Ref. [18]. Here, only one parameterisation is shown as the conversion rule for W±W±jj-EW introduces a rotation between the two parameterisations. This is due to the WWWW vertex, which is the sole quartic gauge vertex contributing to W±W±jj-EW and converts differently from WWZZ. As can be seen, the expected one-dimensional limits found in W±W±jj-EW are smaller than the ones derived in W±Zjj-EW. However, the shapes of the two exclusion contours are quite different, with W±W±jj-EW having an oblong shape which extends into the second and fourth quadrants whereas W±Zjj-EW is much more circular. This difference in shape enables the result of this work to improve the current limits on aQGCs.


[Figure 10.8: exclusion contours in the α4-α5 plane; √s = 8 TeV, 20.3 fb⁻¹, K-matrix unitarisation, pp → W±Zjj.]

Figure 10.8.: Two-dimensional expected and observed exclusion contours for the aQGC parameters α4 and α5. Expected exclusion contours are obtained in W±Zjj-EW and W±W±jj-EW at 95 % confidence level. Observed exclusion contours are derived for W±Zjj-EW for either 68.3 % or 95 % confidence levels.


11. Summary

The goal of this work is to provide a test of the predictions of the electroweak theory regarding the scattering of massive electroweak gauge bosons, a process also called vector boson scattering (VBS). These massive gauge bosons, the force carriers which mediate the weak interaction, interact with each other through triple and quartic gauge couplings predicted by the electroweak theory and through the exchange of Higgs bosons. The study of massive electroweak gauge boson scattering therefore tests both the electroweak theory and the electroweak symmetry breaking which gives rise to the Higgs boson. Theoretical predictions yield sensible results at high energies only if the Higgs boson is the one described by the Standard Model. Only then do the terms introduced by the massive gauge boson self-couplings and the Higgs boson cancel each other. New physics beyond the Standard Model is needed to ensure that calculations return physically sensible results if the Higgs boson is not the one predicted by the Standard Model. The possible effects of such new physics are evaluated in this work in the framework of the electroweak chiral Lagrangian, an effective field theory which describes new physics in a model-independent way through the introduction of new gauge couplings whose influence is governed by strength parameters. Limiting the size of these parameters translates to limiting the possible effects of new physics beyond the Standard Model.

Proton-proton collision data with a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the LHC during the year 2012 is used to achieve the stated goals, with the protons serving as sources for the massive gauge bosons. However, massive electroweak gauge boson scattering is a rare process and a robust event selection strategy has to be employed to enrich the selected sample with collision events that may stem from the process of interest.

In this work the massive electroweak gauge boson scattering process W±Z → W±Z, which has not been measured up to now, is examined. The study of W±Z → W±Z is complementary to the measurement of W±W± → W±W±, in which first evidence of electroweak gauge boson scattering was found. This process was chosen due to its relatively large abundance and the clean experimental signature in case both bosons decay to leptons. The experimental signature for W±Z → W±Z scattering is lllνjj, with l denoting the charged leptons, ν the neutrino, and j the so-called "tagging jets". For massive electroweak gauge boson scattering processes these jets exhibit unique properties, namely a large spatial separation between the two jets and a high invariant mass of the dijet system they define. These properties can be used to enrich the selected sample with VBS events.

The study is embedded in the general effort to examine the W±Z production cross section, thus enabling the use of common methods and cross checks and providing a solid basis for the subsequent examination of vector boson scattering. Exploiting the properties of the leptons yields a sample consisting to ≈ 80 % of W±Z events, with the rest stemming

from background processes whose contributions are estimated via both simulations and data-driven techniques. Based on this preselection, events with at least two jets are selected and a requirement on the invariant mass of the two jets with the highest transverse momentum is imposed to isolate the signal process. The sample thus obtained is predicted to consist of 20 % W±Zjj-EW, which is the signal process containing both VBS events and purely electroweak processes which cannot be separated from it, 56 % W±Zjj-QCD, the main background, which has the same experimental signature but is not entirely electroweak, and 24 % other processes introduced by misidentification of objects or inefficient object detection. A total of 45 events is selected from the dataset recorded by ATLAS whereas simulations predict a total background of 37.8 events and 7.8 events for the W±Zjj-EW signal. The statistical evaluation of these event yields is done using pseudo-experiment-based frequentist methods and fiducial cross sections with corrections for detector effects are calculated as:

σ_fiducial^expected = 0.6 +0.6/−0.4 fb
σ_fiducial^observed = 1.1 +0.7/−0.6 fb.

The observed cross section is larger than the theory prediction of σ = 0.55 ± 0.07 fb obtained from the Sherpa simulation program. The difference between the two results is 0.9 standard deviations. A worse agreement between the Standard Model prediction and the observed data can thus be expected in 37 % of all reruns of the experiment if the conditions are kept the same. The measurement has an observed significance of 1.9σ, meaning that the background hypothesis that W±Zjj-EW does not exist is excluded with 94 % confidence. In particle physics the common threshold to claim the discovery of a process is 5σ; therefore it cannot be claimed that W±Zjj-EW has been observed. Instead, an observed upper limit on the fiducial cross section of 2.5 fb at 95 % confidence level is established. Figure 11.1 shows predicted and observed cross section results for a multitude of Standard Model processes. As can be seen, the signal process W±Zjj-EW, situated in the lower right corner, is a quite rare process.

Differential distributions for several observables are measured to supply further insight into the nature of electroweak gauge boson scattering. The jet multiplicity sheds light on the overall modelling of jet activity whereas the invariant mass and the absolute separation in rapidity of the tagging jets are two important variables used in separating W±Zjj-EW and W±Zjj-QCD. Distortion effects introduced by the detector during the measurement of the stated variables are removed via the iterative Bayesian unfolding technique, enabling direct comparison to upcoming predictions from theory groups. At the moment the available statistics is not sufficient to perform the unfolding using W±Zjj-EW as the sole signal. Therefore, W±Zjj-EW and W±Zjj-QCD were used together as signal for the unfolding studies. Figure 11.2 shows the results of the unfolding process for the jet multiplicity. The theoretical predictions of the two simulation programs Sherpa and PowHeg providing simulated collision data are compared to the unfolded collision data measured by the ATLAS detector. Authors of the simulation programs may use such comparisons to improve the predictions made by the programs.
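The essence of the unfolding technique can be sketched as follows; this is a minimal illustration of the D'Agostini-style iteration with an invented toy response matrix, not the response or binning used in this analysis.

import numpy as np

def bayes_unfold(reco_counts, response, prior, n_iter=4):
    """Iterative Bayesian (D'Agostini) unfolding. response[r, t] is the
    probability P(reco bin r | truth bin t); each column sums to the
    detection efficiency of its truth bin."""
    truth = prior.astype(float).copy()
    eff = response.sum(axis=0)                     # efficiency per truth bin
    for _ in range(n_iter):
        joint = response * truth                   # P(r | t) * current prior
        p_t_given_r = joint / joint.sum(axis=1, keepdims=True)   # Bayes' theorem
        truth = (reco_counts @ p_t_given_r) / eff  # updated truth estimate
    return truth

# Toy example with three bins and mild migrations (illustrative numbers only):
response = np.array([[0.70, 0.10, 0.00],
                     [0.10, 0.65, 0.10],
                     [0.00, 0.10, 0.70]])
true_spectrum = np.array([100.0, 50.0, 20.0])
reco = response @ true_spectrum
print(bayes_unfold(reco, response, prior=np.ones(3)))   # approaches true_spectrum

The number of iterations acts as a regularisation: few iterations bias the estimate towards the prior, while many iterations amplify statistical fluctuations.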

[Figure 11.1: ATLAS Preliminary summary plot "Standard Model Production Cross Section Measurements, Status: June 2016", comparing measured total and fiducial cross sections at √s = 7, 8 and 13 TeV to theory predictions; the VBS measurements W±W±jj-EWK and WZjj-EWK appear at the lower right.]

Figure 11.1.: Summary of ATLAS results on Standard Model processes. Shown are the results obtained from the collision data recorded by the ATLAS detector at centre-of-mass energies of 7, 8, and 13 TeV. The results are compared to theory predictions. The W±Zjj-EW process studied in this work is shown in the lower right corner. Results are taken from Ref. [182].

Searching for new physics beyond the Standard Model is one of the main goals of the LHC physics programme. This is well motivated as the Standard Model has no explanation for a multitude of questions such as the origin of dark matter. A typical approach to search for new physics is to optimise an analysis towards a proposed theoretical model and check whether its predictions contradict the observed data. Testing each proposed model is laborious. Therefore, a more generic approach is chosen. Effective field theories model the effects of new physics in a generic way by describing the contributions which may arise from it. The description adds a complete basis of new operators, accompanied by strength parameters, introducing anomalous couplings to the Standard Model Lagrangian. These operators describe the low-energy behaviour introduced by new physics, e.g. particles that are too massive to be observed directly. Constraining the strength parameters restricts the possible effects a new theory may introduce. Results for triple gauge couplings are readily available while examining quartic gauge couplings has only become feasible in the LHC era. In the parameterisation of the electroweak chiral Lagrangian, a particular effective field theory, two parameters, α4 and α5, are purely associated with anomalous quartic gauge couplings and constraining them is of high interest. The Standard Model itself is well behaved up to very high energies but effects introduced by the new operators may lead to unphysical results at high energies.
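For concreteness, the two operators conventionally associated with α4 and α5 in the electroweak chiral Lagrangian are often written in the following form, which is standard in the literature; the precise conventions used in this work, e.g. signs and normalisation, may differ:

\mathcal{L}_{4} = \alpha_4 \,\mathrm{tr}\!\left[ V_\mu V_\nu \right]\mathrm{tr}\!\left[ V^\mu V^\nu \right],
\qquad
\mathcal{L}_{5} = \alpha_5 \,\mathrm{tr}\!\left[ V_\mu V^\mu \right]\mathrm{tr}\!\left[ V_\nu V^\nu \right],
\qquad
V_\mu \equiv \left( D_\mu \Sigma \right) \Sigma^\dagger ,

where Σ contains the Goldstone fields of electroweak symmetry breaking. Both operators modify only the quartic gauge vertices, which is why vector boson scattering is the natural place to probe them.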


[Figure 11.2: unfolded differential WZ cross section ∆σfid as a function of the jet multiplicity NJets (bins 0 to 5+); 2012 data (√s = 8 TeV, ∫L dt = 20.3 fb⁻¹) with statistical and total uncertainties, compared to the Sherpa and PowHeg predictions, with a Data/Sim ratio panel.]

Figure 11.2.: Differential distributions for the jet multiplicity, comparing the unfolded collision data to the predictions made by the Sherpa and PowHeg simulation programs.

In this work the K-matrix approach is employed to ensure the physical sensibility of the results. By examining multiple suitable variables, a phase space optimised towards measuring the effects of new physics is found. Limits on α4 and α5 are derived by testing the compatibility of the theoretical predictions for a given pair of α4 and α5 values with the expected and observed event yields using the ∆log-likelihood method. The resulting exclusion limits are shown in Figure 11.3. The one-dimensional exclusion limits are:

expected: −0.23 < α4 < 0.26, −0.27 < α5 < 0.25
observed: −0.44 < α4 < 0.49, −0.49 < α5 < 0.47.

The derived limits are competitive with the ones determined in the W±W±jj analysis due to the different dependencies of the fiducial cross sections on the α4 and α5 parameters in the two channels. A slight tension of the observed cross section with the Standard Model expectation has to be noted. This tension may be attributed to chance, since the observed event yield in the optimised phase space is only 9 events and statistical uncertainties dominate. It may also hint at possible effects of new physics which are yet too subtle to be seen clearly. It is therefore desirable to redo the analysis with a larger dataset at a preferably higher centre-of-mass energy, enhancing the contributions from VBS.

The upcoming Run 2 will enable studies of electroweak gauge boson scattering at the unprecedented centre-of-mass energy of 13 TeV. Studies have shown that in the case of W±Zjj the cross sections of both the contributing electroweak and the strong processes will rise at the same pace, with cross sections at √s = 13 TeV being three to four times larger than at √s = 8 TeV. Given the experience gathered in Run 1, it is to be expected that data will be acquired with high efficiency, promising better statistics in the analyses to come.

[Figure 11.3: exclusion contours in the α4-α5 plane; √s = 8 TeV, 20.3 fb⁻¹, K-matrix unitarisation, pp → W±Zjj.]

Figure 11.3.: Two-dimensional expected and observed exclusion contours for the aQGC parameters α4 and α5. All points in the α4-α5 plane beyond the light blue area are excluded with 95 % confidence by the observed collision data. Expected exclusion contours are shown for W±Zjj-EW and W±W±jj.

At the same time, the techniques used for estimating systematic uncertainties and controlling backgrounds will become more sophisticated, making measurements more precise. Furthermore, selection strategies will become more refined, moving away from simple cut-based analyses to multivariate techniques such as boosted decision trees and neural networks. Also, more sophisticated statistical methods such as shape fits will make better use of the measured data, providing more precise measurements and more stringent limits on the effects of new physics. Reaching this higher precision will also be facilitated by better theory predictions, as higher-order corrections in both the strong and electroweak theory will become available, leading to smaller theoretical uncertainties. Last but not least, the quality of the analysis software used in ATLAS will attain a higher standard, enabling faster development with fewer defects.

It is to be expected that W±W±jj-EW will be the first process for which a discovery may be claimed. The higher centre-of-mass energy and expected luminosities will provide the data for this possible discovery, which will most likely be achieved before the end of Run 2. But W±Zjj-EW can also be expected to be measured with sufficient significance eventually, though this may require the whole integrated luminosity accumulated in Run 2. However, the process should not be seen as a second-best alternative, as it offers the opportunity to measure the polarisation of the scattering massive gauge bosons, a direct test of the electroweak symmetry breaking which is not as easily feasible in W±W±jj-EW. Furthermore, it may be used to search for singly charged resonances as the higher centre-of-mass energy explores a new energy frontier. Therefore, studies of this channel will continue to be important and one should stay tuned for new results.


A. Particle Level Distributions for the WZ Channel

This appendix intends to give a more complete overview of the event topologies present in the W±Zjj-EW (185396) and W±Zjj-QCD (185397) samples generated with Sherpa. In all cases the phase space is the same as the one described in Section 2.3.4.


Figure A.1.: Neutrino properties of the W±Zjj-EW and the W±Zjj-QCD process. The shown distributions are obtained on particle level after the application of the inclusive selection and requiring at least two jets. The properties shown are the transverse momentum pT (left) and the azimuthal angle φ (right) for the neutrino associated with the W boson. The observed behaviour is largely the same for both processes.


Figure A.2.: Lepton properties of the W±Zjj-EW and the W±Zjj-QCD process. The shown distributions are obtained on particle level after the application of the inclusive selection and requiring at least two jets. The properties shown are the pseudo-rapidity η (left), the azimuthal angle φ (middle), and the transverse momentum pT (right) for the charged lepton associated with the W boson (top row) as well as the leading (middle row) and subleading (bottom row) lepton associated with the Z boson. The observed behaviour is largely the same for both processes.


Figure A.3.: Jet properties of the W±Zjj-EW and the W±Zjj-QCD process. The shown distributions are obtained on particle level after the application of the inclusive selection and requiring at least two jets. The properties shown are the transverse momentum pT (left), pseudo-rapidity η (middle), and azimuthal angle φ (right) for the leading (top row) and subleading (bottom row) tagging jet. As can be seen, the tagging jets in the W±Zjj-EW process exhibit harder pT distributions as well as broader η distributions.


Figure A.4.: Boson properties of the W±Zjj-EW and the W±Zjj-QCD process. The shown distributions are obtained on particle level after the application of the inclusive selection and requiring at least two jets. The properties shown are the transverse mass (top row), transverse momentum pT (middle row), and pseudo-rapidity η (bottom row) for the W boson (left), Z boson (middle), and WZ system (right). For the Z boson the transverse mass plot is replaced by a window plot showing the deviation from the expected Z mass.

B. Additional Plots in the VBS Phase Space

Figure B.1, Figure B.2 and Figure B.3 depict the agreement between simulated and real data for a selection of observables in the VBS phase space (see Section 5.2.2). Figure B.1 summarises the modelling of the missing transverse momentum, the invariant mass of the Z boson candidate and the multiplicities of the leptons. The observed jet properties in the VBS phase space are shown in Figure B.2 whereas the lepton properties are summarised in Figure B.3.

Figure B.1.: Event variables in the VBS phase space. The variables shown are, from top left to bottom right: the missing transverse momentum, the invariant mass of the Z boson candidate, the number of electrons in the event, and the number of muons in the event.

Figure B.2.: Jet kinematics in the VBS phase space. Shown are the pseudo-rapidity (left) and transverse momentum (right) for the leading (top row) and subleading (bottom row) tagging jet.

Figure B.3.: Lepton kinematics in the VBS phase space. The variables are, from top left to bottom right: the pseudo-rapidity (left) and transverse momentum (right) of the lepton associated with the W boson (top row), the leading lepton associated with the Z boson (middle row), and the subleading lepton associated with the Z boson (bottom row).

C. Event Display for Event in VBS Phase Space

The event display shown in Figure C.1 was taken from the set of 45 events in 2012 data satisfying the requirements of the VBS phase space. It was recorded in Run 207620 and has the event number 77863808. Properties of the objects found in the event are listed in Table C.1. The invariant mass of the Z boson candidate is mll = 90.6 GeV. The transverse mass of the W boson candidate is found to be mT,W = 77.0 GeV. The invariant mass of the two tagging jets evaluates to mjj = 619.2 GeV. The event display was prepared using the ATLANTIS event viewer [183].

object                          pT [GeV]      η        φ      q
leading µ from Z candidate          82.6   −0.08    −2.95    +1
subleading µ from Z candidate       22.4    0.70     0.64    −1
e from W candidate                  62.2   −0.55     0.87    −1
ET^miss                            150.3      -      0.05     -
leading tagging jet                152.0   −1.39     1.39     -
subleading tagging jet             257.2    0.66    −2.22     -

Table C.1.: Kinematic properties of the objects found in event 77863808 from run 207620.
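The quoted masses can be recomputed directly from the kinematic properties in Table C.1. The following minimal sketch treats all objects as massless, which is why the dijet mass comes out slightly below the quoted 619.2 GeV (the jet masses are neglected).

import numpy as np

def mass_pair(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two (approximately) massless objects."""
    return np.sqrt(2.0 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

def transverse_mass(pt_l, phi_l, met, phi_met):
    """Transverse mass of the lepton + missing-momentum system."""
    return np.sqrt(2.0 * pt_l * met * (1.0 - np.cos(phi_l - phi_met)))

print(mass_pair(82.6, -0.08, -2.95, 22.4, 0.70, 0.64))    # m_ll  ~ 90.6 GeV
print(transverse_mass(62.2, 0.87, 150.3, 0.05))           # m_T,W ~ 77.0 GeV
print(mass_pair(152.0, -1.39, 1.39, 257.2, 0.66, -2.22))  # m_jj  ~ 615 GeV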

Figure C.1.: Event display for event 77863808 from run 207620. Shown is the ATLAS detector consisting of the inner detector (grey), electromagnetic calorimeter (green), hadronic calorimeter (salmon), and muon detector (blue). Only tracks with pT > 10 GeV are included in the representation. Energy deposits in the calorimeters are shown in yellow. Muon detector hits from MDT, CSC, and TGC chambers have been omitted for easier viewing. Reconstructed electrons (muons) are shown in green (red). Jets from the AntiKt4LCTopo collection are represented by orange cones. The missing transverse momentum vector is depicted as a yellow arrow. Trigger regions of interest in the colours of the causing objects are shown as bars outside the detector. The event is categorised as a µµe event with the muon-antimuon pair forming the Z boson candidate. The electron stems presumably from the W boson.

D. Additional Information on Unfolding Results

Njet                          0       1       2       3       4      ≥5
σ_W±Z^fid.PS              19.73    9.02    3.71    1.35    0.59    0.15
Total Relative Uncertainties [%]
Statistics                 3.76    6.29   10.00   16.30   23.60   61.40
All systematics            7.70    8.83   12.84   18.75   27.05   70.39
Luminosity                 3.05    3.21    3.42    3.59    3.93    5.68
Total                      8.56   10.84   16.28   24.85   35.90   93.41
Unfolding                  0.02    0.13    0.09    3.77   13.26   12.05
MC Stat                    0.48    0.76    1.16    2.05    3.47    5.74
Bkg. Stat.                 0.34    0.55    0.91    1.36    1.64    5.70
τ Bkg.                     0.23    0.27    0.32    0.34    0.28    0.38
ZZ Bkg.                    0.45    0.75    0.69    0.52    0.28    0.33
Matrix Method              5.67    7.44    8.30    8.68    8.18   11.72
Other Bkg.                 0.00    0.43    3.02    5.45   10.20   28.60
Pile Up                    0.30    0.09    0.05    0.22    0.26    0.57
e - Energy Scale           0.46    0.36    0.62    0.42    0.72    0.33
e - Energy Smearing        0.06    0.09    0.24    0.51    0.40    0.35
e - Id. Efficiency         1.15    1.29    1.39    1.51    1.61    2.09
e - Rec. Efficiency        0.41    0.49    0.54    0.58    0.63    0.81
e - Iso. Efficiency        0.29    0.31    0.32    0.34    0.35    0.43
µ - pT Scale               0.00    0.00    0.13    0.22    0.32    0.20
µ - pT Smearing MS         0.07    0.17    0.07    0.20    0.32    0.67
µ - pT Smearing ID         0.05    0.09    0.09    0.38    0.27    1.00
µ - Rec. Efficiency        0.76    0.78    0.86    0.90    1.01    1.54
jet - Energy Scale (tot)   3.87    2.42    7.54   12.30   13.29   53.36
jet - Res. Smearing        2.93    3.34    3.64    6.18    9.06    8.90
ET^miss - SoftTerm Scale   0.05    0.09    0.12    0.32    0.14    0.12
ET^miss - SoftTerm Res.    0.08    0.11    0.24    0.33    0.54    0.39
Trigger (e and µ)          0.14    0.14    0.14    0.15    0.15    0.28

Table D.1.: Differential cross section of W±Zjj-EW + W±Z-QCD as a function of jet multiplicity using the Sherpa simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the inclusive phase space.
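The quadratic-sum prescription can be checked directly, e.g. on the Njet = 0 column of Table D.1:

import math

stat, syst = 3.76, 7.70                    # relative uncertainties in %
print(f"{math.hypot(stat, syst):.2f} %")   # -> 8.57 %, matching the quoted total of 8.56 % up to rounding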

Njet                          0       1       2       3       4      ≥5
1/σ_tot · σ_W±Z^fid.PS     0.57    0.26    0.11    0.04    0.02    0.00
Total Relative Uncertainties [%]
Statistics                 2.68    5.68    9.71   16.10   23.40   61.30
All systematics            5.26    4.30    9.10   15.95   25.06   68.94
Luminosity                 0.13    0.00    0.24    0.41    0.75    0.50
Total                      5.90    7.12   13.31   22.66   34.29   92.25
Unfolding                  0.02    0.14    0.09    3.77   13.27   12.04
MC Stat                    0.30    0.63    1.10    1.97    3.47    5.68
Bkg. Stat.                 0.25    0.39    0.78    1.28    1.60    5.66
τ Bkg.                     0.00    0.00    0.06    0.08    0.00    0.12
ZZ Bkg.                    0.10    0.19    0.13    0.00    0.27    0.22
Matrix Method              0.97    0.90    1.79    2.21    1.74    5.37
Other Bkg.                 0.94    0.54    2.07    4.52    9.35   27.90
Pile Up                    0.11    0.10    0.24    0.00    0.45    0.76
e - Energy Scale           0.09    0.23    0.21    0.23    0.44    0.26
e - Energy Smearing        0.00    0.00    0.16    0.58    0.33    0.28
e - Id. Efficiency         0.09    0.05    0.15    0.27    0.38    0.86
e - Rec. Efficiency        0.00    0.00    0.08    0.12    0.17    0.35
e - Iso. Efficiency        0.00    0.00    0.00    0.00    0.00    0.12
µ - pT Scale               0.00    0.09    0.09    0.18    0.28    0.15
µ - pT Smearing MS         0.07    0.16    0.06    0.21    0.32    0.68
µ - pT Smearing ID         0.00    0.06    0.13    0.41    0.30    0.96
µ - Rec. Efficiency        0.00    0.00    0.06    0.11    0.22    0.75
jet - Energy Scale (tot)   3.99    2.26    7.44   12.18   13.18   53.30
jet - Res. Smearing        2.96    3.32    3.61    6.16    9.04    8.88
ET^miss - SoftTerm Scale   0.00    0.12    0.09    0.29    0.18    0.15
ET^miss - SoftTerm Res.    0.00    0.00    0.13    0.22    0.64    0.50
Trigger (e and µ)          0.00    0.00    0.00    0.00    0.00    0.12

Table D.2.: Normalised differential cross section of W±Zjj-EW + W±Z-QCD as a function of jet multiplicity using the Sherpa simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the inclusive phase space.

Njet                          0       1       2       3       4      ≥5
σ_W±Z^fid.PS              19.76    9.00    3.82    1.41    0.65    0.19
Total Relative Uncertainties [%]
Statistics                 3.80    6.34    9.94   15.90   23.10   54.10
All systematics            5.31    4.99   10.51   14.72   24.11   54.23
Luminosity                 3.07    3.24    3.42    3.56    3.81    5.42
Total                      6.53    8.07   14.47   21.67   33.39   76.60
Unfolding                  0.67    1.29    2.45    0.18   10.43    9.69
MC Stat                    0.17    0.26    0.43    0.83    1.46    2.37
Bkg. Stat.                 0.32    0.58    0.84    1.18    1.38    4.77
τ Bkg.                     0.14    0.18    0.16    0.12    0.08    0.14
ZZ Bkg.                    0.46    0.75    0.70    0.53    0.28    0.33
Matrix Method              1.90    3.31    3.63    3.95    2.45    6.32
Other Bkg.                 0.00    0.40    2.91    5.35    9.26   26.00
Pile Up                    0.18    0.06    0.09    0.06    0.50    0.62
e - Energy Scale           0.32    0.33    0.47    0.51    0.22    0.53
e - Energy Smearing        0.05    0.14    0.22    0.23    0.14    0.46
e - Id. Efficiency         1.12    1.29    1.40    1.52    1.54    2.20
e - Rec. Efficiency        0.40    0.48    0.54    0.60    0.61    0.88
e - Iso. Efficiency        0.29    0.31    0.32    0.34    0.33    0.45
µ - pT Scale               0.00    0.08    0.10    0.00    0.10    0.35
µ - pT Smearing MS         0.00    0.16    0.13    0.14    0.35    0.19
µ - pT Smearing ID         0.00    0.10    0.05    0.00    0.10    0.30
µ - Rec. Efficiency        0.76    0.79    0.85    0.88    0.97    1.43
jet - Energy Scale (tot)   3.75    1.82    7.23    9.93   14.57   35.32
jet - Res. Smearing        2.67    1.98    4.01    6.09    8.52   12.20
ET^miss - SoftTerm Scale   0.00    0.06    0.00    0.08    0.07    0.08
ET^miss - SoftTerm Res.    0.00    0.06    0.09    0.19    0.24    0.39
Trigger (e and µ)          0.14    0.14    0.13    0.14    0.14    0.22

Table D.3.: Differential cross section of W±Z-QCD as a function of jet multiplicity using the PowHeg simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the inclusive phase space.

Njet                          0       1       2       3       4      ≥5
1/σ_tot · σ_W±Z^fid.PS     0.57    0.26    0.11    0.04    0.02    0.01
Total Relative Uncertainties [%]
Statistics                 2.70    5.73    9.58   15.70   22.80   54.00
All systematics            4.81    3.44    9.17   13.49   23.14   53.30
Luminosity                 0.12    0.00    0.22    0.36    0.61    2.23
Total                      5.52    6.68   13.26   20.70   32.49   75.88
Unfolding                  0.38    1.58    2.15    0.47   10.10    9.96
MC Stat                    0.11    0.22    0.41    0.80    1.46    2.36
Bkg. Stat.                 0.24    0.41    0.72    1.12    1.35    4.74
τ Bkg.                     0.00    0.00    0.00    0.00    0.06    0.00
ZZ Bkg.                    0.10    0.19    0.14    0.00    0.27    0.23
Matrix Method              0.69    0.76    1.13    1.49    0.43    3.83
Other Bkg.                 0.94    0.57    1.96    4.42    8.37   25.30
Pile Up                    0.08    0.00    0.18    0.16    0.60    0.71
e - Energy Scale           0.00    0.00    0.12    0.32    0.31    0.39
e - Energy Smearing        0.00    0.05    0.13    0.31    0.23    0.37
e - Id. Efficiency         0.10    0.06    0.18    0.29    0.31    0.98
e - Rec. Efficiency        0.05    0.00    0.08    0.14    0.16    0.43
e - Iso. Efficiency        0.00    0.00    0.00    0.00    0.00    0.14
µ - pT Scale               0.00    0.00    0.00    0.07    0.00    0.40
µ - pT Smearing MS         0.06    0.13    0.09    0.10    0.39    0.23
µ - pT Smearing ID         0.00    0.09    0.06    0.00    0.09    0.29
µ - Rec. Efficiency        0.00    0.00    0.05    0.08    0.17    0.64
jet - Energy Scale (tot)   3.74    1.83    7.24    9.94   14.57   35.28
jet - Res. Smearing        2.57    2.07    4.10    6.17    8.61   12.30
ET^miss - SoftTerm Scale   0.00    0.07    0.00    0.07    0.08    0.09
ET^miss - SoftTerm Res.    0.00    0.00    0.00    0.15    0.29    0.44
Trigger (e and µ)          0.00    0.00    0.00    0.00    0.00    0.07

Table D.4.: Normalised differential cross sections of W±Z-QCD as a function of jet multiplicity using the PowHeg simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the inclusive phase space.

Mjj [GeV]                 0-150  150-300  300-500  500-800   800-∞
σ_W±Z^fid.PS               2.14     1.73     0.44     0.34    0.30
Total Relative Uncertainties [%]
Statistics                12.10    13.10    30.70    30.30   28.70
All systematics           12.61    13.87    21.09    15.22   18.56
Luminosity                 3.50     3.48     3.98     3.63    3.14
Total                     17.48    19.08    37.24    33.91   34.18
Unfolding                  0.36     1.21     2.20     1.50    0.16
MC Stat                    1.53     1.66     3.11     3.57    3.90
Bkg. Stat.                 1.34     1.43     3.46     2.83    1.94
τ Bkg.                     0.58     0.40     0.91     0.62    0.49
ZZ Bkg.                    1.95     1.48     1.77     1.95    0.40
Matrix Method              8.53     7.52     9.89     6.56    8.70
Other Bkg.                 3.97     4.84     9.29     5.71    2.46
Pile Up                    0.47     0.19     0.07     0.25    0.13
e - Energy Scale           0.58     0.52     0.36     0.70    1.11
e - Energy Smearing        0.25     0.22     0.55     0.08    0.42
e - Id. Efficiency         1.42     1.49     1.57     1.49    1.40
e - Rec. Efficiency        0.54     0.58     0.61     0.56    0.54
e - Iso. Efficiency        0.33     0.34     0.35     0.33    0.30
µ - pT Scale               0.16     0.22     0.09     0.05    0.21
µ - pT Smearing MS         0.13     0.34     0.00     0.42    0.17
µ - pT Smearing ID         0.18     0.11     0.23     0.15    0.38
µ - Rec. Efficiency        0.87     0.86     1.03     0.91    0.77
jet - Energy Scale (tot)   5.45     8.14    10.43     7.00   15.10
jet - Res. Smearing        2.61     2.38     3.84     5.54    0.67
ET^miss - SoftTerm Scale   0.27     0.08     0.00     0.00    0.08
ET^miss - SoftTerm Res.    0.19     0.06     0.23     0.05    0.16
Trigger (e and µ)          0.14     0.15     0.18     0.14    0.11

Table D.5.: Differential cross section of W±Zjj-EW + W±Z-QCD as a function of the invariant dijet mass using the Sherpa simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

Mjj [GeV]                 0-150  150-300  300-500  500-800   800-∞
1/σ_tot · σ_W±Z^fid.PS     0.43     0.35     0.09     0.07    0.06
Total Relative Uncertainties [%]
Statistics                 9.49    11.30    29.40    29.60   28.20
All systematics            3.38     2.86    10.11     7.64   11.95
Luminosity                 0.00     0.00     0.45     0.11    0.38
Total                     10.07    11.66    31.09    30.57   30.63
Unfolding                  0.35     1.22     2.19     1.51    0.15
MC Stat                    1.11     1.31     3.04     3.48    3.69
Bkg. Stat.                 0.96     1.07     3.13     2.72    2.00
τ Bkg.                     0.00     0.14     0.36     0.07    0.05
ZZ Bkg.                    0.28     0.20     0.10     0.27    1.29
Matrix Method              0.37     0.70     1.89     1.78    0.59
Other Bkg.                 0.84     0.06     4.73     0.97    2.43
Pile Up                    0.36     0.29     0.17     0.36    0.24
e - Energy Scale           0.25     0.16     0.28     0.44    0.92
e - Energy Smearing        0.08     0.05     0.72     0.08    0.25
e - Id. Efficiency         0.00     0.00     0.10     0.00    0.06
e - Rec. Efficiency        0.00     0.00     0.00     0.00    0.00
e - Iso. Efficiency        0.00     0.00     0.00     0.00    0.00
µ - pT Scale               0.00     0.08     0.00     0.19    0.35
µ - pT Smearing MS         0.17     0.30     0.05     0.46    0.13
µ - pT Smearing ID         0.08     0.21     0.13     0.06    0.29
µ - Rec. Efficiency        0.00     0.00     0.15     0.00    0.10
jet - Energy Scale (tot)   2.50     1.31     4.22     3.79    9.64
jet - Res. Smearing        0.00     0.27     1.23     2.98    3.41
ET^miss - SoftTerm Scale   0.17     0.18     0.05     0.05    0.00
ET^miss - SoftTerm Res.    0.06     0.06     0.09     0.19    0.00
Trigger (e and µ)          0.00     0.00     0.00     0.00    0.00

Table D.6.: Normalised differential cross section of W±Zjj-EW + W±Z-QCD as a function of the invariant dijet mass using the Sherpa simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

Mjj [GeV]                 0-150  150-300  300-500  500-800   800-∞
σ_W±Z^fid.PS               2.12     1.77     0.46     0.34    0.30
Total Relative Uncertainties [%]
Statistics                12.30    13.00    29.30    30.50   28.70
All systematics           11.91    11.52    20.22    13.38   17.72
Luminosity                 3.49     3.47     3.92     3.62    3.08
Total                     17.12    17.37    35.60    33.31   33.73
Unfolding                  1.68     2.44     2.17     4.20    0.11
MC Stat                    0.62     0.69     1.38     1.89    2.66
Bkg. Stat.                 1.37     1.38     3.21     2.66    1.67
τ Bkg.                     0.15     0.11     0.20     0.12    0.06
ZZ Bkg.                    0.61     0.46     0.55     0.62    0.12
Matrix Method              3.54     3.36     4.96     1.34    2.30
Other Bkg.                 3.96     4.78     9.06     5.84    2.39
Pile Up                    0.07     0.23     0.21     0.07    0.48
e - Energy Scale           0.42     0.34     0.50     0.49    0.38
e - Energy Smearing        0.14     0.12     0.10     0.33    0.12
e - Id. Efficiency         1.42     1.46     1.61     1.55    1.28
e - Rec. Efficiency        0.55     0.56     0.63     0.60    0.52
e - Iso. Efficiency        0.33     0.32     0.36     0.33    0.26
µ - pT Scale               0.09     0.00     0.11     0.10    0.00
µ - pT Smearing MS         0.05     0.10     0.22     0.14    0.00
µ - pT Smearing ID         0.06     0.07     0.10     0.19    0.37
µ - Rec. Efficiency        0.87     0.86     0.97     0.90    0.80
jet - Energy Scale (tot)   7.54     7.48    12.03     7.46   15.39
jet - Res. Smearing        5.47     2.36     6.57     3.61    6.66
ET^miss - SoftTerm Scale   0.06     0.00     0.07     0.20    0.10
ET^miss - SoftTerm Res.    0.06     0.11     0.12     0.26    0.45
Trigger (e and µ)          0.14     0.14     0.15     0.13    0.11

Table D.7.: Differential cross section of W±Z-QCD as a function of the invariant dijet mass using the PowHeg simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

Mjj [GeV]                 0-150  150-300  300-500  500-800   800-∞
1/σ_tot · σ_W±Z^fid.PS     0.43     0.36     0.09     0.07    0.06
Total Relative Uncertainties [%]
Statistics                 9.65    11.20    28.20    29.90   28.30
All systematics            2.97     2.93     9.99     6.21   11.29
Luminosity                 0.00     0.00     0.41     0.11    0.42
Total                     10.10    11.58    29.92    30.54   30.47
Unfolding                  0.16     0.91     3.63     2.64    1.39
MC Stat                    0.48     0.56     1.35     1.83    2.52
Bkg. Stat.                 0.95     1.04     2.92     2.56    1.75
τ Bkg.                     0.00     0.00     0.06     0.00    0.06
ZZ Bkg.                    0.08     0.06     0.00     0.09    0.40
Matrix Method              0.16     0.13     1.73     2.11    1.24
Other Bkg.                 0.84     0.00     4.51     1.13    2.48
Pile Up                    0.18     0.13     0.11     0.00    0.38
e - Energy Scale           0.05     0.07     0.35     0.35    0.36
e - Energy Smearing        0.00     0.00     0.00     0.19    0.25
e - Id. Efficiency         0.00     0.00     0.16     0.10    0.16
e - Rec. Efficiency        0.00     0.00     0.06     0.00    0.00
e - Iso. Efficiency        0.00     0.00     0.00     0.00    0.06
µ - pT Scale               0.00     0.00     0.05     0.15    0.00
µ - pT Smearing MS         0.00     0.00     0.15     0.21    0.06
µ - pT Smearing ID         0.10     0.00     0.05     0.14    0.32
µ - Rec. Efficiency        0.00     0.00     0.09     0.00    0.07
jet - Energy Scale (tot)   2.02     0.80     4.32     2.65    9.62
jet - Res. Smearing        1.10     2.14     2.25     0.83    2.35
ET^miss - SoftTerm Scale   0.00     0.06     0.10     0.17    0.07
ET^miss - SoftTerm Res.    0.07     0.10     0.13     0.27    0.44
Trigger (e and µ)          0.00     0.00     0.00     0.00    0.00

Table D.8.: Normalised differential cross section of W±Z-QCD as a function of the invariant dijet mass using the PowHeg simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

|∆yjj|                      0-1     1-2     2-3     3-4     4-5     5-∞
σ_W±Z^fid.PS               2.27    1.29    0.95    0.29    0.09    0.07
Total Relative Uncertainties [%]
Statistics                11.20   15.60   16.80   33.70   59.80   51.90
All systematics           11.96   15.11   12.69   20.27   32.93   30.16
Luminosity                 3.49    3.60    3.32    3.96    3.91    3.40
Total                     16.39   21.72   21.05   39.33   68.27   60.03
Unfolding                  0.04    0.06    0.12    0.07    0.49    0.18
MC Stat                    1.57    1.86    2.04    3.02    6.24    7.21
Bkg. Stat.                 1.23    1.91    1.94    4.48    8.55    5.18
τ Bkg.                     0.51    0.64    0.38    0.84    0.80    0.57
ZZ Bkg.                    1.54    1.92    1.14    3.08    2.86    1.82
Matrix Method              7.50    9.67    6.74   10.25   11.53    8.68
Other Bkg.                 4.58    5.11    3.68    7.64    7.46    3.42
Pile Up                    0.24    0.14    0.32    1.15    1.54    0.20
e - Energy Scale           0.41    0.73    0.59    1.40    0.18    0.62
e - Energy Smearing        0.13    0.65    0.25    0.32    0.51    0.26
e - Id. Efficiency         1.42    1.51    1.42    1.68    1.42    1.35
e - Rec. Efficiency        0.55    0.58    0.55    0.64    0.53    0.48
e - Iso. Efficiency        0.32    0.34    0.33    0.41    0.33    0.32
µ - pT Scale               0.14    0.21    0.10    0.31    0.84    0.31
µ - pT Smearing MS         0.20    0.35    0.46    0.25    1.21    0.48
µ - pT Smearing ID         0.14    0.20    0.18    0.45    1.29    0.14
µ - Rec. Efficiency        0.88    0.89    0.81    0.97    1.02    0.87
jet - Energy Scale (tot)   5.39    6.90    7.79   10.53   21.94   21.08
jet - Res. Smearing        1.22    3.34    3.04    0.13   12.10   15.80
ET^miss - SoftTerm Scale   0.29    0.20    0.00    0.27    0.08    0.08
ET^miss - SoftTerm Res.    0.12    0.18    0.26    0.42    0.47    0.11
Trigger (e and µ)          0.14    0.15    0.14    0.17    0.19    0.15

Table D.9.: Differential cross section of W±Zjj-EW + W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using the Sherpa simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

|∆yjj|                      0-1     1-2     2-3     3-4     4-5     5-∞
1/σ_tot · σ_W±Z^fid.PS     0.46    0.26    0.19    0.06    0.02    0.01
Total Relative Uncertainties [%]
Statistics                 8.56   13.90   15.60   32.70   59.50   51.70
All systematics            3.00    3.67    4.19   10.43   24.16   23.43
Luminosity                 0.00    0.07    0.20    0.44    0.38    0.11
Total                      9.07   14.38   16.15   34.32   64.22   56.76
Unfolding                  0.04    0.06    0.12    0.07    0.49    0.18
MC Stat                    1.17    1.60    1.88    2.94    6.21    7.16
Bkg. Stat.                 0.95    1.56    1.74    4.30    8.36    5.15
τ Bkg.                     0.00    0.09    0.16    0.29    0.25    0.00
ZZ Bkg.                    0.14    0.24    0.54    1.42    1.20    0.14
Matrix Method              0.75    1.61    1.51    2.25    3.67    0.83
Other Bkg.                 0.19    0.36    1.12    3.02    2.83    1.40
Pile Up                    0.12    0.00    0.21    1.27    1.66    0.08
e - Energy Scale           0.21    0.22    0.36    0.94    0.68    0.14
e - Energy Smearing        0.13    0.38    0.00    0.58    0.24    0.52
e - Id. Efficiency         0.00    0.05    0.00    0.22    0.00    0.10
e - Rec. Efficiency        0.00    0.00    0.00    0.07    0.00    0.08
e - Iso. Efficiency        0.00    0.00    0.00    0.07    0.00    0.00
µ - pT Scale               0.00    0.00    0.06    0.14    0.67    0.48
µ - pT Smearing MS         0.25    0.31    0.41    0.29    1.25    0.52
µ - pT Smearing ID         0.06    0.12    0.27    0.37    1.38    0.22
µ - Rec. Efficiency        0.00    0.00    0.06    0.09    0.13    0.00
jet - Energy Scale (tot)   1.83    1.37    1.32    5.03   16.32   15.85
jet - Res. Smearing        1.27    0.89    0.58    2.67    9.94   13.70
ET^miss - SoftTerm Scale   0.19    0.30    0.11    0.17    0.00    0.00
ET^miss - SoftTerm Res.    0.05    0.00    0.09    0.25    0.64    0.06
Trigger (e and µ)          0.00    0.00    0.00    0.00    0.00    0.00

Table D.10.: Normalised differential cross section of W±Zjj-EW + W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using the Sherpa simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

|∆yjj|                      0-1     1-2     2-3     3-4     4-5     5-∞
σ_W±Z^fid.PS               2.30    1.28    0.97    0.26    0.11    0.07
Total Relative Uncertainties [%]
Statistics                11.20   15.60   16.90   35.30   56.60   53.30
All systematics           10.84   12.68   12.38   19.72   26.77   35.74
Luminosity                 3.48    3.57    3.32    3.99    3.89    3.35
Total                     15.59   20.10   20.95   40.43   62.61   64.17
Unfolding                  2.31    0.89    1.17    0.88    2.37    1.52
MC Stat                    0.69    0.79    0.88    1.37    2.58    3.79
Bkg. Stat.                 1.20    1.92    1.92    4.44    8.02    4.78
τ Bkg.                     0.12    0.13    0.11    0.24    0.23    0.10
ZZ Bkg.                    0.48    0.60    0.35    0.99    0.88    0.56
Matrix Method              2.49    4.55    2.77    5.86    7.15    3.33
Other Bkg.                 4.58    5.08    3.69    7.83    7.35    3.31
Pile Up                    0.06    0.07    0.11    0.54    0.40    0.84
e - Energy Scale           0.29    0.60    0.47    0.52    0.68    0.83
e - Energy Smearing        0.05    0.39    0.00    0.19    0.43    0.44
e - Id. Efficiency         1.43    1.49    1.39    1.62    1.64    1.22
e - Rec. Efficiency        0.56    0.57    0.54    0.60    0.63    0.44
e - Iso. Efficiency        0.32    0.33    0.31    0.37    0.38    0.29
µ - pT Scale               0.09    0.05    0.05    0.00    0.21    0.00
µ - pT Smearing MS         0.14    0.11    0.11    0.00    0.18    0.12
µ - pT Smearing ID         0.07    0.10    0.16    0.14    0.00    0.32
µ - Rec. Efficiency        0.87    0.88    0.82    0.98    0.94    0.91
jet - Energy Scale (tot)   6.60    8.13    8.84   12.24   17.92   28.01
jet - Res. Smearing        3.66    2.94    5.20    5.75    9.44   19.80
ET^miss - SoftTerm Scale   0.09    0.00    0.00    0.09    0.16    0.18
ET^miss - SoftTerm Res.    0.07    0.17    0.00    0.00    0.33    0.39
Trigger (e and µ)          0.13    0.15    0.13    0.16    0.13    0.16

Table D.11.: Differential cross section of W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using the PowHeg simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

|∆yjj|                      0-1     1-2     2-3     3-4     4-5     5-∞
1/σ_tot · σ_W±Z^fid.PS     0.46    0.26    0.19    0.05    0.02    0.01
Total Relative Uncertainties [%]
Statistics                 8.53   14.00   15.70   34.50   56.20   53.20
All systematics            2.73    3.31    3.46   10.04   18.04   29.12
Luminosity                 0.00    0.06    0.18    0.48    0.38    0.15
Total                      8.96   14.39   16.08   35.93   59.02   60.65
Unfolding                  0.75    0.65    0.37    0.66    0.81    3.01
MC Stat                    0.51    0.69    0.81    1.33    2.56    3.76
Bkg. Stat.                 0.92    1.57    1.70    4.22    7.81    4.79
τ Bkg.                     0.00    0.00    0.00    0.11    0.09    0.00
ZZ Bkg.                    0.00    0.07    0.16    0.47    0.35    0.00
Matrix Method              0.90    1.23    0.62    2.57    3.89    0.85
Other Bkg.                 0.18    0.35    1.11    3.23    2.73    1.50
Pile Up                    0.12    0.00    0.05    0.48    0.34    0.77
e - Energy Scale           0.14    0.19    0.15    0.17    0.35    0.61
e - Energy Smearing        0.10    0.23    0.11    0.00    0.27    0.28
e - Id. Efficiency         0.00    0.00    0.05    0.17    0.19    0.23
e - Rec. Efficiency        0.00    0.00    0.00    0.00    0.06    0.12
e - Iso. Efficiency        0.00    0.00    0.00    0.00    0.05    0.00
µ - pT Scale               0.00    0.00    0.00    0.00    0.13    0.08
µ - pT Smearing MS         0.07    0.00    0.18    0.10    0.25    0.05
µ - pT Smearing ID         0.09    0.08    0.13    0.16    0.00    0.30
µ - Rec. Efficiency        0.00    0.00    0.05    0.10    0.06    0.00
jet - Energy Scale (tot)   1.91    1.21    1.18    5.67   11.65   22.51
jet - Res. Smearing        0.60    1.34    1.01    1.58    5.44   16.30
ET^miss - SoftTerm Scale   0.05    0.08    0.00    0.13    0.13    0.21
ET^miss - SoftTerm Res.    0.00    0.11    0.08    0.08    0.39    0.45
Trigger (e and µ)          0.00    0.00    0.00    0.00    0.00    0.00

Table D.12.: Normalised differential cross section of W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using the PowHeg simulated data. All uncertainties are expressed as relative quantities. The total uncertainty is the quadratic sum of the statistical and systematic uncertainty. The unfolded distribution is determined in the phase space obtained after the application of the inclusive selection and requiring Njet ≥ 2.

E. Additional Results on Anomalous Quartic Gauge Couplings

E.1. Optimisation Studies

This appendix presents the results of the original optimisation study used to derive the best phase space for the determination of aQGC exclusion limits as well as the results of a more recent rerun of said study. The obtained results are different as the estimation of the non-prompt background has changed over time. The original study was done using simulated data for the estimation of the non-prompt background whereas the rerun employs the matrix method (see Section 6). This changes both the overall contribution from non-prompt backgrounds and the attributed uncertainties.

Figure E.1 shows the results of the original optimisation. The figure of merit used is the expected limit on α4. A grid scan in mjj, |∆φWZ|, and |Σ pT| is performed using the evaluation methods described in Section 10. Results are shown as two-dimensional plots in |∆φWZ| and |Σ pT| for mjj cuts ranging from 400 GeV to 900 GeV. The global minimum is found at mjj > 500 GeV, |∆φWZ| > 2 and |Σ pT| > 250 GeV with an expected limit of 0.29. The plots exhibit empty entries for grid points where the optimisation procedure failed to produce results due to technical reasons. This is primarily caused by very low statistics rendering the limit-setting procedure unstable.

Figure E.2 summarises the results of the rerun of the optimisation study using the matrix method for the non-prompt background estimate. It is observed that algorithmic instabilities arise in regions with low statistics. Therefore, any point with |Σ pT| > 350 GeV is excluded from the optimisation as the results cannot be trusted. Judging solely from the observed results, the ideal working point would be mjj > 800 GeV, |∆φWZ| > 2.2 and |Σ pT| > 200 GeV. A closer investigation of this point shows that the data-driven estimate for the non-prompt background returns negative event yields, indicating that no events from real data remain and that the phase space is populated only by the simulated data used to subtract prompt sources from the estimate. Therefore, this point has to be omitted.

The ideal working point is found to be mjj > 500 GeV, |∆φWZ| > 2 and |Σ pT| > 250 GeV. A slightly better working point may exist for mjj > 600 GeV, but the improvement is marginal and it was decided to use the least restrictive phase space to ensure that the estimation of systematic uncertainties is as robust as possible. The looser phase space is also preferable as it increases the number of real data events available for the non-prompt background estimate, making it more reliable. At the same time, the use of real data events for the non-prompt background may introduce a bias, as they are likely to be correlated with the data events used for deriving the observed limits. Though the expected limits are used as figure of merit, a dependence on the actually observed data may be introduced. Therefore, the results obtained in

the rerun of the analysis have to be considered with such a potential bias in mind.

It has to be noted that the optimal expected α4 limit obtained in the original optimisation study is larger than that of the rerun. This is caused by the scaling of the Standard Model Whizard prediction to the Sherpa prediction.

Figure E.3 presents the results of the rerun of the optimisation study using the area of the expected exclusion limits in the α4-α5 plane at 95 % confidence level as the figure of merit. The results obtained are compatible with the ones using the limit on α4 as the figure of merit. The area of the exclusion limits is an interesting measure for the optimisation as it includes effects of the α5 parameter.
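The structure of such a cut optimisation can be sketched as follows. The figure-of-merit function below is a purely hypothetical stand-in; in the real study every grid point runs the full limit-setting machinery of Section 10.

import itertools

def expected_alpha4_limit(mjj, dphi, sumpt):
    """Hypothetical stand-in for the real figure of merit, constructed so
    that its minimum lies near the working point quoted in the text."""
    return 0.29 + 1e-6 * (mjj - 500.0)**2 + 0.05 * (dphi - 2.0)**2 + 1e-5 * (sumpt - 250.0)**2

grid = itertools.product(
    [400, 500, 600, 700, 800, 900],   # m_jj cuts in GeV, as in Figure E.1
    [1.6, 1.8, 2.0, 2.2, 2.4],        # |dphi_WZ| cuts (illustrative values)
    [150, 200, 250, 300, 350],        # |sum pT| cuts in GeV (illustrative values)
)
best = min(grid, key=lambda cuts: expected_alpha4_limit(*cuts))
print(best)   # -> (500, 2.0, 250)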

[Figure E.1: six grids of expected α4 limits in the |∆φWZ|-|Σ pT| plane, one panel per mjj cut.]

Figure E.1.: Results for the original optimisation of mjj, |∆φWZ| and |Σ pT|. The grid spanned by the three variables is scanned and the expected upper limit on α4 at 95 % confidence level is used as a figure of merit. The mjj cuts used, in GeV, from upper left to lower right are: 400, 500, 600, 700, 800, 900.

[Figure E.2: six panels of |∆φWZ| versus Σ|pT| (in GeV), one per mjj cut; the bin contents are the expected limits on α4.]

Figure E.2.: Results for the rerun of the optimisation of mjj, |∆φWZ| and Σ|pT|. The grid spanned by the three variables is scanned and the expected upper limit on α4 at 95 % confidence level is used as a figure of merit. The mjj cuts used, in GeV, from upper left to lower right, are: 400, 500, 600, 700, 800, 900.

[Figure E.3: six panels of |∆φWZ| versus Σ|pT| (in GeV), one per mjj cut; the bin contents are the areas of the expected exclusion contours.]

Figure E.3.: Additional results for the optimisation of mjj, |∆φWZ| and Σ|pT|. The grid spanned by the three variables is scanned and the area of the expected exclusion limits in the α4-α5 plane at 95 % confidence level is used as a figure of merit. The mjj cuts used, in GeV, from upper left to lower right, are: 400, 500, 600, 700, 800, 900.

E.2. Results in the alternative aQGC Phase Space

The resulting event yields and uncertainties are summarised in Table E.2 and Table E.1, respectively. The total expected yield for the combined SM processes is 47 % larger, whereas the observed yield remains the same. Uncertainties are estimated using the systematic uncertainties after the mjj cut and the statistical uncertainties after the Σ|pT| cut. The systematic uncertainties are therefore the same as for the nominal aQGC phase space. The statistical uncertainties are significantly lower for the non-prompt background in the alternative aQGC phase space. Table E.3 lists the cross section results in the alternative aQGC phase space. The results are derived in the same way as for the nominal aQGC phase space. Less tension between the expected and observed fiducial cross sections on particle level is observed due to the larger expected event yields.

              W±Zjj-EW  W±Zjj-QCD  tZj     VVV     ZZ      tt̄+W/Z  non-prompt
  stat        ±5.0      ±8.7       ±8.7    ±51.6   ±37.2   ±10.5   ±61.1
  ElEScale    ±0.6      ±0.6       ±0.6    ±0.0    ±3.5    ±0.6    ±0.0
  ElESmear    ±0.1      ±0.1       ±0.6    ±0.1    ±0.3    ±0.1    ±0.0
  ElID        ±1.2      ±1.1       ±1.1    ±0.8    ±1.6    ±1.1    ±0.0
  ElIso       ±0.3      ±0.3       ±0.3    ±0.2    ±0.3    ±0.2    ±0.0
  ElReco      ±0.5      ±0.4       ±0.4    ±0.3    ±0.6    ±0.4    ±0.0
  ElTrigger   ±0.0      ±0.0       ±0.0    ±0.0    ±0.0    ±0.0    ±0.0
  Fakes       ±0.0      ±0.0       ±0.0    ±0.0    ±0.0    ±0.0    ±24.8
  JER         ±0.3      ±3.5       ±0.5    ±1.5    ±7.4    ±0.4    ±0.0
  JES         ±2.9      ±10.0      ±7.3    ±26.    ±15.3   ±3.9    ±0.0
  MET         ±0.3      ±0.1       ±0.3    ±0.0    ±0.3    ±0.3    ±0.0
  MuID        ±0.7      ±0.7       ±0.7    ±0.7    ±0.6    ±0.7    ±0.0
  MuPt        ±0.3      ±0.5       ±0.1    ±0.0    ±0.2    ±0.8    ±0.0
  MuTrigger   ±0.1      ±0.1       ±0.1    ±0.1    ±0.0    ±0.1    ±0.0
  Pileup      ±0.1      ±0.5       ±0.1    ±1.3    ±2.8    ±1.0    ±0.0
  Theory      ±12.3     ±17.9      ±34.0   ±10.0   ±22.3   ±30.0   ±0.8
  total syst  ±12.5     ±20.8      ±34.8   ±26.2   ±27.5   ±30.3   ±24.8

Table E.1.: Breakdown of the uncertainties, given in percent, for the cross section measurement in the alternative aQGC phase space. The systematic uncertainties are estimated after applying the mjj requirement. The quoted statistical uncertainty is evaluated in the alternative aQGC phase space.
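As a consistency check, the "total syst" row of Table E.1 can be reproduced by adding the individual systematic components in quadrature. The sketch below does this for the tZj column (assumption: symmetric quadratic combination); other columns deviate slightly due to the rounding of the quoted percentages.

    # Quadrature sum of the tZj systematic components of Table E.1 (percent).
    import math

    components = [0.6, 0.6, 1.1, 0.3, 0.4, 0.0, 0.0, 0.5, 7.3, 0.3,
                  0.7, 0.1, 0.1, 0.1, 34.0]
    total = math.sqrt(sum(c * c for c in components))
    print(f"total syst = {total:.1f} %")  # 34.8 %, as quoted in the table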


               eee                 eeµ                  µµe                 µµµ                 all
  Data         4                   0                    2                   3                   9
  Total Exp    1.45 ± 0.27 ± 0.23  1.31 ± 0.16 ± 0.16   1.98 ± 0.21 ± 0.37  2.46 ± 0.30 ± 0.30  7.20 ± 0.48 ± 0.92
  W±Zjj-EW     0.31 ± 0.03 ± 0.04  0.35 ± 0.04 ± 0.05   0.37 ± 0.04 ± 0.05  0.49 ± 0.04 ± 0.06  1.52 ± 0.08 ± 0.19
  Total Bkg    1.15 ± 0.26 ± 0.22  0.95 ± 0.15 ± 0.15   1.62 ± 0.21 ± 0.38  1.97 ± 0.29 ± 0.29  5.68 ± 0.47 ± 0.90
  W±Zjj-QCD    0.68 ± 0.14 ± 0.17  0.74 ± 0.15 ± 0.15   1.17 ± 0.19 ± 0.35  1.31 ± 0.20 ± 0.24  3.89 ± 0.34 ± 0.80
  tZj          0.09 ± 0.02 ± 0.03  0.11 ± 0.02 ± 0.04   0.16 ± 0.03 ± 0.06  0.16 ± 0.02 ± 0.07  0.52 ± 0.04 ± 0.18
  VVV          0.00 ± 0.00 ± 0.00  0.00 ± 0.00 ± 0.00   0.00 ± 0.00 ± 0.00  0.00 ± 0.00 ± 0.00  0.00 ± 0.00 ± 0.01
  ZZ           0.09 ± 0.07 ± 0.02  0.03 ± 0.01 ± 0.01   0.11 ± 0.08 ± 0.07  0.13 ± 0.08 ± 0.09  0.36 ± 0.13 ± 0.12
  tt̄+W/Z       0.05 ± 0.02 ± 0.03  0.12 ± 0.02 ± 0.04   0.14 ± 0.03 ± 0.05  0.11 ± 0.02 ± 0.05  0.43 ± 0.05 ± 0.14
  non-prompt   0.24 ± 0.21 ± 0.10  -0.05 ± 0.03 ± 0.02  0.03 ± 0.03 ± 0.01  0.26 ± 0.19 ± 0.04  0.48 ± 0.29 ± 0.10

Table E.2.: Event yields with total uncertainties (statistical and systematic) after the alternative aQGC selection, scaled to luminosity. The format is yield ± statistical uncertainty ± systematic uncertainty. Systematic uncertainties have been symmetrised using the maximum value of the up and down variation.

  expected upper limit σfiducial / fb   0.61
  observed upper limit σfiducial / fb   0.79

Table E.3.: Upper limit results on particle level in the alternative aQGC phase space. The expected upper limit on particle level is represented by the Asimov dataset. The results are obtained using pseudo-experiments.

The dependence of the cross section on the aQGC parameters is depicted in Figure E.4. The cross section ratios in the alternative phase space are very similar to those in the nominal aQGC phase space shown in Figure 10.2. The uncertainties on the acceptance and the stability of the efficiencies in the α4-α5 plane are taken from the nominal aQGC phase space due to the similarities observed. Using these inputs, the limit setting procedure is repeated as in the nominal case. Figure E.5 shows the expected and observed limits on α4 and α5 in the alternative phase space. The observed and expected exclusion limits are tighter and closer to each other than in the nominal case. In addition, the tension with the Standard Model point observed in the nominal results is no longer present. The one-dimensional limits on the aQGC parameters are:

    expected: -0.21 < α4 < 0.24,   -0.25 < α5 < 0.23
    observed: -0.28 < α4 < 0.31,   -0.32 < α5 < 0.31

and may be translated to limits on fS,0 and fS,1:

    expected: -917 < fS,0 < 1049,    -1092 < fS,1 < 1005
    observed: -1223 < fS,0 < 1354,   -1398 < fS,1 < 1354.
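The α and fS intervals quoted above are numerically consistent with a linear mapping fS,i = 16 αi / v⁴ with v = 246 GeV; that this is the conversion convention used here is an inference from the numbers, not a statement taken from the text. A minimal sketch:

    # Translate alpha_4 (alpha_5) limits into f_S,0 (f_S,1) limits in TeV^-4,
    # assuming the linear mapping f_S = 16 * alpha / v^4 inferred from the
    # quoted intervals (v = 246 GeV, the electroweak vacuum expectation value).
    V4 = 0.246 ** 4  # v^4 in TeV^4

    def alpha_to_fs(alpha):
        return 16.0 * alpha / V4

    for name, lo, hi in [("f_S,0 (observed)", -0.28, 0.31),
                         ("f_S,1 (observed)", -0.32, 0.31)]:
        print(f"{name}: {alpha_to_fs(lo):.0f} to {alpha_to_fs(hi):.0f} TeV^-4")
    # prints -1223 to 1354 and -1398 to 1354, matching the observed
    # intervals above within rounding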

[Figure E.4: two panels of fiducial cross sections and cross section ratios at grid points in the α4-α5 plane.]

Figure E.4.: The fiducial cross sections on particle level for W±Zjj-EW scaled to the Sherpa prediction (left) and the cross section ratios observed in Whizard (right) in the alternative aQGC phase space. Points for which a sample was generated are marked by the corresponding fiducial cross section, and a bilinear interpolation is applied between the samples. The shown resolution in the α4-α5 plane was reduced for viewing purposes.
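The bilinear interpolation mentioned in the caption can be illustrated generically. This is a textbook implementation for a single rectangular cell of the α4-α5 grid, not the analysis code, and the corner values below are placeholders.

    # Bilinear interpolation of a quantity f (e.g. a cross-section ratio)
    # inside the grid cell [x0, x1] x [y0, y1] from its four corner values.
    def bilinear(x, y, x0, x1, y0, y1, f00, f10, f01, f11):
        tx = (x - x0) / (x1 - x0)  # fractional position along alpha_4
        ty = (y - y0) / (y1 - y0)  # fractional position along alpha_5
        return (f00 * (1 - tx) * (1 - ty) + f10 * tx * (1 - ty)
                + f01 * (1 - tx) * ty + f11 * tx * ty)

    # placeholder corner values; at the cell centre this returns 1.475,
    # the average of the four corners
    print(bilinear(0.25, 0.25, 0.0, 0.5, 0.0, 0.5, 1.0, 1.5, 1.4, 2.0))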

[Figure E.5: exclusion contours in the α4-α5 plane with secondary axes fS,0 × 1000/TeV⁴ and fS,1 × 1000/TeV⁴; panel text: √s = 8 TeV, 20.3 fb⁻¹, K-matrix unitarization, pp → W±Zjj; legend: obs. 68 % CL, obs. 95 % CL, exp. 95 % CL, Standard Model.]

Figure E.5.: Two-dimensional expected and observed exclusion contours for the aQGC parameters α4 and α5. Observed exclusion contours are obtained for either 68.3 % or 95 % confidence levels, whereas only the 95 % confidence level exclusion contour is shown for the expected limit.

The slightly smaller expected limits are likely caused by the smaller uncertainties on the background estimate. The observed limits are tighter than in the nominal case because the number of observed events remains constant while the predicted fiducial cross sections for the aQGC parameters are higher.

F. Simulated Data Samples

This appendix summarises the samples of simulated data used in this work. Each table details the simulated process, the dataset ID (DSID) which identifies the sample, the generated cross section in pb, the applied k-factor k, the filter efficiency f, and the number of generated events. The DSID may be used to find more information on a sample, either by searching the AMI interface [184] or the SVN area [185] where the generator run cards are stored. There, the concrete process definition as well as information on the phase space in which the events have been generated can be looked up.

The k-factor is a normalisation correction intended to improve the predictions made using the samples by scaling the cross section generated at a lower order to a cross section result at a higher order in perturbation theory. Typically, generators providing events have either leading-order or next-to-leading-order accuracy, whereas dedicated cross section calculations may reach higher orders such as next-to-next-to-leading order. The filter efficiency f encapsulates the efficiency of generator cuts applied during the sample generation process. These generator cuts reduce the number of events in a sample by restricting the generation to a phase space that suits the analysis goals, so that fewer events are discarded in the subsequent analysis. This keeps the samples at a manageable size and provides better statistics in the selected phase space compared to an unfiltered sample of the same size. All samples used in this work have been either NTUP SMWZ or NTUP COMMON D3PDs of the MC12b or MC12c simulation campaigns.

Table F.1 lists the signal samples used in this analysis. The main results use the Sherpa samples 185396 (W±Zjj-EW) and 185397 (W±Zjj-QCD); detailed information on these samples is available in Section 4.4. The PowHeg samples were used for cross checks and as an alternative generator in the unfolding results. The Whizard samples implementing the aQGC contributions are listed in Section 10.1 and are not repeated here. The samples simulating the non-prompt backgrounds, where one lepton stems from non-prompt sources, are listed in Table F.2. These samples are not used in the analysis itself, as the contributions from non-prompt backgrounds are estimated using a data-driven approach (see Section 6). They are, however, used to check the results obtained from the data-driven method. The samples simulating the prompt backgrounds contributing to the analysis are listed in Table F.3.
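For illustration, the expected yield from a sample scales as σ · k · f · L, and each simulated event carries the weight σ · k · f · L / N_generated. The sketch below applies this standard normalisation to the Sherpa W±Zjj-QCD entry of Table F.1, assuming the quoted cross section refers to the phase space before the generator filter:

    # Per-event normalisation weight w = sigma * k * f * L / N_generated,
    # using the Sherpa W+-Zjj-QCD sample (DSID 185397) of Table F.1.
    sigma_pb = 9.7446       # generated cross section in pb
    k        = 1.12         # k-factor (scaling to a higher-order cross section)
    f        = 0.24         # generator filter efficiency
    n_gen    = 1999996      # number of generated events
    lumi_pb  = 20.3e3       # 20.3 fb^-1 expressed in pb^-1

    weight = sigma_pb * k * f * lumi_pb / n_gen
    print(f"per-event weight: {weight:.4f}")
    # ~0.0266: each simulated event represents about 0.027 expected data
    # events before any selection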

  Process                DSID    Generator  σ/pb    k     f     Events
  W±Zjj-EW               185396  Sherpa     0.0821  1.0   1.0   500000
  W±Zjj-QCD              185397  Sherpa     9.7446  1.12  0.24  1999996
  W+Z → e+e− e+νe        185212  PowHeg     0.0785  1.0   1.0   349995
  W+Z → µ+µ− e+νe        185213  PowHeg     0.0524  1.0   1.0   349997
  W+Z → τ+τ− e+νe        185214  PowHeg     0.0523  1.0   1.0   349900
  W+Z → e+e− µ+νµ        185215  PowHeg     0.0523  1.0   1.0   350000
  W+Z → µ+µ− µ+νµ        185216  PowHeg     0.0785  1.0   1.0   349999
  W+Z → τ+τ− µ+νµ        185217  PowHeg     0.0523  1.0   1.0   350000
  W+Z → e+e− τ+ντ        185218  PowHeg     0.0523  1.0   1.0   349999
  W+Z → µ+µ− τ+ντ        185219  PowHeg     0.0523  1.0   1.0   350000
  W+Z → τ+τ− τ+ντ        185220  PowHeg     0.0792  1.0   1.0   350000
  W−Z → e+e− e−ν̄e        185221  PowHeg     0.0467  1.0   1.0   349999
  W−Z → µ+µ− e−ν̄e        185222  PowHeg     0.0296  1.0   1.0   350000
  W−Z → τ+τ− e−ν̄e        185223  PowHeg     0.0296  1.0   1.0   349998
  W−Z → e+e− µ−ν̄µ        185224  PowHeg     0.0296  1.0   1.0   349999
  W−Z → µ+µ− µ−ν̄µ        185225  PowHeg     0.0467  1.0   1.0   349999
  W−Z → τ+τ− µ−ν̄µ        185226  PowHeg     0.0296  1.0   1.0   349998
  W−Z → e+e− τ−ν̄τ        185227  PowHeg     0.0296  1.0   1.0   349999
  W−Z → µ+µ− τ−ν̄τ        185228  PowHeg     0.0296  1.0   1.0   349900
  W−Z → τ+τ− τ−ν̄τ        185229  PowHeg     0.0472  1.0   1.0   350000

Table F.1.: Signal samples used in this work. See Section 4.4 for more information.

  Process            DSID    Generator      σ/pb    k     f    Events
  tt̄                 110001  MC@NLO+Jimmy   21.81   1.22  1.0  9988449
  t → eν (s-chan)    108343  MC@NLO+Jimmy   0.565   1.0   1.0  199899
  t → µν (s-chan)    108344  MC@NLO+Jimmy   0.565   1.0   1.0  199899
  t → τν (s-chan)    108345  MC@NLO+Jimmy   0.565   1.0   1.0  199799
  Wt                 108346  MC@NLO+Jimmy   22.37   1.0   1.0  1999194
  t → eν (t-chan)    117360  AcerMC+Pythia  9.48    1.0   1.0  299899
  t → µν (t-chan)    117361  AcerMC+Pythia  9.48    1.0   1.0  300000
  t → τν (t-chan)    117362  AcerMC+Pythia  9.48    1.0   1.0  293499
  Z → e+e−           147770  Sherpa         1241    1.0   1.0  29993053
  Z → µ+µ−           147771  Sherpa         1241    1.0   1.0  29989732
  Z → τ+τ−           147772  Sherpa         1241    1.0   1.0  14998212
  Zγ → e+e−γ         145161  Sherpa         32.26   1.0   1.0  8844673
  Zγ → µ+µ−γ         145162  Sherpa         32.32   1.0   1.0  9198579
  Zγ → τ+τ−γ         126854  Sherpa         32.33   1.0   1.0  3999409
  Zγ → eeγ VBS       185307  Sherpa         0.342   1.0   1.0  199999
  Zγ → eeγ VBS       185308  Sherpa         0.342   1.0   1.0  198299
  Zγ → eeγ VBS       185309  Sherpa         0.342   1.0   1.0  197998

Table F.2.: Samples used for the cross check of the data-driven background estimate. The listed samples implement all non-prompt backgrounds where one lepton stems from a secondary source.

  Process       DSID    Generator        σ/pb      k     f    Events
  tt̄W           119353  MadGraph+Pythia  0.1041    1.18  1.0  399997
  tt̄Wj          119354  MadGraph+Pythia  0.0932    1.18  1.0  399896
  tt̄Z           119355  MadGraph+Pythia  0.0677    1.34  1.0  399996
  tt̄Zj          119356  MadGraph+Pythia  0.0874    1.34  1.0  399895
  tt̄WW          119583  MadGraph+Pythia  0.000919  1.0   1.0  10000
  tt̄Wj Excl.    174830  MadGraph+Pythia  0.053372  1.17  1.0  399896
  tt̄Wjj Incl.   174831  MadGraph+Pythia  0.041482  1.17  1.0  399798
  tt̄Zj Excl.    174832  MadGraph+Pythia  0.045357  1.35  1.0  399995
  tt̄Zjj Incl.   174833  MadGraph+Pythia  0.039772  1.35  1.0  399798
  WWW           167006  MadGraph+Pythia  0.005096  1.0   1.0  50000
  ZWW           167007  MadGraph+Pythia  0.001554  1.0   1.0  50000
  ZZZ           167008  MadGraph+Pythia  0.000332  1.0   1.0  50000
  WZ DPI        185360  Pythia           0.0048    1.0   1.0  200000
  ZZ-QCD        126894  Sherpa           8.7403    1.0   1.0  3799491
  ZZ-EW         147196  Sherpa           0.00691   1.0   1.0  187500

Table F.3.: Samples used to estimate the prompt backgrounds in this analysis.


G. Software Framework

High energy physics analyses are computing intensive and require highly performant and flexible code. Usually, a new analysis starts from a lightweight framework providing basic functionality, with all further facilities needed for the analysis added by the analysis team. In this case, the custom code was based on SFrame [186], a C++ based framework for processing D3PDs.¹ SFrame provides easy configuration via XML files and facilities to run code in parallel on multiple cores using PROOF [187]. Further elementary functionality is added via the DDCore package developed by Christian Gumpert. It expands the configurability of SFrame by providing mechanisms that allow analysis building blocks to be written in C++ and put together to form an analysis via an XML file detailing the individual analysis steps. This way, code can be reused effectively. The actual analysis code is implemented in two packages. The CommonAnalysis package houses functionality common to the W±Zjj and the W±W±jj analyses; the application of external packages, common variable definitions, as well as generic analysis steps are implemented here. A list of all external packages used is given in Table G.1. More analysis-specific code is implemented in the WZAnalysis package, which was mainly developed by Philipp Anger and the author.

The analysis workflow starts with D3PDs for the various processes being selected and subjected to a skimming and slimming procedure.² Skimming is the application of a loose preselection to reduce the storage footprint of the needed data samples, while slimming denotes the removal of unneeded blocks of information, e.g. variables concerning τ-leptons. The total size of all needed simulated and real data samples is reduced to 2 TB in this step, enabling local storage. The skimmed D3PDs are fed into the custom code written in the SFrame framework, resulting in small ntuple files tailored to specific tasks such as general histogramming, statistical evaluation, or unfolding. These custom ntuples are further processed using a slightly modified³ version of the CommonAnalysisFramework (CAF) [188]. Here, small Python scripts were used to produce histograms, write selection flow tables, evaluate systematics, carry out further optimisation, and perform the statistical analysis. All written code will be saved to GitLab with usage instructions.

¹ D3PDs are a data format derived from the primary Run 1 data format called AODs. While AODs are object oriented, storing complex objects, D3PDs are so-called flat ntuples in which the information regarding an object is written as plain-old-data values to a number of columns. A dedicated library, called the D3PDReader, is then employed to give the user the impression of working with object-oriented data.
² The preselection only requires the presence of three leptons (electrons or muons) with a transverse momentum greater than 10 GeV.
³ Some customisation was necessary to make the CAF handle the naming scheme of the local custom ntuples.
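As a minimal illustration of the skimming preselection of footnote 2, the following sketch keeps an event only if it contains at least three leptons above 10 GeV; the event structure and names are hypothetical, the actual selection being implemented in the SFrame-based C++ code.

    # Illustrative skim filter: require at least three leptons (electrons or
    # muons) with transverse momentum above 10 GeV.
    def passes_skim(event, pt_min=10.0):
        leptons = event["electrons"] + event["muons"]
        return sum(1 for lep in leptons if lep["pt"] > pt_min) >= 3

    event = {"electrons": [{"pt": 25.0}, {"pt": 12.0}], "muons": [{"pt": 40.0}]}
    print(passes_skim(event))  # True: three leptons above threshold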

  package                               tag
  GoodRunsLists                         00-01-09
  PileupReweighting                     00-02-12
  egammaAnalysisUtils                   00-04-58
  ElectronPhotonFourMomentumCorrection  00-00-34
  egammaMVACalib                        00-00-28
  egammaLayerRecalibTool                00-01-14
  ElectronEfficiencyCorrection          00-00-50
  CalibrationDataInterface              00-03-06
  MuonIsolationCorrection               01-01
  MuonMomentumCorrections               00-09-23
  MuonEfficiencyCorrections             02-01-19-01
  TrigMuonEfficiency                    00-02-48
  ApplyJetCalibration                   00-03-20
  ApplyJetResolutionSmearing            00-01-02
  JetUncertainties                      00-08-15
  JetResolution                         00-02
  MissingETUtility                      01-02-05
  PATCore                               00-00-16
  TileTripReader                        00-00-19
  BCHCleaningTool                       00-00-07

Table G.1.: Package versions of the official ATLAS packages used in the analysis. The tags given were the officially recommended SVN tags for Run 1 analyses when the code development of the analysis was frozen.

List of Figures

2.1. Feynman Diagrams of electroweak gauge boson self-couplings
2.2. Possible Feynman diagrams for VBS with final states containing massive electroweak gauge bosons. The primes of the quarks indicate possible changes in flavour.
2.3. Cross section dependence of VBS processes on energy
2.4. VBS event topology
2.5. Jet dependent variables for the W±Zjj-EW and the W±Zjj-QCD process
2.6. Bosonic variables for the W±Zjj-EW and the W±Zjj-QCD process
3.1. CERN accelerator complex in schematic view
3.2. Cross sections at the LHC as a function of √ŝ
3.3. Cutaway view of the ATLAS detector
3.4. Schematic representation of the impact parameters d0 and z0
3.5. Schematic representation of a quadrant of the inner detector
3.6. Schematic representation of the general layout of the electromagnetic calorimeter
3.7. Schematic representation of a quadrant of the Muon Spectrometer
3.8. Jet shapes obtained from several jet clustering algorithms
4.1. Depiction of accumulated luminosity versus time for 2012 data taking
4.2. Depiction of a simulated event
4.3. Example for parton distribution functions
4.4. Examples of Feynman diagrams contributing to the W±Zjj-EW final state
4.5. Selected diagrams contributing to the tZj background
4.6. Selected diagrams contributing to the W±Zjj-QCD final state
8.1. Optimisation of the invariant mass of the tagging jets requirement
8.2. Alternative variables for optimisation of VBS selection
8.3. Additional optimisation after the cut on the invariant mass of the tagging jets
9.1. Depiction of the general unfolding procedure
9.2. Response matrices for the jet multiplicity measurement
9.3. Impact of truth jet collection choice
9.4. Efficiency as a function of jet multiplicity
9.5. Unfolding results for the jet multiplicity measurement
9.6. Systematic uncertainties of the jet multiplicity unfolding
9.7. Unfolding inputs for the invariant mass of the dijet system measurement
9.8. Efficiency as a function of the invariant mass of the dijet system
9.9. Unfolding results for the invariant mass of the dijet system measurement
9.10. Uncertainties of the unfolding procedure for the invariant mass of the dijet system measurement
9.11. Unfolding inputs for the absolute difference in rapidity of the dijet system measurement
9.12. Efficiency as a function of the absolute difference in rapidity of the dijet system
9.13. Unfolding results for the absolute difference in rapidity of the dijet system measurement
9.14. Uncertainties of the unfolding procedure for the absolute difference in rapidity of the dijet system measurement
10.1. Dependence of W±Zjj-EW and tZj on aQGC parameters
10.2. Fiducial Cross Sections on Particle Level in the aQGC phase space
10.3. Invariant mass of the tagging jets after the Njets ≥ 2 requirement
10.4. Used variables for the aQGC phase space optimisation
10.5. Alternative variables for the aQGC phase space optimisation
10.6. Fiducial particle level cross section limits in the α4-α5 plane in the aQGC phase space
10.7. Two-dimensional exclusion limits for the α4 and α5 parameters in the aQGC phase space
10.8. Two-dimensional exclusion limits for the α4 and α5 parameters in the aQGC phase space
11.1. Summary of ATLAS results on Standard Model processes
11.2. Unfolded Distributions for W±Zjj
11.3. aQGC limits on α4 and α5 in comparison to the W±W±jj-EW measurement
A.1. Neutrino properties of the W±Zjj-EW and the W±Zjj-QCD process on particle level
A.2. Lepton properties of the W±Zjj-EW and the W±Zjj-QCD process on particle level
A.3. Jet properties of the W±Zjj-EW and the W±Zjj-QCD process on particle level
A.4. Boson properties of the W±Zjj-EW and the W±Zjj-QCD process on particle level
B.1. Event variables in the VBS phase space
B.2. Jet variables in the VBS phase space
B.3. Lepton variables in the VBS phase space
C.1. Event display for event 77863808 from run 207620
E.1. Results of the original aQGC optimisation study
E.2. Results of the rerun of the aQGC optimisation study
E.3. Additional results of the rerun of the aQGC optimisation study
E.4. Fiducial Cross Sections on Particle Level in the alternative aQGC phase space
E.5. Two-dimensional exclusion limits for the α4 and α5 parameters in the alternative aQGC phase space

List of Tables

2.1. Properties of elementary fermions
2.2. Properties of elementary bosons
2.3. Charges of the fermions with respect to the SU(2)L × U(1)Y symmetry
2.4. Cross sections of VBS final states
4.1. Details on Data Taking in 2012
4.2. Sample statistics for the full generated W±Zjj-EW sample
5.1. Reconstruction level object selection requirements for electrons for the W±Z analysis
5.2. Reconstruction level object selection requirements for muons for the W±Z analysis
5.3. Reconstruction level object selection requirements for jets for the W±Z analysis
5.4. Particle level object selection requirements for leptons for the W±Z analysis
5.5. Generator Status Codes
6.1. Fake ratios used to estimate non-prompt backgrounds
6.2. Alternative fake ratios used to estimate the non-prompt background
6.3. Comparison of data and MC driven background estimate for the inclusive phase space
6.4. Comparison of data and MC driven background estimate for the VBS phase space
7.1. Breakdown of the theoretical uncertainties in the VBS phase space
8.1. Figures of merit for optimisation of the tagging jet invariant mass cut
8.2. Event yields with uncertainties after the VBS selection
8.3. Relative systematic uncertainties in the VBS phase space
8.4. Efficiency Calculation for the VBS phase space
8.5. Fiducial cross section results in the VBS phase space on particle level
10.1. Whizard samples spanning the α4-α5 plane
10.2. Event yields with uncertainties after the aQGC selection
10.3. Efficiency Calculation for the aQGC phase space
10.4. Uncertainties in the aQGC phase space
10.5. Fiducial cross sections on particle level in the aQGC phase space
C.1. Event Properties for event 77863808 from run 207620
D.1. Differential cross section of W±Zjj-EW + W±Z-QCD as a function of jet multiplicity using Sherpa
D.2. Normalised differential cross section of W±Zjj-EW + W±Z-QCD as a function of jet multiplicity using Sherpa
D.3. Differential cross section of W±Z-QCD as a function of jet multiplicity using PowHeg
D.4. Normalised differential cross sections of W±Z-QCD as a function of jet multiplicity using PowHeg
D.5. Differential cross section of W±Zjj-EW + W±Z-QCD as a function of the invariant dijet mass using Sherpa
D.6. Normalised differential cross section of W±Zjj-EW + W±Z-QCD as a function of the invariant dijet mass using Sherpa
D.7. Differential cross section of W±Z-QCD as a function of the invariant dijet mass using PowHeg
D.8. Normalised differential cross section of W±Z-QCD as a function of the invariant dijet mass using PowHeg
D.9. Differential cross section of W±Zjj-EW + W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using Sherpa
D.10. Normalised differential cross section of W±Zjj-EW + W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using Sherpa
D.11. Differential cross section of W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using PowHeg
D.12. Normalised differential cross section of W±Z-QCD as a function of the absolute difference in pseudo-rapidity between the tagging jets using PowHeg
E.1. Uncertainties in the alternative aQGC phase space
E.2. Event yields with uncertainties after the alternative aQGC selection
E.3. Upper limit results on particle level in the alternative aQGC phase space
F.1. List of WZ signal samples
F.2. List of simulated data samples of non-prompt backgrounds
F.3. List of simulated data samples for prompt backgrounds
G.1. Package versions of the official ATLAS packages used in the analysis

Bibliography

[1] T. Pratchett, The Colour of Magic. Colin Smythe, 1983.
[2] S. L. Glashow, Partial-Symmetries of Weak Interactions, Nuclear Physics 22 (1961) no. 4, 579–588.
[3] A. Salam, Elementary Particle Theory. Almqvist and Wiksell, Stockholm, 1968.
[4] S. Weinberg, A Model of Leptons, Phys. Rev. Lett. 19 (1967) 1264–1266.
[5] G. 't Hooft and M. Veltman, Regularization and renormalization of gauge fields, Nuclear Physics B 44 (1972) no. 1, 189–213.
[6] J. R. Ellis, Beyond the standard model for hill walkers, in 1998 European School of High-Energy Physics, St. Andrews, Scotland, 23 Aug–5 Sep 1998: Proceedings. 1998. arXiv:hep-ph/9812235 [hep-ph].
[7] ATLAS Collaboration, G. Aad et al., Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC, Phys. Lett. B716 (2012) 1–29, arXiv:1207.7214 [hep-ex].
[8] CMS Collaboration, S. Chatrchyan et al., Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC, Phys. Lett. B716 (2012) 30–61, arXiv:1207.7235 [hep-ex].
[9] ALEPH, DELPHI, L3, OPAL and the LEP TGC Working Group, A Combination of Results on Charged Triple Gauge Boson Couplings Measured by the LEP Experiments. http://lepewwg.web.cern.ch/LEPEWWG/lepww/tgc/.
[10] CDF Collaboration, T. Aaltonen et al., Measurement of the WZ Cross Section and Triple Gauge Couplings in pp̄ Collisions at √s = 1.96 TeV, Phys. Rev. D86 (2012) 031104, arXiv:1202.6629 [hep-ex].
[11] CDF Collaboration, T. Aaltonen et al., Limits on Anomalous Trilinear Gauge Couplings in Zγ Events from pp̄ Collisions at √s = 1.96 TeV, Phys. Rev. Lett. 107 (2011) 051802, arXiv:1103.2990 [hep-ex].
[12] D0 Collaboration, V. M. Abazov et al., Limits on anomalous trilinear gauge boson couplings from WW, WZ and Wγ production in pp̄ collisions at √s = 1.96 TeV, Phys. Lett. B718 (2012) 451–459, arXiv:1208.5458 [hep-ex].
[13] D0 Collaboration, V. M. Abazov et al., Measurement of the Zγ → νν̄γ cross section and limits on anomalous ZZγ and Zγγ couplings in pp̄ collisions at √s = 1.96 TeV, Phys. Rev. Lett. 102 (2009) 201802, arXiv:0902.2157 [hep-ex].
[14] D0 Collaboration, V. M. Abazov et al., Zγ production and limits on anomalous ZZγ and Zγγ couplings in pp̄ collisions at √s = 1.96 TeV, Phys. Rev. D85 (2012) 052001, arXiv:1111.3684 [hep-ex].
[15] ATLAS Collaboration, G. Aad et al., Measurements of Zγ and Zγγ production in pp collisions at √s = 8 TeV with the ATLAS detector, Phys. Rev. D93 (2016) no. 11, 112002, arXiv:1604.05232 [hep-ex].

[16] ATLAS Collaboration, G. Aad et al., Measurements of W±Z production cross sections in pp collisions at √s = 8 TeV with the ATLAS detector and limits on anomalous gauge boson self-couplings, arXiv:1603.02151 [hep-ex].
[17] ATLAS Collaboration, G. Aad et al., Evidence of Wγγ Production in pp Collisions at √s = 8 TeV and Limits on Anomalous Quartic Gauge Couplings with the ATLAS Detector, Phys. Rev. Lett. 115 (2015) no. 3, 031802, arXiv:1503.03243 [hep-ex].
[18] ATLAS Collaboration, G. Aad et al., Evidence for Electroweak Production of W±W±jj in pp Collisions at √s = 8 TeV with the ATLAS Detector, Phys. Rev. Lett. 113 (2014) no. 14, 141803, arXiv:1405.6241 [hep-ex].
[19] CMS Collaboration, V. Khachatryan et al., Study of vector boson scattering and search for new physics in events with two same-sign leptons and two jets, Phys. Rev. Lett. 114 (2015) no. 5, 051801, arXiv:1410.6315 [hep-ex].
[20] CMS Collaboration, S. Chatrchyan et al., Search for WWγ and WZγ production and constraints on anomalous quartic gauge couplings in pp collisions at √s = 8 TeV, Phys. Rev. D90 (2014) no. 3, 032008, arXiv:1404.4619 [hep-ex].
[21] CMS Collaboration, V. Khachatryan et al., Evidence for exclusive γγ → W+W− production and constraints on anomalous quartic gauge couplings at √s = 7 and 8 TeV, arXiv:1604.04464 [hep-ex].
[22] P. W. Anderson, Plasmons, Gauge Invariance, and Mass, Physical Review 130 (1963) 439–442.
[23] G. S. Guralnik, C. R. Hagen, and T. W. Kibble, Global Conservation Laws and Massless Particles, Physical Review Letters 13 (1964) 585–587.
[24] F. Englert and R. Brout, Broken Symmetry and the Mass of Gauge Vector Mesons, Physical Review Letters 13 (1964) 321–323.
[25] P. W. Higgs, Broken Symmetries and the Masses of Gauge Bosons, Physical Review Letters 13 (1964) 508–509.
[26] G. Bhattacharyya, D. Das, and P. B. Pal, Modified Higgs couplings and unitarity violation, Phys. Rev. D87 (2013) 011702, arXiv:1212.4651 [hep-ph].
[27] A. Dobado, D. Espriu, and M. J. Herrero, Chiral lagrangians as a tool to probe the symmetry breaking sector of the SM at LEP, Physics Letters B 255 (1991) no. 3, 405–414.
[28] C. Degrande, O. Eboli, B. Feigl, B. Jäger, W. Kilian, O. Mattelaer, M. Rauch, J. Reuter, M. Sekulla, and D. Wackeroth, Monte Carlo tools for studies of non-standard electroweak gauge boson interactions in multi-boson processes: A Snowmass White Paper, in Community Summer Study 2013: Snowmass on the Mississippi (CSS2013), Minneapolis, MN, USA, July 29–August 6, 2013. 2013. arXiv:1309.7890 [hep-ph].
[29] T. Gehrmann, M. Grazzini, S. Kallweit, P. Maierhöfer, A. von Manteuffel, S. Pozzorini, D. Rathlev, and L. Tancredi, W+W− Production at Hadron Colliders in Next to Next to Leading Order QCD, Phys. Rev. Lett. 113 (2014) no. 21, 212001, arXiv:1408.5243 [hep-ph].
[30] M. Grazzini, S. Kallweit, D. Rathlev, and M. Wiesemann, W±Z production at hadron colliders in NNLO QCD, arXiv:1604.08576 [hep-ph].

[31] E. Noether, Invariant variation problems, Transport Theory and Statistical Physics 1 (1971) 186–207, physics/0503066.
[32] E. Noether, Invariante Variationsprobleme, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse 1918 (1918) 235–257.
[33] S. Weinberg, The Quantum Theory of Fields, vol. 1. Cambridge University Press, 1995.
[34] M. E. Peskin and D. V. Schroeder, An introduction to quantum field theory. Westview Press, Reading (Mass.), 1995.
[35] M. Kaku, Quantum Field Theory: A Modern Introduction. Oxford University Press, 1993.
[36] D. Griffiths, Introduction to Elementary Particles. Physics textbook. Wiley, 2008.
[37] M. Herrero, The Standard model, NATO Sci. Ser. C 534 (1999) 1–59, arXiv:hep-ph/9812242 [hep-ph].
[38] S. F. Novaes, Standard model: An Introduction, in Particles and Fields. Proceedings, 10th Jorge Andre Swieca Summer School, Sao Paulo, Brazil, February 6–12, 1999. 1999. arXiv:hep-ph/0001283 [hep-ph].
[39] Particle Data Group Collaboration, K. A. Olive et al., Review of Particle Physics, Chin. Phys. C38 (2014) 090001.
[40] W. Pauli, Relativistic Field Theories of Elementary Particles, Rev. Mod. Phys. 13 (1941) 203–232.
[41] R. P. Feynman, Space-Time Approach to Quantum Electrodynamics, Phys. Rev. 76 (1949) 769–789.
[42] R. P. Feynman, Mathematical Formulation of the Quantum Theory of Electromagnetic Interaction, Physical Review 80 (1950) 440–457.
[43] J. Schwinger, Quantum Electrodynamics. I. A Covariant Formulation, Phys. Rev. 74 (1948) 1439–1461.
[44] S. Tomonaga, On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields, Progress of Theoretical Physics 1 (1946) no. 2, 27–42.
[45] J. D. Jackson and L. B. Okun, Historical roots of gauge invariance, Rev. Mod. Phys. 73 (2001) 663–680, arXiv:hep-ph/0012061 [hep-ph].
[46] J. Chadwick, Possible Existence of a Neutron, Nature 129 (1932) 312.
[47] M. Gell-Mann, Symmetries of Baryons and Mesons, Phys. Rev. 125 (1962) 1067–1084.
[48] Y. Ne'eman, Derivation of strong interactions from a gauge invariance, Nuclear Physics 26 (1961) no. 2, 222–229.
[49] M. Y. Han and Y. Nambu, Three-Triplet Model with Double SU(3) Symmetry, Phys. Rev. 139 (1965) B1006–B1010.
[50] H. Fritzsch and M. Gell-Mann, Current algebra: Quarks and what else?, eConf C720906V2 (1972) 135–165, arXiv:hep-ph/0208010 [hep-ph].
[51] H. Leutwyler, On the history of the strong interaction, Mod. Phys. Lett. A29 (2014) 1430023, arXiv:1211.6777 [physics.hist-ph].

[52] C. N. Yang and R. L. Mills, Conservation of Isotopic Spin and Isotopic Gauge Invariance, Phys. Rev. 96 (1954) 191–195.
[53] P. Abreu et al., Measurement of the triple-gluon vertex from 4-jet events at LEP, Zeitschrift für Physik C Particles and Fields 59 no. 3, 357–368.
[54] K. G. Wilson, Confinement of quarks, Phys. Rev. D 10 (1974) 2445–2459.
[55] A. Jaffe and E. Witten, Description of the Yang-Mills and Mass Gap Millennium Problem, http://www.claymath.org/millennium-problems/yang-mills-and-mass-gap. Accessed: 2016-07-14.
[56] D. J. Gross and F. Wilczek, Ultraviolet Behavior of Non-Abelian Gauge Theories, Phys. Rev. Lett. 30 (1973) 1343–1346.
[57] D. J. Gross and F. Wilczek, Asymptotically Free Gauge Theories. I, Phys. Rev. D 8 (1973) no. 10, 3633–3652.
[58] H. D. Politzer, Asymptotic Freedom: An Approach to Strong Interactions, Physics Reports 14 (1974) no. 4, 129–180.
[59] H. D. Politzer, Reliable Perturbative Results for Strong Interactions?, Phys. Rev. Lett. 30 (1973) no. 26, 1346–1349.
[60] T. Nakano and K. Nishijima, Charge Independence for V-particles, Progress of Theoretical Physics 10 (1953) 581–582.
[61] M. Gell-Mann, The interpretation of the new particles as displaced charge multiplets, Il Nuovo Cimento (1955-1965) 4 (1956) no. 2, 848–866.
[62] N. Cabibbo, Unitary Symmetry and Leptonic Decays, Phys. Rev. Lett. 10 (1963) 531–533.
[63] M. Kobayashi and T. Maskawa, CP-Violation in the Renormalizable Theory of Weak Interaction, Progress of Theoretical Physics 49 (1973) no. 2, 652–657.
[64] CKMfitter Group Collaboration, J. Charles, O. Deschamps, S. Descotes-Genon, H. Lacker, A. Menzel, S. Monteil, V. Niess, J. Ocariz, J. Orloff, A. Perez, W. Qian, V. Tisserand, K. Trabelsi, P. Urquijo, and L. Vale Silva, Current status of the standard model CKM fit and constraints on ∆F = 2 new physics, Phys. Rev. D 91 (2015) 073007.
[65] J. Goldstone, Field theories with Superconductor solutions, Il Nuovo Cimento (1955-1965) 19 (2008) no. 1, 154–164.
[66] A. Pich, The Standard model of electroweak interactions, in High-Energy Physics. Proceedings, European School, Aronsborg, Sweden, June 18–July 1, 2006. 2007. arXiv:0705.4264 [hep-ph].
[67] C. Bittrich, M. Kobel, and D. Stöckinger, Study of Polarization Fractions in the Scattering of Massive Gauge Bosons W±Z → W±Z with the ATLAS Detector at the Large Hadron Collider. PhD thesis, Dresden, Tech. U., Apr, 2015. https://cds.cern.ch/record/2014124. Presented 2015.
[68] CMS Collaboration, V. Khachatryan et al., Measurement of the W+W− cross section in pp collisions at √s = 8 TeV and limits on anomalous gauge couplings, arXiv:1507.03268 [hep-ex].
[69] ATLAS Collaboration, G. Aad et al., Measurement of the WW + WZ cross section and limits on anomalous triple gauge couplings using final states with one lepton, missing transverse momentum, and two jets with the ATLAS detector at √s = 7 TeV, JHEP 01 (2015) 049, arXiv:1410.7238 [hep-ex].

[70] P. Anger, M. Kobel, and S. Lammers, Probing Electroweak Gauge Boson Scattering with the ATLAS Detector at the Large Hadron Collider. PhD thesis, Dresden, Tech. U., Jun, 2014. https://cds.cern.ch/record/1753849. Presented 01 Sep 2014.
[71] ATLAS Collaboration, G. Aad et al., Measurement of ZZ production in pp collisions at √s = 7 TeV and limits on anomalous ZZZ and ZZγ couplings with the ATLAS detector, JHEP 03 (2013) 128, arXiv:1211.6096 [hep-ex].
[72] F. Speiser, M. Kobel, and F. Barreiro Alonso, Study of Opposite Sign WWjj Production using Boosted Decision Trees with the ATLAS Detector at the LHC. PhD thesis, Dresden, Tech. U., Oct, 2014. https://cds.cern.ch/record/1980845. Presented 28 Oct 2014.
[73] C. Gumpert, M. Kobel, B. Heinemann, and U. Klein, Measurement of Electroweak Gauge Boson Scattering in the Channel pp → W±W±jj with the ATLAS Detector at the Large Hadron Collider. PhD thesis, Dresden, TU Dresden, Dec, 2014. https://cds.cern.ch/record/2003240. Presented 27 Feb 2015.
[74] U. Schnoor, M. Kobel, and S. Lammers, Vector Boson Scattering and Electroweak Production of Two Like-Charge W Bosons and Two Jets at the Current and Future ATLAS Detector. PhD thesis, Dresden, Tech. U., Nov, 2014. https://cds.cern.ch/record/2000941. Presented 30 Jan 2015.
[75] Public results page of the ATLAS Exotics Working Group, https://twiki.cern.ch/twiki/bin/view/AtlasPublic/ExoticsPublicResults. Accessed: 2016-04-21.
[76] E. Fermi, Versuch einer Theorie der β-Strahlen. I, Zeitschrift für Physik 88 no. 3, 161–177.
[77] M. Sekulla, Anomalous couplings, resonances and unitarity in vector boson scattering. PhD thesis, University of Siegen, 2015.
[78] S. Weinberg, Implications of Dynamical Symmetry Breaking, Phys. Rev. D13 (1976) 974–996.
[79] L. Susskind, Dynamics of Spontaneous Symmetry Breaking in the Weinberg-Salam Theory, Phys. Rev. D20 (1979) 2619–2625.
[80] T. Appelquist and C. Bernard, Strongly interacting Higgs bosons, Phys. Rev. D 22 (1980) 200–213.
[81] J. Gasser and H. Leutwyler, Chiral Perturbation Theory to One Loop, Annals Phys. 158 (1984) 142.
[82] M. J. Herrero and E. Ruiz Morales, The Electroweak chiral Lagrangian as an effective field theory of the standard model with a heavy Higgs, in Workshop on Electroweak Symmetry Breaking, Budapest, Hungary, July 11–13, 1994. 1994. arXiv:hep-ph/9412317 [hep-ph].
[83] A. Alboteanu, W. Kilian, and J. Reuter, Resonances and Unitarity in Weak Boson Scattering at the LHC, JHEP 11 (2008) 010, arXiv:0806.4145 [hep-ph].
[84] CMS Collaboration, V. Khachatryan et al., Measurement of the Zγ → νν̄γ

production cross section in pp collisions at √s = 8 TeV and limits on anomalous ZZγ and Zγγ trilinear gauge boson couplings, Submitted to: Phys. Lett. B (2016), arXiv:1602.07152 [hep-ex].
[85] CMS Collaboration, S. Chatrchyan et al., Measurement of the sum of WW and WZ production with W+dijet events in pp collisions at √s = 7 TeV, Eur. Phys. J. C73 (2013) no. 2, 2283, arXiv:1210.7544 [hep-ex].
[86] CMS Collaboration, V. Khachatryan et al., Measurement of the pp → ZZ production cross section and constraints on anomalous triple gauge couplings in four-lepton final states at √s = 8 TeV, Phys. Lett. B740 (2015) 250–272, arXiv:1406.0113 [hep-ex].
[87] ATLAS Collaboration, G. Aad et al., Measurement of total and differential W+W− production cross sections in proton-proton collisions at √s = 8 TeV with the ATLAS detector and limits on anomalous triple-gauge-boson couplings, arXiv:1603.01702 [hep-ex].
[88] O. J. P. Eboli, M. C. Gonzalez-Garcia, and J. K. Mizukoshi, pp → jje±µ±νν and jje±µ∓νν at O(α_em^6) and O(α_em^4 α_s^2) for the study of the quartic electroweak gauge boson vertex at CERN LHC, Phys. Rev. D74 (2006) 073005, arXiv:hep-ph/0606118 [hep-ph].
[89] E. P. Wigner, Resonance Reactions and Anomalous Scattering, Phys. Rev. 70 (1946) 15–33. http://link.aps.org/doi/10.1103/PhysRev.70.15.
[90] S. U. Chung, J. Brose, R. Hackmann, E. Klempt, S. Spanier, and C. Strassburger, Partial wave analysis in K-matrix formalism, Annalen der Physik 507 (1995) no. 5, 404–430. http://dx.doi.org/10.1002/andp.19955070504.
[91] G. Hermann, The HESS array: A new system of 100-GeV IACTs for stereoscopic observations, in Very High-Energy Phenomena in the Universe. Proceedings, 32nd Rencontres de Moriond, Les Arcs, France, January 18–25, 1997, pp. 141–146. 1997.
[92] IceCube Collaboration, A. Goldschmidt, The IceCube detector, in 27th International Cosmic Ray Conference (ICRC 2001), Hamburg, Germany, August 7–15, 2001, pp. 1237–1240. 2001.
[93] History of CERN webpage, http://timeline.web.cern.ch/timelines/The-history-of-CERN. Accessed: 2016-04-26.
[94] O. S. Brüning, P. Collier, P. Lebrun, S. Myers, R. Ostojic, J. Poole, and P. Proudlock, LHC Design Report. CERN, Geneva, 2004. https://cds.cern.ch/record/782076.
[95] COMA database, https://atlas-tagservices.cern.ch/tagservices/RunBrowser/runBrowserReport/rBR_Period_Report.php. Accessed: 2016-04-26.
[96] F. Marcastel, CERN's Accelerator Complex. La chaîne des accélérateurs du CERN. https://cds.cern.ch/record/1621583. General Photo.
[97] J. Stirling, Parton Luminosity and Cross Section Plots, http://www.hep.ph.ic.ac.uk/~wstirlin/plots/plots.html. Accessed: 2016-04-26.
[98] ATLAS Collaboration, A. Airapetian et al., ATLAS: Detector and physics performance technical design report. Volume 1.

[99] ATLAS Collaboration, G. Aad et al., The ATLAS Experiment at the CERN Large Hadron Collider, JINST 3 (2008) S08003.
[100] K. Zenker, M. Kobel, and M. zur Nedden, Development of a data based algorithm for measuring the mistag rate of b-quarks at the ATLAS experiment. PhD thesis, TU Dresden, 2010. Presented 01 Dec 2010.
[101] S. Hassani, L. Chevalier, E. Lançon, J.-F. Laporte, R. Nicolaidou, and A. Ouraou, A muon identification and combined reconstruction procedure for the ATLAS detector at the LHC using the (MUONBOY, STACO, MuTag) reconstruction packages, Nucl. Instrum. Meth. A572 (2007) no. 1, 77–79. Frontier Detectors for Frontier Physics: Proceedings of the 10th Pisa Meeting on Advanced Detectors.
[102] S. Tarem and N. Panikashvili, Muon identification and reconstruction in the ATLAS detector at the LHC, in Nuclear Science Symposium Conference Record, 2004 IEEE, vol. 4, pp. 2186–2190. 2004.
[103] ATLAS Collaboration, G. Aad et al., Measurement of the muon reconstruction performance of the ATLAS detector using 2011 and 2012 LHC proton–proton collision data, Eur. Phys. J. C74 (2014) no. 11, 3130, arXiv:1407.3935 [hep-ex].
[104] W. Lampl, S. Laplace, D. Lelas, P. Loch, H. Ma, S. Menke, S. Rajagopalan, D. Rousseau, S. Snyder, and G. Unal, Calorimeter Clustering Algorithms: Description and Performance, Tech. Rep. ATL-LARG-PUB-2008-002, ATL-COM-LARG-2008-003, CERN, Geneva, Apr 2008. https://cds.cern.ch/record/1099735.
[105] ATLAS Collaboration, G. Aad et al., Electron and photon energy calibration with the ATLAS detector using LHC Run 1 data, Eur. Phys. J. C74 (2014) no. 10, 3071, arXiv:1407.5063 [hep-ex].
[106] ATLAS Collaboration, Electron efficiency measurements with the ATLAS detector using the 2012 LHC proton-proton collision data, Tech. Rep. ATLAS-CONF-2014-032, CERN, Geneva, Jun 2014. https://cds.cern.ch/record/1706245.
[107] M. Cacciari, G. P. Salam, and G. Soyez, The anti-k_t jet clustering algorithm, JHEP 04 (2008) 063, arXiv:0802.1189 [hep-ph].
[108] G. P. Salam and G. Soyez, A Practical Seedless Infrared-Safe Cone jet algorithm, JHEP 05 (2007) 086, arXiv:0704.0292 [hep-ph].
[109] ATLAS Collaboration, Monte Carlo Calibration and Combination of In-situ Measurements of Jet Energy Scale, Jet Energy Resolution and Jet Mass in ATLAS, Tech. Rep. ATLAS-CONF-2015-037, CERN, Geneva, Aug 2015. http://cds.cern.ch/record/2044941.
[110] ATLAS Collaboration, Performance of Missing Transverse Momentum Reconstruction in ATLAS studied in Proton-Proton Collisions recorded in 2012 at 8 TeV, Tech. Rep. ATLAS-CONF-2013-082, CERN, Geneva, Aug 2013. http://cds.cern.ch/record/1570993.
[111] Data Quality Information for Data - Public ATLAS WebPage, https://twiki.cern.ch/twiki/bin/view/AtlasPublic/RunStatsPublicResults2010. Accessed: 2016-04-26.

[112] ATLAS LumiCalc Tool, https://atlas-lumicalc.cern.ch/. Accessed: 2016-04-26.
[113] ATLAS Public Page on Luminosity Results, https://twiki.cern.ch/twiki/bin/view/AtlasPublic/LuminosityPublicResults. Accessed: 2016-04-26.
[114] A. Buckley et al., General-purpose event generators for LHC physics, Phys. Rept. 504 (2011) 145–233, arXiv:1101.2599 [hep-ph].
[115] Sherpa and Open Science Grid: Predicting the emergence of jets, https://sciencenode.org/feature/sherpa-and-open-science-grid-predicting-emergence-jets.php. Accessed: 2016-06-10.
[116] A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt, Parton distributions for the LHC, Eur. Phys. J. C63 (2009) 189–285, arXiv:0901.0002 [hep-ph].
[117] L. Lönnblad and S. Prestel, Merging Multi-leg NLO Matrix Elements with Parton Showers, JHEP 03 (2013) 166, arXiv:1211.7278 [hep-ph].
[118] T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen, and P. Z. Skands, An Introduction to PYTHIA 8.2, Comput. Phys. Commun. 191 (2015) 159–177, arXiv:1410.3012 [hep-ph].
[119] J. Bellm et al., Herwig 7.0/Herwig++ 3.0 release note, Eur. Phys. J. C76 (2016) no. 4, 196, arXiv:1512.01178 [hep-ph].
[120] M. Bahr et al., Herwig++ Physics and Manual, Eur. Phys. J. C58 (2008) 639–707, arXiv:0803.0883 [hep-ph].
[121] N. Davidson, G. Nanava, T. Przedzinski, E. Richter-Was, and Z. Was, Universal Interface of TAUOLA Technical and Physics Documentation, Comput. Phys. Commun. 183 (2012) 821–843, arXiv:1002.0543 [hep-ph].
[122] D. J. Lange, The EvtGen particle decay simulation package, Nucl. Instrum. Meth. A462 (2001) no. 1–2, 152–155. BEAUTY2000, Proceedings of the 7th Int. Conf. on B-Physics at Hadron Machines.
[123] M. Dobbs and J. B. Hansen, The HepMC C++ Monte Carlo event record for High Energy Physics, tech. rep., 2001.
[124] J. Alwall et al., A Standard format for Les Houches event files, Comput. Phys. Commun. 176 (2007) 300–304, arXiv:hep-ph/0609017 [hep-ph].
[125] ATLAS Collaboration, G. Aad et al., The ATLAS Simulation Infrastructure, Eur. Phys. J. C70 (2010) no. 3, 823–874.
[126] J. Allison et al., Geant4 developments and applications, IEEE Trans. Nucl. Sci. 53 (2006) no. 1, 270–278.
[127] GEANT4 Collaboration, S. Agostinelli et al., GEANT4: A Simulation toolkit, Nucl. Instrum. Meth. A506 (2003) 250–303.
[128] R. Brun and F. Rademakers, ROOT: An object oriented data analysis framework, Nucl. Instrum. Meth. A389 (1997) 81–86.
[129] T. Gleisberg, S. Hoeche, F. Krauss, M. Schonherr, S. Schumann, F. Siegert, and J. Winter, Event generation with SHERPA 1.1, JHEP 02 (2009) 007, arXiv:0811.4622 [hep-ph].

[130] W. Kilian, T. Ohl, and J. Reuter, WHIZARD: Simulating Multi-Particle Processes at LHC and ILC, Eur. Phys. J. C71 (2011) 1742, arXiv:0708.4233 [hep-ph].
[131] M. Moretti, T. Ohl, and J. Reuter, O'Mega: An Optimizing matrix element generator, arXiv:hep-ph/0102195 [hep-ph].
[132] J. Baglio et al., Release Note - VBFNLO 2.7.0, arXiv:1404.3940 [hep-ph].
[133] K. Arnold et al., VBFNLO: A Parton Level Monte Carlo for Processes with Electroweak Bosons – Manual for Version 2.5.0, arXiv:1107.4038 [hep-ph].
[134] K. Arnold et al., VBFNLO: A Parton level Monte Carlo for processes with electroweak bosons, Comput. Phys. Commun. 180 (2009) 1661–1670, arXiv:0811.4559 [hep-ph].
[135] P. Nason, A New method for combining NLO QCD with shower Monte Carlo algorithms, JHEP 11 (2004) 040, arXiv:hep-ph/0409146 [hep-ph].
[136] S. Frixione, P. Nason, and C. Oleari, Matching NLO QCD computations with Parton Shower simulations: the POWHEG method, JHEP 11 (2007) 070, arXiv:0709.2092 [hep-ph].
[137] S. Alioli, P. Nason, C. Oleari, and E. Re, A general framework for implementing NLO calculations in shower Monte Carlo programs: the POWHEG BOX, JHEP 06 (2010) 043, arXiv:1002.2581 [hep-ph].
[138] SVN area housing the generator options used to simulate official ATLAS samples, https://svnweb.cern.ch/trac/atlasoff/browser/Generators/MC12JobOptions/trunk/share. Accessed: 2016-03-25.
[139] H.-L. Lai, M. Guzzi, J. Huston, Z. Li, P. M. Nadolsky, J. Pumplin, and C. P. Yuan, New parton distributions for collider physics, Phys. Rev. D82 (2010) 074024, arXiv:1007.2241 [hep-ph].
[140] P. Steinbach, M. Kobel, and U. Husemann, A Cross Section Measurement Of Events With Two Muons At The Z0 Resonance And At Least One Heavy Flavour Jet At The Atlas Experiment Of The Large Hadron Collider. PhD thesis, TU Dresden, May 2012. http://cds.cern.ch/record/1478145. Presented 16 Jul 2012.
[141] S. Frixione and B. R. Webber, Matching NLO QCD computations and parton shower simulations, JHEP 06 (2002) 029, arXiv:hep-ph/0204244 [hep-ph].
[142] J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, and T. Stelzer, MadGraph 5: Going Beyond, JHEP 06 (2011) 128, arXiv:1106.0522 [hep-ph].
[143] J. Pumplin, D. R. Stump, J. Huston, H. L. Lai, P. M. Nadolsky, and W. K. Tung, New generation of parton distributions with uncertainties from global QCD analysis, JHEP 07 (2002) 012, arXiv:hep-ph/0201195 [hep-ph].
[144] ATLAS Collaboration, G. Aad et al., Measurement of hard double-parton interactions in W(→ lν) + 2 jet events at √s = 7 TeV with the ATLAS detector, New J. Phys. 15 (2013) 033038, arXiv:1301.6872 [hep-ex].
[145] J. M. Campbell and R. K. Ellis, An Update on vector boson pair production at hadron colliders, Phys. Rev. D60 (1999) 113006, arXiv:hep-ph/9905386 [hep-ph].
[146] J. M. Campbell, R. K. Ellis, and C. Williams, Vector boson pair production at the LHC, JHEP 07 (2011) 018, arXiv:1105.0020 [hep-ph].

[147] J. M. Campbell, R. K. Ellis, and W. T. Giele, A Multi-Threaded Version of MCFM, Eur. Phys. J. C75 (2015) no. 6, 246, arXiv:1503.06182 [physics.comp-ph].
[148] ATLAS Collaboration, G. Aad et al., Measurement of the Inelastic Proton-Proton Cross-Section at √s = 7 TeV with the ATLAS Detector, Nature Commun. 2 (2011) 463, arXiv:1104.0326 [hep-ex].
[149] ATLAS Collaboration, Pile-up subtraction and suppression for jets in ATLAS, Tech. Rep. ATLAS-CONF-2013-083, CERN, Geneva, Aug 2013. https://cds.cern.ch/record/1570994.
[150] Z. Marshall and the ATLAS Collaboration, Simulation of Pile-up in the ATLAS Experiment, J. Phys. Conf. Ser. 513 (2014) no. 2, 022024.
[151] ATLAS Collaboration, G. Aad et al., Performance of the ATLAS muon trigger in pp collisions at √s = 8 TeV, Eur. Phys. J. C75 (2015) 120, arXiv:1408.3179 [hep-ex].
[152] E. Boos et al., Generic user process interface for event generators, in Physics at TeV colliders. Proceedings, Euro Summer School, Les Houches, France, May 21–June 1, 2001. arXiv:hep-ph/0109068 [hep-ph].
[153] M. Cacciari, G. P. Salam, and G. Soyez, FastJet User Manual, Eur. Phys. J. C72 (2012) 1896, arXiv:1111.6097 [hep-ph].
[154] ATLAS Muon Combined Performance Guidelines for Analyses of 2012 Data, https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/MCPAnalysisGuidelinesData2012. Accessed: 2016-05-28.
[155] Egamma Calibration Recommendations for 2011 and 2012 Analyses, https://twiki.cern.ch/twiki/bin/view/AtlasProtected/EGammaCalibrationGEO20. Accessed: 2016-05-28.
[156] Egamma Efficiency Measurements 2012, https://twiki.cern.ch/twiki/bin/view/AtlasProtected/EfficiencyMeasurements2012. Accessed: 2016-05-28.
[157] Jet uncertainties 2012: Data Recommendations, https://twiki.cern.ch/twiki/bin/viewauth/AtlasProtected/JetUncertainties2012. Accessed: 2016-05-28.
[158] Jet Energy Resolution Provider 2012, https://twiki.cern.ch/twiki/bin/view/AtlasProtected/JetEnergyResolutionProvider2012. Accessed: 2016-05-28.
[159] MET Utility Systematics, https://twiki.cern.ch/twiki/bin/view/AtlasProtected/METUtilSystematics. Accessed: 2016-05-28.
[160] ATLAS Collaboration, G. Aad et al., Improved luminosity determination in pp collisions at √s = 7 TeV using the ATLAS detector at the LHC, Eur. Phys. J. C73 (2013) no. 8, 2518, arXiv:1302.4393 [hep-ex].
[161] F. Campanario, M. Kerner, L. D. Ninh, and D. Zeppenfeld, WZ Production in Association with Two Jets at Next-to-Leading Order in QCD, Phys. Rev. Lett. 111 (2013) 052003, arXiv:1305.1623 [hep-ph].
[162] M. R. Whalley, D. Bourilkov, and R. C. Group, The Les Houches accord PDFs (LHAPDF) and LHAGLUE, in HERA and the LHC: A Workshop on the implications of HERA for LHC physics. Proceedings, Part B. 2005. arXiv:hep-ph/0508110 [hep-ph].

[163] R. Hochmuth, M. Kobel, and D. Stöckinger, Studien zur Skalenabhängigkeit bei der Simulation und Vorhersage des Prozesses WZ → WZ am LHC. PhD thesis, TU Dresden, 2014. In German.
[164] M. V. Garzelli, A. Kardos, C. G. Papadopoulos, and Z. Trocsanyi, tt̄W± and tt̄Z Hadroproduction at NLO accuracy in QCD with Parton Shower and Hadronization effects, JHEP 11 (2012) 056, arXiv:1208.2665 [hep-ph].
[165] J. M. Campbell and R. K. Ellis, tt̄W⁺⁻ production and decay at NLO, JHEP 07 (2012) 052, arXiv:1204.5678 [hep-ph].
[166] J. Campbell, R. K. Ellis, and R. Röntsch, Single top production in association with a Z boson at the LHC, Phys. Rev. D87 (2013) 114006, arXiv:1302.3856 [hep-ph].
[167] G. J. Feldman and R. D. Cousins, Unified approach to the classical statistical analysis of small signals, Phys. Rev. D57 (1998) 3873–3889.
[168] S. S. Wilks, The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses, Ann. Math. Statist. 9 (1938) 60–62.
[169] L. Moneta, K. Belasco, K. S. Cranmer, S. Kreiss, A. Lazzaro, D. Piparo, G. Schott, W. Verkerke, and M. Wolf, The RooStats Project, PoS ACAT2010 (2010) 057, arXiv:1009.1003 [physics.data-an].
[170] G. Cowan, K. Cranmer, E. Gross, and O. Vitells, Asymptotic formulae for likelihood-based tests of new physics, Eur. Phys. J. C71 (2011) 1554, arXiv:1007.1727 [physics.data-an]. [Erratum: Eur. Phys. J. C73 (2013) 2501].
[171] AsymptoticCalculator Class Documentation, https://root.cern.ch/doc/master/classRooStats_1_1AsymptoticCalculator.html. Accessed: 2016-06-05.
[172] FrequentistCalculator Class Documentation, https://root.cern.ch/doc/master/classRooStats_1_1FrequentistCalculator.html. Accessed: 2016-06-05.
[173] A. Hocker and V. Kartvelishvili, SVD approach to data unfolding, Nucl. Instrum. Meth. A372 (1996) 469–481, arXiv:hep-ph/9509307 [hep-ph].
[174] G. D'Agostini, A Multidimensional unfolding method based on Bayes' theorem, Nucl. Instrum. Meth. A362 (1995) 487–498.
[175] G. D'Agostini, Improved iterative Bayesian unfolding, arXiv:1010.0632 [physics.data-an].
[176] T. Adye, Unfolding algorithms and tests using RooUnfold, presented at PHYSTAT 2011, CERN, Geneva, Switzerland, January 2011; to be published in a CERN Yellow Report.
[177] RooUnfold Documentation WebPage, http://hepunx.rl.ac.uk/~adye/software/unfold/RooUnfold.html. Accessed: 2016-04-12.
[178] EWUnfolding Source Code, svn+ssh://svn.cern.ch/reps/atlasphys/Physics/StandardModel/ElectroWeak/Analyses/EWUnfolding/branches/EWUnfolding-00-00-01-branch/Code. Revision: 238853. Accessed: 2016-05-17.

[179] ROOT TTree Documentation, https://root.cern.ch/doc/master/classTTree.html. Accessed: 2016-05-14.
[180] C. Hasterok, Optimization of the Search for Contributions of Anomalous Quartic Gauge Couplings to Vector Boson Scattering at the Large Hadron Collider. PhD thesis, TU Dresden, Oct 2013. https://cds.cern.ch/record/1647794. Presented 14 Nov 2013.
[181] T. Sandmann, Constraints on Anomalous Quartic Gauge Couplings in the Electroweak Gauge Boson Scattering WZjj Final State with the ATLAS Detector. PhD thesis, TU Dresden, Oct 2014.
[182] Summary plots from the ATLAS Standard Model physics group, https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/CombinedSummaryPlots/SM/index.html. Accessed: 2016-07-14.
[183] ATLANTIS Event Viewer, http://atlantis.web.cern.ch/atlantis/. Accessed: 2016-07-14.
[184] ATLAS Metadata Interface, https://ami.in2p3.fr/. Accessed: 2016-12-21.
[185] SVN Area for Simulation Steering Cards, https://svnweb.cern.ch/trac/atlasoff/browser/Generators/MC12JobOptions. Accessed: 2016-12-21.
[186] D. Berge, J. Haller, and A. Krasznahorkay, SFrame: A high-performance ROOT-based framework for HEP data analysis, PoS ACAT2010 (2010) 048.
[187] M. Ballintijn, R. Brun, F. Rademakers, and G. Roland, The PROOF Distributed Parallel Analysis Framework based on ROOT, arXiv:physics/0306110.
[188] Common Analysis Framework Documentation WebPage, http://hacol13.physik.uni-freiburg.de/~cburgard/CAF-doc/index.php. Accessed: 2016-03-20.

Acknowledgment

Over the last years I have had the great pleasure and honour of working with many inspiring people. For this experience I am truly grateful, and I would like to thank them accordingly.

First of all I would like to thank my thesis advisor, Michael Kobel. His engaging introductory course to particle physics got me interested in the field, and he gave me the opportunity to work in the Dresden ATLAS group and to contribute to the science communication effort. His patience, trust, and keen eye for detail have provided a very fruitful environment to work in. My gratitude also goes to Chara Petridou, who was so kind as to be the second referee for this work.

I would like to thank all members of the IKTP for the great work environment they create. A special thanks goes to the secretaries and system administrators for keeping everything running smoothly. I would also like to thank everyone involved in the Graduiertenkolleg for the opportunities it provides to PhD students. Special thanks goes to Martin zur Nedden, who was my second supervisor during my studies.

A special thanks goes to my colleagues in the ATLAS VBS group, in particular Philipp Anger, Ulrike Schnoor, Christian Gumpert, Anja Vest, Constanze Hasterok, Carsten Bittrich, Sophie Koßagk, Stefanie Todt, Franziska Iltzsche, Alexander Melzer, Tim Herrmann, and Sarah Krebs. You made coming to work a joy and were always there to discuss ideas, argue about physics, and simply have a really good time. It is a true blessing to have colleagues whom one also considers friends.

I also want to thank all the people working on the WZ analysis, especially Joany Manjarrés Ramos, Sarah Barnes, Dinos Bachas, and Emmanuel Sauvan. Doing an ATLAS analysis is somewhat of a marathon, and a great team goes a long way.

A non-negligible share of my working hours was dedicated to science communication, which always provided a great sense of purpose, and I thank the people of Netzwerk Teilchenwelt, the ATLAS outreach team, and IPPOG for the opportunity to work with them. A special thank you goes to Kate Shaw, Arturo Sanchez Pineda, and Sue Cheatham, with whom I worked in the ATLAS data and tools group. It is a real blast working with you.

A great thank you goes to the people who read this thesis in advance and provided invaluable comments, especially Carsten Bittrich, who read the whole document and never got tired of bugging me to finish this work.

I am grateful to my families for their unwavering support, which enabled me to study and work on this thesis without worry. Most of all I want to thank my wife Juliana, who is the best companion any doctor can hope for. Thank you for your patience, your love, and your all-around awesomeness.

Declaration (Erklärung)

I hereby declare that I have prepared the present work without inadmissible help from third parties and without the use of aids other than those stated; ideas taken directly or indirectly from external sources are marked as such. This work has not previously been submitted, in the same or a similar form, to any other examination authority in Germany or abroad.

This work was carried out at the Institut für Kern- und Teilchenphysik of the Technische Universität Dresden under the scientific supervision of Prof. Dr. Michael Kobel.

No previous unsuccessful doctoral examination procedures have taken place.

I accept the doctoral regulations (Promotionsordnung) of the Fakultät Mathematik und Naturwissenschaften of the Technische Universität Dresden of 23 February 2011.

Dresden, 15 July 2016

Felix Socher