Faculty of Science
Physics and Astronomy

January 12, 2017

Experimental and Theoretical Constraints on pMSSM Models
Investigating the diboson excess and fine-tuning

Ruud Peeters

Supervisor: Dr. Sascha Caron

Acknowledgements

This thesis would not have been possible without the help of many people. First and foremost, I want to thank my supervisor Sascha Caron, whose unlimited enthusiasm I will never forget. I also want to thank my unofficial second supervisor Wim Beenakker, who always had an answer to my questions. I want to thank Ronald Kleiss for agreeing to be the second corrector of this thesis.

I want to thank Krzysztof Rolbiecki and Jong Soo Kim for their help with the detector simulation in the diboson analysis, and Roberto Ruiz de Austri for his implementation of the fine-tuning measure in SoftSUSY and the many discussions about the correct implementation.

A big thanks goes out to Melissa van Beekveld; her help with the physics, the programming and dealing with supervisors was invaluable. I would like to thank Melissa, Bob and Milo for reading parts of my thesis and making sure that some horrible mistakes, typos and other errors did not make it into the final version. Finally, I want to thank my family for their support during this research.

Contents

1 Introduction

2 The Standard Model
2.1 Relativistic Lagrangian mechanics
2.2 Symmetries
2.3 The Higgs mechanism
2.4 Symmetries of the Standard Model
2.5 Fermionic particle content
2.6 Problems of the Standard Model
2.6.1 Dark matter
2.6.2 The hierarchy problem

3 Supersymmetry
3.1 Idea behind supersymmetry
3.2 Supersymmetry breaking
3.3 The MSSM
3.3.1 Mixing in the MSSM
3.4 The pMSSM
3.5 Conclusion

4 The diboson excess
4.1 Collider experiments
4.1.1 Variables used in collider physics
4.2 The diboson excess
4.2.1 Event selection
4.3 A diboson excess with pMSSM models
4.3.1 The Galactic Centre excess models
4.3.2 pMSSM processes with diboson creation
4.4 The optimal GCE model
4.5 Simulation
4.6 Analysis
4.7 pMSSM event selection
4.8 Results
4.8.1 The best parameter values
4.8.2 Detector simulation
4.9 Conclusion

5 Fine-tuning in pMSSM models
5.1 Theoretical background
5.1.1 Renormalisation
5.1.2 SUSY Higgs mechanism
5.2 Quantifying fine-tuning
5.2.1 Measures of fine-tuning
5.3 Fine-tuning in the literature
5.3.1 Requirements for minimal fine-tuning
5.3.2 Natural SUSY
5.4 Calculating fine-tuning
5.5 Fine-tuning scan
5.6 Results
5.6.1 Final results
5.7 Discussion
5.8 Conclusion
5.9 Outlook

A Minimisation of the SUSY Higgs potential

Bibliography

Chapter 1

Introduction

Research in elementary particle physics has been going on for a long time. The current status is that there is a theory, the Standard Model of particle physics, that can explain almost all processes we observe. However, there are some experimental observations and theoretical problems that show that the Standard Model is not complete. There are many different theories that try to address these problems, but none has been experimentally verified. In this thesis, one of these beyond the Standard Model theories will be examined in more detail. This is the theory of supersymmetry, which introduces one (or more) new particles for each particle in the Standard Model.

This thesis consists of two different analyses. The goal of both is to investigate the viability of supersymmetry, with a focus on the dark matter properties of supersymmetric models. The two analyses take completely different approaches, however. In the first analysis, an excess observed in the ATLAS detector at CERN is investigated. This excess might be a signal of the production of supersymmetric particles. A specific set of supersymmetric models is used to find out whether this excess can be caused by supersymmetric processes. The other research project studies fine-tuning in supersymmetry. Fine-tuning is a measure of how (un)natural a theory is. An unnatural theory only works if its parameters are very restricted, without a clear explanation; such a theory is not very credible. The goal of this analysis is to find the most natural supersymmetric models that also satisfy all experimental constraints.

The structure of this thesis is as follows. Chapter 2 gives a theoretical background of the Standard Model of particle physics. Supersymmetry is introduced as a beyond the Standard Model theory in Chapter 3. The analysis of the diboson excess in the ATLAS detector is discussed in Chapter 4. The last chapter, Chapter 5, treats fine-tuning in supersymmetry.

Conventions

• Natural units are used throughout this thesis, so ℏ = c = 1. Masses will therefore be given in units of energy, generally in GeV or TeV.

• The Einstein sum convention is used in this thesis, meaning that all repeated indices are summed over.

• A partial derivative ∂/∂x^µ is written as ∂_µ.

• The Dirac gamma matrices γ^µ are defined as:

\gamma^0 = \begin{pmatrix} 0 & I_2 \\ I_2 & 0 \end{pmatrix}, \qquad \gamma^i = \begin{pmatrix} 0 & \sigma^i \\ -\sigma^i & 0 \end{pmatrix},

where I_2 denotes the 2×2 identity matrix and the σ^i are the Pauli matrices, defined by:

\sigma^1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma^2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma^3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}

• The slashed notation a̸ denotes the contraction of a four-vector a^µ with the gamma matrices γ^µ: a̸ = γ^µ a_µ

• The adjoint of a fermionic field ψ is defined as: ψ̄ = ψ†γ⁰

Chapter 2

The Standard Model

In this chapter, a quick overview of the Standard Model of particle physics is given. The Lagrangian is introduced first, then the symmetries of the Standard Model Lagrangian are discussed. The Higgs mechanism is introduced, followed by a description of the bosonic and fermionic particle content of the Standard Model. Finally, some problems of the Standard Model are discussed.

This chapter is mainly based on [1] and [2]. An overview of group theory can be found in [3]. All other sources are cited in the text.

2.1 Relativistic Lagrangian mechanics

Quantum field theory describes the world of elementary particles, combining quantum mechanics and special relativity in one theory. The most important quantity in quantum field theory is the Lagrangian (L).∗ The Lagrangian is so important because the equations of motion of all particles can be deduced from it.

There are three different kinds of particles in the Standard Model, each with their own terms in the Lagrangian: scalar particles (spin-0), fermions (spin-1/2) and vector bosons (spin-1). In this chapter, a general scalar, fermion and vector boson will be represented by φ, ψ and A_µ respectively. An implicit spacetime dependence is assumed for all fields (φ = φ(x^µ)).

The terms in the Lagrangian can be subdivided into three groups: kinetic terms, mass terms and interaction terms. The kinetic terms of the three different kinds of particles are shown in Equation 2.1.

Scalar: (∂_µφ)(∂^µφ*)
Fermion: iψ̄γ^µ∂_µψ        (2.1)
Vector boson: F_{µν}F^{µν}

In this equation, the γ^µ are the four-dimensional Dirac matrices and F_{µν} = ∂_µA_ν − ∂_νA_µ. The kinetic terms dictate the behaviour of a particle without interactions.

∗This is actually the Lagrangian density, but it is usually called the Lagrangian. The same will be done in this thesis.

However, most particles in the Standard Model are not massless. They have mass terms of the following form:

Scalar: m²φφ*
Fermion: mψ̄ψ        (2.2)
Vector boson: m²A_µA^µ

Interactions make up the third set of terms, dictating which interactions are possible in a theory. An example of an interaction term is L_int = λψ̄φψ, which indicates an interaction between a scalar and two fermions. The interaction strength is given by λ.

The equations of motion can be derived from the Lagrangian using the Euler-Lagrange equation for all fields in the theory:

\frac{\delta \mathcal{L}}{\delta \phi} = \partial_\mu \frac{\delta \mathcal{L}}{\delta (\partial_\mu \phi)}.        (2.3)
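As a quick worked example (not written out in the text above), applying Equation 2.3 to the free scalar Lagrangian with a mass term reproduces the Klein-Gordon equation:

```latex
% Free complex scalar; phi and phi* are treated as independent fields
\mathcal{L} = (\partial_\mu \phi)(\partial^\mu \phi^*) - m^2 \phi \phi^*
% The two variations with respect to phi*:
\frac{\delta \mathcal{L}}{\delta \phi^*} = -m^2 \phi ,
\qquad
\frac{\delta \mathcal{L}}{\delta (\partial_\mu \phi^*)} = \partial^\mu \phi
% Equation 2.3 then gives the Klein-Gordon equation:
\partial_\mu \partial^\mu \phi + m^2 \phi = 0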

2.2 Symmetries

A Lagrangian can have one or more symmetries: operations that can be applied to the fields in the theory while leaving the Lagrangian invariant. This section will focus on continuous symmetries. The most elementary example of such a symmetry is a global phase transformation: a phase transformation that does not depend on the spacetime coordinate. A global phase transformation is an element of the U(1) group and transforms fermionic fields as ψ → ψ′ = e^{iqα}ψ. The fermionic terms in the Lagrangian are invariant under this transformation, since each term contains a complex conjugate that cancels the phase transformation.

It is also possible to have a local U(1) transformation, where α does depend on the spacetime coordinate (α = α(x)). This symmetry is also one of the symmetries of the Standard Model. In the case of a local U(1) transformation the invariance of the Lagrangian is no longer present:

ψ̄γ^µ∂_µψ → e^{−iqα(x)} ψ̄γ^µ ∂_µ(ψ e^{iqα(x)})
          = e^{−iqα(x)} ψ̄γ^µ [(∂_µψ) e^{iqα(x)} + iq(∂_µα(x)) ψ e^{iqα(x)}]
          = ψ̄γ^µ∂_µψ + iq ψ̄γ^µ(∂_µα(x))ψ.        (2.4)

The x-dependence of α breaks the invariance of the Lagrangian. The way to get rid of this term is by changing the derivative to a covariant derivative. In the case of a U(1) symmetry this is done by:

∂_µ → D_µ = ∂_µ − iqA_µ.        (2.5)

The new field A_µ is a vector boson, and it has the transformation property A_µ → A′_µ = A_µ + ∂_µα. With this transformation property, the Lagrangian is invariant. The kinetic term for this new vector boson is added to the Lagrangian to add dynamical degrees of freedom to the field.
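One can check in one line (a standard computation, added here for completeness) that the covariant derivative does the job: D_µψ now transforms with the same overall phase as ψ itself, so every term built from it, such as ψ̄γ^µD_µψ, is invariant:

```latex
D'_\mu \psi' = \bigl(\partial_\mu - iq(A_\mu + \partial_\mu \alpha)\bigr)\, e^{iq\alpha}\psi
            = e^{iq\alpha}\bigl(\partial_\mu \psi + iq(\partial_\mu \alpha)\psi
              - iqA_\mu \psi - iq(\partial_\mu \alpha)\psi\bigr)
            = e^{iq\alpha}\, D_\mu \psi
```

The two terms containing ∂_µα cancel exactly, which is what the transformation property of A_µ was designed to achieve.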

Non-Abelian gauge theories

The Standard Model is also invariant under local transformations from the SU(3) group and the SU(2) group. Since these are both of the form SU(N), they will be discussed simultaneously. A transformation that is an element of an SU(N) group is characterised by:

ψ → ψ′ = e^{iα^a T^a} ψ.

The T^a in this equation are the generators of the group SU(N). The index a runs over the number of generators, which is N² − 1 in the case of SU(N). The generators do not commute in general, but obey the commutation relation [T^a, T^b] = if^{abc}T^c. The f^{abc} are known as the structure constants. A group is called non-Abelian if its generators are non-commuting.
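As an illustrative numerical cross-check (not part of the thesis), the SU(2) commutation relation can be verified explicitly with the generators T^a = σ^a/2, for which the structure constants are f^{abc} = ε^{abc}:

```python
# Check of [T^a, T^b] = i f^{abc} T^c for SU(2), with T^a = sigma^a / 2
# and structure constants f^{abc} = epsilon^{abc} (indices 0-based here).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def epsilon(a, b, c):
    """Totally antisymmetric symbol with epsilon(0, 1, 2) = +1."""
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((a, b, c), 0)

# Pauli matrices sigma^1, sigma^2, sigma^3 and the generators T^a
sigma = [[[0, 1], [1, 0]],
         [[0, -1j], [1j, 0]],
         [[1, 0], [0, -1]]]
T = [[[0.5 * x for x in row] for row in s] for s in sigma]

for a in range(3):
    for b in range(3):
        rhs = [[sum(1j * epsilon(a, b, c) * T[c][i][j] for c in range(3))
                for j in range(2)] for i in range(2)]
        lhs = commutator(T[a], T[b])
        assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
                   for i in range(2) for j in range(2))
```

The same check works for SU(3) with the Gell-Mann matrices, at the cost of tabulating its structure constants.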

The derivation for a local transformation is nearly identical to the U(1) case (Equation 2.5), except that the commutation relation of the generators should be taken into account. The generators appear in the covariant derivative, which is now defined as:

D_µ = ∂_µ − igA^a_µ T^a,        (2.6)

where g is the coupling constant of the field. There is now not one new field, but N² − 1. The kinetic term is different as well, because the generators are non-commuting. The field strength tensor has the form:

F^a_{µν} = ∂_µA^a_ν − ∂_νA^a_µ − gf^{abc} A^b_µ A^c_ν.        (2.7)

The last term results in three- and four-point self-interactions of the field A^a_µ.

2.3 The Higgs mechanism

A theory can have a symmetry at a high-energy scale, while lacking this symmetry at an observable low-energy scale. In such a case, symmetry breaking occurs when moving from the high scale to the low scale. This is the case in the Standard Model, where the Higgs mechanism is responsible for the symmetry breaking. In the Higgs mechanism, a new complex scalar field φ is introduced, with the Lagrangian:

L_H = (∂_µφ)(∂^µφ*) − µ²|φ|² − λ|φ|⁴.        (2.8)

The second part of this Lagrangian is known as the Higgs potential: V_H = µ²|φ|² + λ|φ|⁴. There are two free parameters in this potential: µ² and λ. The potential should be bounded from below, otherwise the theory has no stable point. Since the potential is dominated by the λ term for large values of φ, this condition translates to the restriction that λ should be positive. For µ² there is no restriction: it can be either positive or negative.

A field should always be analysed at the minimum of its potential. In most cases the minimum of the potential can be found at the origin, but this is not always the case. For the Higgs potential there are two different scenarios. For positive µ² the origin is the only stable point. This scenario is not very interesting, since it will not result in symmetry breaking.

For negative µ² the field φ is a tachyon: it has an imaginary mass. In this case, the origin is an unstable point. The minima of the potential are now located on a ring, as can be seen in Figure 2.1.

Figure 2.1: The shape of the Higgs potential for negative µ².

The radius of this ring is v = \sqrt{-\mu^2 / (2\lambda)}. The shift from the origin to the real minimum breaks the U(1) symmetry that was present before. This mechanism is known as spontaneous symmetry breaking, since the symmetry is broken after spontaneously moving from the (symmetric) origin to a (non-symmetric) minimum.

To get the field content after symmetry breaking, an expansion around the minimum has to be made. It is easiest to take φ_cl = v, although any other point on the ring is also a valid choice. The field can now be expanded as:

φ = (v + η)e^{iξ}.        (2.9)

There are two new real fields here: the field ξ moving along the circle of minima and the field η moving perpendicularly to it. This expansion should be substituted in the Lagrangian to see the consequences. When this is done, one obtains:

L_H = ∂_µη∂^µη + (v + η)²∂_µξ∂^µξ − 4λv²η² + λv⁴ − 4λvη³ − λη⁴.        (2.10)

Both new fields get a kinetic term, as expected. The term −4λv²η² is a mass term for the field η with the correct sign: it is no longer a tachyon. There is no mass term for the field ξ, so this is a massless field. Through symmetry breaking, the complex tachyonic field turns into a real massive field and a real massless field.
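The statements above can be checked numerically. The sketch below (with illustrative parameter values, not taken from the thesis) confirms that for µ² < 0 the minimum of V_H lies on a ring of radius v = √(−µ²/(2λ)), and that the curvature of the potential there is positive in the radial (η) direction and zero in the angular (ξ) direction, the massless Goldstone mode:

```python
# Numerical sketch of V = mu2|phi|^2 + lam|phi|^4 with mu2 < 0:
# ring of minima at radius v, massive radial mode, massless angular mode.
import math

mu2, lam = -1.0, 0.5                  # illustrative values, mu2 < 0, lam > 0
v = math.sqrt(-mu2 / (2 * lam))       # radius of the ring of minima

def V(x, y):                          # phi = x + i*y
    r2 = x * x + y * y
    return mu2 * r2 + lam * r2 * r2

h = 1e-4                              # finite-difference step
d2V_radial = (V(v + h, 0) - 2 * V(v, 0) + V(v - h, 0)) / h**2
d2V_angular = (V(v, h) - 2 * V(v, 0) + V(v, -h)) / h**2

assert abs(d2V_radial - 8 * lam * v**2) < 1e-4   # massive eta mode
assert abs(d2V_angular) < 1e-4                   # massless xi (Goldstone) mode
```

With the ∂_µη∂^µη normalisation of Equation 2.10 (no factor 1/2 in the kinetic term), the η mass term −4λv²η² corresponds to half of this radial curvature 8λv².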

2.4 Symmetries of the Standard Model

The Standard Model has a gauge group SU(3) ⊗ SU(2)_L ⊗ U(1)_Y. The subscript L indicates that only left-handed particles interact with this gauge group; the subscript Y refers to hypercharge, the quantum number associated with the U(1) group.

The behaviour of the Standard Model particles is completely determined if the transformation properties of the particles for all three groups are given. The transformation property is defined by the representation: a set of matrices with the same group structure as the gauge group it corresponds to. For the SU(2) and SU(3) groups, the representation is labelled by the dimension of the matrices in this set. In the case of the U(1) group, all matrices are 1-dimensional. The group elements are of the form e^{iYθ}, where Y is the hypercharge of the particle. The representation is then defined by Y.

The gauge group of the Standard Model is usually split in two, because the SU(3) group is not affected by symmetry breaking, while the SU(2)_L ⊗ U(1)_Y part is. These two groups will be discussed separately below.

Strong nuclear force

The eight (N² − 1) vector bosons associated with the SU(3) group are the gluons (g). They carry the strong nuclear force. This force only couples to particles that have colour. The theory that describes the strong nuclear force is quantum chromodynamics (QCD).

The strong force has an interesting property. The energy in the field increases when the distance between two coloured particles grows [4]. Eventually there is so much energy in the field that a pair of coloured particles can be created.

Figure 2.2: Colour confinement. In the top picture, the energy between the two quarks has increased to the point that it is favourable to create two new quarks, which are created in the middle picture and fly off with their partner quarks in the bottom figure.

The newly created particles will undergo the same effect, so when a quark or gluon is created in an interaction it will not be observable as a single particle, but it will create a cascade of coloured particles. These combine to form colour-neutral bound states called hadrons; the process is called hadronization. The macroscopic object that is formed in this way is called a jet.

Electroweak sector

The electroweak sector of the Standard Model consists of the two other groups: SU(2)_L ⊗ U(1)_Y. The SU(2) group has three vector bosons associated with it. These are the W^{1,2,3} bosons. The quantum number of this symmetry is the weak isospin vector T⃗. The U(1) group only has one vector boson, the B boson, with the hypercharge as quantum number.

These four vector bosons are all massless, but in experiments we see three massive bosons (the W^± and Z bosons) and one massless boson (the photon). This is a consequence of the symmetry breaking of the SU(2)_L ⊗ U(1)_Y group. The symmetry breaking is realised with the Higgs mechanism described above, but the symmetry that is broken is now not just U(1), but SU(2)_L ⊗ U(1)_Y.

The details of the full SU(2)_L ⊗ U(1)_Y symmetry breaking are not relevant for this thesis, so only the important elements will be discussed here. A more in-depth analysis can be found in [5].

The Standard Model Higgs sector consists of an SU(2) doublet with hypercharge −1/2. There is one positively charged complex scalar and one neutral complex scalar in this doublet: Φ = (φ⁺, φ⁰). The result of the Higgs mechanism in the Standard Model is that the SU(2)_L ⊗ U(1)_Y symmetry is broken to a U(1)_EM symmetry, with the electric charge as its quantum number. The link between hypercharge, weak isospin and electric charge is:

Q_EM = T_3 + Y,        (2.11)

with T_3 the third component of the weak isospin vector. When breaking a U(1) symmetry, one massless and one massive scalar emerge (Equation 2.10). After breaking an SU(2)_L ⊗ U(1)_Y symmetry, there are three massless scalars and one massive scalar. The fields of the electroweak gauge bosons can be redefined to absorb the three massless modes, giving mass to the vector bosons. In this process the fields mix to form mass eigenstates. The W^1 and W^2 bosons mix to form the massive W^± bosons, while the W^3 and B mix to form the massive Z boson and the massless photon (γ). The massive scalar particle that is left is the Higgs particle that was found at the LHC in 2012 [6].
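The explicit combinations are the standard textbook ones, quoted here for reference (the weak mixing angle θ_W is not introduced in the text above):

```latex
W^\pm_\mu = \frac{1}{\sqrt{2}}\left(W^1_\mu \mp i W^2_\mu\right),
\qquad
\begin{pmatrix} Z_\mu \\ A_\mu \end{pmatrix}
=
\begin{pmatrix} \cos\theta_W & -\sin\theta_W \\ \sin\theta_W & \cos\theta_W \end{pmatrix}
\begin{pmatrix} W^3_\mu \\ B_\mu \end{pmatrix}
```

Here A_µ is the photon field; the rotation angle θ_W is fixed by the SU(2)_L and U(1)_Y coupling constants.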

The bosonic fields and particle content of the Standard Model are listed in Table 2.1.

Table 2.1: Properties of the Standard Model bosons. The last column gives the representation in the Standard Model gauge group.

Field         Content        Spin   Q_EM     SU(3) ⊗ SU(2)_L ⊗ U(1)_Y
G^a_µ         g              1      0        8 ⊗ 1 ⊗ 0
W^{1,2,3}_µ   W^±, (Z, γ)    1      ±1, 0    1 ⊗ 3 ⊗ 0
B_µ           (Z, γ)         1      0        1 ⊗ 1 ⊗ 0
Φ             h              0      0        1 ⊗ 2 ⊗ −1/2

2.5 Fermionic particle content

So far all the bosonic particles in the Standard Model have been introduced and their properties have been discussed. The Standard Model also contains fermions. These are discussed in this section. There are 12 elementary fermions in the Standard Model: 6 quarks and 6 .

The fermions that have a colour charge are called quarks. They can be split up in two groups. The up-type quarks are the up (u), charm (c) and top (t) quarks, which have an electric charge of Q_EM = +2/3, in units of the charge |e|. The down-type quarks are the down (d), strange (s) and bottom (b) quarks. Their electric charge is Q_EM = −1/3.

The fermions without colour charge are called leptons. There are three charged leptons, with Q_EM = −1: the electron (e⁻), the muon (µ⁻) and the tau (τ⁻). The other three leptons are the electrically neutral neutrinos. Each charged lepton has a neutrino associated with it. They are therefore called the electron-neutrino (ν_e), the muon-neutrino (ν_µ) and the tau-neutrino (ν_τ).

The charged fermions consist of two chiral modes: a left-handed and a right-handed one. The SU(2)_L bosons only couple to left-handed modes. Neutrinos do not have strong and electromagnetic interactions: they only interact through the weak force and gravity. So far only interactions with left-handed neutrinos have been observed, so only left-handed neutrinos are included in the Standard Model.

Mass terms for the fermions in the Standard Model are not allowed in the Lagrangian before symmetry breaking, because these terms are not invariant under SU(2) transformations. Instead, the Higgs field is used to generate fermionic masses. Terms of the form L = y_f ψ̄φψ are introduced, where y_f is a Yukawa coupling. It is not immediately clear that this is a mass term and not an interaction, since there are three fields, while a mass term has only two. The crux is that the field φ has to be replaced with its expansion around the minimum (Equation 2.9): φ = v + η. There are now two terms: y_f v ψ̄ψ + y_f ψ̄ηψ. The first term is of the appropriate form for a mass term with m_f = y_f v, while the other term is an interaction between two fermions and a Higgs boson. All fermions in the Standard Model get their mass through this mechanism.
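To get a feeling for the sizes involved, the sketch below computes a few Yukawa couplings in the normalisation m_f = y_f v used here. The numerical values of v and the fermion masses are rough reference numbers assumed purely for illustration:

```python
# Illustrative sizes of Yukawa couplings in the normalisation m_f = y_f * v.
# v and the masses (in GeV) are rough values assumed for this sketch.
v = 174.0
masses = {"top": 173.0, "bottom": 4.18, "electron": 0.000511}

yukawa = {name: m / v for name, m in masses.items()}

# The top Yukawa is of order one, the electron's is tiny:
assert 0.9 < yukawa["top"] < 1.1
assert yukawa["electron"] < 1e-5
```

The huge spread of these couplings, spanning roughly six orders of magnitude, is itself unexplained in the Standard Model.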

The fermionic particle content of the Standard Model is summarised in Table 2.2.

Table 2.2: Properties of the Standard Model fermions. The last column gives the representation in the Standard Model gauge group.

Particles     Spin   Q_EM            SU(3) ⊗ SU(2)_L ⊗ U(1)_Y
(u_L, d_L)    1/2    (+2/3, −1/3)    3 ⊗ 2 ⊗ 1/6
u_R           1/2    +2/3            3 ⊗ 1 ⊗ 2/3
d_R           1/2    −1/3            3 ⊗ 1 ⊗ −1/3
(e_L, ν_L)    1/2    (−1, 0)         1 ⊗ 2 ⊗ −1/2
e_R           1/2    −1              1 ⊗ 1 ⊗ −1
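A small consistency check of Equation 2.11 against the quantum numbers of Table 2.2 (the T_3 assignments below are the standard ones: +1/2 for the upper doublet component, −1/2 for the lower, 0 for singlets):

```python
# Check Q_EM = T_3 + Y for the fermion entries of Table 2.2.
fermions = [
    # (name, T3, Y, Q_EM)
    ("u_L",  +1/2, +1/6, +2/3),
    ("d_L",  -1/2, +1/6, -1/3),
    ("u_R",   0.0, +2/3, +2/3),
    ("d_R",   0.0, -1/3, -1/3),
    ("e_L",  -1/2, -1/2, -1.0),
    ("nu_L", +1/2, -1/2,  0.0),
    ("e_R",   0.0, -1.0, -1.0),
]

for name, t3, y, q in fermions:
    assert abs(t3 + y - q) < 1e-12, name
```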

The fermions in the Standard Model are often grouped into generations. These are groups of four particles: one up-type quark, one down-type quark, a charged lepton and its associated neutrino. The generations are ordered in increasing mass, so the first generation consists of the up-quark, down-quark, electron and electron-neutrino.

The fermionic content of the Standard Model is doubled due to the presence of antimatter. Each fermion in the Standard Model has a partner with opposite internal quantum numbers.

2.6 Problems of the Standard Model

Although the Standard Model is an excellent theory for most processes we observe, it is not the final theory of physics. There are some observations that can not be explained by the Standard Model. In addition, there are some internal issues that hint towards theories beyond the Standard Model. Two of these problems are discussed below.

2.6.1 Dark matter

The Standard Model describes each kind of matter we can observe directly. But there is another kind of matter that is only indirectly observable. It only has gravitational and possibly weak interactions. The existence of this type of matter is established by astronomical observations [7,8].

Observations of stars in the Milky Way have shown that the radial velocity of these stars does not follow the distribution that is expected from the amount of matter that is observed directly. Stars far away from the centre of our galaxy should have a rapidly decreasing radial velocity, but the velocity distribution remains nearly constant (see Figure 2.3). This can either be explained by a large amount of unidentified matter in the galaxy, or by a modification of the laws of gravity on large scales. There are more discrepancies caused by this phenomenon, and it turns out that it is most naturally explained by a new particle. This is therefore the most widespread explanation.

Figure 2.3: The velocity of stars as a function of the distance to the Galactic centre. The prediction is shown by line A; the observed velocities are shown by line B. Source: [9].

This unidentified matter is called dark matter, since it has no electromagnetic interactions. The leading hypothesis is that it consists of a new elementary particle, which has to be massive to produce the gravitational effects that are observed. If this particle has an interaction strength similar to the weak interaction and a mass around the electroweak scale (∼100 GeV), it can naturally explain the amount of dark matter that is observed [10]. Such a particle is called a weakly interacting massive particle (WIMP).

The Standard Model does not contain a particle that can explain dark matter. The neutrinos are the only candidate, but their mass is too small to be able to make up all dark matter [11]. Extensions of the Standard Model have therefore been proposed that contain a dark matter candidate.

2.6.2 The hierarchy problem

Another problem of the Standard Model is of a theoretical nature. This problem arises from loop corrections to the Higgs propagator. In this case the 1-loop diagrams are the most interesting (see Figure 2.4).

The contribution of a loop diagram depends on which particle is in the loop. A fermionic diagram has a contribution:

\Delta m_h^2 \propto -y_f^2 \int_0^\infty \mathrm{d}k \, k^3 \, \frac{k^2 + m^2}{(k^2 - m^2)^2},        (2.12)

with y_f the (fermionic) coupling constant, m the mass of the fermion and k the momentum of the fermion. The integral over the size of k has an upper limit of infinity, since all possible loop momenta have to be taken into account. This makes it a divergent integral. The Higgs mass observed in experiments is finite, so the infinity originating from this integral should disappear. This can be fixed by applying renormalisation.

Figure 2.4: 1-loop corrections to the Higgs mass for fermions (a) and scalars (b). Adapted from [12].

The first step in the renormalisation procedure is to cut off the integral at a certain value Λ. The upper limit of the integral is now Λ instead of infinity. This cut-off is made to include the fact that the Standard Model will not hold at the highest energies. At a certain energy scale, gravitational effects will start to play a role. This scale is the Planck scale, at 10^16 GeV. Using the cut-off, the corrections to the Higgs mass will be of the form:

∆m_h² ∝ O(Λ²) + O(log Λ).        (2.13)

The first term is a quadratic divergence, while the second term is a logarithmic divergence. The integral now depends on the cut-off scale, but the low-energy observables should not depend on the exact value of this scale. The next step in the renormalisation procedure is therefore to absorb this Λ-dependence into the parameters of the theory, making the parameters energy dependent. In the case of Higgs corrections, the coupling constants can be used to absorb the scale-dependence. This is done by measuring the value of the physical coupling constants. By adding counterterms to the Lagrangian, the divergent terms are cancelled [13].

The degree of divergence determines how tuned these counterterms have to be. All other processes in the Standard Model have at most a logarithmic divergence, with a log(Λ) dependence. As a result, the counterterms have to be tuned up to a factor log(M_P) = log(10^16) ≈ 37. In the case of a quadratic divergence, like in the corrections to the Higgs mass, this problem is much bigger. There is a tuning up to a factor (M_P)² = (10^16)² = 10^32. This means that the counterterms are fixed up to 32 digits [14]. A small deviation in one of the terms would lead to a much heavier Higgs boson. So, the fact that the Higgs mass is not at the Planck scale but at the 100 GeV scale can only be explained in the Standard Model by tuning the parameters of the theory to an extraordinary degree. This is called the hierarchy problem: the parameters have to be extremely tuned due to the large scale difference between the Planck scale and the electroweak scale.
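The quoted tuning factors are simple arithmetic, which can be checked directly (using the cut-off of 10^16 GeV from the text):

```python
# Arithmetic behind the tuning factors quoted above.
import math

M_P = 1e16                       # cut-off scale in GeV used in the text

log_tuning = math.log(M_P)       # logarithmic divergence: ~37
quad_tuning = M_P ** 2           # quadratic divergence: 10^32

assert 36 < log_tuning < 37
assert abs(quad_tuning / 1e32 - 1) < 1e-9
assert round(math.log10(quad_tuning)) == 32   # counterterm fixed to ~32 digits
```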

It could be that nature works this way; however, most physicists agree that such a theory is not at all elegant. It would be better if there were a mechanism that solves this problem and explains why the Higgs boson has such a low mass.

Chapter 3

Supersymmetry

This chapter gives an introduction to the theory of supersymmetry. The focus will be on the particle content of the theory, not on the exact mathematical formulation. This chapter is mainly based on [15].

3.1 Idea behind supersymmetry

The two problems listed in the previous chapter indicate that there is a need for a new theory. A more detailed analysis of the hierarchy problem provides an idea for such a theory. The two diagrams in Figure 2.4 give similar contributions, but their coupling constants are different and they contribute with opposite signs. The hierarchy problem would be solved if there existed a fermion for each boson in the Standard Model and vice versa: their contributions to the Higgs mass would cancel, and there would no longer be a problem with the low value of the Higgs mass. This is the idea behind supersymmetry (SUSY).

In supersymmetry, each Standard Model particle gets one or more superpartners. They are combined in a supermultiplet: a combination of states that are related through the supersymmetry operation. Each Standard Model particle is in a supermultiplet with a supersymmetric particle. The structure of the multiplet is such that each constituent has the same number of degrees of freedom. These multiplets are constructed before symmetry breaking, so all particles are massless.

There are two kinds of multiplets relevant for this thesis. The chiral multiplet contains one fermion and two real scalars. There is one scalar for each helicity state of the fermion. All fermions in the Standard Model are in such a multiplet, as are the Higgs doublets (there are two Higgs doublets needed in supersymmetry; this will be discussed later).

The other relevant multiplet is the vector multiplet. It contains one massless vector boson and one massless fermion. All Standard Model gauge bosons are in such a multiplet.

The new interactions in supersymmetry would allow for quick proton decay, but this process has never been observed; there are very strong experimental bounds on proton decay [16]. A new symmetry is therefore postulated that forbids the terms responsible for proton decay: R-parity. It introduces a new multiplicative quantum number that has to be conserved in every interaction. Standard Model particles have R-parity +1, whereas supersymmetric particles have R-parity −1. The number of supersymmetric particles in each interaction needs to be even due to this symmetry.
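R-parity is often written compactly as R = (−1)^{3(B−L)+2s}, with baryon number B, lepton number L and spin s. This standard formula is not quoted in the text above, but it reproduces the assignments just described:

```python
# R-parity as R = (-1)^(3(B - L) + 2s): Standard Model particles come
# out at +1, their superpartners (same B and L, spin shifted by 1/2) at -1.
def r_parity(B, L, s):
    return (-1) ** round(3 * (B - L) + 2 * s)

assert r_parity(B=0, L=1, s=1/2) == +1    # electron
assert r_parity(B=1/3, L=0, s=1/2) == +1  # quark
assert r_parity(B=0, L=0, s=1) == +1      # gauge boson
assert r_parity(B=0, L=1, s=0) == -1      # selectron
assert r_parity(B=1/3, L=0, s=0) == -1    # squark
assert r_parity(B=0, L=0, s=1/2) == -1    # gaugino/higgsino
```

Since R is multiplicative and every vertex conserves it, any interaction must contain an even number of R = −1 particles, exactly the statement above.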

An effect of R-parity is that the lightest supersymmetric particle (LSP) cannot decay, since there is no lighter supersymmetric particle to decay to. This has important consequences, because the LSP could be a WIMP. If R-parity is a symmetry of nature, this weakly-interacting LSP could be a viable dark matter candidate. R-parity is assumed throughout this thesis.

3.2 Supersymmetry breaking

Although supersymmetry sounds very appealing, it must be said that it can not be an exact symmetry of nature. If it were, all superpartners would have the same mass as their Standard Model counterparts. There would be multiple additional massless particles and strongly interacting particles that would definitely have been observed already, so supersymmetry has to be a broken symmetry.

The next question is how this symmetry breaking is realised. There are different ways of breaking a symmetry, the most common of which are spontaneous breaking (like the Higgs mechanism) and explicit breaking. Theoretically the most appealing option is spontaneous symmetry breaking. There are some supersymmetric theories that use spontaneous breaking, but the most general solution is to use explicit breaking. In this way the ignorance of an underlying theory is parametrised by the explicit breaking terms. Interactions and masses that break the symmetry are manually added to the Lagrangian in this case. The masses of the supersymmetric particles are all generated by this mechanism, so they are free parameters.

3.3 The MSSM

One can construct multiple theories on the basis of supersymmetry, using a different number of supersymmetry generators and adding additional multiplets without a Standard Model particle. The simplest (and most used) supersymmetric theory is called the minimal supersymmetric Standard Model (MSSM). In this theory one supersymmetry is proposed and no additional particles are added.

Particle Content

The particle content of the MSSM can partly be deduced from the Standard Model. The partner particles of the Standard Model fermions are called sfermions; there are 21 of them. There are also 21 partners of the anti-fermions. There are 2 sfermions for each charged fermion: one for the left-handed and one for the right-handed chiral mode of the fermion. Since the Standard Model only has left-handed neutrinos, there are also only left-handed sneutrinos. The name of an individual sfermion is obtained by putting an 's' (for scalar) before the name of the Standard Model particle. This results for example in squarks, selectrons and tau sneutrinos. Sfermions have the same colour quantum number and electroweak interactions as their Standard Model counterparts.

The supersymmetric partners of the electroweak gauge bosons are obtained by taking the partners of the gauge eigenstates $W^{1,2,3}$ and the $B$ boson, not of the mass eigenstates $W^\pm$, $Z$ and $\gamma$. This is done for reasons that will become clear later on. These superparticles are the winos and the bino (appending 'ino' to the name of the boson). They are collectively known as electroweakinos. The superpartner of the gluon is the gluino.

The Higgs sector is more complex in the MSSM. To make it a valid theory, one Higgs doublet is no longer enough. There have to be two doublets to give mass to the up type quarks on the one hand and the down type quarks and charged leptons on the other hand. Both doublets

consist of two complex scalars, one neutral and one charged scalar:

$H_u = \begin{pmatrix} H_u^+ \\ H_u^0 \end{pmatrix}, \qquad H_d = \begin{pmatrix} H_d^0 \\ H_d^- \end{pmatrix}. \qquad (3.1)$

Each complex scalar gets a supersymmetric partner called a higgsino.

The symbols of the superpartners are very similar to those of the Standard Model particles, but they carry a tilde to distinguish the two. An up squark is thus denoted by $\tilde u$ and a sneutrino by $\tilde\nu$.

3.3.1 Mixing in the MSSM All new sectors in the MSSM contain mixing. Each sector will be discussed separately in the following section.

The Higgs sector contains 8 real scalars, and just like in the Standard Model, three of them are used to give mass to the $W^\pm$ and $Z$ bosons. The remaining five scalars mix to form the neutral bosons $h$, $H^0$ and $A$, and the charged bosons $H^\pm$.

The Higgs potential in the MSSM is more extended because of the presence of two Higgs doublets. It is given by:

$V_H = (\mu^2 + m_{H_u}^2)\,|H_u^0|^2 + (\mu^2 + m_{H_d}^2)\,|H_d^0|^2 - (b\, H_u^0 H_d^0 + \mathrm{c.c.}) + \tfrac{1}{8}(g^2 + g'^2)\,(|H_u^0|^2 - |H_d^0|^2)^2. \qquad (3.2)$

This potential only depends on the neutral Higgs fields, since the two charged fields can be rotated to zero using gauge freedom. The potential will be discussed in much more detail in Chapter 5; in this section, only its parameters are discussed.

The masses of the two Higgs doublets are governed by three parameters: the $\mu$ term, which gives mass to both doublets, and the $m_{H_u}$ and $m_{H_d}$ terms, which give mass to the individual doublets. The mixing between the two doublets is governed by the parameter $b$. The potential is minimized for certain values of $H_u^0$ and $H_d^0$; these values are the vacuum expectation values, denoted by $v_u$ and $v_d$. They are written as $v_u = v \sin\beta$ and $v_d = v \cos\beta$, such that $\tan\beta = v_u/v_d$ and $v^2 = v_u^2 + v_d^2$.

Because of the electroweak symmetry breaking conditions, not all parameters are independent: only three are needed as input. One parameter is always used as input: $\tan\beta$. The other two can be either $m_{H_u}$ and $m_{H_d}$, or $\mu$ and $m_A$, where $m_A$ is the mass of the CP-odd Higgs boson $A$.

In the Standard Model, mixing only occurs in the electroweak sector, where the vector bosons mix to form mass eigenstates. The electroweak gauginos mix as well, but due to the presence of the higgsinos, there are now eight particles that mix, instead of the four in the Standard Model. This mixing can be split up into a charged and a neutral sector. In the neutral sector the bino, the third wino and the two neutral higgsinos (with mass parameters $M_1$, $M_2$ and $\mu$ respectively) mix to form four neutralinos $\tilde\chi^0_{1,2,3,4}$, labelled in order of increasing mass. In the basis $\psi^0 = (\tilde B, \tilde W^3, \tilde H_u^0, \tilde H_d^0)$, the mass term in the Lagrangian is:

$\mathcal{L}_{\tilde\chi^0} = -\tfrac{1}{2}\,(\psi^0)^T M_{\tilde\chi^0}\, \psi^0 + \mathrm{c.c.}, \qquad (3.3)$

where c.c. denotes the complex conjugate, and the neutralino mixing matrix is given by:

$M_{\tilde\chi^0} = \begin{pmatrix} M_1 & 0 & -c_\beta s_W m_Z & s_\beta s_W m_Z \\ 0 & M_2 & c_\beta c_W m_Z & -s_\beta c_W m_Z \\ -c_\beta s_W m_Z & c_\beta c_W m_Z & 0 & -\mu \\ s_\beta s_W m_Z & -s_\beta c_W m_Z & -\mu & 0 \end{pmatrix}.$

In this matrix $c_\beta = \cos\beta$, $s_\beta = \sin\beta$, $c_W = \cos\theta_W$ and $s_W = \sin\theta_W$, with $\theta_W$ the Weinberg mixing angle. The composition of the neutralinos is determined by the masses of the four constituents.
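The diagonalisation of this matrix is easy to check numerically. The sketch below builds the neutralino mixing matrix for illustrative parameter values (the inputs are hypothetical, not taken from any scan in this thesis); for real parameters the matrix is real and symmetric, so the physical masses are the absolute values of its eigenvalues.

```python
import numpy as np

def neutralino_masses(M1, M2, mu, tan_beta, mZ=91.19, sin2_thetaW=0.231):
    """Physical neutralino masses (GeV) from the tree-level mixing matrix.

    For real parameters the matrix is real and symmetric, so the masses
    are the absolute values of its eigenvalues.
    """
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    sw = np.sqrt(sin2_thetaW)
    cw = np.sqrt(1.0 - sin2_thetaW)
    M = np.array([
        [M1,            0.0,            -cb * sw * mZ,  sb * sw * mZ],
        [0.0,           M2,              cb * cw * mZ, -sb * cw * mZ],
        [-cb * sw * mZ,  cb * cw * mZ,   0.0,          -mu],
        [ sb * sw * mZ, -sb * cw * mZ,  -mu,            0.0],
    ])
    return np.sort(np.abs(np.linalg.eigvalsh(M)))

# Illustrative inputs (GeV): M1 = 200, M2 = 400, mu = 300, tan(beta) = 10
print(neutralino_masses(200.0, 400.0, 300.0, 10.0))
```

In the limit $m_Z \to 0$ the mixing terms vanish and the masses reduce to $M_1$, $M_2$ and twice $|\mu|$, which provides a simple sanity check of the implementation.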

The lightest neutralino is a popular dark matter candidate, since it has all the properties of a WIMP: it does not have electromagnetic and strong interactions and it is often predicted to be the LSP. Its mass and composition are not constrained by the theory, so by choosing the mass of its constituents, the composition of the LSP can be tuned to explain the dark matter observations.

In the charged sector, the two remaining winos mix to form two particles with charge $\pm 1$. These particles mix with the two charged higgsinos to form four charginos $\tilde\chi^\pm_{1,2}$, again labelled in order of increasing mass. In the basis $\psi^\pm = (\tilde W^+, \tilde H_u^+, \tilde W^-, \tilde H_d^-)$, the mass term is given by:

$\mathcal{L}_{\tilde\chi^\pm} = -\tfrac{1}{2}\,(\psi^\pm)^T M_{\tilde\chi^\pm}\, \psi^\pm + \mathrm{c.c.}, \qquad (3.4)$

with the chargino mixing matrix:

$M_{\tilde\chi^\pm} = \begin{pmatrix} 0 & 0 & M_2 & \sqrt{2}\, c_\beta m_W \\ 0 & 0 & \sqrt{2}\, s_\beta m_W & \mu \\ M_2 & \sqrt{2}\, s_\beta m_W & 0 & 0 \\ \sqrt{2}\, c_\beta m_W & \mu & 0 & 0 \end{pmatrix}.$
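The chargino masses can be extracted the same way. The chargino mixing matrix has the block form $\begin{pmatrix} 0 & X^T \\ X & 0 \end{pmatrix}$, so the two physical masses are the singular values of the $2\times 2$ block $X$. A minimal sketch, again with hypothetical parameter values:

```python
import numpy as np

def chargino_masses(M2, mu, tan_beta, mW=80.38):
    """Physical chargino masses (GeV): singular values of the 2x2 block X."""
    beta = np.arctan(tan_beta)
    X = np.array([
        [M2,                               np.sqrt(2.0) * np.sin(beta) * mW],
        [np.sqrt(2.0) * np.cos(beta) * mW, mu],
    ])
    return np.sort(np.linalg.svd(X, compute_uv=False))

# Illustrative inputs (GeV): M2 = 400, mu = 300, tan(beta) = 10
print(chargino_masses(400.0, 300.0, 10.0))
```

As a check, for $m_W \to 0$ the singular values reduce to $|M_2|$ and $|\mu|$.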

Also in the sfermion sector the gauge eigenstates mix to form mass eigenstates. Due to the presence of supersymmetry breaking terms, there are off-diagonal elements in the mass matrices, resulting in mixing. Among the supersymmetry breaking terms are the trilinear couplings, the couplings associated with interactions between three scalar fields. In the sfermion sector these terms are of the form $\mathcal{L} = y_u A_u\, \tilde u\, \tilde Q\, H_u$. Because of the large number of new scalars in the MSSM, there are many such couplings. They are combined in three complex $3\times 3$ matrices $A_u$, $A_d$ and $A_e$, for the up-type squarks, down-type squarks and charged sleptons respectively.

The total amount of mixing in the sfermion sector is governed by three $6\times 6$ matrices: one each for the up-type squarks, the down-type squarks and the charged sleptons. These matrices have to be diagonalised to find the mass eigenstates. The mass matrices contain a large number of parameters: a careful analysis shows that there are 105 new parameters in the MSSM [17].

3.4 The pMSSM

It is difficult to work with such a large parameter space, so often only a part of the parameter space is used, instead of the full MSSM. One of the submodels of the MSSM is the phenomenological MSSM (pMSSM), where several experimental results are used to reduce the number of parameters, while still capturing the phenomenologically important part of the MSSM. The constraints used in the pMSSM are [18]:

• No new CP-violating terms. The MSSM introduces many new CP-violating terms. These are removed in the pMSSM, because they are phenomenologically not relevant and there are strong experimental constraints on them [19, 20].

• No tree-level flavour-changing neutral currents. In such a current a quark would change flavour (from u to c, for example) when emitting a photon or a Z boson. There are very strong limits on these processes, and all experimental evidence suggests that they do not occur at tree level [21].

• Experimental constraints on mixing. Limits from e.g. K0-K¯ 0 mixing put severe constraints on the mass gap between first and second generation sfermions [22].

The impact of these constraints is that (i) the SUSY breaking matrices are diagonal and real and (ii) first and second generation sfermions are mass degenerate. This reduces the number of parameters to 19. These parameters are:

• $M_1$, $M_2$, $M_3$; bino, wino and gluino masses

• $\mu$; higgsino mass

• $m_A$; CP-odd Higgs mass

• $m_{\tilde Q_1}$; first/second generation left-handed squark mass

• $m_{\tilde Q_3}$; third generation left-handed squark mass

• $m_{\tilde u_R}$, $m_{\tilde d_R}$; first/second generation right-handed squark masses

• $m_{\tilde t_R}$, $m_{\tilde b_R}$; third generation right-handed squark masses

• $m_{\tilde L_1}$; first/second generation left-handed slepton mass

• $m_{\tilde L_3}$; third generation left-handed slepton mass

• $m_{\tilde e_R}$; first/second generation right-handed slepton mass

• $m_{\tilde\tau_R}$; third generation right-handed slepton mass

• $A_{t,b,\tau}$; trilinear couplings of the third generation sfermions

• $\tan\beta$; ratio of the vacuum expectation values of the two neutral Higgs fields: $v_u/v_d$

Mixing in the pMSSM These constraints greatly reduce the amount of mixing in the sfermion sector of the pMSSM, while the mixing in the electroweakino and Higgs sectors is the same as in the MSSM.

The absence of trilinear couplings for the first and second generation sfermions ensures that there is no more mixing for these particles: these sfermions are all mass eigenstates in the pMSSM.

The only mixing now occurs in the third generation. In the stop, sbottom and stau sector, the left- and right-handed particles mix to form mass eigenstates. The mass eigenstates in the stop sector are denoted as $\tilde t_{1,2}$ in order of increasing mass. The same naming convention is used in the sbottom and stau sector.

3.5 Conclusion

The pMSSM is an interesting theory that solves the hierarchy problem and provides a viable dark matter candidate. Although supersymmetric particles have never been observed, it remains a promising candidate for beyond the Standard Model physics. In the remaining part of this thesis, two separate studies of the pMSSM are carried out. First, an experimental anomaly is studied that might be explainable by a certain set of pMSSM models. Afterwards, the fine-tuning of pMSSM models is studied, with a focus on finding the most natural models that satisfy all experimental and theoretical constraints.

Chapter 4

The diboson excess

A large part of the search for supersymmetric particles has taken place at collider experiments. Whereas astronomical observations are plagued by large uncertainties, collider experiments have much more control over the processes they observe. In addition, collider experiments are built to detect (almost) all particles produced in an interaction, providing much more useful information. Collider experiments can therefore set strong limits on the masses of supersymmetric particles. It is interesting to find out if the GCE models can also produce a signal in current collider experiments.

Currently, the world's most powerful particle collider is the LHC at CERN [23]. Four main experiments are built around the LHC, two of which are ATLAS and CMS. In June 2015, the ATLAS experiment reported that it measured more events than expected in one of its search channels. This excess, in a channel that looks for highly energetic vector bosons, has a local significance of 3.4 σ [24] in the WZ channel, and the ZZ and WW channels also show significant signals. Because there are so many search channels in the ATLAS and CMS experiments, it is expected that random fluctuations will sometimes produce an excess with such a high significance. But this excess was more special, because the CMS collaboration also found an excess in a similar search channel [25], albeit with a lower significance. This chapter will focus on the ATLAS result, because it has the higher significance.

In this chapter, this excess is investigated in light of the pMSSM solutions that fit the GCE. First the LHC and the ATLAS experiment are discussed, along with some collider physics. The excess is then examined in more detail, with a focus on the event selection. This is followed by a discussion of how the pMSSM models can possibly explain this observation and another look at the event selection, but now with a focus on supersymmetric processes. The simulations that were used are then explained, along with the analysis software. Finally, the results are presented and discussed.

4.1 Collider experiments

The LHC is a circular collider, in which two beams of particles are accelerated through a beam line in opposite directions. The two beams are brought to collision when they have reached the desired momentum. These collisions produce many particles that might carry information on new physics. The results of these collisions are registered in detectors and the data is analysed. In the LHC, the two beams cross at four interaction points along the beam line. A detector is built around each point to register the products of the collisions. Two of these detectors are general-purpose detectors (CMS and ATLAS); the other two investigate specific processes (LHCb and ALICE). The LHC is used for proton-proton, proton-ion and ion-ion collisions, but since the diboson excess was found in the proton-proton run,

this chapter will focus on proton collisions.

4.1.1 Variables used in collider physics It is useful to define a coordinate system to talk about certain points and regions in the detector. The x-axis is defined from the interaction point towards the centre of the collider ring, the y-axis is defined as upwards from the interaction point, and the z-axis runs parallel to the beam line. Since ATLAS has cylindrical symmetry, it is useful to define cylindrical coordinates as well. The r-coordinate of a point is defined as its distance from the z-axis in the (x, y)-plane, and the φ-coordinate as the angle between the point and the positive x-axis (Figure 4.1).

Most particles can be detected in a collider experiment, but some interact so rarely that they generally escape detection. In the Standard Model, only neutrinos are undetectable, but beyond the Standard Model theories often predict additional particles that do not show up in a detector. It is hard to get information on these particles, but some of it can be reconstructed. Because the protons in the LHC collide with exactly opposite momentum in the z-direction, the total momentum in the (x, y)-plane should add up to zero, because of momentum conservation. When this is not the case, at least one particle was produced that escaped detection. The production rate and characteristics of neutrinos are well known, so the missing energy can be used to obtain information on new physics. The amount of missing transverse energy is denoted by $\slashed{E}_T$.

The location of a particle track in a detector is often given in terms of the rapidity $y$, which is defined as $y = \tfrac{1}{2} \ln\!\left(\tfrac{E + p_z}{E - p_z}\right)$. A difference in rapidity is invariant under boosts along the z-axis, which makes it useful in collider experiments, where particles are boosted with respect to the rest frame of the detector. Rapidity can be hard to measure, however, as both the energy and the momentum in the z-direction have to be known. That is why pseudorapidity has been introduced. It is defined as $\eta = -\ln\!\left(\tan\tfrac{\theta}{2}\right)$, where $\theta$ is the angle from the positive z-axis. Just like rapidity, a difference in pseudorapidity is invariant under boosts along the z-axis. In the limit of highly relativistic particles ($E \simeq |\vec p|$), pseudorapidity equals rapidity.
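The agreement between the two quantities in the relativistic limit is easy to verify numerically; the particle below (a 500 GeV charged pion at an arbitrary angle) is an illustrative choice, not taken from the analysis:

```python
import math

def rapidity(E, pz):
    """y = (1/2) ln((E + pz) / (E - pz))"""
    return 0.5 * math.log((E + pz) / (E - pz))

def pseudorapidity(theta):
    """eta = -ln(tan(theta / 2)), with theta measured from the positive z-axis."""
    return -math.log(math.tan(theta / 2.0))

# A highly relativistic particle: a 500 GeV pion (m = 0.140 GeV) at theta = 0.5 rad
m, p, theta = 0.140, 500.0, 0.5
E = math.hypot(p, m)        # E = sqrt(p^2 + m^2)
pz = p * math.cos(theta)
print(rapidity(E, pz), pseudorapidity(theta))  # nearly identical
```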

The newly introduced coordinates are often used to describe jets. It is useful to have a coordinate system in which distances are invariant under boosts in the z-direction; the (φ, η)-plane provides this. The (approximate) size of a jet is measured in this coordinate system and is denoted by $R = \sqrt{\Delta\phi^2 + \Delta\eta^2}$.
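A small sketch of this distance measure; one practical detail the formula leaves implicit is that the azimuthal difference must be wrapped into $(-\pi, \pi]$, since φ is periodic:

```python
import math

def delta_R(phi1, eta1, phi2, eta2):
    """Distance in the (phi, eta)-plane; the azimuthal difference is
    wrapped into (-pi, pi] so angles near 0 and 2*pi count as close."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(dphi, eta1 - eta2)

print(delta_R(0.1, 0.0, 2.0 * math.pi - 0.1, 0.0))  # 0.2, not ~6.08
```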

Figure 4.1: The coordinate system used in the ATLAS detector. Extracted from [26].

4.2 The diboson excess

The excess in the ATLAS detector was found in the diboson channel. This channel selects events with two hadronic jets originating from vector bosons with high momentum. The excess was found in the invariant mass distribution of these two jets, which is defined as $m_{\mathrm{inv}}^2 = (p_1^\mu + p_2^\mu)^2$.
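As a minimal sketch of this definition, with hypothetical four-momenta: two back-to-back massless jets of 1 TeV each combine to an invariant mass of 2 TeV, the location of the excess.

```python
import math

def invariant_mass(p1, p2):
    """m_inv of two four-momenta given as (E, px, py, pz), metric (+, -, -, -)."""
    E, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(E * E - px * px - py * py - pz * pz)

# Hypothetical event: two back-to-back massless 1 TeV jets
jet1 = (1000.0, 0.0, 1000.0, 0.0)
jet2 = (1000.0, 0.0, -1000.0, 0.0)
print(invariant_mass(jet1, jet2))  # 2000.0
```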

The most straightforward explanation of an excess in this channel is the existence of a new particle. This new particle would have to couple to quarks and/or gluons to be produced, and couple to vector bosons to form the observed decay products. The two most popular candidates are the $W'$ boson and a Kaluza-Klein mode of the graviton. The $W'$ boson is a new particle that appears in many grand unified theories [27]. The Kaluza-Klein mode appears in theories with more than 4 dimensions [28, 29]. The processes that can produce an excess in the diboson channel are shown in Figure 4.2.

(a) (b)

Figure 4.2: Beyond the Standard Model processes that can produce a diboson excess, with: (a) a W 0 boson and (b) a Kaluza-Klein mode of the graviton.

The background in this channel is mostly due to t-channel QCD processes, like in Figure 4.3.

Figure 4.3: A QCD background process in the diboson search channel.

4.2.1 Event selection Each search channel has to filter the relevant events out of the data generated by the collisions. This subsection discusses the criteria used for event selection in the diboson search channel [24].

The search algorithm looks for two massive jets with a radius R ≤ 1.2 within a pseudorapidity

region $|\eta(j)| < 2.0$ and with a momentum in the transverse plane of $p_T > 540$ GeV. The requirement on pseudorapidity ensures that all particles in the jet are registered by the inner detector, since it only covers a limited range.

To ensure that this search channel has no events in common with other diboson searches, there are criteria to reject high-energy leptons. Events are rejected if they contain an isolated electron with $E_T > 20$ GeV in the regions $|\eta(e^\pm)| < 1.37$ or $1.52 < |\eta(e^\pm)| < 2.47$. This restriction on the pseudorapidity is again made because the detector only covers a limited range. An event is also rejected if it contains a muon with $p_T > 20$ GeV within $|\eta(\mu^\pm)| < 2.5$. If an event has $\slashed{E}_T$ exceeding 340 GeV, it is also rejected.

Some other cuts are used to further differentiate between signal events and background. As already mentioned, the background mostly consists of t-channel QCD processes, so a cut is made to distinguish s-channel processes like $W'$ and Kaluza-Klein production from the background t-channel processes. This cut is made on the polar-angle ($\theta$) distribution of the jets, since s-channel and t-channel diagrams produce particles with different angular distributions. This angular dependence can be derived by analysing the matrix element of a general s-channel and t-channel diagram. An s-channel diagram (see Figure 4.4a) has a matrix element that always contains a factor $s^{-1}$, with $s = (p_1^\mu + p_2^\mu)^2$, where $p_1$ and $p_2$ are defined as in the diagram [30]. In the limit of massless particles this gives $s = 4|\vec p_1|^2$.

(a) (b)

Figure 4.4: Examples of a general (a) s-channel and (b) t-channel process.

The matrix element of a t-channel diagram, like in Figure 4.4b, contains a factor $t^{-1}$, with $t = (p_1^\mu - p_3^\mu)^2$, where $p_1$ and $p_3$ are defined as in the diagram. In the limit of massless particles this results in $t = -2|\vec p_1|\,|\vec p_3|\,(1 - \cos\theta)$, with $\theta$ the polar angle of $p_3$. Comparing the two channels, it can be seen that t-channel processes will produce more events where the outgoing particles are close to the z-axis, since the matrix element is proportional to

$\mathcal{M} \propto \dfrac{1}{t} \propto \dfrac{1}{1 - \cos\theta},$

which peaks at $\theta = 0$.

A cut on the rapidity of the two outgoing jets can be used to take advantage of this difference. The rapidity difference of the two jets can be written as:

$y(j_1) - y(j_2) = -\ln\!\left(\tan\dfrac{\theta_1}{2}\right) + \ln\!\left(\tan\dfrac{\theta_2}{2}\right) = -\ln\!\left(\dfrac{\tan(\theta_1/2)}{\tan(\theta_2/2)}\right).$

Using the fact that the two jets are produced back to back ($\theta \equiv \theta_1 = \pi - \theta_2$, so that $\tan(\theta_2/2) = \cot(\theta/2)$):

$y(j_1) - y(j_2) = -\ln\!\left(\tan^2\dfrac{\theta}{2}\right).$

This distribution is shown in Figure 4.5. By only selecting events where the rapidities of the two jets satisfy $|y(j_1) - y(j_2)| < 1.2$, s-channel diagrams are favoured, because these events are more likely to have a $\theta$-value in the allowed range.
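The cut can be translated into a polar-angle window with a short calculation: $|2\ln\tan(\theta/2)| < 1.2$ is equivalent to $e^{-0.6} < \tan(\theta/2) < e^{0.6}$, i.e. jets roughly perpendicular to the beam line pass the cut.

```python
import math

def abs_delta_y(theta):
    """|y(j1) - y(j2)| for back-to-back jets, one at polar angle theta."""
    return abs(-2.0 * math.log(math.tan(theta / 2.0)))

# Polar-angle window passing the |Delta y| < 1.2 cut
theta_lo = 2.0 * math.atan(math.exp(-0.6))
theta_hi = 2.0 * math.atan(math.exp(0.6))
print(math.degrees(theta_lo), math.degrees(theta_hi))  # roughly 58 and 122 degrees
```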

Figure 4.5: The effect of the rapidity cut. The blue line shows the rapidity difference as a function of the angle θ, the yellow line shows |∆y| = 1.2.

Another cut is made to ensure that both jets have similar momentum. Since the two jets originate from the same particle in the s-channel diagram, and since they have similar mass, they should carry a similar fraction of the momentum. The diboson search only accepts events with $(p_{T,1} - p_{T,2})/(p_{T,1} + p_{T,2}) < 0.15$.

Finally, only hadronic jets are registered. The branching ratio of vector bosons to hadronic final states is 0.70 for Z bosons and 0.68 for W bosons [17]. Since both vector bosons have to decay to hadrons to appear in this search channel, the branching ratios have to be multiplied. This cut roughly halves the total number of events.
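A quick check of the "roughly halves" statement, using the branching ratios quoted above:

```python
# Fraction of diboson events in which both bosons decay hadronically,
# using BR(W -> hadrons) = 0.68 and BR(Z -> hadrons) = 0.70
br_w, br_z = 0.68, 0.70
fractions = {"WW": br_w ** 2, "WZ": br_w * br_z, "ZZ": br_z ** 2}
print(fractions)  # each value is close to 0.5
```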

The invariant mass distribution in the ATLAS detector after applying the cuts is shown in Figure 4.6. The three main properties of the excess can be read off from this plot:

1. The peak is located at 2 TeV

2. The width of the peak is ∼100 GeV

3. The excess consists of ∼15 events

Sometimes it is easier to use the cross section ($\sigma$) instead of the number of events, since the cross section only depends on the process, while the number of events also depends on detector specifications. To obtain the cross section from the number of events, the integrated luminosity is used. The luminosity $L$ is defined as:

$L = \dfrac{1}{\sigma}\,\dfrac{dN}{dt}, \qquad (4.1)$

where $dN/dt$ is the interaction rate. Integrating the luminosity over time gives the integrated luminosity $L_{\mathrm{int}}$. The diboson excess was found with an integrated luminosity of 20.3 fb$^{-1}$.

Figure 4.6: Measured diboson invariant mass distribution in the ATLAS experiment. Extracted from [24].

Combining this with the 15 events that were measured, the process responsible for the excess should have a cross section of at least 0.74 fb. The actual cross section needs to be higher because of inefficiencies in the detector.
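The minimal cross section follows directly from Equation (4.1): integrating over time gives $N = \sigma L_{\mathrm{int}}$, so $\sigma = N / L_{\mathrm{int}}$.

```python
# Lower bound on the cross section of the process behind the excess:
# sigma = N / L_int, with N ~ 15 events and L_int = 20.3 fb^-1
n_events = 15
lumi_int = 20.3               # integrated luminosity in fb^-1
sigma = n_events / lumi_int   # cross section in fb
print(round(sigma, 2))  # 0.74
```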

4.3 A diboson excess with pMSSM models

The goal of this research is to find out whether the diboson excess can be explained using a specific set of pMSSM models. These models will be introduced first. Afterwards, the processes that might explain the diboson excess will be discussed in more detail.

4.3.1 The Galactic Centre excess models Observations of the centre of our Galaxy at gamma-ray wavelengths show that more photons are produced than expected in the 1 to 5 GeV range (see Figure 4.7).

Figure 4.7: The photon flux from the Galactic Centre with all backgrounds subtracted. Systematic and statistical uncertainties are shown. A few fits are shown as well. Extracted from [31].

There are multiple explanations for this excess, from both astrophysics and particle physics. Astrophysical explanations include millisecond pulsars [32] and cosmic-ray outbursts [33], while a particle physics explanation is that annihilating dark matter is the source of the discrepancy, producing photons via the decay of the particles created in the annihilation. It is also possible that several of these processes combine to create the excess. The annihilating dark matter explanation is in any case an interesting scenario to analyse in more detail.

The pMSSM contains a viable dark matter candidate, so it is interesting to find out if there exist pMSSM models that predict the Galactic Centre excess (GCE). A scan was done to find such regions in the pMSSM parameter space [34]. This resulted in three distinct types of models, each with a good fit to the GCE and not yet excluded by other experiments. These regions are named after the main annihilation products produced in dark matter collisions: WW1, WW2 and tt. The properties of these models will be discussed below.

WW1 The first class of models mainly produces pairs of W bosons in annihilations. The lightest neutralino is ∼50% higgsino and ∼50% bino. This model provides the best fit to the GCE, with a maximal p-value of 0.45.† An interesting feature of these models is that the value of the dark matter relic density Ωh2 (the amount of dark matter present in the current universe) is very close to the value measured by experiments. The experimental value is Ωh2 = 0.1198 ± 0.0015 [35] and these models have Ωh2 ≈ 0.07 − 0.125. The relic density was not used as a constraint in the scan, so it is interesting that this model has the right Ωh2 value.

WW2 The second class of models has the same annihilation products: pairs of W bosons. The composition of the dark matter candidate is different: roughly 90% bino, 6% wino and 4% higgsino. The p-value of these models varies between 0.02 and 0.15. The relic density of these points is again in reasonable agreement with the experimental data ($\Omega h^2 \approx 0.05 - 0.15$).

tt The main annihilation products of the third class of models are top anti-top pairs. Its lightest neutralino is almost exclusively bino. A remarkable feature of these models is that the lightest stop squark is extremely light. This particle is often assumed to have a mass of at least 600 GeV, but these models have a stop squark with a mass of 200-250 GeV. The p-value of these models does not exceed 0.1, and the relic density is more spread out than for the other classes, but still in the right ballpark ($\Omega h^2 \approx 0.066 - 0.22$).

4.3.2 pMSSM processes with diboson creation The goal is now to see if one of these pMSSM models can be used to explain the diboson excess. Examples of pMSSM processes with diboson production are shown in Figure 4.8.

The diboson search channel in the ATLAS experiment is set up to discover particles like the $W'$ and the Kaluza-Klein mode, which appear in s-channel diagrams, so it will be hard to produce this excess with a pMSSM model. The parameters need to be optimal to get enough events, so they will have to be tweaked in order to meet all criteria. Not all parameters were fixed by the GCE scan, so there is some freedom in some of them. Only 8 of the 19 pMSSM

†The p-value gives the probability that the annihilation spectrum of a pMSSM model produces the observed photon spectrum, taking into account the error margins of the observation.

(a)

(b)

Figure 4.8: Examples of supersymmetric processes that can produce a diboson excess.

parameters had an effect on the GCE fit, so the other 11 were omitted from the scan. Some of these 11 parameters can be used to fit the diboson creation.

As can be seen from the Feynman diagrams in Figure 4.8, the relevant particles are the squarks, gluinos, charginos and neutralinos. The relevant parameters can be split into two groups: parameters that are fixed by the GCE scan, and free parameters that were not relevant for the scan. Note that the restricted parameters are not restricted to a single value, but to a certain range, so there is still some freedom in choosing their values. The parameters relevant for the diboson creation are shown in Table 4.1.

Table 4.1: Parameters relevant for creating a diboson excess

Restricted parameters: $M_1$, $M_2$, $\mu$; $m_{\tilde Q_3}$, $m_{\tilde t_R}$
Free parameters: $M_3$; $m_{\tilde Q_1}$, $m_{\tilde u_R}$, $m_{\tilde d_R}$, $m_{\tilde b_R}$

The masses of the squarks will be very important for the diboson production. They need to be heavy in order to produce events with an invariant mass at 2 TeV. Because the masses of the third generation squarks are restricted and do not have the right value for the production of useful events, only first and second generation squarks are used for the diboson creation. This significantly reduces the computation time.

The pMSSM processes that produce a diboson excess can be split into two distinct parts. The first part is the production of squarks/gluinos. The second part is the decay of the squarks/gluinos into vector bosons and neutralinos. The number of events is governed by the production cross section of the squarks/gluinos, the decay fraction to vector bosons and the efficiency of the cuts. Together, these three steps should yield at least $\sigma = 0.74$ fb. The relevant parameters (Table 4.1) all have a certain impact on the production cross section and the location of the peak. For example, in the s-channel diagram, the gluino should have a mass of at least 2 TeV, since the mass of the s-channel particle sets the maximum invariant mass of the vector bosons in the final state. But the higher the gluino mass, the lower the cross section, so all other masses have to be tuned such that all squarks ultimately decay into vector bosons; otherwise there will not be enough events. The mass dependence of the cross section can be seen in Figure 4.9. In the t-channel diagram the same trade-off has to be made: high squark and gluino masses result in higher invariant masses, but also decrease the number of events.

Figure 4.9: The cross section for sparticle production in the LHC. Each line shows the production cross section of two supersymmetric particles as a function of the average mass of the particles. Extracted from [36].

For both scenarios the masses of the neutralinos and charginos (together called electroweakinos, or EWinos) are very important as well. The squarks should decay into a quark and an EWino. The final vector boson should have as much energy as possible, so it is desirable to have the heavy EWino masses as close to the squark masses as possible; this way, a minimal amount of energy is deposited in the quark. In the next step, the heavy EWino should decay to the lightest neutralino. To give the vector boson resulting from this decay the maximum amount of energy, there should be a large mass gap between the heavy and the light EWino.

In addition to these constraints, there are also constraints from the decay chain of squarks to the lightest neutralino. Only certain decay channels can produce a diboson excess, so the compositions and masses of the relevant particles should be adjusted such that the right number of particles decays through the right channel. It will be hard to produce enough events, so as many squarks as possible should have the right decay channel. A squark can either decay to a lighter squark plus a vector boson, or to a quark plus an electroweakino. These are the only options, because there has to be a coloured particle among the decay products and there has to be one supersymmetric particle due to R-parity conservation.

The first of these two options looks like the best fit for creating a diboson excess, since it only produces a vector boson and an (undetectable) supersymmetric particle, but it will actually not create many interesting events. Only events with high-energy vector bosons are

investigated by ATLAS, so the vector boson should have a large momentum. But because the squarks are so heavy, they cannot have a lot of momentum. When such a squark radiates a vector boson, it can only transfer the little excess momentum it has. The squark will therefore not create a high-energy vector boson, so this process will not result in events relevant for the diboson channel.

The other decay mode does work, since there can be a large mass difference between squark and EWino, so the vector boson can have a large momentum. However, there are some constraints. Only the decays where a heavy EWino is produced are desirable:

$\tilde q \to q\, \tilde\chi^\pm_{1,2}, \qquad \tilde q \to q\, \tilde\chi^0_{2,3,4}.$

The heavy EWino has to emit a high-energy vector boson when decaying to the lightest neutralino; this is not possible if the squark decays directly to the lightest neutralino. As many squarks as possible should create a high-energy vector boson, otherwise there will not be enough events. Therefore, the coupling of the lightest neutralino to the squark should be as small as possible. This can be achieved by making the lightest neutralino mostly higgsino: since the higgsino coupling is proportional to the mass of the corresponding Standard Model quark [15], this coupling is very small for the squarks of the first and second generation.

The next step is the decay of heavy EWinos. This decay has to produce the vector boson that is measured in the diboson search channel. The EWinos have multiple decay channels, and not all of them produce a vector boson. The decays that do produce a vector boson are:

$\tilde\chi^\pm_{1,2} \to W^\pm\, \tilde\chi^0_1, \qquad \tilde\chi^0_{2,3,4} \to Z\, \tilde\chi^0_1.$

There are also decay channels in which the EWino decays in several steps to the lightest neutralino, but these are not desirable, since they produce multiple low-energy vector bosons instead of one high-energy vector boson.

There are some other decay channels possible for an EWino:

$\tilde\chi^\pm_{1,2} \to \tilde t\, b \,/\, \tilde b\, t, \qquad \tilde\chi^0_{2,3,4} \to \tilde t\, t \,/\, \tilde b\, b.$

These decay modes do not produce a vector boson, so they do not contribute to the diboson excess. To make sure that these decays do not occur, the stop and sbottom squarks should be heavier than the relevant EWinos.

4.4 The optimal GCE model

The three classes of models that fit the Galactic Centre excess have different properties, so some will be better suited to create a diboson excess than others. The tt models can be ruled out immediately: in these models the stops are very light, so a heavy EWino will always decay to a stop squark and a bottom quark. There will be no production of high-energy vector bosons and thus no possibility for a diboson excess.

For the WW1 and WW2 models it is harder to draw conclusions. The decay channels of both models were analysed with a low squark mass (∼1 TeV) to see which model has the best chance of creating a diboson excess. For both models the two heaviest neutralinos and the heaviest chargino are heavier than the squarks. Therefore, the only possible decay channels for squarks are to the two lightest neutralinos and the lightest chargino. For both models the branching ratios were calculated using SUSY-HIT [37].

To find out what the optimal model is, the composition of the different EWinos has to be compared. A neutral EWino has a bino, wino and higgsino component. The branching ratio of an EWino is determined by its composition and the coupling strengths of the different constituents. As mentioned before, the higgsino coupling of a squark is proportional to the mass of the associated quark. Since only the first- and second-generation squarks will be used, the higgsino coupling will always be small. The wino and bino coupling strengths are determined by the coupling constants g and g′ respectively. The wino coupling is almost twice as strong as the bino coupling, with g ≈ 0.65 and g′ ≈ 0.35 [17]. A squark will therefore have the highest branching ratio when decaying to an EWino that is mostly wino.

There is an important distinction in the decay of left-handed and right-handed squarks. The wino interacts through the SU(2)L coupling, so it does not couple to right-handed particles. A chargino only has a wino and a higgsino component. A right-handed squark has a small higgsino coupling and does not couple to the wino, so right-handed squarks will almost never decay to charginos. Right-handed squarks will therefore mainly decay to neutralinos.

The composition of the relevant EWinos will be discussed separately for these two sets of models.

WW1

The composition of the light EWinos in the WW1 models is approximately as follows [38]:

        bino   wino   higgsino
χ̃₁⁰    0.5    0      0.5
χ̃₂⁰    0.02   0.02   0.96
χ̃₁±    0      0.05   0.95

Because none of these particles has a large wino component, the decay is not dominated by the wino component, but by the bino and higgsino components. A left-handed squark in a WW1 model will decay to all three particles, with a slight preference for χ̃₁⁰ decay. This is not ideal, since the decay should produce as many heavy EWinos as possible. For the right-handed squark the situation is even worse, since this particle will almost exclusively decay to a χ̃₁⁰. Therefore, the WW1 models are not very well suited for creating a diboson excess.

WW2

The composition of the light EWinos in the WW2 models is roughly [38]:

        bino   wino   higgsino
χ̃₁⁰    0.88   0.07   0.05
χ̃₂⁰    0.1    0.85   0.05
χ̃₁±    0      0.86   0.14

These compositions are more promising for creating a diboson excess. The two particles that should give the highest branching ratio are mostly wino, so these are indeed the preferred decay channels. For the right-handed squarks the results are less promising. The only decay mode is q̃_R → χ̃₁⁰ q, since the wino part of the χ̃₂⁰ will not interact with a right-handed particle. The right-handed squarks will therefore not contribute much to the diboson production.

When all three models are considered, the WW2 models will produce the most diboson events. Therefore, the decision is made to continue with this set of models to find out if a diboson excess can be reproduced.

4.5 Simulation

To investigate whether it is possible to explain the diboson excess with pMSSM models, simulations were done with MadGraph [39]. MadGraph is a Monte Carlo event generator that first generates the Feynman diagrams for a given process and then simulates events. The event generation depends on both the input process and the properties of the supersymmetric particles. The output of MadGraph consists of two parts: the cross section of each diagram and the kinematic properties of all particles. The output can be analysed using various analysis software packages, but it can also be read in a human-readable format.
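MadGraph writes its events in the Les Houches Event (LHE) format, an XML-like text file in which each <event> block contains one header line followed by one line per particle. A minimal reader might look as follows; the embedded sample event is synthetic and only illustrates the column layout:

```python
import xml.etree.ElementTree as ET

SAMPLE_LHE = """<LesHouchesEvents version="1.0">
<event>
 4  1 +1.0e+00 1.0e+03 7.8e-03 1.2e-01
  21 -1 0 0 501 502 0.0 0.0  500.0 500.0  0.0 0. 0.
  21 -1 0 0 502 501 0.0 0.0 -500.0 500.0  0.0 0. 0.
  23  1 1 2   0   0 0.0 0.0   10.0 500.0 91.2 0. 0.
  23  1 1 2   0   0 0.0 0.0  -10.0 500.0 91.2 0. 0.
</event>
</LesHouchesEvents>"""

def parse_lhe(text):
    """Yield one list of final-state particles (pdg_id, px, py, pz, E) per event."""
    root = ET.fromstring(text)
    for event in root.iter('event'):
        lines = event.text.strip().splitlines()
        particles = []
        for line in lines[1:]:  # lines[0] is the event header (NUP, IDPRUP, ...)
            fields = line.split()
            pdg, status = int(fields[0]), int(fields[1])
            if status == 1:  # keep outgoing (final-state) particles only
                px, py, pz, energy = map(float, fields[6:10])
                particles.append((pdg, px, py, pz, energy))
        yield particles
```

Real LHE files also carry an init block and per-event weights; a full analysis would read those as well.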

4.6 Analysis

There are two steps in the analysis of the results. First of all, the events generated by MadGraph have to be analysed to see if a diboson excess emerges from the data. Then, a detector simulation is done to find out what the processes will look like in a detector. Both steps are discussed below in more detail.

Generated Events

The analysis of the events generated by MadGraph is done with ROOT. ROOT is a C++ framework developed at CERN, designed for data analysis in particle physics [40]. The MadGraph output can easily be converted to a ROOT-readable format. The kinematics of each event can be analysed with ROOT, so all the cuts that have to be applied to the data can be implemented in the analysis.

To simplify this part of the analysis, we chose not to include showering and hadronization in the initial event generation. Showering is the radiation of low-energetic particles from the initial- and final-state particles. Hadronization is the process whereby quarks and gluons produce colour-neutral particles, combined into jets, as discussed in Section 2.4. The focus of this research is to find out if a diboson excess can be created with pMSSM models. These effects complicate the analysis, so they only need to be included once the results are promising enough for showering and hadronization to become relevant.

This choice does have a drawback: the measured transverse momentum of a vector boson jet will always be smaller than the transverse momentum of the original particle, because some energy will be radiated away. To account for this effect, the cut on the transverse momentum is set to a higher value. It is not clear how large this effect is, but in this analysis an increase of ∼10% was used. The pT cut was set to 600 GeV instead of the 540 GeV cut that is used in the ATLAS analysis.

Detector Simulation

In the last part of the analysis, a detector simulation was performed to mimic detector effects. In the analysis by ROOT, it is assumed that all measurements are perfect, but this is not the case. A detector has a certain resolution, so the position and energy of a particle is only known to a certain precision. In addition, the inefficiencies in measuring or tagging certain particles are included. A detector simulation also accounts for pile-up: the effect that multiple interactions can not be differentiated. This can happen when there are multiple interactions in the same bunch crossing or when bunch crossings follow each other so rapidly that they can not be differentiated between. This part of the analysis does include showering and hadronization.

The detector simulation in this research is done using CheckMATE [41–46], a program that is able to combine a detector simulation (DELPHES3) with the results from an LHC analysis. The selection criteria and cuts used in the ATLAS diboson analysis are implemented in CheckMATE for this research.

4.7 pMSSM event selection

It is useful to take another look at the event selection discussed in Section 4.2.1, but now with a focus on the supersymmetric processes that are of interest. An event is only accepted if it meets all the criteria listed in Table 4.2. The ATLAS analysis uses some cuts that are only relevant when hadronization is included. These cuts are not listed in Table 4.2, since hadronization is not included in this analysis. The cuts that are most important for the pMSSM processes are discussed in more detail below.

Table 4.2: Cuts used in the ATLAS analysis of the diboson excess.

jets: two hadronic jets with R ≤ 1.2 and pT > 540 GeV, within |η| < 2.0;
      |y(j₁) − y(j₂)| < 1.2;
      (pT(j₁) − pT(j₂)) / (pT(j₁) + pT(j₂)) < 0.15;
      m_inv(j₁, j₂) > 1.05 TeV
leptons: no isolated e± with ET > 20 GeV in the region |η(e±)| < 1.37 or 1.52 < |η(e±)| < 2.47;
      no isolated µ± with pT > 20 GeV in the region |η(µ±)| < 2.5
missing energy: E_T^miss < 350 GeV
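To make the selection explicit, the cuts in Table 4.2 can be written as a single function. The event representation below (dictionaries with pt, eta, y, phi, m keys; energies in GeV) is a hypothetical sketch, not the thesis code:

```python
import math

def _four_vector(j):
    """(E, px, py, pz) from pt, eta, phi and mass."""
    px = j['pt'] * math.cos(j['phi'])
    py = j['pt'] * math.sin(j['phi'])
    pz = j['pt'] * math.sinh(j['eta'])
    energy = math.sqrt(px**2 + py**2 + pz**2 + j['m']**2)
    return energy, px, py, pz

def passes_diboson_cuts(jets, electrons, muons, met, pt_cut=540.0):
    """Apply the Table 4.2 selection to one event; return True if it survives."""
    # Two large-radius jets above the pT threshold, central in eta
    hard = [j for j in jets if j['pt'] > pt_cut and abs(j['eta']) < 2.0]
    if len(hard) < 2:
        return False
    j1, j2 = sorted(hard, key=lambda j: j['pt'], reverse=True)[:2]

    # Rapidity-difference and pT-balance cuts on the leading jet pair
    if abs(j1['y'] - j2['y']) >= 1.2:
        return False
    if (j1['pt'] - j2['pt']) / (j1['pt'] + j2['pt']) >= 0.15:
        return False

    # Dijet invariant mass must exceed 1.05 TeV
    E1, px1, py1, pz1 = _four_vector(j1)
    E2, px2, py2, pz2 = _four_vector(j2)
    m_jj = math.sqrt(max((E1 + E2)**2 - (px1 + px2)**2
                         - (py1 + py2)**2 - (pz1 + pz2)**2, 0.0))
    if m_jj <= 1050.0:
        return False

    # Lepton vetoes
    for e in electrons:
        in_region = abs(e['eta']) < 1.37 or 1.52 < abs(e['eta']) < 2.47
        if e['et'] > 20.0 and in_region:
            return False
    for mu in muons:
        if mu['pt'] > 20.0 and abs(mu['eta']) < 2.5:
            return False

    # Missing-energy cut
    return met < 350.0
```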

The pT requirement in the first cut is very important for our analysis. Both particles that originate from the decay of the EWino need to have a mass much smaller than the energy of the heavy EWino. Otherwise, there would not be enough energy left over to be transferred to the vector boson and the lightest neutralino. Because of this large mass difference, the vector boson will have a very wide energy distribution. There will thus be a lot of events where the vector boson has a small pT . Cutting away these events affects both the invariant mass of the jets and the number of events that is selected.

The missing energy cut is mainly used to remove Z boson decays to neutrinos, but in the supersymmetric analysis it is a significant cut due to the production of two neutralinos. These are invisible to the detector, so they contribute to the missing energy. As mentioned before, the neutralino can carry a large fraction of the momentum of the heavy EWino, so this cut can be very restrictive. One should keep in mind that parts of the missing energy can cancel each other: when the two neutralinos are produced back to back in the transverse plane, their contributions to the missing energy cancel. This reduces the effect of this cut significantly.
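The cancellation argument can be made concrete: the missing transverse energy is the magnitude of the vector sum of the invisible transverse momenta, so back-to-back neutralinos contribute nothing. A minimal sketch:

```python
import math

def missing_et(invisible):
    """Missing transverse energy: magnitude of the vector sum of the
    transverse momenta (px, py in GeV) of the invisible particles."""
    px = sum(p[0] for p in invisible)
    py = sum(p[1] for p in invisible)
    return math.hypot(px, py)

# Two neutralinos back to back in the transverse plane: the MET cancels
print(missing_et([(400.0, 0.0), (-400.0, 0.0)]))  # -> 0.0
# Same-side neutralinos: large MET, failing the 350 GeV cut
print(missing_et([(400.0, 0.0), (300.0, 100.0)]))
```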

The rapidity cut differentiates between s-channel and t-channel diagrams (Figure 4.5). The t-channel diagrams (Figure 4.8a) are very important for the pMSSM models, because the s-channel diagrams (Figure 4.8b) require the exchange of a heavy particle and therefore have a low cross section. This cut will decrease the contribution of those t-channel diagrams.

When all cuts are combined, roughly 2% of the events are kept. It is therefore necessary to tune the relevant masses such that they are optimal for the cuts. This is highly non-trivial however, since it is hard to predict the exact influence of the parameters on the effect of the cuts. The parameters already need to be tuned to obtain the highest possible branching ratio and production cross section. Adding the tuning to meet the cut requirements, it is clear that it will be hard to find the optimal set of parameters.

4.8 Results

Keeping all of this in mind, the event generation can be started. To get a feeling for the effect of some of the relevant parameters, a model point is selected and the masses of the first- and second-generation squarks are set to 1200 GeV, while all other parameters are left unchanged. In this model, the two heaviest neutralinos and the heaviest chargino are heavier than the squarks, so they will not be produced. In addition, the gluino mass is not changed, so it stays at 4 TeV. This has the effect that the t-channel diagrams will not contribute. This is not optimal for the diboson production, but it greatly reduces the computation time. The s-channel diagrams might already produce enough events for a diboson excess. If this is not the case, the gluino mass will be lowered to include t-channel diagrams as well.

This process has a total cross section of 0.25 fb. Using the integrated luminosity of the LHC in the 8 TeV run of 20.3 fb−1, this results in only 5 events. Since the cuts will generally have an efficiency of a few percent, there will not be enough events left to have a measurable excess.
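The estimate is simply the cross section times the integrated luminosity; as a sketch:

```python
def expected_events(cross_section_fb, lumi_fb_inv, efficiency=1.0):
    """Expected event count: N = sigma * L * efficiency."""
    return cross_section_fb * lumi_fb_inv * efficiency

# Numbers quoted in the text: sigma = 0.25 fb at sqrt(s) = 8 TeV, L = 20.3 fb^-1
n_before_cuts = expected_events(0.25, 20.3)        # about 5 events
n_after_cuts = expected_events(0.25, 20.3, 0.02)   # ~0.1 events at a ~2% cut efficiency
```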

It is interesting however to see the shape of the distribution and the effect of the cuts. The cuts are certainly necessary to get a peak around 2 TeV. This can be seen by looking at the shape of the invariant mass distribution of the two vector bosons. The distribution before applying cuts can be seen in Figure 4.10.

Figure 4.10: Diboson invariant mass distribution before applying cuts.

There is already a peak in this distribution, but its location is very different from that of the diboson excess. Applying the cuts shifts the peak towards higher masses. This is mainly due to the pT cut. Events where the jets have a low transverse momentum will also have a small invariant mass. By only selecting jets with a pT higher than 600 GeV, only the events with a high invariant mass are kept. The drawback of this has already been mentioned: only a very small fraction of the events remains. The invariant mass distribution after the cuts is shown in Figure 4.11.

Figure 4.11: Diboson invariant mass distribution after applying cuts, demanding pT > 600 GeV.

The effect of the cuts is clearly visible. The entire low energy part of the spectrum is gone and only a small fraction of events, located at much higher invariant masses, remains.

4.8.1 The best parameter values

In order to increase the number of events, some changes are made to this model point. First of all, the gluino mass is lowered. This has the effect of allowing t-channel diagrams with a gluino exchange and s-channel diagrams where gluinos are produced that decay to squarks. This has a large effect on the cross section, as can be seen in Figure 4.9 on page 29. The cross section roughly doubles, and since gluinos can only decay to squarks, all of these events can contribute to the diboson signal.

The next step is to change the mass of the EWinos. By doing this, the model points can no longer be classified as WW2 models. There was some room in these models to change the EWino masses, but not enough to have the effect needed for the diboson excess. As mentioned before, the relevant EWino masses should be set just below the squark mass, to give as much energy as possible to the EWino. In this case that means that the masses of χ̃₂⁰ and χ̃₁±, both of which are dominated by the wino mass, should be adjusted. So, the wino mass should be set just below the squark mass. However, the squark can also decay through other channels. If the mass gap between the squark and the EWino is too small, the EWino decay is disfavoured and other decay channels will dominate.

The higgsino mass parameter µ is set to high values to ensure that the two heavy neutralinos and the heavy chargino are decoupled from the spectrum. This will not have a large influence on the invariant mass distribution, but it greatly reduces the computation time, since only three EWinos will contribute to the process, instead of all six. The bino mass is not changed, since it should be low to ensure that the lightest neutralino keeps its bino composition, which is needed to keep the optimal decay channel intact.

The squark masses, the gluino mass and the wino mass are the main variables left. The next step is to find the optimal set of parameter values such that the peak is in the right place and there are enough events after the cuts. This is time-consuming, since generating events for a single model takes about a day and it is not obvious how exactly each parameter influences the signal. To this end, a more systematic search is done. The most important parameters (the gluino mass, the squark mass and the wino mass) are varied systematically to see the effect of each variable.

The squark mass is varied between 1000 GeV and 1400 GeV, with 1000 GeV being the first/second-generation squark mass limit after the first run of the LHC [47]. The gluino mass is set just above the squark mass, except for squark masses of 1000 GeV, since the first run of the LHC put a limit on the gluino mass of m(g̃) > 1200 GeV‡. The heavy EWino mass is varied between 800 GeV and 1200 GeV. For the scenario with a 1000 GeV squark and a 1200 GeV neutralino no events were generated, since the squark can not decay to the neutralino, so this point would not yield any events.
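The grid described here can be sketched as follows; the skip rule and the gluino-mass choice paraphrase the text, and the exact thesis settings may differ:

```python
from itertools import product

# Squark and heavy-EWino masses scanned in the text (GeV)
squark_masses = [1000, 1200, 1400]
ewino_masses = [800, 1000, 1200]
GLUINO_LIMIT = 1200  # run-1 lower limit on the gluino mass quoted in the text

grid = []
for m_sq, m_ew in product(squark_masses, ewino_masses):
    if m_ew > m_sq:  # the squark cannot decay to a heavier EWino: skip this point
        continue
    m_gl = max(m_sq, GLUINO_LIMIT)  # gluino just above the squark mass, respecting the limit
    grid.append((m_sq, m_gl, m_ew))
```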

The results of these runs can be seen in Figure 4.12.

‡This is actually a very optimistic interpretation of the LHC results.

Figure 4.12: Diboson invariant mass distribution for different squark and neutralino masses. The columns correspond to m(χ̃₂⁰) = 800, 1000 and 1200 GeV; the rows to (m(q̃), m(g̃)) = (1000, 1200), (1200, 1200) and (1400, 1400) GeV.

A number of conclusions can be drawn from these plots. First of all, there should be a clear mass gap between the heavy EWino and the squarks. This can be seen in the two plots with the neutralino mass just below the squark mass. For these models, the decay to the lightest neutralino is favoured. This effect is visible for mass gaps up to 150 GeV, so the difference between the squark mass and the heavy EWino mass needs to be more than 150 GeV.

In the rightmost column, the effect of different squark and gluino masses is clearly visible. Each step down in mass from 1400 GeV increases the number of events drastically. For a squark mass of 1 TeV and a gluino mass of 1.2 TeV this results in 7 events. Although this is not yet enough to explain the 15 events in the ATLAS analysis, it is an indication that it might be possible to explain all events with some further tuning of the parameters.

In Figure 4.12, it can also be seen that the location of the peak does not change much despite all the variations, and the peak is not located at 2 TeV, but around 1500 GeV. The width of the peak seems to be good, although it should be noted that this analysis does not yet include a detector simulation, which generally increases the width.

The peak location is affected by the pT cut: when the cut is placed higher, more low-energy events are cut away and the peak shifts towards higher values. However, the pT cut is not a free parameter; it is only uncertain because of the shift caused by the absence of showering, and it is unlikely that this shift is larger than the ∼10% used here. A peak location of 1500 GeV therefore seems to be the highest value that can be achieved.

4.8.2 Detector simulation

The final stage of the research is to do a detector simulation with the points from the grid search. One of the model points has a number of events similar to the ATLAS signal, so it is interesting to see how detector effects influence the signal.

The detector simulation was done with the same events as the grid search, but one point was left out because it gave too few events (m(q̃) = 1200 GeV, m(χ̃₂⁰) = 1200 GeV). The results are shown in Figure 4.13.

Figure 4.13: Diboson invariant mass distribution for different squark and neutralino masses after a detector simulation. The panel layout is the same as in Figure 4.12.

These results show that a diboson excess can not be created with these pMSSM models. Some of the models with a low squark mass produce a reasonable number of events, but the distribution is much broader than the ATLAS excess and the peak is not located at 2 TeV. It is interesting to note that after a full detector simulation there is a much clearer relation between the location of the peak and the masses of the particles. In the row with a squark and gluino mass of 1400 GeV, the peak shifts towards lower values when the neutralino mass is decreased. To see if this effect can produce a peak around 2 TeV, two more simulations were done, both with a squark and gluino mass of 1800 GeV. One simulation used a neutralino mass of 1600 GeV, the other a mass of 1700 GeV. The results are shown in Figure 4.14.


Figure 4.14: Diboson invariant mass distribution of models with a squark and gluino mass of 1800 GeV after detector simulation. (a) has a neutralino mass of 1600 GeV, (b) of 1700 GeV.

From these plots it is clear that it is possible to get a peak at higher invariant masses as well, but the width is still too large. In addition, the number of events decreases drastically for higher squark and gluino masses, so unless there is an unknown mechanism that strongly enhances squark and gluino production, these pMSSM models can not be used to explain the diboson signal.

When comparing Figure 4.12 with Figure 4.13, the number of events is much higher after a detector simulation. This is an effect of showering that was underestimated, but it does not influence the conclusions, since the width and location of the peak are so different from the signal in ATLAS.

4.9 Conclusion

The conclusion of this research is clear: it is not possible to use any of the models that fit the Galactic Centre excess to explain the excess in the diboson channel of the ATLAS experiment, even allowing for some deviations from these models. No combination of parameters was found that gives both the right number of events and a peak at 2 TeV. In addition, a detector simulation showed that the width of the invariant mass distribution was too large compared to the signal in ATLAS.

Just when this research was finished, the End of Year event was held at CERN. At this meeting, the latest results from ATLAS and CMS were presented, including the results from the diboson channel. The new data (the first data with a centre-of-mass energy of 13 TeV) showed that there was no longer a sign of an excess in the diboson channel [48]. Because of this, it is no longer interesting to study this excess in the search for signals of supersymmetric processes.

Chapter 5

Fine-tuning in pMSSM models

Searches for supersymmetry have been going on for decades, but still no sign of it has been found. A problem with finding supersymmetry is that it can never be ruled out completely. There is no upper bound on the supersymmetric masses, so the supersymmetric particles can always stay out of the reach of experiments. It is therefore interesting to find out if there are criteria that can be used to determine when supersymmetry is no longer a viable beyond the Standard Model theory.

One of these criteria is the naturalness of a theory. A theory is unnatural if its parameters are very restricted, without any underlying explanation. An example of this is the hierarchy problem in the Standard Model (Section 2.6.2): the Higgs boson can only be light when the counterterms are tuned to many orders of magnitude. The naturalness of a theory is related to its fine-tuning: the lower the fine-tuning of a theory, the more natural the theory is.

A clear example of fine-tuning in physics is the cosmological flatness problem [49]. Depending on the energy content of our Universe, it is either flat or curved. If the energy density ρ is higher than the critical density ρ_c, the Universe has a positive curvature; if the density is lower than ρ_c, the Universe has a negative curvature. Our Universe is flat only if the density is exactly equal to the critical density. The ratio of the energy density and the critical density is denoted by Ω = ρ/ρ_c, and the deviation from a flat Universe is then defined as Ω_k = 1 − Ω. The data from the Planck satellite, combined with other experiments, shows that Ω_k = 0.0008 +0.0040/−0.0039 [35]. Our Universe is therefore flat to a high degree, without any clear reason. The problem is even worse when looking at the early Universe. The gradual expansion of the Universe increases the curvature, so the curvature must have been even smaller in the early Universe to result in the value that is measured today: the early Universe should have had a curvature of Ω_k ≈ 10⁻⁶⁰. If there is no mechanism to explain this flatness, the theory is very unnatural, since the chance that a value so close to zero is chosen randomly is extremely small. This fine-tuning problem is solved by introducing inflation [50]. If the Universe went through a period of rapid expansion in its early stages, the curvature would decrease automatically. This mechanism explains why the Universe is flat and allows the early Universe to have had a natural density, solving the fine-tuning problem.

There is no law in physics that states that a valid theory should have a low fine-tuning, but there are good reasons to pursue supersymmetric theories with a low amount of fine-tuning. Supersymmetry solves the hierarchy problem in the Standard Model, but it creates its own loop corrections that can give sizeable contributions to bosonic masses. If these contributions introduce an unavoidable fine-tuning problem, the theory loses much of its appeal. The goal of our research is to show that natural supersymmetric theories are still viable.

Because searches for supersymmetry have been going on for a long time, one would expect that natural theories have already been ruled out. The goal of this research is to show that there are pMSSM models with a low amount of fine-tuning that satisfy all theoretical and experimental constraints. The main focus is to predict the composition of the LSP in the models with minimal fine-tuning.

The structure of this chapter is as follows. First of all, the theoretical background needed for the fine-tuning analysis is explained. This is followed by a discussion of the calculation of fine-tuning in supersymmetric models, along with the advantages and drawbacks of a number of fine-tuning measures. An overview of the fine-tuning literature is given next, where the goal is to find out what conclusions are drawn from naturalness arguments and what the source of these conclusions is. Then follows a discussion of our own framework to calculate the fine-tuning of supersymmetric models. This framework is then used to do a scan of the pMSSM parameter space to find regions of minimal fine-tuning. The results of this scan are analysed and some comments are made on the composition of models with minimal fine-tuning. Finally some issues are discussed and the chapter ends with a conclusion and an outlook.

5.1 Theoretical background

5.1.1 Renormalisation

The concept of renormalisation was already discussed in Section 2.6.2. It will also be important in the fine-tuning research, but in a somewhat different way. It was mentioned that the parameters of the theory are energy dependent after applying the renormalisation procedure. This energy dependence is captured in the renormalisation group equations. These equations are of the form:

∂g_i / ∂log(µ) = β_i(g),   (5.1)

where g_i is a parameter in the theory, µ is an energy scale and β_i(g) is a function that can depend on all the parameters in the theory. These functions are known as beta functions. Each parameter in the theory has its own beta function.

The beta functions can be used to evolve the parameters from one energy scale to another. This can be useful, since some equations have to be evaluated at a certain energy scale, while the parameters are defined at another scale. In the pMSSM there are 19 independent parameters, but the dependent parameters also have their own beta functions. Because some of the parameters have to be determined through an iterative procedure, it is easier to determine all parameters (dependent and independent) at the starting scale and run all of them to the other scale using only renormalisation group evolution (RGE). This gives 26 beta functions in the pMSSM that have to be solved simultaneously. These differential equations are coupled, so the system can not be solved analytically; the renormalisation group equations have to be solved numerically.
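As an illustration of solving coupled renormalisation group equations numerically, the toy system below evolves the three gauge couplings with their well-known one-loop MSSM coefficients (b₁, b₂, b₃) = (33/5, 1, −3). This is a 3-equation sketch, not the full 26-equation pMSSM system, and the weak-scale input values are approximate:

```python
import math

# One-loop MSSM beta-function coefficients for (g1, g2, g3) in GUT normalisation
B = (33.0 / 5.0, 1.0, -3.0)

def beta(g):
    """dg_i/dt with t = log(mu); one loop: b_i g_i^3 / (16 pi^2)."""
    return [b * gi**3 / (16.0 * math.pi**2) for b, gi in zip(B, g)]

def run_couplings(g0, mu_start, mu_end, steps=2000):
    """RK4 evolution of the couplings from mu_start to mu_end (GeV)."""
    t, t_end = math.log(mu_start), math.log(mu_end)
    h = (t_end - t) / steps
    g = list(g0)
    for _ in range(steps):
        k1 = beta(g)
        k2 = beta([gi + 0.5 * h * ki for gi, ki in zip(g, k1)])
        k3 = beta([gi + 0.5 * h * ki for gi, ki in zip(g, k2)])
        k4 = beta([gi + h * ki for gi, ki in zip(g, k3)])
        g = [gi + h * (a + 2 * b_ + 2 * c + d) / 6.0
             for gi, a, b_, c, d in zip(g, k1, k2, k3, k4)]
    return g

# Approximate weak-scale inputs; running up to ~2e16 GeV the three nearly unify
g_gut = run_couplings((0.46, 0.65, 1.2), 91.0, 2.0e16)
```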

5.1.2 SUSY Higgs mechanism

Before discussing the various measures of fine-tuning that are used in the literature, it is useful to look at the Higgs mechanism in SUSY, since this will give a formula that is used in all fine-tuning measures.

43 The supersymmetric Higgs potential is more complicated than the Standard Model version, mainly because of the presence of two Higgs doublets instead of one. The full Higgs potential is [15]:

V_H = (µ² + m²_Hu)|H_u|² + (µ² + m²_Hd)|H_d|² + [b (ε_ab H_u^a H_d^b) + c.c.]
      + (1/8)(g² + g′²)(|H_u|² − |H_d|²)² + (1/2) g² |H_u · H_d*|²,   (5.2)

where ε_ab is the completely anti-symmetric tensor of rank 2, with ε₁₂ = 1. The abbreviation ‘c.c.’ means complex conjugate. The two fields H_u and H_d are doublets:

H_u = (H_u⁺, H_u⁰)ᵀ,   H_d = (H_d⁰, H_d⁻)ᵀ.

The Higgs potential can be simplified by applying some constraints on the charged scalars. The minimum of the potential should be neutral, since otherwise electromagnetism would be broken. One of the fields can be rotated to zero because of the SU(2) symmetry, so we choose H_u⁺ = 0. H_d⁻ is then automatically zero by demanding ∂V_H/∂H_u⁺ = 0. The resulting potential is then:

V_H = (µ² + m²_Hu)|H_u⁰|² + (µ² + m²_Hd)|H_d⁰|² − (b H_u⁰ H_d⁰ + c.c.) + (1/8)(g² + g′²)(|H_u⁰|² − |H_d⁰|²)².   (5.3)

The goal is now to find the minimum of the potential. Because there are two fields, there are two minimisation conditions:

∂V_H/∂H_u⁰ |_min = 0,   ∂V_H/∂H_d⁰ |_min = 0.   (5.4)

The full derivation of this minimisation procedure can be found in Appendix A. The two conditions that arise from this procedure are:

b = (1/2) sin(2β) (m²_Hu + m²_Hd + 2µ²),
m_Z²/2 = (m²_Hd − m²_Hu tan²β) / (tan²β − 1) − µ².   (5.5)

The second equation is very powerful. It gives a relation between several supersymmetric parameters and an electroweak-scale observable (the Z boson mass).
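The second condition in (5.5) is precisely what spectrum generators invert to fix µ² from the measured Z mass. A small numerical sketch, with assumed soft Higgs masses chosen purely for illustration:

```python
MZ = 91.1876  # Z boson mass in GeV

def mz2_tree(m2_Hu, m2_Hd, mu2, tan_beta):
    """Tree-level relation (5.5): m_Z^2 = 2 [(m_Hd^2 - m_Hu^2 tan^2(b)) / (tan^2(b) - 1) - mu^2]."""
    t2 = tan_beta ** 2
    return 2.0 * ((m2_Hd - m2_Hu * t2) / (t2 - 1.0) - mu2)

def mu2_from_mz(m2_Hu, m2_Hd, tan_beta, mz=MZ):
    """Invert (5.5) for mu^2, as spectrum generators effectively do."""
    t2 = tan_beta ** 2
    return (m2_Hd - m2_Hu * t2) / (t2 - 1.0) - 0.5 * mz ** 2

# Assumed weak-scale soft masses (GeV^2): m_Hu^2 driven negative by RGE running
m2_Hu, m2_Hd, tb = -500.0 ** 2, 800.0 ** 2, 10.0
mu2 = mu2_from_mz(m2_Hu, m2_Hd, tb)  # mu comes out around 500 GeV here
```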

Coleman-Weinberg Potential

This formula for m_Z is a tree-level result, but in the case of the MSSM Higgs potential the higher-order corrections are also relevant. These corrections can be calculated using the Coleman-Weinberg effective potential [51]. The Coleman-Weinberg potential is the effective one-loop correction that should be added to the tree-level Higgs potential. It is given by:

V_CW = (1/64π²) Σ_i (−1)^{2s_i} (2s_i + 1) c_i m_i⁴ [ log(m_i²/Q²) − 3/2 ].   (5.6)

The sum is over all the particles in the theory, s_i denotes the spin of a particle, m_i is the mass of a particle and c_i = c_col · c_char, with c_col the number of colour degrees of freedom and c_char the charge degrees of freedom. So, c_col = 3 for coloured particles and 1 for uncoloured particles, while c_char = 2 for charged particles and 1 for neutral particles. These factors, combined with the factor (2s_i + 1), count the degrees of freedom of each particle. The energy scale Q is the scale at which the effective potential is evaluated.

This potential can be derived by computing all one-loop diagrams without external lines. The full derivation of this effective potential will not be discussed here, since it is quite technical, while its added value is low. The original derivation can be found in the paper by Sidney Coleman and Erick Weinberg [51]. An overview with three different derivations can be found in appendix B of [52].

Corrections to m_Z

Using the formula for the effective potential, the corrections to the mass of the Z boson can be calculated [53]. The minimisation of the Higgs potential has to be repeated, but with V = V_H + V_CW as the potential. The minimisation conditions are now:

m²_Hu + µ² − b cot β − (m_Z²/2) cos(2β) + Σ_u/v_u = 0,
m²_Hd + µ² − b tan β + (m_Z²/2) cos(2β) + Σ_d/v_d = 0,   (5.7)

where the new terms are defined as:

Σ_{u,d} = ∂V_CW / ∂v_{u,d}.   (5.8)

These equations can again be solved to get an expression for the mass of the Z boson, but a simplification can be made by using the SU(2) invariance of the potential. Because of this invariance, the potential can only depend on terms of the form H_u*H_u, H_d*H_d, or (H_u H_d + H_u*H_d*). The derivatives can therefore be written as:

Σ_u = ∂V_CW/∂v_u = Σ_u^u v_u + Σ_u^d v_d,
Σ_d = ∂V_CW/∂v_d = Σ_d^u v_u + Σ_d^d v_d,   (5.9)

where

Σ_u^u = 2 ∂V_CW/∂v_u²,
Σ_d^d = 2 ∂V_CW/∂v_d²,
Σ_u^d = Σ_d^u = ∂V_CW/∂(v_u v_d).   (5.10)

Using these relations, a new formula for m_Z can be derived:

m_Z²/2 = [m²_Hd + Σ_d^d − (m²_Hu + Σ_u^u) tan²β] / (tan²β − 1) − µ².   (5.11)

The formulas for the terms originating from the effective potential can be worked out, using the knowledge that only the masses depend on the vacuum expectation values:

Σ_d^d = (1/32π²) Σ_i (−1)^{2s_i} (2s_i + 1) c_i (∂m_i²/∂v_d²) F(m_i²),
Σ_u^u = (1/32π²) Σ_i (−1)^{2s_i} (2s_i + 1) c_i (∂m_i²/∂v_u²) F(m_i²),   (5.12)

with

F(m²) = m² [ log(m²/Q²) − 1 ].   (5.13)

These contributions have to be worked out for all particles separately. This has to be done for the mass eigenstates, at tree level. This means that the mass matrices of all particles have to be diagonalised and then the derivatives with respect to v_u² and v_d² have to be computed. All formulas for Σ_d^d and Σ_u^u are listed in the appendix of [53]. These were all checked, and one inconsistency was found and reported to the authors. This inconsistency regards the Σ_u^u(b̃₁,₂) term. It should be:

Σ_u^u(b̃₁,₂) = (3/16π²) F(m²_b̃₁,₂) [ g_Z² ∓ (f_b² µ² + 8 g_Z² (1/4 − (1/3) x_W) ∆_b) / (m²_b̃₂ − m²_b̃₁) ],

with all terms defined as in [53].
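As a numerical illustration of equations (5.12) and (5.13), consider the top-quark term. With (−1)^{2s} = −1, 2s + 1 = 2, c = 3 · 2 = 6 and m_t² = y_t² v_u²/2, the contribution reduces to −3 y_t² F(m_t²)/(16π²); the Yukawa coupling used below is an assumed illustrative input:

```python
import math

def F(m2, Q2):
    """F(m^2) = m^2 (log(m^2/Q^2) - 1), eq. (5.13)."""
    return m2 * (math.log(m2 / Q2) - 1.0)

def sigma_uu_top(y_t, m_t, Q):
    """Top-quark contribution to Sigma_u^u from eq. (5.12):
    (1/32pi^2) * (-1) * 2 * 6 * (y_t^2 / 2) * F = -3 y_t^2 F(m_t^2) / (16 pi^2)."""
    return -3.0 * y_t ** 2 * F(m_t ** 2, Q ** 2) / (16.0 * math.pi ** 2)

# Illustrative numbers: y_t ~ 0.95, m_t = 173 GeV, renormalisation scale Q = 1 TeV
contribution = sigma_uu_top(0.95, 173.0, 1000.0)  # in GeV^2
```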

There is another way to derive the formula for the Z boson mass. This method uses tadpole diagrams. Tadpole diagrams are Feynman diagrams like in Figure 5.1.

Figure 5.1: A tadpole diagram.

The propagator in a tadpole diagram carries no momentum, since there is no momentum flowing out of the tadpole. In a quantum field theory, the tadpoles should vanish. This corresponds to minimising the effective potential, as can be seen from the effective action. The effective action Γ consists of kinetic terms and the (tree-level plus one-loop) effective potential. The amplitude of a diagram with n external lines§ is equal to the n-th derivative of the effective action, multiplied by i [30]:

    \Gamma_n = i \frac{\delta^n \Gamma}{\delta\phi_1 \cdots \delta\phi_n},        (5.14)

where each φ_i is one of the fields in the theory. In the case of one external line (a tadpole diagram) this corresponds exactly to a first derivative of the effective action, with respect to H_u^0 and H_d^0. This is just the first derivative of the effective potential, since the derivatives of the kinetic terms vanish. Therefore, computing the tadpole diagrams can be used as an alternative way to derive equation 5.11.

5.2 Quantifying fine-tuning

We can now return to the original subject of this research: fine-tuning. When discussing fine-tuning, there are two questions that arise: how do you quantify the fine-tuning of a model? And if you have a measure for fine-tuning, what value should it have to call a theory fine-tuned?

§ The 1-particle-irreducible diagram, to be precise.

It turns out that neither of these questions has a straightforward answer. There are multiple ways to quantify fine-tuning, each one defined by a measure. Each measure produces a number (∆) that can be compared between models; the precision to which the most sensitive parameter must be tuned is given by the inverse of ∆. A fine-tuning of ∆ = 1000 indicates that at least one parameter in the theory has to be constrained to the permille level, so 3 significant figures are fixed. A large amount of fine-tuning therefore corresponds to a high value of ∆.

There are two ways to look at which values of ∆ are acceptable to call a theory natural. One option is to specify a certain threshold and all theories below that threshold are considered natural. There is no consensus on the value of this threshold, but in general theories with ∆ ≤ 10 are considered natural, although limits as high as ∆ ≤ 30 or even higher are used as well. The other option is to compare different models, and see which model has the lower fine-tuning. This approach is used in this thesis, since we want to find the models with minimal fine-tuning.

5.2.1 Measures of fine-tuning

The different fine-tuning measures in supersymmetry all use the mass of the Z boson as their starting point, because supersymmetry provides a formula for this mass that incorporates contributions from (almost) all particles in the theory. It is therefore a good criterion for naturalness.

The Higgs mass could also be used as the benchmark against which to compare all supersymmetric contributions, but there are two reasons why the Z boson mass is used instead. The first reason is that the value of the Higgs mass was unknown for a long time. Since fine-tuning arguments were made long before the discovery of the Higgs boson, there is a vast literature concerning fine-tuning with respect to the Z boson mass, and it is easier to compare results when the same benchmark is used. The second reason is that the Higgs mass is closely related to the Z boson mass in supersymmetry. The tree-level result for the Higgs mass is m_h^2 = m_Z^2 \cos^2(2\beta). The results obtained by using the Higgs mass will therefore only be slightly different from the results obtained with the Z boson mass.

In general there are two effects that can be used in a fine-tuning measure. The first effect is that a parameter can have a contribution to the Z boson mass that is much larger than the Z boson mass itself. Then there has to be at least one other independent parameter that cancels this large contribution to obtain the correct value for mZ . This parameter has to be fixed to multiple digits to produce the correct observable. This indicates a large fine-tuning.

The other effect used in a fine-tuning measure is how the change of a parameter influences the Z boson mass. If a small change has a large effect, the value of the parameter is very restricted, since a small deviation would result in a wrong observable.

The most important condition for a fine-tuning measure is that it has to compare independent contributions. There can be different terms in the calculation of the Z boson mass that all have a large contribution, but if there is some internal mechanism that explains the cancellation between these terms, this does not indicate a high amount of fine-tuning. This is the reason why the simplification in the formula for the Z boson mass is used (formula 5.11). By splitting the Σ_u and Σ_d terms, it becomes clear that the Σ_u^d (= Σ_d^u) terms cancel in the m_Z formula. If this simplification is not used, there is an internal mechanism that changes the fine-tuning. Whether this overestimates or underestimates the fine-tuning depends on the relative sign of the Σ_u^u and Σ_d^d terms on the one hand, and the Σ_u^d term on the other hand. Some other examples of such an internal mechanism will appear later on.

There are three main measures of fine-tuning that are used in the literature. They will be discussed separately below. This discussion is mainly based on [54]. Sources for each individual measure are cited separately.

High-scale measure [55]

The high-scale measure ∆_HS uses the SUSY parameters defined at the unification scale Λ (∼10^16 GeV). It takes into account renormalisation group evolution and compares both the high-scale masses and the radiative corrections with the mass of the Z boson.

This measure is essentially a generalisation of the fine-tuning measure in the Standard Model. As mentioned in Section 2.6.2, the hierarchy problem is characterised by a large difference between the physical Higgs mass and the radiative corrections to the Higgs mass. The fine-tuning measure in the Standard Model can thus be defined as:

    \Delta_{SM} = \frac{\delta m_h^2}{m_h^2 / 2}.        (5.15)

A similar procedure is used to obtain the high-scale measure, with the differences that the formula for the Z boson mass is used instead of the Higgs mass and that there are multiple parameters, each with their own radiative corrections.

Using the high scale formula for the Z boson mass, we obtain:

    \frac{m_Z^2}{2} = \frac{m_{H_d}^2(\Lambda) + \delta m_{H_d}^2 + \Sigma_d^d - (m_{H_u}^2(\Lambda) + \delta m_{H_u}^2 + \Sigma_u^u) \tan^2\beta}{\tan^2\beta - 1} - (\mu^2(\Lambda) + \delta\mu^2),        (5.16)

where each δm term denotes the radiative correction when evolving the mass parameter from the scale Λ to the low-energy scale.

The fine-tuning measure ∆_HS is defined as the maximum of the absolute values of all the separate contributions in this formula:

    \Delta_{HS} = \max \left\{ \frac{\left| m_{H_d}^2(\Lambda) / (\tan^2\beta - 1) \right|}{m_Z^2 / 2}, \; \frac{\left| \delta m_{H_d}^2 / (\tan^2\beta - 1) \right|}{m_Z^2 / 2}, \; \ldots, \; \frac{\left| \delta\mu^2 \right|}{m_Z^2 / 2} \right\}.        (5.17)

This measure has two major drawbacks. First of all, it is not model independent. This measure compares high-scale parameters and corrections due to renormalisation. Two different high-scale theories can produce the same low-energy spectrum, but from different high-scale parameters, leading to a different fine-tuning for the same observable spectrum.

The second drawback is a much larger problem: dependent contributions are compared against each other in this measure. The RGE of some of the parameters (m_{H_u} for example) contains the parameter itself. In the case of m_{H_u}, this effect is large and negative, in the sense that a large positive value of m_{H_u} will result in a large negative value for the radiative correction δm_{H_u}. This is an example of an internal mechanism that would result in a high fine-tuning if the individual contributions are considered separately. The terms should be combined before calculating the contribution to the fine-tuning. Because of this problem, this measure is no longer used.

Barbieri-Giudice measure [56]

This problem is not present in the Barbieri-Giudice measure ∆_BG, which takes into account both effects that determine the fine-tuning of a model: relative size and the effect of small variations. This is done with a logarithmic derivative:

    \Delta_{BG} = \max_i \left| \frac{\partial \ln m_Z^2}{\partial \ln a_i} \right| = \max_i \left| \frac{a_i}{m_Z^2} \frac{\partial m_Z^2}{\partial a_i} \right|,        (5.18)

where a_i are the different parameters of the theory. For each parameter, the relative size with respect to m_Z is calculated and multiplied by the derivative of the Z boson mass with respect to the parameter. In this way, a parameter with a small contribution and a large derivative can still have a significant fine-tuning contribution, just like a parameter with a large contribution and a small derivative.
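Since ∆_BG is just a maximised logarithmic derivative, it can be sketched with numerical central differences. This is a toy illustration: the function `mZ2_of` stands in for a full spectrum calculation, and the two-parameter model at the bottom is an invented example of a large cancellation.

```python
def delta_BG(mZ2_of, params, eps=1e-4):
    """Barbieri-Giudice measure, Eq. (5.18):
    max_i | (a_i / m_Z^2) * d m_Z^2 / d a_i |,
    with the derivative taken numerically (central differences).
    `mZ2_of` maps a parameter dict to m_Z^2; `params` is that dict."""
    mZ2 = mZ2_of(params)
    worst = 0.0
    for name, a in params.items():
        h = eps * abs(a) if a != 0 else eps
        up, down = dict(params), dict(params)
        up[name], down[name] = a + h, a - h
        dmZ2_da = (mZ2_of(up) - mZ2_of(down)) / (2.0 * h)
        worst = max(worst, abs(a / mZ2 * dmZ2_da))
    return worst

# Toy model: m_Z^2 = a - b is fine-tuned when a and b nearly cancel
print(delta_BG(lambda p: p["a"] - p["b"], {"a": 10000.0, "b": 9900.0}))  # → 100.0
```

The toy example makes the logic of the measure explicit: each term is a hundred times larger than their difference, so both parameters must be tuned at the percent level, and ∆_BG ≈ 100.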

This measure has an advantage over ∆HS since it combines the high scale value of a parameter with the radiative corrections. This makes sure that only independent terms are compared.

There are some other drawbacks to this method. First of all, the measure is still model dependent: different theories that produce the same low-scale spectrum can have a different fine-tuning, due to the difference in the derivative. Furthermore, it is not clear which parameters should be used as a_i. Are the Yukawa couplings free parameters? Should one use the Lagrangian parameter a_t, or the more commonly used A_t = a_t / y_t? Different answers to these questions give different fine-tuning values, and there is no clear answer.

Electroweak measure [53]

The electroweak measure (∆_EW) is constructed as a model-independent minimal measure of fine-tuning. It is model independent since it only uses parameters at the electroweak scale. This makes it stand out from the other two measures. Current experiments will not be able to make conclusive statements about the exact high-scale origin of beyond the Standard Model physics. This measure gives the minimum amount of fine-tuning, without any knowledge of the high-scale behaviour of the theory.

A drawback of this model independence is that it makes this fine-tuning measure only a minimal measure: a high value of ∆EW means that the model is highly fine-tuned, but a low value does not automatically mean that there is little tuning. There can be high-scale effects that introduce fine-tuning that do not present themselves at the electroweak scale.

As a starting point, this measure uses the formula for the Z boson mass including the one-loop potential terms (equation 5.11). The fine-tuning measure is then defined as how much each term on the right hand side contributes to the mass of the Z boson. The definition is:

    \Delta_{EW} = \max_i \frac{|C_i|}{m_Z^2 / 2},        (5.19)

where the different C_i are defined as:

    C_{m_{H_d}} = \frac{m_{H_d}^2}{\tan^2\beta - 1},
    C_{m_{H_u}} = \frac{-m_{H_u}^2 \tan^2\beta}{\tan^2\beta - 1},
    C_\mu = -\mu^2,        (5.20)
    C_{\Sigma_d^d} = \frac{\max(\Sigma_d^d)}{\tan^2\beta - 1},
    C_{\Sigma_u^u} = \frac{-\max(\Sigma_u^u) \tan^2\beta}{\tan^2\beta - 1}.

The two effective potential terms Σ_u^u and Σ_d^d consist of many different contributions. Each particle that couples to the Higgs field has its own contribution, so all supersymmetric particles except the gluino have both a Σ_u^u and a Σ_d^d term. All these contributions should be considered separately, so the maximum over all these terms is used in the C_{Σ_d^d} and C_{Σ_u^u} terms.

However, when there is an internal mechanism that automatically decreases the fine-tuning of different terms, the terms should be combined before taking the maximum. Such an internal mechanism is present in the sfermion sector. Working out the Σ_u^u contribution for the sfermions, one obtains:

    \Sigma_u^u(\tilde{f}) = \frac{c_{col}}{16\pi^2} F(m_{\tilde{f}}^2) \left[ \frac{1}{2} (g^2 + g'^2) \left( T_{3\tilde{f}} - Q_{\tilde{f}} \sin^2\theta_W \right) + \delta_{\tilde{f},u} \, y_f^2 \right],        (5.21)

where c_col is the number of colour degrees of freedom and δ_{f̃,u} equals 1 when the sfermion is up-like and 0 otherwise. The isospin and charge of the sfermion are denoted by T_{3f̃} and Q_{f̃} respectively. In each generation, the isospin and electric charge contributions add up to zero. These terms are therefore not independent, so there is a mechanism that ensures a low total contribution to the Z boson mass, even when some of the individual terms are large. The total cancellation only holds when all sfermions in a generation have the same mass, because of the mass-dependent F(m_{f̃}^2) term, but even when the sfermions are not mass degenerate, the terms should be summed within each generation before determining the fine-tuning. This minimises the effect of the isospin and charge adding up to zero.
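The definition in Eqs. (5.19)–(5.20) translates almost directly into code. A minimal sketch, assuming the Σ terms are already supplied as lists of per-particle contributions (with the sfermion terms summed per generation, as argued above); the numbers in the example are illustrative only.

```python
def delta_EW(mHu2, mHd2, mu, tanb, sigma_uu, sigma_dd, mZ=91.1876):
    """Electroweak fine-tuning measure, Eqs. (5.19)-(5.20):
    Delta_EW = max_i |C_i| / (m_Z^2 / 2).
    sigma_uu / sigma_dd: lists of Sigma_u^u / Sigma_d^d contributions."""
    t2 = tanb ** 2
    C = [
        mHd2 / (t2 - 1.0),                 # C_{m_Hd}
        -mHu2 * t2 / (t2 - 1.0),           # C_{m_Hu}
        -mu ** 2,                          # C_mu
        max(sigma_dd) / (t2 - 1.0),        # C_{Sigma_d^d}
        -max(sigma_uu) * t2 / (t2 - 1.0),  # C_{Sigma_u^u}
    ]
    return max(abs(c) for c in C) / (mZ ** 2 / 2.0)

# Illustrative point: the mu term dominates for mu = 200 GeV
print(delta_EW(mHu2=-150.0 ** 2, mHd2=300.0 ** 2, mu=200.0, tanb=10.0,
               sigma_uu=[1000.0], sigma_dd=[500.0]))  # close to 9.6
```

This also makes the structure of the measure transparent: for moderate tan β the m_{H_d}² and Σ_d^d contributions are suppressed by 1/(tan²β − 1), so µ, m_{H_u}² and Σ_u^u usually set the fine-tuning.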

5.3 Fine-tuning in the literature

The Barbieri-Giudice measure was already proposed in 1987, so naturalness arguments have been around for a long time. The research in the area of naturalness has led to some general conclusions that will be discussed in this section. Our own analysis uses the electroweak measure, so this section will focus on results obtained with this measure, although most conclusions can also be deduced from the other measures.

An argument that is often heard in the physics community is that supersymmetric theories with a fine-tuning below 100 are ruled out [57–60]. One of the simplest and most widely used simplified theories (mSUGRA) does have this problem. mSUGRA is a theory where all sfermion masses are unified at the grand unification theory (GUT) scale, as are the gaugino masses and the trilinear couplings. As a result, this model has only 5 parameters and is, because of its simplicity, a popular model for publishing e.g. exclusion limits. However, the fine-tuning in mSUGRA is always large, with a minimum of 100 [61].

Results with other theories, with more free parameters, show that a fine-tuning below 100 can be achieved [61]. These results indicate that it is probably also possible to have pMSSM models with fine-tuning of order 10.

5.3.1 Requirements for minimal fine-tuning

There are some requirements that need to be met to have low fine-tuning. The three most accepted requirements are listed below.

Light higgsinos

The most obvious requirement for low fine-tuning is a low value of µ. A fine-tuning of 10 can only be achieved if the higgsino mass is smaller than ∼200 GeV (see Equations 5.19 and 5.20). This has the consequence that the LSP will be a higgsino. However, a pure higgsino LSP can easily annihilate to e.g. W± pairs or t t̄ pairs, so it can only explain the dark matter relic density if it is heavy (µ ∼ 1 TeV). A theory with such a large value of µ would have a fine-tuning of at least 240. So it seems that natural supersymmetric theories are unable to predict a viable dark matter candidate.

Light stops

For most models, the largest contribution to Σ_d^d and Σ_u^u comes from the stop sector. This is mainly due to the top Yukawa coupling being the largest of all Yukawa couplings. While higgsinos are hard to detect at the LHC, stops can be detected more easily. Naturalness predicts that the heaviest stop is not much heavier than 1 TeV. There are currently many searches for stop squarks, both in ATLAS [62–66] and CMS [67–71].

An interesting observation is that the requirement of light stops in combination with the stronger experimental limits on first and second generation squarks ensures that the mass hierarchy of the Standard Model is reversed in supersymmetric extensions with low fine-tuning.

Light gluinos

The gluino does not couple to the Higgs field, so it is not present in any of the effective potential terms. One would therefore expect that naturalness arguments place no bounds on the gluino, but this is not the case. Even though the EW measure does not contain derivatives, the gluino mass is important in the RGE, especially that of the stop sector, because of the large value of the strong coupling constant. Fine-tuning arguments can therefore set limits on the gluino mass. The gluino mass can also be probed at the LHC [72], with the advantage that the limits on the gluino mass are less model dependent. This makes it easier to determine the minimum amount of fine-tuning in the pMSSM.

However, the effect of gluinos on fine-tuning depends strongly on the theory that is being studied. Because gluinos only contribute through the RGE, theories that need to run parameters over a large range of energy scales will have stricter bounds on gluino masses than theories that need less running. High-scale theories will therefore have stronger limits on gluino masses in general, although the gluino still has a noticeable effect for low-scale theories, since some RGE running is needed in the calculation of the supersymmetric spectrum. This running has a surprisingly large effect and can thus be used to set limits on the gluino mass.

5.3.2 Natural SUSY

Not all pMSSM parameters are equally important when minimising fine-tuning, so a simplified model is often used in this context: natural SUSY. The parameters used in this model are the stop mass terms m_{Q̃_3} and m_{t̃_R}, the top trilinear coupling A_t, the higgsino mass µ, the gluino mass M_3 and tan β.

One of the goals of this research is to find out if naturalness arguments can be used to predict the composition of the LSP. Natural SUSY does not contain enough parameters to answer this question. In addition, we want to include all experimental constraints on dark matter and supersymmetry, which is not possible in a theory where not all parameters are taken into account. This makes it more interesting to investigate the 19-dimensional pMSSM instead of the 6-dimensional natural SUSY. Therefore, natural SUSY is not used in this research.

5.4 Calculating fine-tuning

Now that we have a general idea of what fine-tuning is, and what its predictions are, it is time to switch focus to building our own fine-tuning implementation. It would be easiest to use an existing program to calculate the fine-tuning, and there are some spectrum generators that have a fine-tuning measure implemented, but these measures only look at the influence of a few parameters. The goal of this research is to do a general search, so all parameters have to be taken into account.

We made the choice to use the electroweak measure ∆_EW in our research, because our main interest is the spectrum at the electroweak scale. The other two measures are both model dependent, and we do not want to assume any high-scale theory, since only the low-scale behaviour is accessible in current experiments. In addition, the electroweak measure does not have problems with comparing dependent contributions or with dependencies on the choice of parameters.

Most spectrum generators have the Barbieri-Giudice measure built in, and the only spectrum generator that does use the electroweak measure (Isasugra [73, 74]) only works with high-scale models. It is therefore not possible to use an existing fine-tuning calculation for our research, and the choice was made to build our own program to calculate the fine-tuning of supersymmetric models. A spectrum generator, Suspect2 [75], is used to generate a supersymmetric spectrum from the 19 pMSSM input parameters. This spectrum file can then be used as input for our fine-tuning program. Suspect2 outputs a file in the SLHA format [76], the universal format for supersymmetric spectra. The parameters from this file are read using the PySLHA package [77].

The fine-tuning calculation basically consists of two parts. The first part of the calculation is the renormalisation group evolution of all the parameters from the SUSY scale M_SUSY = √(m_{t̃_1} m_{t̃_2}) to the electroweak scale m_Z. This is necessary since the pMSSM parameters are defined at the SUSY scale in the SLHA format, while the electroweak measure is defined at the electroweak scale.

A check on the RGE is implemented to ensure that all formulas are entered correctly. This is done using renormalisation group invariants. These non-trivial combinations of parameters do not depend on the energy scale, so they can be used to check that the evolution from one scale to the other was performed correctly. These invariants are known for the pMSSM [78].
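One well-known example of such an invariant is the ratio M_a/g_a², which is scale independent at one loop for each of the three gaugino masses. A check of this kind can be sketched as follows; the dictionary layout is a hypothetical interface, not the actual code used in this research.

```python
def check_invariant(params_at_Q1, params_at_Q2, rtol=1e-3):
    """Sanity check on an RGE implementation: the ratio M_a / g_a^2 is a
    one-loop RG invariant for each gaugino, so it must agree between any
    two scales. Each dict holds gaugino masses M1..M3 and couplings g1..g3
    evaluated at one scale. Raises RuntimeError on a mismatch."""
    for a in (1, 2, 3):
        inv1 = params_at_Q1[f"M{a}"] / params_at_Q1[f"g{a}"] ** 2
        inv2 = params_at_Q2[f"M{a}"] / params_at_Q2[f"g{a}"] ** 2
        if abs(inv1 - inv2) > rtol * abs(inv1):
            raise RuntimeError(
                f"RGE check failed for M{a}/g{a}^2: {inv1} vs {inv2}")
    return True
```

In practice one would run the spectrum down from M_SUSY to m_Z, evaluate the invariants at both ends, and flag any relative difference beyond the numerical tolerance of the integrator.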

The second part of the computation is the actual calculation of the fine-tuning. This was first attempted in our own program, but the lack of a check on the results made it difficult to guarantee their validity. Roberto Ruiz de Austri from the Instituto de Física Corpuscular in Valencia was contacted next. He is one of the authors of the spectrum-generator code SoftSUSY and was willing to help us by building the electroweak fine-tuning measure into SoftSUSY. The fine-tuning implementation and RGE in SoftSUSY were checked for bugs, again using the renormalisation group invariants to determine the validity of the RGE.

5.5 Fine-tuning scan

This set-up can now be used to scan the pMSSM parameter space for regions of minimal fine-tuning. This is not an easy task, since the pMSSM parameter space is 19-dimensional: sampling just three values per parameter already requires more than a billion (3^19) points. The computation time for each point is of the order of a second, so it is impossible to find the models with the lowest fine-tuning using a random sampling of the parameter space.

There are some techniques to scan the parameter space in a smart way, one of which is the particle filter. The idea behind this technique is to start by randomly sampling the parameter space. The best points (in this case the points with minimal fine-tuning) are used as seeds in the next iteration, where a Gaussian is drawn around each parameter value of each seed. The new points are sampled from these Gaussians. The best points from this sampling are saved again and the procedure is repeated with decreasing width of the Gaussians.

The main advantage of this method is that it can zoom in at a small region of parameter space, while the algorithm also has the possibility to move to another region in parameter space. This prevents the algorithm from getting stuck in a local minimum. The same algorithm was used to find the pMSSM models that explain the Galactic Centre excess.
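A minimal sketch of such a particle filter is given below. It is illustrative only: the real scan also folds in all experimental and theoretical constraint checks, runs on a cluster, and adjusts the Gaussian widths by hand, whereas here the width simply shrinks geometrically each iteration.

```python
import random

def particle_filter(sample_random, fine_tuning, n_iter=10, n_keep=500,
                    n_new=5000, width0=0.3, shrink=0.9):
    """Iterative particle-filter scan (sketch). `sample_random()` returns a
    random parameter vector (list of floats); `fine_tuning(p)` returns the
    quantity to minimise for point p. Returns the n_keep best points."""
    points = [sample_random() for _ in range(n_new)]
    width = width0
    for _ in range(n_iter):
        # keep the best points as seeds for the next iteration
        points.sort(key=fine_tuning)
        seeds = points[:n_keep]
        # sample new points from Gaussians drawn around the seeds
        new = []
        for _ in range(n_new):
            seed = random.choice(seeds)
            new.append([random.gauss(x, width * max(abs(x), 1.0)) for x in seed])
        points = seeds + new  # seeds survive, so the best point is never lost
        width *= shrink       # slowly zoom in on the minimum
    points.sort(key=fine_tuning)
    return points[:n_keep]

# Toy check: minimise a quadratic "fine-tuning" in 3 dimensions
best = particle_filter(lambda: [random.uniform(-5, 5) for _ in range(3)],
                       lambda p: sum(x * x for x in p))
print(sum(x * x for x in best[0]))  # small, close to 0
```

Because the seeds are carried over into each new generation, the minimum found so far can never be lost, while the Gaussian tails still allow jumps to other regions of parameter space.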

Not only the fine-tuning of these models is calculated: many other properties of each model are computed in order to find out if a model satisfies all experimental and theoretical constraints. To this end, each model point is sent through a chain of programs. The first of these is Micromegas [79], which calculates the dark matter properties of a model. The relic density and all direct and indirect detection rates are calculated by Micromegas. These values can then be compared with the relic density measured by the Planck satellite [17] and the WIMP cross-section limits set by LUX [80] and Xenon1T [81].

Micromegas uses Suspect as its spectrum generator. Suspect has some theoretical constraints built in: it checks that the minimum of the Higgs potential is neutral in colour and electric charge and that the Higgs potential is bounded from below. There are also experimental constraints built in, like the branching fraction b → sγ [82] and the magnetic dipole moment of the muon [83].

The spectrum generator is also used to check the limits on chargino and neutralino masses set by the LEP collider [17]. In addition, the mass of the Higgs boson is checked. Although the experimental error on the Higgs mass is small (m_h = 125.09 ± 0.21 (stat.) ± 0.11 (syst.) GeV [84]), the theoretical error is much larger: the two-loop result used in Suspect still has an error of a few GeV [85]. Therefore, all model points with a Higgs mass in the range 122 GeV < m_h < 128 GeV are accepted.

There is another set of exclusion limits that has to be satisfied: the limits set by the LHC experiments. Recently a tool has been published to quickly determine whether a pMSSM model satisfies the ATLAS limits: SUSY-AI [86]. The results from ATLAS are usually presented in the context of simplified models, so it is hard to interpret these results for non-simplified models. SUSY-AI analyses all the data gathered by ATLAS in searches for supersymmetry and uses machine learning to generalise the results to the full 19-dimensional pMSSM. Using this information, the program can quickly determine how likely it is that a model point is excluded. In addition, SUSY-AI determines how confident it is in its own classification. If there is not a lot of training data for a certain region of the parameter space, the classification will have a low confidence level, indicating that results in that region are less trustworthy. In a region with a lot of training data, the confidence level will be high and a result in that region can be trusted. SUSY-AI is used to check all points that were saved. A point is only rejected if it is excluded with a high confidence level (90% or more). The results in this thesis are obtained using the √s = 8 TeV data from the LHC.
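The constraint chain can be summarised as a single filter over saved model points. The sketch below uses hypothetical field names for the per-point results, and the cuts are the ones quoted in the text; the Planck relic-density comparison is simplified to an upper bound here.

```python
def passes_constraints(point):
    """Constraint filter applied to each saved model point (a sketch; the
    field names on `point` are hypothetical). Returns True if kept."""
    # Higgs mass: wide window because of the few-GeV theoretical error
    if not (122.0 < point["mh"] < 128.0):
        return False
    # dark matter relic density must not exceed the Planck measurement
    if point["omega_h2"] > 0.12:
        return False
    # reject only points that SUSY-AI excludes with >= 90% confidence
    if point["susyai_excluded"] and point["susyai_confidence"] >= 0.90:
        return False
    return True

# A point SUSY-AI excludes with low confidence is still kept
print(passes_constraints({"mh": 125.0, "omega_h2": 0.11,
                          "susyai_excluded": True,
                          "susyai_confidence": 0.5}))  # → True
```

The asymmetry in the last cut reflects the text: exclusion is only trusted where the classifier has enough training data to be confident.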

Set-up

The results that are shown here are obtained on a cluster with 16 cores. A random sampling is done for one week, after which the best 500 points are selected. These points have ∆ < 500, with a minimum at ∆ = 135. Then the iterative particle filter is used, with the strategy to keep the width of the Gaussians broad. After each iteration, the best 500 points from all previous iterations are selected.

When an iteration does not produce models with a lower fine-tuning than the previous minimum, or when it takes very long to generate points that satisfy the constraints, the width of the Gaussians is decreased slightly. With narrower Gaussians, new points are more likely to be close to the seeds, so the chance that they are accepted increases. In addition, the parameters that should be minimised will jump back to higher values less often, increasing the probability of finding points with low fine-tuning. Since there are 19 parameters, each with their own Gaussian, there will always be some parameter that jumps to higher values.

To limit the number of accepted models, only models with a fine-tuning below a certain maximum are accepted. This maximum is set manually for each iteration, such that only the last few iterations contribute, while also making sure that iterations do not take too much time.

5.6 Results

The first runs of the particle filter scan are used to get a feeling for the parameter space in terms of fine-tuning. The particle filter algorithm is run with several different widths and running times. These runs show that the parameter space is quite smooth. In these first runs, with only a few iterations, the dominant effects are the stop sector and the gluino mass. This can be seen in Figure 5.2.

Figure 5.2: Fine-tuning of pMSSM models in the A_t–m_g̃ plane. The colour code indicates the electroweak fine-tuning measure ∆_EW.

The first observation is that both A_t and m_g̃ need to be small to achieve low fine-tuning. This was already expected for the gluino, but for A_t this result is more surprising. The fact that the fine-tuning increases monotonically with both the gluino mass and A_t indicates that it is best to use wide Gaussians in the first iterations of the scan. If the parameter space were less smooth, wide Gaussians would skip over the interesting regions, so smaller Gaussians would be needed. This is not necessary in this case, so wide Gaussians can be used to converge quickly to the regions with minimal fine-tuning.

In Figure 5.2, there are also points with a low gluino mass and low At, but with a high amount of fine-tuning. For these points there is another parameter that is responsible for the high fine-tuning. This highlights a big challenge in this research. All the parameters need to work together to satisfy all experimental and theoretical constraints, but if one particle is too heavy, its contribution to the fine-tuning is too large and the point is not interesting. The algorithm needs to work out which parameters are important for fine-tuning and have to be small, and which parameters are less important for fine-tuning and can be used to e.g. make the Higgs boson heavy enough, or satisfy the b → sγ limit.

This problem gets more and more difficult with lower fine-tuning values. At fine-tuning values lower than ∼20, contributions that were previously unimportant for fine-tuning, because the stop squark contributions dominated, start to become relevant. This narrows the available parameter space drastically when trying to further decrease fine-tuning. An example of this will be discussed later.

5.6.1 Final results

The final run consists of 20 iterations, in which a minimal fine-tuning of ∆ = 8.22 is found. The composition of the models with the lowest fine-tuning will be discussed next. First of all, it is good to check whether the three requirements for low fine-tuning mentioned in Section 5.3.1 are also visible in our scan. Two of these requirements are a low value of µ and a low gluino mass. Both are visible in Figure 5.3.

Figure 5.3: Fine-tuning of pMSSM models in the m_g̃–µ plane. The fine-tuning measure ∆_EW is indicated with the colour code.

This figure shows a clear µ dependence, which is to be anticipated in this region. The electroweak fine-tuning measure sets a very strict bound on µ for low fine-tuning values. The contribution of µ is:

    \Delta_{EW}(\mu) = \frac{2\mu^2}{m_Z^2}.        (5.22)

The electroweak measure uses the maximum of all fine-tuning contributions, so ∆_EW < 10 is only possible for µ ≲ 200 GeV. This is exactly the range that is visible in Figure 5.3. The other requirement visible in this figure is the low gluino mass. Although these points all have a gluino mass in a region where most models are excluded [47], these models all survive the SUSY-AI classification.

The other requirement for low fine-tuning is that the stop squarks have a low mass. This condition is also visible in our scan (Figure 5.4).

This figure shows that the stop sector is no longer the dominant contribution to fine-tuning in this region of parameter space, since no clear correlation between fine-tuning and stop mass is visible. The main challenge for the stop squarks in this region is that they need to be heavy enough to push the Higgs mass to 125 GeV, but light enough to minimise fine-tuning. In the last few iterations of the scan, the Higgs mass is pushed down by the low stop masses, but all points are still within the range m_h = 125 ± 3 GeV.

Figure 5.4: Fine-tuning of pMSSM models in the m_{t̃_1}–m_{t̃_2} plane.

New features

These features of natural pMSSM models were already well known, but there are some new results that cannot be obtained by using simplified models like natural SUSY. The most interesting new result is the composition of the lightest supersymmetric particle (LSP). Naturalness arguments normally predict that µ is low, so the LSP has a large higgsino component. Because of the limits on dark matter placed by observations of dwarf spheroidal galaxies [87], there is a strong limit on the velocity-averaged cross section ⟨σv⟩ of a dark matter particle. This limit is roughly:

    \langle \sigma v \rangle < \frac{m_{DM}}{100\ \mathrm{GeV}} \cdot 10^{-27}\ \mathrm{cm}^3/\mathrm{s}.        (5.23)

We found that if the lightest neutralino is a pure higgsino, this limit is saturated for µ = 200 GeV. This means that the minimal fine-tuning achievable in the MSSM is:

    \Delta_{EW} = \frac{\mu^2}{m_Z^2 / 2} = \frac{200^2}{91.2^2 / 2} = 9.6.        (5.24)

As can be seen in Figure 5.3 and Figure 5.4, our scan found models with lower fine-tuning, so there has to be a mechanism to get around this limit.

As mentioned in Section 5.3.1, a higgsino-like LSP will have a large annihilation cross section. The observations of dwarf galaxies put a mass-dependent limit on the cross section (Equation 5.23), so a light LSP needs to have a small cross section, otherwise it is excluded. The only way to lower the cross section is to add a bino component to the LSP. It is not possible to lower the cross section by adding a wino component, since the wino has a strong coupling and couples to nearly all relevant particles (see Table 2 in [88]). The bino component couples to only a few particles, and its coupling strength is much lower than the higgsino coupling to e.g. W± pairs and t t̄ pairs. The bino component effectively decreases the coupling of the LSP, and therefore the annihilation cross section. This mechanism allows a lower value of µ, and therefore a lower fine-tuning. This effect is clearly visible in the results of the 17th iteration of our scan (Figure 5.5).

Figure 5.5: Fine-tuning of pMSSM models in the M1-µ plane.

This shows that the models with the lowest fine-tuning have a bino-higgsino-like LSP. A remarkable result is that the composition of these models starts to resemble that of the WW1 GCE models discussed in Section 4.3.1, where the LSP is also a mixture of bino and higgsino components.

Another interesting result is that first and second generation sfermions should be very heavy. These particles have practically no contribution to fine-tuning, so they can be heavy. However, the validity of this result is not guaranteed. There are some unresolved issues with SUSY-AI that might have influenced the results. The details of these issues will be described in the discussion.

The final result of our scan concerns the preferred value of At. The general consensus in the literature is that At should be as high as possible, so that there is maximal mixing in the stop sector and the Higgs mass is pushed upwards. We do not see this in our scan: the region of minimal fine-tuning has the top trilinear coupling in the range 650 GeV < At < 750 GeV. This might also be the result of the issues that need further investigation.

5.7 Discussion

There are several unresolved issues that need more attention. Because of time constraints these could not be resolved before this thesis was finished, but the research is still ongoing. The results should therefore be seen as preliminary, not as final conclusions.

During the scan, a bug was found in the fine-tuning implementation in SoftSUSY: the program does not calculate the Σ_u^u and Σ_d^d terms, but only the Σ_u and Σ_d terms (see page 45). These both contain a Σ_d^u term that is cancelled in the m_Z calculation. This introduces an error in the fine-tuning calculation: if these Σ_d^u terms are large and have the same sign as the Σ_u^u and Σ_d^d terms, the fine-tuning is overestimated, while if the sign is opposite, the fine-tuning is underestimated. These terms are not cleanly separated in the SoftSUSY calculation, so it is not easy to account for this difference.
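The sign dependence described here can be illustrated with a toy calculation. All numbers below are invented and the extra Σ contribution is a stand-in, not SoftSUSY output; the point is only that an uncancelled term biases ∆_EW in the direction of its sign.

```python
# Toy illustration (all numbers invented) of how an uncancelled Sigma
# term of either sign biases the fine-tuning estimate. The "true"
# largest contribution is taken to be mu^2; the computed one also
# picks up the extra, uncancelled term.
m_z = 91.2

def delta_ew(largest_term_gev2):
    """Fine-tuning measure: largest contribution over m_Z^2/2."""
    return largest_term_gev2 / (m_z**2 / 2.0)

mu2 = 200.0**2          # mu = 200 GeV
sigma_extra = 5000.0    # stand-in for an uncancelled Sigma_d^u term

true_delta = delta_ew(mu2)
same_sign = delta_ew(mu2 + sigma_extra)   # overestimates the tuning
opp_sign = delta_ew(mu2 - sigma_extra)    # underestimates the tuning
assert opp_sign < true_delta < same_sign
```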

Another problem is that the scan was not performed entirely correctly: one of the parameters (m_A²) was not used as a free parameter, so all the results have m_A = 1000 GeV. This will probably not have a very large effect on the results [53], but it should be taken into account correctly in a future scan.

During the scan of the parameter space, there was a discussion about the energy scale at which the fine-tuning is calculated. There are two options for this scale: the SUSY scale M_SUSY = √(m_t̃1 m_t̃2) and the electroweak scale m_Z. The analysis presented above uses parameters defined at the electroweak scale, because this is the scale where electroweak symmetry breaking occurs and the Z boson gets its mass. However, the SUSY scale might be a better choice, since the loop corrections are minimised at this scale. The stop squarks usually give the largest contribution to the loop correction, so by choosing the SUSY scale as the reference scale Q, the logarithm in the loop correction of the heaviest stop takes the form:

δm_Z² ∝ log(m_t̃2²/Q²) = log(m_t̃2²/(m_t̃1 m_t̃2)) = log(m_t̃2/m_t̃1).  (5.25)

One factor of the stop mass drops out and the loop correction is much smaller. In higher-order terms this effect is even larger, so the one-loop result is most accurate when evaluated at the SUSY scale.
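To see the size of the effect, the two logarithms can be compared for an assumed pair of stop masses (illustrative values, not scan output):

```python
import math

# Compare the one-loop logarithm of Eq. (5.25) at the two reference
# scales, for assumed (hypothetical) stop masses.
m_t1, m_t2 = 800.0, 2000.0   # GeV, illustrative stop masses
m_z = 91.2

log_ew = math.log(m_t2**2 / m_z**2)            # Q = m_Z
log_susy = math.log(m_t2**2 / (m_t1 * m_t2))   # Q = sqrt(m_t1 * m_t2)

# At the SUSY scale one factor of the stop mass drops out:
assert math.isclose(log_susy, math.log(m_t2 / m_t1))
print(log_ew, log_susy)   # roughly 6.18 vs 0.92
```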

One of the reasons why this choice can be so important is that some parameters only have an effect through the renormalisation group equations (RGEs). The gluino mass in particular has a large effect, but only through its renormalisation effects in the stop sector (see Section 5.3.1). When using the SUSY scale as the reference scale, the RGE effects are reduced and the gluino contribution to the fine-tuning decreases. The gluino mass parameter M3 also appears in the beta function of the At parameter, so this change of energy scale might also influence the result found for the optimal value of At.

The final point of discussion is the classification by SUSY-AI. The models that were used as training data of SUSY-AI had masses below 4 TeV. In the fine-tuning scan, the best points have first and second generation squarks that can be as heavy as 16 TeV. SUSY-AI contains a function that maps all points that lie outside the training range to the closest point in the training range, but this does not yet work completely correctly. There could therefore be some unknown errors in the classification by SUSY-AI.
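The mapping SUSY-AI applies can be pictured as a per-parameter clipping step. The sketch below is an illustrative guess at that behaviour, not SUSY-AI's actual implementation; the 4 TeV bound comes from the text, and the parameter values are invented.

```python
# Hypothetical sketch of the mapping described above: parameters that
# lie outside the SUSY-AI training range are clipped to the nearest
# value inside it. Not SUSY-AI's real code; for illustration only.
TRAIN_MIN, TRAIN_MAX = -4000.0, 4000.0   # GeV, the 4 TeV training bound

def map_to_training_range(params):
    """Clip every parameter to the closest point in the training range."""
    return [min(max(p, TRAIN_MIN), TRAIN_MAX) for p in params]

# A 16 TeV first/second-generation squark mass is mapped back to 4 TeV,
# while in-range parameters are left untouched:
print(map_to_training_range([16000.0, 650.0, -210.0]))  # [4000.0, 650.0, -210.0]
```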

5.8 Conclusion

Fine-tuning is an interesting measure to rate the viability of supersymmetry. One would expect the ever-tightening bounds on supersymmetric theories to have excluded natural theories by now. Although there are papers claiming that the minimal fine-tuning of supersymmetric models is ∼1%, our scan gives different results, with a fine-tuning as low as ∆ = 8.22, indicating a tuning better than the 10% level. In addition to the usual naturalness requirements, our scan also shows that the points of minimal fine-tuning have a bino-higgsino-like LSP. There are some issues that need more attention, so the exact numbers may not be correct. Because of these issues, the squark and At results might not hold up, but the LSP composition result is expected to survive, since it is not heavily influenced by the gluino mass and the µ parameter is not affected by the Σ_u,d error.

The conclusion of this research is therefore that natural supersymmetric theories are not yet excluded and that theories with minimal fine-tuning have an LSP that is a mixture of bino and higgsino components.

5.9 Outlook

While this thesis was being finished, the research continued and the points mentioned in the discussion were resolved. A new scan was performed to find the true regions of minimal fine-tuning, yielding models with a fine-tuning as low as ∆ = 2.7. A remarkable result is that the most natural points give a fairly good prediction for the dark matter relic density Ωh². Without imposing any constraints, the relic density of the sampled models lies in the range 10⁻⁷ ≤ Ωh² ≤ 10⁶. Imposing all constraints and demanding a fine-tuning of ∆ ≤ 10 narrows this to 10⁻⁴ ≤ Ωh² ≤ 10 (see Figure 5.6).

Figure 5.6: The dark matter relic density of points with a fine-tuning of ∆ ≤ 1000. The blue points satisfy all constraints; the other points are excluded, with the exclusion reason indicated by the colour coding. The yellow band shows a ±10% band around the measured value of Ωh² = 0.118. The purple solid line shows the predicted XENON1T sensitivity; the pink dashed line indicates the predicted sensitivity of a proposed bino-higgsino search at the LHC. Extracted from [89].

Models with an Ωh² value that is within 10% of the measured value of Ωh² = 0.118 can have a fine-tuning as low as ∆ = 4.7. The LSP in these models is mostly bino, with a small higgsino component, similar to the composition found in this thesis. The XENON1T experiment is sensitive to these models [90], so the next few years will show whether a natural pMSSM model is realized in nature. The details of this research can be found in [89].

Appendix A

Minimisation of the SUSY Higgs potential

The full tree-level Higgs potential in supersymmetry is:

V_H = (µ² + m_Hu²)|H_u|² + (µ² + m_Hd²)|H_d|² + [b(ε_ab H_u^a H_d^b) + c.c.]
    + (1/8)(g² + g′²)(|H_u|² − |H_d|²)² + (1/2) g² |ε_ab H_u^a H_d^b*|².

By rotating the H_u^+ field to zero, and thereby also setting the H_d^- field to zero, one obtains the potential:

V_H = (µ² + m_Hu²)|H_u^0|² + (µ² + m_Hd²)|H_d^0|² − (b H_u^0 H_d^0 + c.c.) + (1/8)(g² + g′²)(|H_u^0|² − |H_d^0|²)².

The two minimisation conditions that have to be satisfied are:

∂V_H/∂H_u^0 |_min = 0;   ∂V_H/∂H_d^0 |_min = 0.

Applying these constraints, and inserting the vacuum expectation values v_u and v_d for the two Higgs fields, yields the conditions:

2(µ² + m_Hu²)v_u − 2b v_d + (1/2)(g² + g′²)(v_u² − v_d²)v_u = 0
2(µ² + m_Hd²)v_d − 2b v_u − (1/2)(g² + g′²)(v_u² − v_d²)v_d = 0.

The first equation can be divided by v_u and the second by v_d. The ratio of the vevs can then be replaced using tan β = v_u/v_d. Using v_u = v sin β and v_d = v cos β, and dividing by an overall factor 2, yields:

m_Hu² + µ² − b cot β + (1/4)(g² + g′²)v²(sin²β − cos²β) = 0
m_Hd² + µ² − b tan β − (1/4)(g² + g′²)v²(sin²β − cos²β) = 0.

Using the Standard Model relation m_Z² = (1/2)v²(g² + g′²) and the identity cos(2β) = cos²β − sin²β gives the final conditions:

m_Hu² + µ² − b cot β − (m_Z²/2) cos(2β) = 0
m_Hd² + µ² − b tan β + (m_Z²/2) cos(2β) = 0.

These two equations can be solved for two parameters. The choice was made to solve for b and m_Z². This results in the equations:

b = (1/2) sin(2β)(m_Hu² + m_Hd² + 2µ²)
m_Z²/2 = (m_Hd² − m_Hu² tan²β)/(tan²β − 1) − µ².
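These expressions can be spot-checked numerically: inserting them back into the two final conditions above should give zero. The values below are arbitrary test inputs, not a physical parameter point.

```python
import math

# Numerical spot-check (arbitrary illustrative values, not a physical
# parameter point) that the solved expressions for b and m_Z^2 satisfy
# the two minimisation conditions above.
mHu2, mHd2, mu2 = 2.0, 1.0, 0.9
beta = 0.7
t = math.tan(beta)

# The solved expressions:
b = 0.5 * math.sin(2 * beta) * (mHu2 + mHd2 + 2 * mu2)
mZ2 = 2 * ((mHd2 - mHu2 * t**2) / (t**2 - 1) - mu2)

# Substitute back into the two final conditions; both should vanish.
cond_u = mHu2 + mu2 - b / t - mZ2 / 2 * math.cos(2 * beta)
cond_d = mHd2 + mu2 - b * t + mZ2 / 2 * math.cos(2 * beta)
assert abs(cond_u) < 1e-12 and abs(cond_d) < 1e-12
```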

Bibliography

[1] D. Griffiths, Introduction to Elementary Particles. Wiley-VCH, 2012.

[2] B. Schellekens, “Beyond the Standard Model,” February 2016. url: http://www.nikhef.nl/~t58/BSM.pdf.

[3] H. Jones, Groups, Representations and Physics. CRC Press, 1998.

[4] R. Alkofer and J. Greensite, “Quark confinement: the hard problem of hadron physics,” Journal of Physics G: Nuclear and Particle Physics, vol. 34, no. 7, p. S3, 2007.

[5] I. van Vulpen, “The Standard Model Higgs Boson,” October 2013. Lecture notes Particle Physics II, url: https://www.nikhef.nl/~ivov/HiggsLectureNote.pdf.

[6] ATLAS Collaboration, “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” Physics Letters B, vol. 716, no. 1, 2012. [arXiv:1207.7214].

[7] F. Zwicky, “Die Rotverschiebung von extragalaktischen Nebeln,” Helvetica Physica Acta, vol. 6, 1933.

[8] V. Rubin and W. K. Ford, “Rotation of the Andromeda nebula from a spectroscopic survey of emission regions,” Astrophysical Journal, vol. 159, 1970.

[9] PhilHibbs, “Velocity dispersion,” 2005. url: https://en.wikipedia.org/wiki/File:GalacticRotation2.svg, accessed August 23, 2016.

[10] G. Jungman, M. Kamionkowski, and K. Griest, “Supersymmetric dark matter,” Physics Reports, vol. 267, no. 5, 1996. [hep-ph/9506380].

[11] H. Murayama, “Physics beyond the Standard Model and dark matter.” Lectures at Les Houches Summer School, Session 86, Particle Physics and Cosmology: the Fabric of Spacetime, July 31 - August 25, 2006. [arXiv:0704.2276].

[12] Z. Burell, “Radiative symmetry breaking in the supersymmetric minimal B-L extended Standard Model,” Master’s thesis, The University of Alabama, 2011. [arXiv:1608.05888].

[13] M. Brak, “The hierarchy problem in the Standard Model and little Higgs theories,” Master’s thesis, Utrecht University, 2004.

[14] J. March-Russell, “Hierarchy, naturalness... seeking help from symmetry yet again?,” 2013. Lecture 8 in the series ‘The Standard Model in the LHC Era’, url: http://www.cbpf.br/~maciel/evjas/a08_hierarchy.pdf.

[15] S. P. Martin, A Supersymmetry Primer. World Scientific, 2011. [hep-ph/9709356].

[16] Super-Kamiokande Collaboration, “Search for proton decay via p → νK+ using 260 kiloton · year data of Super-Kamiokande,” Phys. Rev. D, vol. 90, Oct 2014. [arXiv:1408.1195].

[17] Particle Data Group Collaboration, “Review of Particle Physics,” Chin. Phys., vol. C38, 2014.

[18] A. Djouadi et al., “The minimal supersymmetric Standard Model: Group summary report,” in GDR (Groupement De Recherche) - Supersymetrie Montpellier, France, April 15-17, 1998. [hep-ph/9901246].

[19] ACME Collaboration, “Order of magnitude smaller limit on the electric dipole moment of the electron,” Science, vol. 343, no. 6168, 2014. [arXiv:1310.7534].

[20] W. B. Dress, P. D. Miller, J. M. Pendlebury, P. Perrin, and N. F. Ramsey, “Search for an electric dipole moment of the neutron,” Phys. Rev. D, vol. 15, 1977.

[21] E. Yazgan, “Flavor changing neutral currents in top quark production and decay,” Proceedings, 6th International Workshop on Top Quark Physics (TOP2013), Durbach, Germany, September 14-19, 2013, 2014. [arXiv:1312.5435].

[22] J. L. Feng, C. G. Lester, Y. Nir, and Y. Shadmi, “Standard Model and supersymmetric flavor puzzles at the CERN Large Hadron Collider,” Phys. Rev. D, vol. 77, 2008. [arXiv:0712.0674].

[23] L. Evans and P. Bryant, “LHC machine,” Journal of Instrumentation, vol. 3, no. 08, 2008.

[24] ATLAS Collaboration, “Search for high-mass diboson resonances with boson-tagged jets in proton-proton collisions at √s = 8 TeV with the ATLAS detector,” Journal of High Energy Physics, vol. 2015, no. 12, 2015. [arXiv:1506.00962].

[25] CMS Collaboration, “Search for massive resonances in dijet systems containing jets tagged as W or Z boson decays in pp collisions at √s = 8 TeV,” Journal of High Energy Physics, vol. 2014, no. 8, 2014. [arXiv:1405.1994].

[26] M. Schott and M. Dunford, “Review of single vector boson production in pp collisions at √s = 7 TeV,” The European Physical Journal C, vol. 74, no. 7, 2014. [arXiv:1405.1160].

[27] G. Altarelli, B. Mele, and M. Ruiz-Altaba, “Searching for new heavy vector bosons in pp̄ colliders,” Zeitschrift für Physik C Particles and Fields, vol. 45, no. 1, 1989.

[28] L. Randall and R. Sundrum, “Large mass hierarchy from a small extra dimension,” Phys. Rev. Lett., vol. 83, Oct 1999. [hep-ph/9905221].

[29] T. Han, J. D. Lykken, and R.-J. Zhang, “On Kaluza-Klein states from large extra dimensions,” Phys. Rev. D, vol. 59, Mar 1999. [hep-ph/9811350].

[30] M. E. Peskin and D. V. Schroeder, An Introduction to Quantum Field Theory. Westview Press, 1995.

[31] F. Calore, I. Cholis, and C. Weniger, “Background model systematics for the Fermi GeV excess,” Journal of Cosmology and Astroparticle Physics, vol. 2015, no. 03, 2015. [arXiv:1409.0042].

[32] Q. Yuan and B. Zhang, “Millisecond pulsar interpretation of the Galactic center gamma-ray excess ,” Journal of High Energy Astrophysics, vol. 34, 2014. [arXiv:1404.2318].

[33] I. Cholis, C. Evoli, F. Calore, T. Linden, C. Weniger, and D. Hooper, “The Galactic Center GeV excess from a series of leptonic cosmic-ray outbursts,” Journal of Cosmology and Astroparticle Physics, vol. 2015, no. 12, 2015. [arXiv:1506.05119].

64 [34] A. Achterberg, S. Amoroso, S. Caron, L. Hendriks, R. R. de Austri, and C. Weniger, “A description of the Galactic Center excess in the Minimal Supersymmetric Standard Model,” Journal of Cosmology and Astroparticle Physics, vol. 2015, no. 08, 2015. [arXiv:1502.05703].

[35] Planck Collaboration, “Planck 2015 results,” A&A, vol. 594, 2016. [arXiv:1502.01589].

[36] W. Beenakker, R. Höpker, M. Krämer, M. Spira, P. Zerwas, and T. Plehn, “Prospino2.” url: http://www.thphys.uni-heidelberg.de/~plehn/includes/prospino/prospino_lhc8.eps, accessed on 21 November 2016.

[37] A. Djouadi, M. M. Muhlleitner, and M. Spira, “Decays of supersymmetric particles: the program SUSY-HIT (SUspect-SdecaY-Hdecay-InTerface),” Acta Phys. Polon., vol. B38, 2007. [hep-ph/0609292].

[38] M. van Beekveld, “Possible indirect detection of dark matter and its impact on LHC supersymmetry searches,” Master’s thesis, Radboud University Nijmegen, 2016.

[39] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H.-S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, “The automated computation of tree-level and next-to- leading order differential cross sections, and their matching to parton shower simulations,” Journal of High Energy Physics, vol. 2014, no. 7, 2014. [arXiv:1405.0301].

[40] R. Brun and F. Rademakers, “ROOT an object oriented data analysis framework,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 389, no. 1, 1997.

[41] M. Drees, H. Dreiner, D. Schmeier, J. Tattersall, and J. S. Kim, “CheckMATE: Confronting your Favourite New Physics Model with LHC Data,” Comput. Phys. Commun., vol. 187, 2015. [arXiv:1312.2591].

[42] M. Cacciari, G. P. Salam, and G. Soyez, “FastJet User Manual,” Eur. Phys. J., vol. C72, 2012. [arXiv:1111.6097].

[43] M. Cacciari and G. P. Salam, “Dispelling the N³ myth for the kt jet-finder,” Phys. Lett., vol. B641, 2006. [hep-ph/0512210].

[44] M. Cacciari, G. P. Salam, and G. Soyez, “The Anti-k(t) jet clustering algorithm,” JHEP, vol. 04, 2008. [arXiv:0802.1189].

[45] A. L. Read, “Presentation of search results: The CL(s) technique,” J. Phys., vol. G28, 2002.

[46] J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi, “DELPHES 3: a modular framework for fast simulation of a generic collider experiment,” Journal of High Energy Physics, vol. 2014, no. 2, 2014. [arXiv:1307.6346].

[47] ATLAS Collaboration, “Summary of the ATLAS experiment’s sensitivity to supersymmetry after LHC Run 1 — interpreted in the phenomenological MSSM,” Journal of High Energy Physics, vol. 2015, no. 10, 2015. [arXiv:1508.06608].

[48] ATLAS Collaboration, “Search for resonances with boson-tagged jets in 3.2 fb⁻¹ of pp collisions at √s = 13 TeV collected with the ATLAS detector,” Tech. Rep. ATLAS-CONF-2015-073, CERN, Geneva, Dec 2015.

[49] P. M. Boylan-Kolchin, “Cosmology.” Slides for the course ‘Astronomy 422: Cosmology’ given at the University of Maryland, url: https://www.astro.umd.edu/~mbk/Teaching/ASTR_422_S2015/Lectures/lecture_17.pdf.

65 [50] A. H. Guth, “Inflationary universe: A possible solution to the horizon and flatness problems,” Phys. Rev. D, vol. 23, pp. 347–356, Jan 1981.

[51] S. Coleman and E. Weinberg, “Radiative corrections as the origin of spontaneous symmetry breaking,” Phys. Rev. D, vol. 7, Mar 1973. [hep-th/0507214].

[52] F. Tanedo, “Seibergology.” Notes based on the spring 2009 lectures by Csaba Csáki, 2013. url: http://www.physics.uci.edu/~tanedo/files/notes/Seibergology.pdf.

[53] H. Baer, V. Barger, P. Huang, D. Mickelson, A. Mustafayev, and X. Tata, “Radiative natural supersymmetry: Reconciling electroweak fine-tuning and the Higgs boson mass,” Phys. Rev. D, vol. 87, Jun 2013. [arXiv:1212.2655].

[54] H. Baer, V. Barger, and D. Mickelson, “How conventional measures overestimate electroweak fine-tuning in supersymmetric theory,” Phys. Rev. D, vol. 88, Nov 2013. [arXiv:1309.2984].

[55] H. Baer, V. Barger, and M. Padeffke-Kirkland, “Electroweak versus high-scale fine tuning in the 19-parameter supergravity model,” Phys. Rev. D, vol. 88, Sep 2013. [arXiv:1304.6732].

[56] R. Barbieri and G. Giudice, “Upper bounds on supersymmetric particle masses,” Nuclear Physics B, vol. 306, no. 1, 1988.

[57] M. E. Cabrera, J. A. Casas, A. Delgado, S. Robles, and R. R. de Austri, “Naturalness of MSSM dark matter,” Journal of High Energy Physics, vol. 2016, no. 8, 2016. [arXiv:1604.02102].

[58] A. Arvanitaki, M. Baryakhtar, X. Huang, K. Van Tilburg, and G. Villadoro, “The last vestiges of naturalness,” Journal of High Energy Physics, vol. 2014, no. 3, 2014. [arXiv:1309.3568].

[59] J. March-Russell, “The future of (beyond-the-standard-model) theory,” 2014. Talk given at the joint meeting of the Institute of Physics High Energy Particle Physics and Astro Particle Physics groups, Royal Holloway University of London, url: https://indico.cern.ch/event/266149/contributions/1602455/attachments/474760/657104/IOPtalk.pdf.

[60] N. Seiberg, “Now what?,” 2013. Talk given at the Higgs Quo Vadis conference in Aspen.

[61] H. Baer, V. Barger, D. Mickelson, and M. Padeffke-Kirkland, “SUSY models under siege: LHC constraints and electroweak fine-tuning,” Phys. Rev. D, vol. 89, Jun 2014. [arXiv:1404.2277].

[62] ATLAS Collaboration, “Search for direct top squark pair production and dark matter production in final states with two leptons in √s = 13 TeV pp collisions using 13.3 fb⁻¹ of ATLAS data,” Tech. Rep. ATLAS-CONF-2016-076, CERN, Aug 2016.

[63] ATLAS Collaboration, “Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in √s = 13 TeV pp collisions with the ATLAS detector,” Tech. Rep. ATLAS-CONF-2016-050, CERN, Aug 2016. [arXiv:1606.03903].

[64] ATLAS Collaboration, “Search for the Supersymmetric Partner of the Top Quark in the Jets+E_T^miss Final State at √s = 13 TeV,” Tech. Rep. ATLAS-CONF-2016-077, CERN, Geneva, Aug 2016.

[65] ATLAS Collaboration, “ATLAS Run 1 searches for direct pair production of third- generation squarks at the Large Hadron Collider,” The European Physical Journal C, vol. 75, no. 10, 2015. [arXiv:1506.08616].

[66] ATLAS Collaboration, “Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 13 TeV using the ATLAS detector,” Phys. Rev. D, vol. 94, Aug 2016. [arXiv:1604.07773].

[67] CMS Collaboration, “Search for supersymmetry in events with jets and missing transverse momentum in proton-proton collisions at 13 TeV,” Tech. Rep. CMS-PAS-SUS-16-014, CERN, 2016.

[68] CMS Collaboration, “Search for new physics in the all-hadronic final state with the MT2 variable,” Tech. Rep. CMS-PAS-SUS-16-015, CERN, 2016.

[69] CMS Collaboration, “An inclusive search for new phenomena in final states with one or more jets and missing transverse momentum at 13 TeV with the AlphaT variable,” Tech. Rep. CMS-PAS-SUS-16-016, CERN, 2016.

[70] CMS Collaboration, “Search for direct top squark pair production in the single lepton final state at √s = 13 TeV,” Tech. Rep. CMS-PAS-SUS-16-028, CERN, 2016.

[71] CMS Collaboration, “Search for direct top squark pair production in the fully hadronic final state in proton-proton collisions at √s = 13 TeV corresponding to an integrated luminosity of 12.9 fb⁻¹,” Tech. Rep. CMS-PAS-SUS-16-029, CERN, 2016.

[72] ATLAS Collaboration, “Summary of the searches for squarks and gluinos using √s = 8 TeV pp collisions with the ATLAS experiment at the LHC,” Journal of High Energy Physics, vol. 2015, no. 10, p. 54, 2015. [arXiv:1507.05525].

[73] H. Baer, J. Ferrandis, S. Kraml, and W. Porod, “Treatment of threshold effects in supersymmetric spectrum computations,” Phys. Rev. D, vol. 73, Jan 2006. [hep-ph/0511123].

[74] H. Baer, C. H. Chen, R. Munroe, F. E. Paige, and X. Tata, “Multichannel search for minimal supergravity at pp and e+e− colliders,” Phys. Rev. D, vol. 51, Feb 1995. [hep-ph/9408265].

[75] A. Djouadi, J.-L. Kneur, and G. Moultaka, “SuSpect: A Fortran code for the supersymmetric and Higgs particle spectrum in the MSSM,” Computer Physics Communications, vol. 176, no. 6, 2007. [hep-ph/0211331].

[76] P. Skands et al., “SUSY Les Houches accord: interfacing SUSY spectrum calculators, decay packages, and event generators,” Journal of High Energy Physics, vol. 2004, no. 07, 2004. [hep-ph/0311123].

[77] A. Buckley, “PySLHA: a Pythonic interface to SUSY Les Houches Accord data,” The European Physical Journal C, vol. 75, no. 10, 2015. [arXiv:1305.4194].

[78] W. Beenakker, T. van Daal, R. Kleiss, and R. Verheyen, “Renormalization group invariants in supersymmetric theories: one- and two-loop results,” Journal of High Energy Physics, vol. 2015, no. 10, 2015. [arXiv:1507.03470].

[79] G. Bélanger, F. Boudjema, A. Pukhov, and A. Semenov, “micrOMEGAs4.1: Two dark matter candidates,” Computer Physics Communications, vol. 192, 2015. [arXiv:1407.6129].

[80] LUX Collaboration, “The Large Underground Xenon (LUX) experiment,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 704, 2013. [arXiv:1211.3788].

[81] E. Aprile, The XENON1T Dark Matter Search Experiment. Dordrecht: Springer Netherlands, 2013. [arXiv:1206.6288].

[82] BABAR Collaboration, “Exclusive measurements of b → sγ transition rate and photon energy spectrum,” Phys. Rev. D, vol. 86, Sep 2012.

[83] Muon g-2 Collaboration, “Final report of the E821 muon anomalous magnetic moment measurement at BNL,” Phys. Rev. D, vol. 73, Apr 2006. [hep-ex/0602035].

[84] ATLAS and CMS Collaborations, “Combined Measurement of the Higgs Boson Mass in pp Collisions at √s = 7 and 8 TeV with the ATLAS and CMS Experiments,” Phys. Rev. Lett., vol. 114, May 2015. [arXiv:1503.07589].

[85] B. C. Allanach, A. Djouadi, J.-L. Kneur, W. Porod, and P. Slavich, “Precise determination of the neutral Higgs boson masses in the MSSM,” Journal of High Energy Physics, vol. 2004, no. 09, 2004. [hep-ph/0406166].

[86] S. Caron, J. S. Kim, K. Rolbiecki, R. R. de Austri, and B. Stienen, “The BSM-AI project: SUSY-AI - Generalizing LHC limits on Supersymmetry with Machine Learning,” 2016. [arXiv:1605.02797].

[87] MAGIC collaboration, “Limits to dark matter annihilation cross-section from a combined analysis of MAGIC and Fermi-LAT observations of dwarf satellite galaxies,” Journal of Cosmology and Astroparticle Physics, vol. 2016, no. 02, 2016. [arXiv:1601.06590].

[88] M. van Beekveld, W. Beenakker, S. Caron, R. Castelijn, M. Lanfermann, and A. Struebig, “Higgs, di-Higgs and tri-Higgs production via SUSY processes at the LHC with 14 TeV,” Journal of High Energy Physics, vol. 2015, no. 5, p. 44, 2015. [arXiv:1501.02145].

[89] M. van Beekveld, W. Beenakker, S. Caron, R. Peeters, and R. Ruiz de Austri, “This year’s holiday present: Supersymmetry with Dark Matter is still natural,” 2016. [arXiv:1612.06333].

[90] XENON Collaboration, “Physics reach of the XENON1T dark matter experiment,” JCAP, no. 04, 2016. [arXiv:1512.07501].
