Master’s thesis

Entanglement entropy of a one-dimensional scalar field

Author: K.R. de Ruiter BSc.
Supervisor: Prof. dr. ir. H.T.C. Stoof
Theoretical Physics, Utrecht University

June 2020

Abstract

In this thesis, we study the entanglement entropy of a chain of coupled harmonic oscillators, which is used to model a one-dimensional bosonic scalar field. Initially, a model for a massless scalar field is discussed. When calculating the entanglement entropy from the reduced density we run into a problem due to the presence of a zero mode, and we are only able to compute the entanglement entropy of one particular case. These difficulties are avoided when we consider a massive scalar field instead, which we accomplish by adding a mass term to the dispersion relation. We numerically study the entanglement entropy of the massive scalar field model as a function of the number of coordinates and the mass parameter and compare with analytical results. By fitting our numerical data we construct a single cross-over function, which describes the entanglement entropy as a function of both the number of coordinates and the mass parameter. The cross-over function agrees well with the numerical data, especially for small values of the mass parameter. Taking then the massless limit, the entanglement entropy of a massless scalar field can nevertheless be determined and reproduces the prediction of conformal field theory.

Title page image taken from: https://www.symmetrymagazine.org/article/gravitys-waterfall

Contents

1 Introduction
1.1 Motivation

2 Physical background
2.1 Entropy
2.2 Density matrix
2.3 Entanglement entropy

3 Massless scalar field
3.1 Three coupled harmonic oscillators
3.1.1 Ground state wave function
3.1.2 Density matrix of the ground state
3.1.3 Reduced density matrix with two coordinates integrated out
3.1.4 Entanglement entropy of the reduced density matrix ρ_red(x1, x1′)
3.1.5 Reduced density matrix with one coordinate integrated out
3.1.6 Entanglement entropy of the reduced density matrix ρ_red(q1, q1′, q2, q2′)
3.2 Chain of N harmonic oscillators
3.2.1 Ground state wave function
3.2.2 Density matrix of the ground state
3.2.3 Reduced density matrix
3.2.4 Numerical results for the entanglement entropy

4 Massive scalar field
4.1 Ground state wave function and density matrix
4.2 Rewriting the reduced density matrix
4.3 Numerical results for the entanglement entropy
4.4 Dimensional analysis
4.5 Determining a cross-over function
4.5.1 First order fit
4.5.2 Padé fit
4.6 Testing the cross-over function
4.7 The massless limit

5 Discussion and outlook
5.1 Discussion
5.2 Outlook

A Proof of dispersion relation product

Chapter 1

Introduction

In 1915, Einstein published his theory of general relativity, in which he unified special relativity and gravity in terms of the curvature of spacetime. One year later, Schwarzschild found an exact solution to Einstein's field equations, which contained a coordinate singularity. At that time, Schwarzschild did not understand the physical meaning of this singularity; nowadays, we know that it describes a black hole. Black holes have an extremely high mass and density, resulting in a very strong curvature of the spacetime around them. The gravitational pull of a black hole is so strong that nothing can escape from it, not even photons, the fastest-moving particles according to Einstein's relativity theory. The strong gravitational field of a black hole marks a surface beyond which nothing can escape, known as the event horizon of the black hole. This horizon effectively seals off the interior of the black hole from the rest of the universe, making it impossible to observe.

Hawking proved that the surface area of a black hole can never decrease as a function of time [1]. Bekenstein noted a strong analogy to the thermodynamical entropy, which is also a quantity that can never decrease in time, according to the second law of thermodynamics. In 1973, this inspired Bekenstein to write a paper in which he tried to unify thermodynamics with black-hole dynamics [2]. Thermodynamical systems are typically characterised by a handful of macroscopic quantities, such as temperature, pressure, and volume. The entropy of a system is a measure of the number of different possible internal states, called microstates, that correspond to the same macrostate.
Analogously, there exists a theorem for black holes called the no-hair theorem, which states that any black hole that is a solution of the Einstein–Maxwell equations can be fully characterised by only three externally observable quantities: mass, charge, and angular momentum. This would imply, however, that information about the internal states of the black hole is lost. For example, it would be impossible to distinguish between a black hole that formed from collapsing matter and one that formed from collapsing antimatter, if their mass, charge, and angular momentum are equal. This means there is a lack of information about the internal configuration of black holes. Bekenstein noted the analogy to thermodynamical entropy and argued that there must be a black-hole entropy related to this lack of information about the interior of a black hole. With Hawking's theorem on the black-hole surface area in mind, Bekenstein suggested that the black-hole entropy should be proportional to the surface area of the black hole.

Previously, physicists had struggled to explain the apparent loss of information when an object with some entropy falls into a black hole. It seemed that the information of the infalling object was lost behind the event horizon, thus violating the second law of thermodynamics, which states that the entropy of an isolated system (the universe) can never decrease. The notion of black-hole entropy provided a solution to this problem, as the entropy of the infalling object is added to the entropy of the black hole. This led to the formulation of the generalised second law, which states that the entropy of the exterior of the black hole plus the entropy of the black hole itself never decreases [3]. One year later, in 1974, Stephen Hawking proved that black holes emit thermal radiation, and thus have a temperature [4].
This confirmed the earlier ideas of Bekenstein, and quickly led to the famous Bekenstein–Hawking entropy of a black hole,

\[
S_{\rm BH} = \frac{k_B c^3 A}{4 G \hbar}. \qquad (1.1)
\]

The fact that the entropy of a black hole indeed scales with its area was a striking result, because entropy was known from thermodynamics to be an extensive quantity, scaling with the volume of the system. This suggests that all the information of the interior of the black hole is somehow encoded on its surface, the event horizon. This insight is what inspired the formulation of the holographic principle, which states that a description of an (n+1)-dimensional curved spacetime can be understood as an n-dimensional theory living on its boundary [5].

Physicists have been trying to understand the area-law behaviour of black-hole entropy ever since it was discovered. A promising candidate to explain the area law of black holes is entanglement entropy, which is a measure of the degree of entanglement in a system. To calculate the entanglement entropy of a system, one first needs to effectively remove a part of the system, which is done by taking the partial trace over a number of the degrees of freedom, thus essentially dividing the system into two subsystems. Tracing out coordinates results in a loss of information, and therefore the leftover state is a mixed state. This mixed state has some entropy associated with it, which is known as the entanglement entropy.

To understand why entanglement entropy might help to explain the area law, let us consider the example of a black hole. In the case of a black hole formed by gravitational collapse, the degrees of freedom of the quantum field in the exterior and the interior of the black hole are entangled with each other [6]. Since an observer exterior to the black hole cannot access the interior, the interesting state for an external observer is the one in which the internal states are removed. Mathematically this can be done by tracing over the degrees of freedom of the interior of the black hole, resulting in a mixed state with some associated entropy.
Entanglement entropy is defined as the entropy associated with a mixed state that describes a subsystem of a larger system. Due to the analogy between the state of interest for an external observer of a black hole and the definition of entanglement entropy, entanglement entropy is thought to be a good candidate to explain the area law of black-hole entropy. In 1986, Bombelli et al. showed that the entanglement entropy of a system of coupled oscillators, which is used to model quantum fields, is indeed proportional to the area of the inaccessible traced-out region [7]. Later, in 1993, Srednicki separately found similar results, showing that the entanglement entropy of a quantum field obeys an area law [8]. Bombelli and Srednicki argue that the entanglement of a massless free scalar field might be related to the entropy of a black hole, and that entanglement entropy might therefore help us to better understand the area law of black-hole entropy.

However, to this day it is still not understood why the entropy of a black hole obeys an area law. This is because the Hawking temperature of astronomical black holes is extremely small, making it impossible to study the entropy of black holes experimentally. To avoid these difficulties, Unruh was the first to propose an analogue of a black hole, in 1981. The system he proposed was a sonic black hole, in which phonons are not able to escape from a flowing fluid, thus mimicking the event horizon of a black hole [9]. Though this setup was still purely theoretical, with this model Unruh marked the start of a new field of research: analogue black holes. The idea of this field is to better understand black holes by studying a system which has the same characteristics as an astronomical black hole, while still being measurable experimentally.
The first realisation of an analogue black hole in the lab was in 2009, and through the years many different analogue black-hole systems have been proposed, consisting of superfluid helium [10], light in a moving dielectric medium [11], Bose–Einstein condensates [12], a fluid flowing in a shallow basin [13], electromagnetic waves propagating in waveguides [14], fermions in Weyl semimetals [15], magnons [16], and more. In several of these models, an analogue of Hawking radiation has been measured, which is a strong indication that the model is indeed a good analogue of an astronomical black hole and that it might provide us with new insights. Studying the entanglement entropy of analogue black holes may help in finally understanding the area law of black-hole entropy. In this thesis, we calculate the entanglement entropy of a simple model for a quantum field in order to find an area-law behaviour. This work is motivated by a specific analogue black-hole system, which we shall now introduce.

1.1 Motivation

This thesis is inspired by the black-hole analogue proposed by Liao, van der Wurff, van Oosten, and Stoof, which is a two-dimensional Schwarzschild black-hole analogue in a photon Bose–Einstein condensate [17]. By introducing a hole in the cavity through which light can escape the condensate, a radial flow is induced within the system. Closer to the hole in the cavity, the condensate flows at a faster rate. The photons in the light condensate create pairs of phonons, which are quanta of density fluctuations. As opposed to the flow rate of the condensate, the speed of these phonons remains roughly constant throughout the condensate. This results in an effective event horizon in the system, behind which phonons always end up in the hole in the cavity. If a pair of phonons is created near the horizon, one phonon will go past the horizon and fall into the hole in the cavity, while the other phonon escapes. The latter can be measured, and this is what constitutes the Hawking radiation in this black-hole analogue. In their paper, the authors discuss the results of measuring the Hawking radiation of the system. The presence of Hawking radiation in this black-hole analogue is an indication that this system might be useful to help us better understand the behaviour of astronomical black holes.

The ground state of the sonic black hole proposed by Liao et al. has a thermal spectrum, similar to the Hawking radiation of astronomical black holes. It is thought that this thermal spectrum can be interpreted as entanglement entropy, though there is no conclusive evidence for this yet. The exact quantum state of an astronomical black hole is not known, making it impossible to calculate its entropy. On the contrary, the ground state of the analogue black hole is known, making it possible to calculate the exact density matrix of the ground state, from which one can calculate the entanglement entropy of the system.
The ultimate goal is to calculate the entanglement entropy of this analogue system. We expect to find that it obeys an area law, and we hope that we can learn why. In this thesis we lay the groundwork for this by studying a simpler model consisting of a chain of harmonic oscillators. This is an important model, because in the continuum limit it describes a scalar field, which is the quantum field of interest in the black-hole analogue. Specifically, to describe the Schwarzschild black-hole analogue proposed by Liao et al., one requires a massless scalar field in a curved spacetime. The curvature is necessary to properly describe the metric of the black hole, which can be accomplished with a position-dependent spring constant of the harmonic oscillators. In this thesis we restrict ourselves to a constant spring coupling, resulting in a scalar field in a flat background. Therefore, in order to be able to describe the Schwarzschild black-hole analogue, the discussion of this thesis needs to be generalised to a curved background in future work.

The model of coupled oscillators can describe either a massless or a massive scalar field in the continuum limit, depending on the choice of the dispersion relation. The distinction between these two cases plays an important role in this thesis, and therefore we already shed some light on it here. To model a massless scalar field, one uses the following dispersion relation:

\[
\omega_k = \sqrt{\frac{4\gamma}{m} \sin^2\left(\frac{ka}{2}\right)}, \qquad (1.2)
\]

where γ is the spring constant, m and k are the mass and momentum of the oscillators, and a is the lattice spacing. This dispersion is zero when the momentum is equal to zero. The dispersion relation used to describe a massive scalar field is obtained by introducing a mass parameter to the dispersion relation, and is given by

\[
\omega_k = \sqrt{\frac{4\gamma}{m} \sin^2\left(\frac{ka}{2}\right) + \omega_0^2}, \qquad (1.3)
\]

where \(\omega_0^2 = M^2 c^4 / \hbar^2\), with \(c^2 \equiv \gamma a^2 / m\), and M is the mass parameter of the quantum field. For simplicity, we will refer to \(\omega_0^2\) as the mass parameter in this thesis, since they are directly related. The important difference between the two dispersion relations is that the latter does not vanish for zero momentum, which is a direct result of the inclusion of the mass

Figure 1.1 – A schematic overview of the dispersion relation for both the massless (a) and massive (b) case. The addition of a constant term \(\omega_0^2\) in the massive case introduces a gap in the dispersion for zero momentum.

parameter. Note that by taking the massless limit \(M \to 0\), and thus \(\omega_0^2 \to 0\), in Eq. (1.3), one regains the massless dispersion of Eq. (1.2). A sketch of both dispersion relations is given in Figure 1.1, where the gap created at \(k = 0\) due to the inclusion of the \(\omega_0^2\) term is clearly visible in Figure 1.1b.

Ultimately, the goal is to model the Schwarzschild black hole proposed by Liao et al. To do this we require a massless quantum field, since the model consists of a photon Bose–Einstein condensate. Therefore, it seems straightforward to use the massless dispersion relation of Eq. (1.2), which has a linear dispersion for small k. In Chapter 3, the model of harmonic oscillators with this dispersion is discussed. However, in the end we run into a problem when trying to calculate the entanglement entropy, due to the zero-energy mode at \(k = 0\). This difficulty is avoided by considering the same model with the massive dispersion relation of Eq. (1.3), which is discussed in Chapter 4. The calculation for this model is nearly identical to that of the massless model. The entanglement entropy is calculated numerically and compared to analytical expressions. Taking the massless limit \(\omega_0^2 \to 0\), we can nevertheless study the behaviour of the entanglement entropy of a massless scalar field. Finally, in Chapter 5, the results are discussed and an outlook for future research is given.
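The difference between the two dispersion relations is easy to see numerically. The following sketch evaluates Eqs. (1.2) and (1.3) and shows the gap at k = 0; the parameter values γ = m = a = 1 are illustrative choices of ours, not taken from the thesis:

```python
import numpy as np

# Illustrative parameter values (not from the thesis).
gamma, m, a = 1.0, 1.0, 1.0

def omega_massless(k):
    # Eq. (1.2): omega_k = sqrt((4*gamma/m) * sin^2(k*a/2)); vanishes at k = 0.
    return np.sqrt(4 * gamma / m * np.sin(k * a / 2) ** 2)

def omega_massive(k, omega0_sq):
    # Eq. (1.3): the mass parameter omega0^2 opens a gap at k = 0.
    return np.sqrt(4 * gamma / m * np.sin(k * a / 2) ** 2 + omega0_sq)

k = np.linspace(-np.pi / a, np.pi / a, 201)   # first Brillouin zone
print(omega_massless(0.0))                    # 0.0: the zero mode
print(omega_massive(0.0, omega0_sq=0.25))     # 0.5: gap of size omega_0
print(omega_massless(k).max())                # ~2.0 at the zone edge
```

Setting `omega0_sq` to zero in `omega_massive` reproduces `omega_massless`, mirroring the massless limit discussed above.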

Chapter 2

Physical background

There is a handful of physical topics that play a big role throughout this thesis. The purpose of this chapter is to provide a quick overview of these topics and to explain them using a few examples. Readers with a background in theoretical physics can skip this chapter without losing the bigger picture of the thesis.

2.1 Entropy

Thermodynamical systems are characterised by properties such as temperature, pressure, volume, and also their entropy. Entropy is an extensive quantity, meaning that it scales with the size of the system. To get an idea of what entropy means, we first introduce macrostates and microstates from statistical mechanics. The macrostate of a system is given by its macroscale properties, such as temperature, pressure, volume, and entropy. On the other hand, microstates are all the microscopic configurations within the system that correspond to a specific macrostate. Therefore, several different microstates may correspond to the same macrostate.

To make this more clear, consider throwing two dice. In this case, a macrostate corresponds to the outcome of the throw, and a microstate is the specific configuration of the dice. Since there is only one way to throw twelve with two dice, the macrostate twelve has only one microstate, namely two sixes. However, there are six different ways to throw seven, and therefore the macrostate seven has six microstates. The entropy of a thermodynamical system expresses the number of microstates corresponding to a macrostate in the following way:

\[
S = k_B \ln \Omega, \qquad (2.1)
\]

where Ω is the number of different microstates, and \(k_B\) is the Boltzmann constant. This expression assumes that all microstates are equally likely to occur, which is the case for isolated systems in equilibrium. In systems such as the canonical ensemble, in which the energies of different microstates are not necessarily equal, the microstates do not occur with the same probability. The entropy can then be expressed in terms of the probabilities

pj with which each microstate occurs as follows

\[
S_{\rm Gibbs} = -k_B \sum_{j=1}^{N} p_j \ln p_j, \qquad (2.2)
\]

which is called the Gibbs entropy, named after Josiah Willard Gibbs, who defined it in 1878 following earlier work by Ludwig Boltzmann. This is the most general expression for the entropy in thermodynamic systems.

Entropy can also be understood from an informational point of view. As mentioned, the entropy is directly related to the number of microstates corresponding to a macrostate. If a macrostate has many different microstates, there is a lot of uncertainty about the microscopic configuration of the system, and therefore there is not enough information to properly describe the system. Such a state, with many different possible microstates, has a high entropy. On the other hand, if it has very few microstates, it is less difficult to predict the system configuration, and it will have a lower entropy. In this sense, entropy is said to be a measure of the (lack of) information in a system.

In this thesis, we are interested in the entropy of a quantum mechanical model, and therefore we need an extension of the classical Gibbs entropy to quantum mechanics. This extension exists and is called the von Neumann entropy. It is related to the density matrix of the quantum mechanical system of interest, and therefore we begin by introducing the notion of density matrices.
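The dice example and the entropy formulas of Eqs. (2.1) and (2.2) can be checked directly; a minimal sketch, working in units where \(k_B = 1\):

```python
from collections import Counter
from itertools import product
from math import log

# Count the microstates (dice configurations) per macrostate (the sum).
microstates = Counter(d1 + d2 for d1, d2 in product(range(1, 7), repeat=2))
print(microstates[12])  # 1: only (6, 6)
print(microstates[7])   # 6 microstates

# Boltzmann entropy S = ln(Omega), Eq. (2.1), for equally likely microstates.
print(log(microstates[7]))  # ln 6, approximately 1.79

# Gibbs entropy, Eq. (2.2); it reduces to ln(Omega) when all
# Omega microstates occur with equal probability 1/Omega.
def gibbs_entropy(p):
    return -sum(pj * log(pj) for pj in p if pj > 0)

print(gibbs_entropy([1 / 6] * 6))  # again ln 6
```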

2.2 Density matrix

The possible states of a quantum mechanical system are described by wave functions, which contain information about the probabilities of the outcomes of measurements. These states are represented by state vectors, which are elements of a complex Hilbert space \(\mathcal{H}\) and are used to calculate the probabilities of finding a system in a given state. A quantum state can be either pure or mixed. A pure quantum state is a state that is represented by a single state vector \(|\psi_j\rangle\), and mixed states are all quantum states that are not pure.

Instead of describing quantum states using state vectors, it is also possible to express the statistics of a quantum state in matrix form. This is exactly what the density matrix does, and it contains the same information as the state vector \(|\psi_j\rangle\). The density matrix description was formulated simultaneously by John von Neumann and Lev Landau in 1927. The general expression for the density matrix is

\[
\rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|, \qquad (2.3)
\]

where \(p_j\) is the probability for the system to be in the pure state \(|\psi_j\rangle\). The density matrix describes a pure state if \(\mathrm{Tr}[\rho^2] = 1\), and a mixed state if \(\mathrm{Tr}[\rho^2] < 1\). The classical Gibbs entropy can be extended to a quantum mechanical setting by using the density matrix. This quantum version is called the von Neumann entropy, and it is defined as

\[
S_{\rm vN} = -\mathrm{Tr}[\rho \ln \rho]. \qquad (2.4)
\]
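The purity criterion mentioned above, \(\mathrm{Tr}[\rho^2] = 1\) for pure states and \(\mathrm{Tr}[\rho^2] < 1\) for mixed ones, can be illustrated with two small density matrices; the example states below are ours, chosen for illustration:

```python
import numpy as np

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # |0><0|, a pure state
mixed = np.eye(2) / 2                      # equal mixture of |0> and |1>

# Tr[rho^2] equals 1 for the pure state and is below 1 for the mixed one.
print(np.trace(pure @ pure))    # 1.0
print(np.trace(mixed @ mixed))  # 0.5
```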


Taking the logarithm of a matrix can be a difficult task, and therefore it is convenient to decompose the density matrix in terms of its eigenvectors \(|j\rangle\),

\[
\rho = \sum_j p_j |j\rangle\langle j|, \qquad (2.5)
\]

such that the von Neumann entropy simplifies to

\[
S_{\rm vN} = -\sum_j p_j \ln p_j. \qquad (2.6)
\]

The von Neumann entropy is zero if and only if ρ describes a pure state. It takes a maximal value of \(\ln(N)\) when ρ describes a maximally mixed state, where N is the dimension of the Hilbert space. This makes sense from an information theory point of view: if the system of interest is described by a pure state, the state the system is in is uniquely determined. Consequently, there is no lack of information, and thus the entropy is zero. On the other hand, if the system is in a mixed state, there is uncertainty about which state the system is in. In this case, there is a lack of information, resulting in a non-zero entropy.
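Eq. (2.6) suggests a direct numerical recipe for the von Neumann entropy: diagonalise ρ and sum \(-p_j \ln p_j\) over its eigenvalues. A minimal sketch, with illustrative 2×2 states of our own choosing:

```python
import numpy as np

def von_neumann_entropy(rho):
    # Eq. (2.6): S_vN = -sum_j p_j ln p_j over the eigenvalues p_j of rho.
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]  # drop numerically-zero eigenvalues (0 ln 0 -> 0)
    return -np.sum(p * np.log(p))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure state
mixed = np.eye(2) / 2                      # maximally mixed state

print(von_neumann_entropy(pure))   # 0 for the pure state
print(von_neumann_entropy(mixed))  # ln 2, the maximal value for N = 2
```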

2.3 Entanglement entropy

The density matrix helps us to describe the quantum statistics of our system, and it allows for a quantum version of the classical entropy. In this thesis, we are particularly interested in the entanglement entropy, which is derived directly from the von Neumann entropy. Entanglement entropy is a measure of the quantum entanglement in a quantum mechanical system.

First, let us briefly discuss the concept of entanglement. Particles are said to be entangled if their physical properties are perfectly correlated with each other. For example, measuring the spin or momentum of one particle from an entangled pair directly determines the outcome of the same measurement on the second particle. In quantum mechanical terms, two particles are entangled when their wave function cannot be written as a product of two separate wave functions corresponding to the two different particles. This means that you cannot describe the particles of an entangled pair independently. To visualise this, consider the following Bell state, which is the simplest example of an entangled quantum state:

\[
|\Psi\rangle_{AB} = \frac{1}{\sqrt{2}} \left( |0\rangle_A \otimes |1\rangle_B - |1\rangle_A \otimes |0\rangle_B \right). \qquad (2.7)
\]

The two systems A and B have their own separate Hilbert spaces \(\mathcal{H}_A\) and \(\mathcal{H}_B\). System A is in state \(|\psi\rangle_A\), with basis vectors \(|0\rangle_A\) and \(|1\rangle_A\), and system B is in state \(|\psi\rangle_B\), with basis vectors \(|0\rangle_B\) and \(|1\rangle_B\). The composite state \(|\Psi\rangle_{AB}\) is a pure state in the Hilbert space \(\mathcal{H}_A \otimes \mathcal{H}_B\). It is impossible, however, to rewrite this quantum state as a product of pure states of the separate Hilbert spaces \(\mathcal{H}_A\) and \(\mathcal{H}_B\), and therefore the two systems A and B are said to be entangled. Since the total state \(|\Psi\rangle_{AB}\) is a pure state in the Hilbert space \(\mathcal{H}_A \otimes \mathcal{H}_B\), its corresponding von Neumann entropy is zero. On the other hand, the two subsystems A and B are in mixed states and therefore have a non-zero entropy.

The entanglement of the two particles becomes more explicit when we consider a measurement of the Bell state of Eq. (2.7). We measure one of the two systems, say system A. The two possible outcomes of this measurement are either 0 or 1, both of which occur with probability \(p = 1/2\). Depending on the outcome, the quantum state collapses, determining the outcome of the measurement in system B. For example, if system A is measured to be 0, the state collapses to

\[
|\Psi\rangle_{AB} = |0\rangle_A \otimes |1\rangle_B, \qquad (2.8)
\]

and therefore the measurement of system B will give 1. Thus the measurement of system A directly determines the outcome of the measurement of system B, showing that the two systems are entangled. We have seen that in entangled systems the subsystems are described by mixed states, and thus have non-zero contributions to the entropy. The entropy associated with the subsystems is called the entanglement entropy, and it is defined as

\[
S_{\rm EE} = -\mathrm{Tr}[\rho_A \ln \rho_A], \qquad (2.9)
\]

where \(\rho_A = \mathrm{Tr}_B[\rho]\), or, equivalently,

\[
S_{\rm EE} = -\mathrm{Tr}[\rho_B \ln \rho_B], \qquad (2.10)
\]

where \(\rho_B = \mathrm{Tr}_A[\rho]\), and A and B are subsystems of the composite system A+B. Here \(\rho_A\) and \(\rho_B\) are called reduced density matrices. The reduced density matrix of a subsystem is obtained by taking the partial trace of the density matrix of the composite system over the other subsystem. So the entanglement entropy is the von Neumann entropy of a subsystem, and a reduced density matrix with non-zero entropy is therefore an indication that there is entanglement within the system.

It might not be immediately evident that the two expressions for the entanglement entropy given above should be equal, but this can be clarified as follows. Consider the bipartition of some system into two subsystems A and B. One can understand intuitively that if subsystem A is entangled with subsystem B, then subsystem B must be equally entangled with subsystem A. Because of this symmetry, the resulting entanglement entropy is the same regardless of which subsystem is traced out.

In the following two chapters, we study the entanglement entropy of a chain of coupled harmonic oscillators. We create a bipartition of the system into two subsystems and calculate the entanglement entropy associated with the resulting states. The entanglement entropy is studied as a function of the number of coordinates of the subsystem, in order to find a one-dimensional equivalent of an area law in our system of oscillators.
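For the Bell state of Eq. (2.7), the partial trace and the entanglement entropy can be computed explicitly. The sketch below uses a reshape-and-sum implementation of \(\mathrm{Tr}_B\), which is one standard numerical approach, not necessarily the method used later in the thesis:

```python
import numpy as np

# Bell state (|01> - |10>)/sqrt(2) in the product basis {|00>, |01>, |10>, |11>}.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # density matrix of the pure state

# Partial trace over subsystem B: rho_A = Tr_B[rho].
rho4 = rho.reshape(2, 2, 2, 2)           # indices (a, b, a', b')
rho_A = np.einsum('abcb->ac', rho4)      # sum over the B indices

# Entanglement entropy from the eigenvalues of the reduced density matrix.
p = np.linalg.eigvalsh(rho_A)
p = p[p > 1e-12]
S_EE = -np.sum(p * np.log(p))

print(np.allclose(rho_A, np.eye(2) / 2))  # True: rho_A is maximally mixed
print(S_EE)                               # ln 2, approximately 0.693
```

Tracing out subsystem A instead gives the same value, illustrating the equality of Eqs. (2.9) and (2.10).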

Chapter 3

Massless scalar field

We start by studying the massless case, for which the dispersion relation is given by

\[
\omega_k = \sqrt{\frac{4\gamma}{m} \sin^2\left(\frac{ka}{2}\right)}. \qquad (3.1)
\]

Note that the massless case does not imply that the harmonic oscillators have zero mass, but rather that the system of coupled oscillators describes a massless scalar field in the continuum limit. This is due to the fact that the dispersion relation is zero for \(k = 0\). First, we discuss a simplified model consisting of three coupled oscillators, after which we generalise the calculations to a system of N oscillators.

3.1 Three coupled harmonic oscillators

3.1.1 Ground state wave function

To familiarize ourselves with the procedure of computing the entanglement entropy, we first discuss the case of a chain of three harmonic oscillators. Each has a spring constant γ, and they are separated by lattice constant a. A schematic overview of the system is given in Figure 3.1. The Hamiltonian describing this system of three equally coupled oscillators is

\[
H = \sum_{j=1}^{3} \frac{p_j^2}{2m} + \frac{1}{2} \sum_{j,j'=1}^{3} x_j K_{j,j'} x_{j'}, \qquad (3.2)
\]

where \(p_j\), \(x_j\), and m are the momentum, position, and mass of the oscillators, respectively. Furthermore, K is the symmetric matrix of spring constants, which for three oscillators is of the following form:

\[
K = \begin{pmatrix} 2\gamma & -\gamma & -\gamma \\ -\gamma & 2\gamma & -\gamma \\ -\gamma & -\gamma & 2\gamma \end{pmatrix}. \qquad (3.3)
\]

Figure 3.1 – An illustrative description of the system of three oscillators. The oscillators are connected to each other in a circle of circumference Na, where N is the number of oscillators. We denote their momentum in the plane with the parallel momentum \(k_\parallel\); this movement is restricted as a result of the spring constant γ. However, the center of mass of the system is free to move transversely along the cylinder, in the direction of L. This motion perpendicular to the plane in which the oscillators live is described by the transverse momentum \(k_\perp\).

It is easy to check that this matrix has eigenvalues \(\lambda_1 = \lambda_2 = 3\gamma\) and \(\lambda_3 = 0\), with corresponding normalised eigenvectors

\[
v_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \qquad
v_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}, \qquad
v_3 = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}. \qquad (3.4)
\]

The Hamiltonian of Eq. (3.2) can also be diagonalised using a Fourier transformation, resulting in a system of three uncoupled oscillators. For general N the Fourier transformation is given by

\[
x_k = \sum_{j=1}^{N} x_j\, e^{ikja}. \qquad (3.5)
\]

After this transformation, the spring-constant matrix K is diagonal, and we define the matrix Ω as the square root of K divided by the mass of an oscillator,

\[
\Omega \equiv \left(\frac{K}{m}\right)^{1/2}, \qquad (3.6)
\]

such that it has the dispersion relation on its diagonal:

\[
\Omega = \begin{pmatrix} \omega_{k_1} & 0 & 0 \\ 0 & \omega_{k_2} & 0 \\ 0 & 0 & \omega_{k_3} \end{pmatrix}. \qquad (3.7)
\]

In the massless case, the dispersion relation is given by

\[
\omega_k = \sqrt{\frac{4\gamma}{m} \sin^2\left(\frac{ka}{2}\right)}. \qquad (3.8)
\]

Note that the massless case does not mean that the harmonic oscillators have zero mass, but rather that the system of coupled oscillators describes a massless scalar field in the continuum limit. This is due to the fact that the dispersion relation of Eq. (3.8) is zero for \(k = 0\).

It is important to clarify that we distinguish between two different momenta in the system of Figure 3.1. The transverse momentum along the direction of L, perpendicular to the plane in which the oscillators lie, is denoted by \(k_\perp\). This momentum is quantised as \(k_\perp = 2\pi n/L\), and takes values \(k_\perp \in (-\infty, \infty)\) when we take the continuum limit \(L \to \infty\), as we will discuss later. The other momentum in our system is the longitudinal momentum in the plane of the oscillators, in the direction of Na. This momentum is denoted by \(k_\parallel\), and by imposing periodic boundary conditions we find its quantisation.

For \(a = 1\), the allowed momenta are \(k_\parallel = 2\pi/3\), \(k_\parallel = -2\pi/3\), and \(k_\parallel = 0\), such that Ω has eigenvalues \(\omega_1 = \omega_2 = \sqrt{3\gamma/m}\) and \(\omega_3 = 0\), as expected, since \(\omega_i^2 = \lambda_i/m\). Since the system is uncoupled in momentum space, we can write the full wave function of the ground state as the product of the wave functions of the three independent oscillators. The ground state wave function of a single harmonic oscillator with natural frequency \(\omega = \sqrt{\gamma/m}\) is given by

\[
\psi_0(x_k) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} \exp\left(-\frac{m\omega x_k^2}{2\hbar}\right), \qquad (3.9)
\]

where the prefactor arises from the normalisation, and \(x_k\) is the momentum-space coordinate introduced in Eq. (3.5). We can write the wave function of our ground state as a product of three such wave functions, one for each eigenfrequency. However, we need to be careful, because the Ω matrix has a zero eigenvalue. For this particular case the normalisation constant is different, and therefore we need to treat it separately.
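The eigenvalues of the K matrix of Eq. (3.3) and the identity \(\omega_i^2 = \lambda_i/m\) at the allowed momenta can be verified numerically; a quick sketch with illustrative values γ = m = a = 1:

```python
import numpy as np

gamma, m, a = 1.0, 1.0, 1.0  # illustrative values
K = gamma * np.array([[ 2, -1, -1],
                      [-1,  2, -1],
                      [-1, -1,  2]])

lam, v = np.linalg.eigh(K)   # ascending order: 0, 3*gamma, 3*gamma
print(np.round(lam, 8))      # approximately [0, 3, 3]

# The zero mode is the uniform (center-of-mass) vector (1, 1, 1)/sqrt(3).
print(np.allclose(np.abs(v[:, 0]), 1 / np.sqrt(3)))  # True

# Dispersion of Eq. (3.8) at the allowed momenta k = 0, +-2*pi/3.
k_allowed = np.array([0.0, 2 * np.pi / 3, -2 * np.pi / 3])
omega = np.sqrt(4 * gamma / m * np.sin(k_allowed * a / 2) ** 2)
print(np.allclose(np.sort(omega ** 2), np.sort(lam) / m))  # True: omega_i^2 = lambda_i/m
```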

Physically, the k∥ = 0 mode can be understood as a motion of the center of mass of the system. Normalising gives the following wave function
\[
\psi_0(x_{k_3}) = \left(\frac{1}{\sqrt{3}L}\right)^{1/2} \exp\left[-\frac{m\omega_3 x_{k_3}^2}{2\hbar}\right] = \left(\frac{1}{\sqrt{3}L}\right)^{1/2}, \qquad (3.10)
\]
with ω3 = 0. The factor √3 comes from the fact that the center of mass coordinate is restricted to the range [0, √3 L]. This constant wave function can be interpreted as a free center of mass motion of the system as a whole. The total wave function of the system is then given by

\[
\psi_0(x_k) = \left(\frac{1}{\sqrt{3}L}\right)^{1/2} \left(\frac{m}{\pi\hbar}\right)^{1/2} \left(\omega_1\omega_2\right)^{1/4} \exp\left[-\frac{m}{2\hbar}\left(\omega_1 x_{k_1}^2 + \omega_2 x_{k_2}^2 + \omega_3 x_{k_3}^2\right)\right], \qquad (3.11)
\]
where ω3 = 0. Note that if there were no zero eigenvalue, the prefactor would contain the product of all eigenfrequencies ω_i, which is equal to the determinant of Ω. In the original position coordinates the wave function is

\[
\psi_0(x) = \left(\frac{1}{L}\right)^{1/2}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2} \exp\left[-\frac{m}{2\hbar}\, x \cdot \Omega \cdot x\right], \qquad (3.12)
\]
where we have written ω1 = ω2 = ω√3, with ω = √(γ/m) the natural frequency of a single harmonic oscillator. We will continue to use this notation throughout this chapter to lighten the notation. Next we rewrite this expression in the basis of its eigenvectors, which are given in Eq. (3.4). However, the eigenvectors corresponding to the degenerate eigenvalues, v1 and v2, are not orthogonal to one another. It is preferable to work with an orthonormal basis, and therefore we complexify the eigenvectors by writing them as e^{ik∥j} for the allowed values of the momentum, with j = 1, 2, 3. Normalising the newly obtained vectors results in the following orthonormal eigenvector basis
\[
v_1 = \frac{1}{\sqrt{3}}\begin{pmatrix} -\frac{1}{2} + \frac{i}{2}\sqrt{3} \\ -\frac{1}{2} - \frac{i}{2}\sqrt{3} \\ 1 \end{pmatrix}, \qquad
v_2 = \frac{1}{\sqrt{3}}\begin{pmatrix} -\frac{1}{2} - \frac{i}{2}\sqrt{3} \\ -\frac{1}{2} + \frac{i}{2}\sqrt{3} \\ 1 \end{pmatrix}, \qquad
v_3 = \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}. \qquad (3.13)
\]
Note that now v_1^* \cdot v_2 = v_2^* \cdot v_1 = 0, as desired. Next we make the following coordinate transformation to this orthonormal basis

\[
x \to v_1 x_{k_1} + v_2 x_{k_2} + v_3 x_{k_3}, \qquad (3.14)
\]
where x_{k_1} and x_{k_2} are two complex coordinates, and x_{k_3} is the coordinate of the center of mass of the three oscillators, which is real. The term in the exponent of the wave function in Eq. (3.12) transforms as follows

\[
\begin{aligned}
x \cdot \Omega \cdot x &\to \left(v_1 x_{k_1} + v_2 x_{k_2} + v_3 x_{k_3}\right) \cdot \Omega \cdot \left(v_1 x_{k_1} + v_2 x_{k_2} + v_3 x_{k_3}\right) \\
&= \left(v_1 x_{k_1} + v_2 x_{k_2} + v_3 x_{k_3}\right) \cdot \left(\omega_1 v_1 x_{k_1} + \omega_2 v_2 x_{k_2} + \omega_3 v_3 x_{k_3}\right) \\
&= \omega_1 |x_{k_1}|^2 + \omega_2 |x_{k_2}|^2 + \omega_3 |x_{k_3}|^2, \qquad (3.15)
\end{aligned}
\]
where we used for the real inner products that v_1 \cdot v_1 = v_2 \cdot v_2 = 0, and that v_1 \cdot v_2 = v_2 \cdot v_1 = 1, since v_1 and v_2 are complex vectors. Furthermore, we imposed that x_{k_1}^* = x_{k_2} and x_{k_2}^* = x_{k_1}. The wave function of Eq. (3.12) then takes the following diagonal form
\[
\psi_0(x_k) = \left(\frac{1}{L}\right)^{1/2}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2} \exp\left[-\frac{m}{2\hbar}\left(\omega_1 |x_{k_1}|^2 + \omega_2 |x_{k_2}|^2 + \omega_3 |x_{k_3}|^2\right)\right]. \qquad (3.16)
\]
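The inner products used above are easy to verify. The sketch below (an illustration, not part of the thesis) constructs the complexified eigenvectors v_n[j] = e^{ik∥j}/√3 for k∥ = 2π/3, −2π/3, 0 and checks both the Hermitian orthonormality and the bilinear products v_1 · v_1 = 0 and v_1 · v_2 = 1 used in Eq. (3.15):

```python
import numpy as np

# Complexified eigenvectors v_n[j] = exp(i*k*j)/sqrt(3), with components j = 1, 2, 3,
# for the allowed momenta k = 2*pi/3, -2*pi/3 and 0.
j = np.arange(1, 4)
v1 = np.exp(1j * (2 * np.pi / 3) * j) / np.sqrt(3)
v2 = np.exp(-1j * (2 * np.pi / 3) * j) / np.sqrt(3)
v3 = np.exp(1j * 0.0 * j) / np.sqrt(3)

herm = lambda a, b: np.vdot(a, b)   # Hermitian inner product a* . b
bilin = lambda a, b: np.dot(a, b)   # real (bilinear) product used in Eq. (3.15)
```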

The coordinate transformation we applied, written out for each component of x, is

\[
\begin{cases}
x_1 \to \frac{1}{\sqrt{3}}\left(-\frac{1}{2} + \frac{i}{2}\sqrt{3}\right) x_{k_1} + \frac{1}{\sqrt{3}}\left(-\frac{1}{2} - \frac{i}{2}\sqrt{3}\right) x_{k_2} + \frac{1}{\sqrt{3}}\, x_{k_3} \\[4pt]
x_2 \to \frac{1}{\sqrt{3}}\left(-\frac{1}{2} - \frac{i}{2}\sqrt{3}\right) x_{k_1} + \frac{1}{\sqrt{3}}\left(-\frac{1}{2} + \frac{i}{2}\sqrt{3}\right) x_{k_2} + \frac{1}{\sqrt{3}}\, x_{k_3} \\[4pt]
x_3 \to \frac{1}{\sqrt{3}}\left(x_{k_1} + x_{k_2} + x_{k_3}\right)
\end{cases} \qquad (3.17)
\]
and we can invert this transformation to obtain

\[
\begin{cases}
x_{k_1} \to -\frac{1}{2\sqrt{3}}\left(x_1 + x_2 - 2x_3\right) + \frac{i}{2}\left(x_2 - x_1\right) \\[4pt]
x_{k_2} \to -\frac{1}{2\sqrt{3}}\left(x_1 + x_2 - 2x_3\right) - \frac{i}{2}\left(x_2 - x_1\right) \\[4pt]
x_{k_3} \to \frac{1}{\sqrt{3}}\left(x_1 + x_2 + x_3\right)
\end{cases} \qquad (3.18)
\]
From this we can clearly see that x_{k_1}^* = x_{k_2} and x_{k_2}^* = x_{k_1}, as expected, and that x_{k_3} is indeed the center of mass coordinate of the system. Also, since the positional coordinates x_i run from 0 to L, it is clear that the center of mass coordinate x_{k_3} runs from 0 to √3 L, which explains the normalisation factor we obtained in Eq. (3.10). For N harmonic oscillators, this range of the center of mass coordinate, and with it the normalisation, generalises to √N L, as we will see in Section 3.2. Using the coordinate transformation of Eq. (3.18) to transform back to the real space coordinates, the wave function of Eq. (3.16) becomes

\[
\psi_0(x) = \left(\frac{1}{L}\right)^{1/2}\left(\frac{m\omega}{\pi\hbar}\right)^{1/2} \exp\left[-\frac{m\omega}{\sqrt{3}\,\hbar}\left(x_1^2 + x_2^2 + x_3^2 - x_1 x_2 - x_1 x_3 - x_2 x_3\right)\right]. \qquad (3.19)
\]
It is easy to check that this wave function is properly normalised, which can be done by calculating the following integral

\[
\int dx_1\, dx_2\, dx_3\, |\psi_0(x)|^2. \qquad (3.20)
\]

After completing the square in the exponent, the integrals over the first two coordinates come down to a simple Gaussian integral. The remaining integral over the last coordinate is then trivial and gives a factor L, cancelling the normalisation. One then finds that the expression of Eq. (3.20) is equal to one, and therefore the wave function is properly normalised. This is crucial, since we want our density matrix to be normalised as well, so that it resembles a proper probability distribution.

Another way we could have obtained an orthonormal set of basis vectors from our eigenvectors is by using the Gram-Schmidt procedure, which is a method to orthonormalise a set of vectors. Using this method yields a similar, but slightly different looking expression for the wave function. Taking ω1 = ω2, the wave functions are equal to each other, as expected, and therefore this method is equally correct. However, we will use the complexification procedure, since it is more straightforward to extend to the case of N harmonic oscillators. This is due to the fact that the complex eigenvectors are easily expressed in terms of N as e^{ik∥j} = e^{2πinj/Na}, and they all have the same normalisation factor of 1/√N.

3.1.2 Density matrix of the ground state

As discussed in Section 2.2, the density matrix of a system with a probability p_j to be in state |ψ_j⟩ is given by
\[
\rho = \sum_j p_j\, |\psi_j\rangle\langle\psi_j|. \qquad (3.21)
\]
If we assume our system to be in the ground state, this reduces to

\[
\begin{aligned}
\rho_0(x, x') &= \psi_0(x)\,\psi_0^*(x') \\
&= \left(\frac{1}{L}\right)\left(\frac{m\omega}{\pi\hbar}\right) \exp\left[-\frac{m\omega}{\sqrt{3}\,\hbar}\left(x_1^2 + x_1'^2 + x_2^2 + x_2'^2 + x_3^2 + x_3'^2\right)\right] \\
&\quad\times \exp\left[-\frac{m\omega}{\sqrt{3}\,\hbar}\left(-x_1 x_2 - x_1' x_2' - x_1 x_3 - x_1' x_3' - x_2 x_3 - x_2' x_3'\right)\right], \qquad (3.22)
\end{aligned}
\]
where we substituted the ground state wave function of Eq. (3.19). As explained in Section 2.3, to calculate the entanglement entropy we require the reduced density matrix of our system. This can be obtained by taking the partial trace over a subsystem within the complete system, which comes down to integrating out some of the position coordinates from our density matrix. By doing this, we divide our system into two subsystems: one of the size of the number of coordinates we integrate out, and one of the size of the number of coordinates that is left over after integrating. First we integrate out two of the three coordinates, which corresponds to tracing over a subsystem of size two. Later we will also discuss the case where we only integrate out one of the three coordinates. In both cases we calculate the entanglement entropy from the resulting reduced density matrices. Since in both cases we divide the system of three oscillators into a subsystem of size one and a subsystem of size two, we expect that both entanglement entropies will be the same.

3.1.3 Reduced density matrix with two coordinates integrated out

The reduced density matrix is calculated as follows

\[
\begin{aligned}
\rho_{red}(x_1, x_1') &= \int dx_2\, dx_2'\, dx_3\, dx_3'\, \delta(x_2 - x_2')\,\delta(x_3 - x_3')\, \rho_0(x, x') \\
&= \left(\frac{1}{L}\right)\left(\frac{m\omega}{\pi\hbar}\right) \exp\left[-\frac{m\omega}{\sqrt{3}\,\hbar}\left(x_1^2 + x_1'^2\right)\right] \\
&\quad\times \int dx_2\, dx_3\, \exp\left[-\frac{2m\omega}{\sqrt{3}\,\hbar}\left(x_2^2 + x_3^2\right)\right] \\
&\quad\times \exp\left[\frac{m\omega}{\sqrt{3}\,\hbar}\left(x_2\left(x_1 + x_1'\right) + x_3\left(x_1 + x_1'\right) + 2 x_2 x_3\right)\right]. \qquad (3.23)
\end{aligned}
\]
The integrals are of Gaussian form and can easily be solved by completing the square. Performing the integrals results in the final expression for the reduced density matrix

\[
\rho_{red}(x_1, x_1') = \left(\frac{1}{L}\right) \exp\left[-\frac{m\omega}{2\sqrt{3}\,\hbar}\left(x_1 - x_1'\right)^2\right]. \qquad (3.24)
\]
Note that this density matrix only depends on the difference of the remaining coordinates. To calculate its entanglement entropy, we need to find the eigenvalues of the reduced density matrix. We do this by solving the eigenvalue equation

\[
\int_{-\infty}^{\infty} dx_1'\, \rho_{red}(x_1, x_1')\, f_k(x_1') = p_k f_k(x_1), \qquad (3.25)
\]
where f_k(x) are the eigenfunctions of ρ_red, and p_k are the corresponding eigenvalues. The eigenfunctions can be found by trying different kinds of functions, and they turn out to be complex exponentials. Solving the eigenvalue equation gives

\[
f_k(x) = \exp\left[i k_\perp x\right], \qquad (3.26)
\]
\[
p_k = \left(\frac{1}{L}\right)\left(\frac{2\pi\hbar\sqrt{3}}{m\omega}\right)^{1/2} \exp\left[-\frac{\hbar\sqrt{3}}{2m\omega}\, k_\perp^2\right], \qquad (3.27)
\]
where k_\perp is the perpendicular momentum, which takes values k_\perp = 2\pi n/L.
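The eigenvalue equation can be checked numerically. The sketch below (an illustration, with m = ω = ħ = L = 1 assumed) applies the Gaussian kernel of Eq. (3.24) to a plane wave on a grid and compares the result with the analytic eigenvalue of Eq. (3.27); a generic k = 1 is used rather than a quantised value, purely to exercise the integral.

```python
import numpy as np

# Check Eqs. (3.25)-(3.27): plane waves are eigenfunctions of the Gaussian kernel
# rho_red(x, x') = (1/L) exp(-a (x - x')^2), with a = m*omega/(2*sqrt(3)*hbar).
# Units m = omega = hbar = L = 1 are arbitrary choices for this illustration.
a = 1.0 / (2.0 * np.sqrt(3.0))
k = 1.0          # generic wavenumber, standing in for k_perp = 2*pi*n/L
x0 = 0.37        # arbitrary evaluation point x

xp = np.linspace(-40.0, 40.0, 200001)   # integration grid for x'
dxp = xp[1] - xp[0]
kernel = np.exp(-a * (x0 - xp) ** 2)    # rho_red(x0, x') with L = 1
numeric = np.sum(kernel * np.exp(1j * k * xp)) * dxp

# Analytic eigenvalue of Eq. (3.27): p_k = sqrt(pi/a) * exp(-k^2/(4*a)) for L = 1.
p_k = np.sqrt(np.pi / a) * np.exp(-k ** 2 / (4.0 * a))
```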


3.1.4 Entanglement entropy of reduced density matrix ρ_red(x_1, x_1')

Now that we have found the eigenvalues of the reduced density matrix, we can calculate the entanglement entropy from

\[
S = -\sum_k p_k \log(p_k). \qquad (3.28)
\]
However, we want to extend this formula to a continuum of values of the momentum k⊥ in the limit L → ∞. To do this we must first rewrite our discrete expression for the eigenvalues p_k to one that holds for a continuum of k⊥ values, which we will call p(k). We know that the reduced density matrix is given by
\[
\rho_{red}(x, x') \equiv \sum_k p_k\, \langle x|k\rangle\langle k|x'\rangle, \qquad (3.29)
\]
and that it is normalised. Therefore, it must hold that

\[
\int dx\, \rho_{red}(x, x) = \sum_k p_k \int dx\, \langle x|k\rangle\langle k|x\rangle = \sum_k p_k \int dx\, \frac{e^{ik(x-x)}}{L} = \sum_k p_k = 1. \qquad (3.30)
\]
This does not come as a surprise, since we want the eigenvalues of the density matrix to describe a normalised probability distribution. The eigenvalues are some function of k, so we can write

\[
\sum_k f(k) = 1, \qquad (3.31)
\]
where f(k) represents the eigenvalues. The transverse momentum is quantised as k_\perp = \frac{2\pi n}{L}, where n is an integer. Therefore, we can also write the sum over k as a sum over integers n

\[
\sum_n f(k(n)). \qquad (3.32)
\]
As we take larger values of L, we sum over more and more integer values n. In the continuum limit, we have

\[
\sum_n f(k(n)) \to \int dn\, f(k(n)). \qquad (3.33)
\]
From the quantisation of k, we can see that dn = \frac{L}{2\pi}\, dk. Rewriting the integral back to an integral over momentum, we obtain
\[
\int dn\, f(k(n)) = \frac{L}{2\pi} \int dk\, f(k). \qquad (3.34)
\]
In short, by taking the continuum limit we have rewritten
\[
\sum_k p_k \to \int dk\, \frac{L}{2\pi}\, p_k \equiv \int dk\, p(k) = 1. \qquad (3.35)
\]
Thus we have found the following expression for the eigenvalues in the continuum limit

\[
p(k) = \frac{L}{2\pi}\, p_k = \frac{1}{2\pi}\left(\frac{2\pi\hbar\sqrt{3}}{m\omega}\right)^{1/2} \exp\left[-\frac{\hbar\sqrt{3}}{2m\omega}\, k_\perp^2\right], \qquad (3.36)
\]
which is independent of L, as it should be. To find the entanglement entropy we integrate over all momenta. Note that p(k) has units of length, since we multiplied it by a factor of L. This is compensated by the fact that we now integrate over the momentum, which has units of inverse length. However, the argument of the logarithm must be dimensionless, and therefore we need to scale p(k) in the logarithm. The integral over the momentum is a simple Gaussian integral, and we find
\[
S = -\int dk\, p(k) \ln\left(\frac{p(k)}{p(0)}\right) = \frac{1}{2}. \qquad (3.37)
\]

3.1.5 Reduced density matrix with one coordinate integrated out

Instead of integrating out two of the three position coordinates, we can also choose to only integrate out one. Following the same procedure, this time we have

\[
\begin{aligned}
\rho_{red}(x_1, x_1', x_2, x_2') &= \int dx_3\, dx_3'\, \delta(x_3 - x_3')\, \rho_0(x, x') \\
&= \frac{1}{L}\left(\frac{m\omega\sqrt{3}}{2\pi\hbar}\right)^{1/2} \exp\left[-\frac{m}{2\hbar}\left(\frac{\omega\sqrt{3}}{2}\right)\left(x_1^2 + x_1'^2 + x_2^2 + x_2'^2\right)\right] \\
&\quad\times \exp\left[-\frac{m}{2\hbar}\left(\frac{\omega\sqrt{3}}{2}\right)\left(-x_1 x_2 - x_2 x_1 - x_1' x_2' - x_2' x_1'\right)\right] \\
&\quad\times \exp\left[-\frac{m}{4\hbar}\left(\frac{\omega}{2\sqrt{3}}\right)\left(\left(x_1 - x_1'\right)\left(x_1 - x_1'\right) + \left(x_2 - x_2'\right)\left(x_2 - x_2'\right)\right)\right] \\
&\quad\times \exp\left[-\frac{m}{4\hbar}\left(\frac{\omega}{2\sqrt{3}}\right)\left(\left(x_1 - x_1'\right)\left(x_2 - x_2'\right) + \left(x_2 - x_2'\right)\left(x_1 - x_1'\right)\right)\right]. \qquad (3.38)
\end{aligned}
\]
We have rewritten the density matrix in the following form, which is discussed in the paper by Bombelli et al. [7],
\[
\rho_{red} \propto \exp\left[-\frac{m}{2\hbar}\, M_{ab}\left(x_a x_b + x_a' x_b'\right) - \frac{m}{4\hbar}\, N_{ab}\left(x_a - x_a'\right)\left(x_b - x_b'\right)\right], \qquad (3.39)
\]
where we sum over repeated indices, and the matrices are given by

\[
M = \frac{\omega\sqrt{3}}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}, \qquad
N = \frac{\omega}{2\sqrt{3}}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. \qquad (3.40)
\]
It is easy to see that these two matrices commute, since M · N = 0, which tells us that we can diagonalise the matrices simultaneously. This is convenient, because it allows us to rewrite the reduced density matrix in a way that makes it easier to find its eigenvalues. The M and N matrices share the same set of orthonormal eigenvectors, given by

\[
e_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} -1 \\ 1 \end{pmatrix}, \qquad
e_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}. \qquad (3.41)
\]
To diagonalise the matrices simultaneously, we make a coordinate transformation to new coordinates q using these eigenvectors. Let x → e · q = e_1 q_1 + e_2 q_2, and thus
\[
x_1 \to \frac{1}{\sqrt{2}}\left(q_2 - q_1\right), \qquad x_2 \to \frac{1}{\sqrt{2}}\left(q_1 + q_2\right). \qquad (3.42)
\]
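These algebraic facts (M · N = 0 and the shared eigenvectors) can be confirmed with a few lines of code; the sketch below is an illustration with ω set to 1:

```python
import numpy as np

omega = 1.0  # natural frequency, set to 1 for illustration
M = (omega * np.sqrt(3) / 2) * np.array([[1.0, -1.0], [-1.0, 1.0]])
N = (omega / (2 * np.sqrt(3))) * np.array([[1.0, 1.0], [1.0, 1.0]])

e1 = np.array([-1.0, 1.0]) / np.sqrt(2)
e2 = np.array([1.0, 1.0]) / np.sqrt(2)

MN_product = M @ N                      # vanishes identically, so [M, N] = 0
m_eigs = (e1 @ M @ e1, e2 @ M @ e2)     # expected (omega*sqrt(3), 0)
n_eigs = (e1 @ N @ e1, e2 @ N @ e2)     # expected (0, omega/sqrt(3))
```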

After this transformation the matrices take the following diagonal forms

\[
M = \omega\sqrt{3}\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad
N = \frac{\omega}{\sqrt{3}}\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \qquad (3.43)
\]
Note that for the M matrix the eigenvector e_2 corresponds to the zero eigenvalue, whereas for the N matrix, it is the eigenvector e_1 that corresponds to its zero eigenvalue. With this in mind, the terms in the exponent of the reduced density matrix transform as

\[
\begin{aligned}
x \cdot M \cdot x &\to \left(e_1 q_1 + e_2 q_2\right) \cdot M \cdot \left(e_1 q_1 + e_2 q_2\right) \\
&= \left(e_1 q_1 + e_2 q_2\right) \cdot \left(\omega\sqrt{3}\, e_1 q_1 + 0 \cdot e_2 q_2\right) \\
&= \omega\sqrt{3}\, q_1^2, \qquad (3.44)
\end{aligned}
\]
and
\[
\begin{aligned}
\left(x - x'\right) \cdot N \cdot \left(x - x'\right) &\to \left(e_1 q_1 + e_2 q_2 - q_1' e_1 - q_2' e_2\right) \cdot N \cdot \left(e_1 q_1 + e_2 q_2 - q_1' e_1 - q_2' e_2\right) \\
&= \left(e_1 q_1 + e_2 q_2 - q_1' e_1 - q_2' e_2\right) \cdot \left(\frac{\omega}{\sqrt{3}}\, e_2 q_2 - \frac{\omega}{\sqrt{3}}\, e_2 q_2'\right) \\
&= \frac{\omega}{\sqrt{3}}\left(q_2 - q_2'\right)^2. \qquad (3.45)
\end{aligned}
\]
The reduced density matrix of Eq. (3.38) then becomes

\[
\begin{aligned}
\rho_{red}(q_1, q_1', q_2, q_2') &= \frac{1}{L}\left(\frac{m\omega\sqrt{3}}{2\pi\hbar}\right)^{1/2} \exp\left[-\frac{m\omega\sqrt{3}}{2\hbar}\left(q_1^2 + q_1'^2\right)\right] \\
&\quad\times \exp\left[-\frac{m\omega}{4\sqrt{3}\,\hbar}\left(q_2 - q_2'\right)^2\right]. \qquad (3.46)
\end{aligned}
\]
Note that the dependence on the position coordinates q_1 and q_2 is now separated into two different exponents, such that ρ_red(q_1, q_1', q_2, q_2') = ρ_red(q_1, q_1') ρ_red(q_2, q_2'). Therefore, to find the eigenvalues of the reduced density matrix, we can solve the eigenvalue equation separately for ρ_red(q_1, q_1') and ρ_red(q_2, q_2'). The latter is of the form we encountered before, depending only on the difference of its two coordinates. Therefore the eigenfunctions of this part will again be complex exponentials. In the other exponent, however, the two coordinates are completely separated from each other. As discussed in the paper by Srednicki, a density matrix of the form

\[
\rho_{red} = \left(\frac{\alpha}{\pi}\right)^{1/2} \exp\left[-\frac{\alpha}{2}\left(x^2 + x'^2\right)\right], \qquad (3.47)
\]
has eigenfunctions

\[
f_n(x) = H_n\left(\alpha^{1/2} x\right) \exp\left[-\frac{\alpha}{2}\, x^2\right], \qquad (3.48)
\]
where the H_n are the Hermite polynomials [8]. From the exponential of q_1 and q_1' in the reduced density matrix of Eq. (3.46) we can conclude that in our case α = mω√3/ħ.

Calculating the entanglement entropy of ρ_red(q_1, q_1')

We want to solve the eigenvalue equation
\[
\int_{-\infty}^{\infty} dq_1'\, \rho_{red}(q_1, q_1')\, f_n(q_1') = p_n f_n(q_1), \qquad (3.49)
\]
where f_n(q_1) = H_n(α^{1/2} q_1) exp[−(α/2) q_1^2], and α = mω√3/ħ. However, the integrand

\[
\rho_{red}(q_1, q_1')\, f_n(q_1') \propto H_n\left(\alpha^{1/2} q_1'\right) \exp\left[-\alpha\, q_1'^2\right] \exp\left[-\frac{\alpha}{2}\, q_1^2\right], \qquad (3.50)
\]
integrates to zero over q_1' for all n ≠ 0: for odd n the integrand is odd in q_1', and for even n ≥ 2 it vanishes because H_n is orthogonal to H_0 = 1 with respect to the Gaussian weight. The only non-zero contribution therefore comes from the n = 0 term. In this case H_0 = 1, and the eigenfunction is simply f_0(q_1') = exp[−(α/2) q_1'^2]. Ignoring the normalisation factor in front of the density matrix for now, solving the eigenvalue equation gives
\[
\begin{aligned}
\int_{-\infty}^{\infty} dq_1'\, \rho_{red}(q_1, q_1')\, f_0(q_1') &= p_0 f_0(q_1), \\
\int_{-\infty}^{\infty} dq_1'\, \exp\left[-\frac{m\omega\sqrt{3}}{2\hbar}\left(q_1^2 + q_1'^2\right)\right] f_0(q_1') &= \left(\frac{\pi\hbar}{m\omega\sqrt{3}}\right)^{1/2} f_0(q_1), \qquad (3.51)
\end{aligned}
\]
where f_0(q_1) = exp[−(mω√3/2ħ) q_1^2]. Since the two coordinates q_1 and q_1' are uncoupled, we expect that this part of the density matrix has zero entanglement entropy. From its eigenfunctions we understand that this part of the density matrix has only one configuration it can be in. Therefore, the corresponding eigenvalue of this state must be p_0 = 1, as it will always be in this state. To ensure that the eigenvalue is indeed equal to one, ρ_red(q_1, q_1') needs an additional normalisation factor, which we can determine from the right-hand side of Eq. (3.51). Then we have
\[
\rho_{red}(q_1, q_1') = \left(\frac{m\omega\sqrt{3}}{\pi\hbar}\right)^{1/2} \exp\left[-\frac{m\omega\sqrt{3}}{2\hbar}\left(q_1^2 + q_1'^2\right)\right], \qquad (3.52)
\]
such that
\[
\int_{-\infty}^{\infty} dq_1'\, \rho_{red}(q_1, q_1')\, f_0(q_1') = 1 \times f_0(q_1), \qquad (3.53)
\]
and p_0 = 1. We can now simply calculate the entanglement entropy using the discrete formula, and we find

\[
S_{q_1} = -\sum_n p_n \log(p_n) = 0, \qquad (3.54)
\]
since p_0 = 1 and p_{n \neq 0} = 0. This result is what one would expect for the entanglement entropy of a reduced density matrix of completely separated coordinates.

Calculating the entanglement entropy of ρ_red(q_2, q_2')

Next we discuss the other part of the reduced density matrix, ρ_red(q_2, q_2'). We have

\[
\rho_{red}(q_2, q_2') = \left(\frac{1}{\sqrt{2}\,L}\right) \exp\left[-\frac{m\omega}{4\sqrt{3}\,\hbar}\left(q_2 - q_2'\right)^2\right], \qquad (3.55)
\]
where the prefactor has been chosen in such a way that ρ_red(q_1, q_1') ρ_red(q_2, q_2') = ρ_red(q_1, q_1', q_2, q_2') from Eq. (3.46). Solving the eigenvalue equation
\[
\int_{-\infty}^{\infty} dq_2'\, \rho_{red}(q_2, q_2')\, f_k(q_2') = p_k f_k(q_2), \qquad (3.56)
\]
where f_k(q_2) = exp[i k_\perp q_2], gives

\[
p_k = \left(\frac{1}{L}\right)\left(\frac{2\pi\hbar\sqrt{3}}{m\omega}\right)^{1/2} \exp\left[-\frac{\hbar\sqrt{3}}{m\omega}\, k_\perp^2\right], \qquad (3.57)
\]
where k_\perp is again the transverse momentum. The eigenvalues are again a continuous function of the momentum. Therefore, to calculate the entanglement entropy, we again have to take the continuum limit as discussed in Section 3.1.4. Since we are working with the coordinate q_2 = (1/√2)(x_1 + x_2), and x_1 and x_2 take values in the range [0, L], the maximal value of q_2 is √2 L. Therefore, to take the continuum limit we must now multiply the eigenvalues p_k by a factor of √2 L/2π. This yields

\[
p(k) = \frac{\sqrt{2}\,L}{2\pi}\, p_k = \frac{\sqrt{2}}{2\pi}\left(\frac{2\pi\hbar\sqrt{3}}{m\omega}\right)^{1/2} \exp\left[-\frac{\hbar\sqrt{3}}{m\omega}\, k_\perp^2\right]. \qquad (3.58)
\]
The entanglement entropy of this part of the density matrix then is

\[
S_{q_2} = -\int dk\, p(k) \ln\left(\frac{p(k)}{p(0)}\right) = \frac{1}{2}. \qquad (3.59)
\]

3.1.6 Entanglement entropy of reduced density matrix ρ_red(q_1, q_1', q_2, q_2')

Now we can go back to the total reduced density matrix,

\[
\begin{aligned}
\rho_{red}(q_1, q_1', q_2, q_2') &= \frac{1}{L}\left(\frac{m\omega\sqrt{3}}{2\pi\hbar}\right)^{1/2} \exp\left[-\frac{m\omega\sqrt{3}}{2\hbar}\left(q_1^2 + q_1'^2\right)\right] \\
&\quad\times \exp\left[-\frac{m\omega}{4\sqrt{3}\,\hbar}\left(q_2 - q_2'\right)^2\right]. \qquad (3.60)
\end{aligned}
\]
The entanglement entropy of the total reduced density matrix is the sum of the entanglement entropies of its two parts
\[
S = S_{q_1} + S_{q_2} = 0 + \frac{1}{2} = \frac{1}{2}. \qquad (3.61)
\]
Thus we find the same entanglement entropy when we integrate out one coordinate of the density matrix as when we integrate out two of its coordinates. This is what we expected to find, since in both cases we divide our system of oscillators into one subsystem of size one and one subsystem of size two. The fact that both calculations yield the same entanglement entropy tells us that each subsystem is equally entangled with the other, as it should be.
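The value S = 1/2 can be confirmed numerically. The sketch below (an illustration, not part of the thesis) normalises a Gaussian p(k) ∝ exp(−a k²), the form of the eigenvalue distributions appearing in Eqs. (3.37) and (3.59), and evaluates −∫dk p(k) ln(p(k)/p(0)) on a grid; the width a is an arbitrary choice, since the result is independent of it.

```python
import numpy as np

# Numerical check of S = -∫ dk p(k) ln(p(k)/p(0)) = 1/2 for a normalised
# Gaussian eigenvalue distribution p(k) = sqrt(a/pi) * exp(-a k^2).
a = 0.73                                     # arbitrary width parameter
k = np.linspace(-25.0, 25.0, 200001)         # grid wide enough for the tails to vanish
dk = k[1] - k[0]
p = np.sqrt(a / np.pi) * np.exp(-a * k ** 2)

norm = np.sum(p) * dk                        # should be 1
S = -np.sum(p * np.log(p / p.max())) * dk    # p(0) is the maximum of p(k)
```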

3.2 Chain of N harmonic oscillators

3.2.1 Ground state wave function

The discussion of N oscillators is analogous to the case of three oscillators. The Hamiltonian describing a chain of N coupled harmonic oscillators is given by

\[
H = \sum_{j=1}^{N} \frac{p_j^2}{2m} + \frac{1}{2} \sum_{j,j'=1}^{N} x_j K_{j,j'}\, x_{j'}, \qquad (3.62)
\]
where K is the symmetric matrix of spring constants γ of the system, given by

\[
K = \begin{pmatrix}
2\gamma & -\gamma & 0 & \cdots & -\gamma \\
-\gamma & 2\gamma & -\gamma & \cdots & 0 \\
0 & -\gamma & 2\gamma & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & -\gamma \\
-\gamma & 0 & \cdots & -\gamma & 2\gamma
\end{pmatrix}. \qquad (3.63)
\]
This Hamiltonian can be diagonalised by using the N-dimensional Fourier transformation of Eq. (3.5). After this transformation, the matrix K is diagonal. Again we define Ω as the square root of this matrix

\[
\Omega \equiv \left(\frac{K}{m}\right)^{1/2}, \qquad (3.64)
\]
such that it has the dispersion relation on its diagonal

\[
\Omega = \begin{pmatrix}
\omega_{k_1} & 0 & 0 & \cdots & 0 \\
0 & \omega_{k_2} & 0 & \cdots & 0 \\
0 & 0 & \omega_{k_3} & \cdots & 0 \\
\vdots & \vdots & & \ddots & 0 \\
0 & 0 & 0 & 0 & \omega_{k_N}
\end{pmatrix}, \qquad (3.65)
\]
where \omega_k = \sqrt{\frac{4\gamma}{m}\sin^2(\frac{ka}{2})} are the eigenvalues of Ω. Extending the wave function of Eq. (3.11) to general N, we find the following momentum space wave function
\[
\psi_0(x_k) = \left(\frac{1}{\sqrt{N}L}\right)^{1/2} \left(\frac{m}{\pi\hbar}\right)^{\frac{N-1}{4}} \left(\omega_1\omega_2\ldots\omega_{N-1}\right)^{1/4} \exp\left[-\frac{m}{2\hbar}\left(\omega_1 x_{k_1}^2 + \omega_2 x_{k_2}^2 + \cdots + \omega_N x_{k_N}^2\right)\right], \qquad (3.66)
\]
where ω_N = 0. Going back to real space coordinates, the wave function becomes

\[
\psi_0(x) = \left(\frac{1}{L}\right)^{1/2} \left(\frac{m\omega}{\pi\hbar}\right)^{\frac{N-1}{4}} \exp\left[-\frac{m}{2\hbar}\, x \cdot \Omega \cdot x\right], \qquad (3.67)
\]
which is the generalised version of Eq. (3.12). In rewriting the normalisation factor of the wave function, we made use of the following identity of the massless dispersion relation

\[
\prod_{k=1}^{N-1} \omega_k = N\,\omega^{N-1}, \qquad (3.68)
\]
where ω = √(γ/m), which is proven in Appendix A. Next we will construct the density matrix from the wave function.
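Both the dispersion relation and the product identity of Eq. (3.68) are easy to verify numerically. The sketch below (an illustration with γ = m = a = 1 assumed) builds the circulant matrix K of Eq. (3.63), compares its eigenfrequencies with ω_k, and checks the product over the non-zero modes:

```python
import numpy as np

# Circulant spring matrix K of Eq. (3.63), with gamma = m = a = 1 for illustration.
gamma, m, N = 1.0, 1.0, 12
K = np.zeros((N, N))
for j in range(N):
    K[j, j] = 2 * gamma
    K[j, (j + 1) % N] = -gamma
    K[j, (j - 1) % N] = -gamma

# Eigenfrequencies of Omega = (K/m)^{1/2} versus the dispersion relation of Eq. (3.8).
omega_numeric = np.sort(np.sqrt(np.clip(np.linalg.eigvalsh(K / m), 0.0, None)))
omega_disp = np.sort(np.sqrt(4 * gamma / m) * np.abs(np.sin(np.pi * np.arange(N) / N)))

# Identity (3.68): the product over the N-1 non-zero modes equals N * omega^{N-1}.
omega = np.sqrt(gamma / m)
product = np.prod(omega_disp[omega_disp > 1e-12])
```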

3.2.2 Density matrix of the ground state

Assuming that the system is in the ground state, the density matrix is given by

\[
\rho_0(x, x') = \psi_0(x)\,\psi_0^*(x') = \left(\frac{1}{L}\right)\left(\frac{m\omega}{\pi\hbar}\right)^{\frac{N-1}{2}} \exp\left[-\frac{m}{2\hbar}\left(x \cdot \Omega \cdot x + x' \cdot \Omega \cdot x'\right)\right]. \qquad (3.69)
\]
Since we want to study the entanglement entropy of the system, we need the reduced density matrix, which is obtained by taking a partial trace over the system. This comes down to integrating out a number of coordinates. Once again, we are free to choose how many of the coordinates we integrate out. In Chapter 3.1 all the terms in the exponent of the reduced density matrix were explicitly written out. Now we will integrate out coordinates in a more general matrix notation, starting by integrating out one coordinate. Afterwards we will integrate out more coordinates one by one to obtain a general expression for the integration procedure.

3.2.3 Reduced density matrix

The first coordinate we will integrate out is x1. To do this we write

\[
x = \begin{pmatrix} x_1 \\ \vec{x} \end{pmatrix}, \qquad
\Omega = \begin{pmatrix} \Omega_{11} & \vec{o} \\ \vec{o} & \tilde{\Omega} \end{pmatrix}, \qquad (3.70)
\]
where
\[
\vec{x} = \begin{pmatrix} x_2 \\ x_3 \\ \vdots \\ x_N \end{pmatrix}, \qquad
\tilde{\Omega} = \begin{pmatrix} \Omega_{22} & \Omega_{23} & \cdots & \Omega_{2N} \\ \Omega_{32} & \Omega_{33} & & \vdots \\ \vdots & & \ddots & \vdots \\ \Omega_{N2} & \cdots & \cdots & \Omega_{NN} \end{pmatrix}, \qquad
\vec{o} = \begin{pmatrix} \Omega_{21} \\ \Omega_{31} \\ \vdots \\ \Omega_{N1} \end{pmatrix}, \qquad (3.71)
\]
such that the product in the exponent can be written as
\[
x \cdot \Omega \cdot x = \begin{pmatrix} x_1 & \vec{x} \end{pmatrix} \begin{pmatrix} \Omega_{11} & \vec{o} \\ \vec{o} & \tilde{\Omega} \end{pmatrix} \begin{pmatrix} x_1 \\ \vec{x} \end{pmatrix} = x_1 \Omega_{11} x_1 + \vec{x} \cdot \vec{o}\, x_1 + x_1\, \vec{o} \cdot \vec{x} + \vec{x} \cdot \tilde{\Omega} \cdot \vec{x}, \qquad (3.72)
\]
and the x_1 dependence is separated from the other coordinates. Next, we find the reduced density matrix by tracing over x_1 as follows

\[
\begin{aligned}
\rho_{red}(\vec{x}, \vec{x}') &= \int dx_1\, dx_1'\, \delta(x_1 - x_1')\, \rho_0(x, x') \\
&= \left(\frac{1}{L}\right)\left(\frac{m\omega}{\pi\hbar}\right)^{\frac{N-1}{2}} \exp\left[-\frac{m}{2\hbar}\left(\vec{x} \cdot \tilde{\Omega} \cdot \vec{x} + \vec{x}' \cdot \tilde{\Omega} \cdot \vec{x}'\right)\right] \\
&\quad\times \int dx_1\, \exp\left[-\frac{m}{2\hbar}\left(2 x_1 \Omega_{11} x_1 + \left(\vec{x} + \vec{x}'\right) \cdot \vec{o}\, x_1 + x_1\, \vec{o} \cdot \left(\vec{x} + \vec{x}'\right)\right)\right] \\
&= \left(\frac{1}{L}\right)\left(\frac{m\omega}{\pi\hbar}\right)^{\frac{N-1}{2}} \exp\left[-\frac{m}{2\hbar}\left(\vec{x} \cdot \tilde{\Omega} \cdot \vec{x} + \vec{x}' \cdot \tilde{\Omega} \cdot \vec{x}'\right)\right] \\
&\quad\times \int dx_1\, \exp\left[-\frac{m}{2\hbar}\left(2 x_1 \Omega_{11} x_1\right) - \frac{m}{\hbar}\left(\vec{x} + \vec{x}'\right) \cdot \vec{o}\, x_1\right]. \qquad (3.73)
\end{aligned}
\]

The integral over x1 is of the following general Gaussian form

\[
\int dx\, \exp\left[-\frac{1}{2}\, x \cdot A \cdot x + J \cdot x\right] = \sqrt{\frac{2\pi}{\det(A)}}\, \exp\left[\frac{1}{2}\, J \cdot A^{-1} \cdot J\right], \qquad (3.74)
\]
where in our case A = 2mΩ_{11}/ħ and J = −(m/ħ)(x⃗ + x⃗') · o⃗, so the integral is
\[
\begin{aligned}
\int dx_1\, &\exp\left[-\frac{m}{2\hbar}\left(2 x_1 \Omega_{11} x_1\right) - \frac{m}{\hbar}\left(\vec{x} + \vec{x}'\right) \cdot \vec{o}\, x_1\right] \\
&= \left(\frac{\pi\hbar}{m\Omega_{11}}\right)^{1/2} \exp\left[\frac{m}{4\hbar}\, \frac{\left(\vec{x} + \vec{x}'\right) \cdot \vec{o} \otimes \vec{o} \cdot \left(\vec{x} + \vec{x}'\right)}{\Omega_{11}}\right], \qquad (3.75)
\end{aligned}
\]
where o⃗ ⊗ o⃗ denotes the tensor (outer) product of the vector o⃗ with itself. The resulting reduced density matrix is
\[
\begin{aligned}
\rho_{red}(\vec{x}, \vec{x}') &= \left(\frac{1}{L}\right)\left(\frac{m}{\pi\hbar}\right)^{\frac{N-2}{2}} \left(\frac{\omega^{N-1}}{\Omega_{11}}\right)^{1/2} \exp\left[-\frac{m}{2\hbar}\left(\vec{x} \cdot \tilde{\Omega} \cdot \vec{x} + \vec{x}' \cdot \tilde{\Omega} \cdot \vec{x}'\right)\right] \\
&\quad\times \exp\left[\frac{m}{4\hbar}\left(\vec{x} + \vec{x}'\right) \cdot \left(\frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \left(\vec{x} + \vec{x}'\right)\right]. \qquad (3.76)
\end{aligned}
\]
Next, we rewrite the cross terms in the second exponent
\[
\vec{x} \cdot \left(\frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \vec{x}' + \vec{x}' \cdot \left(\frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \vec{x} = -\left(\vec{x} - \vec{x}'\right) \cdot \left(\frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \left(\vec{x} - \vec{x}'\right) + \vec{x} \cdot \left(\frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \vec{x} + \vec{x}' \cdot \left(\frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \vec{x}', \qquad (3.77)
\]
and collect the quadratic terms to find
\[
\begin{aligned}
\rho_{red}(\vec{x}, \vec{x}') &= \left(\frac{1}{L}\right)\left(\frac{m}{\pi\hbar}\right)^{\frac{N-2}{2}} \left(\frac{\omega^{N-1}}{\Omega_{11}}\right)^{1/2} \\
&\quad\times \exp\left[-\frac{m}{2\hbar}\left(\vec{x} \cdot \left(\tilde{\Omega} - \frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \vec{x} + \vec{x}' \cdot \left(\tilde{\Omega} - \frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \vec{x}'\right)\right] \\
&\quad\times \exp\left[-\frac{m}{4\hbar}\left(\vec{x} - \vec{x}'\right) \cdot \left(\frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}\right) \cdot \left(\vec{x} - \vec{x}'\right)\right]. \qquad (3.78)
\end{aligned}
\]
We have again rewritten the reduced density matrix in the form of Eq. (3.39), which is discussed in the paper by Bombelli et al. [7]. Thus, we have found the following expressions for the M and N matrices in terms of the components of the Ω matrix
\[
M = \tilde{\Omega} - \frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}, \qquad N = \frac{\vec{o} \otimes \vec{o}}{\Omega_{11}}. \qquad (3.79)
\]
Note that these expressions are for the case where we integrate out one coordinate. Next we integrate out the second coordinate, x_2, for which the procedure is completely analogous. We again split off the coordinate, and write the matrices in terms of their components as follows

\[
\vec{x} = \begin{pmatrix} x_2 \\ \tilde{x} \end{pmatrix}, \qquad
M = \begin{pmatrix} M_{11} & \vec{m} \\ \vec{m} & \tilde{M} \end{pmatrix}, \qquad
N = \begin{pmatrix} N_{11} & \vec{n} \\ \vec{n} & \tilde{N} \end{pmatrix}, \qquad (3.80)
\]
to separate the x_2 dependence in the exponent of the reduced density matrix. Taking the trace over x_2 then results in a Gaussian integral of the same type we encountered before.

Solving the integral, rewriting the cross terms, and finally collecting terms results in the following density matrix with two coordinates integrated out

\[
\begin{aligned}
\rho_{red}(\tilde{x}, \tilde{x}') &= \left(\frac{1}{L}\right)\left(\frac{m}{\pi\hbar}\right)^{\frac{N-3}{2}} \left(\frac{\omega^{N-1}}{\Omega_{11} M_{11}}\right)^{1/2} \\
&\quad\times \exp\left[-\frac{m}{2\hbar}\left(\tilde{x} \cdot \left(\tilde{M} - \frac{\vec{m} \otimes \vec{m}}{M_{11}}\right) \cdot \tilde{x} + \tilde{x}' \cdot \left(\tilde{M} - \frac{\vec{m} \otimes \vec{m}}{M_{11}}\right) \cdot \tilde{x}'\right)\right] \\
&\quad\times \exp\left[-\frac{m}{4\hbar}\left(\tilde{x} - \tilde{x}'\right) \cdot \left(\tilde{N} + \frac{\vec{m} \otimes \vec{m}}{M_{11}}\right) \cdot \left(\tilde{x} - \tilde{x}'\right)\right]. \qquad (3.81)
\end{aligned}
\]
From this we can read off the expressions for the new M' and N' matrices in terms of the components of the old M and N matrices
\[
M' = \tilde{M} - \frac{\vec{m} \otimes \vec{m}}{M_{11}}, \qquad N' = \tilde{N} + \frac{\vec{m} \otimes \vec{m}}{M_{11}}. \qquad (3.82)
\]
Repeating the process of integrating out coordinates one by one reveals a structure for the M and N matrices, which is clarified in Table 3.1.

z = 1:  M = \tilde{\Omega} - \vec{o} \otimes \vec{o}\,/\,\Omega_{11},   N = \vec{o} \otimes \vec{o}\,/\,\Omega_{11}
z = 2:  M' = \tilde{M} - \vec{m} \otimes \vec{m}\,/\,M_{11},   N' = \tilde{N} + \vec{m} \otimes \vec{m}\,/\,M_{11}
z = 3:  M'' = \tilde{M}' - \vec{m}' \otimes \vec{m}'\,/\,M'_{11},   N'' = \tilde{N}' + \vec{m}' \otimes \vec{m}'\,/\,M'_{11}
⋮
z = N−1:  M^{(N-1)} = \tilde{M}^{(N-2)} - \vec{m}^{(N-2)} \otimes \vec{m}^{(N-2)}\,/\,M^{(N-2)}_{11},   N^{(N-1)} = \tilde{N}^{(N-2)} + \vec{m}^{(N-2)} \otimes \vec{m}^{(N-2)}\,/\,M^{(N-2)}_{11}

Table 3.1 – Overview of the different expressions for the matrices M and N, where z is the number of coordinates that have been integrated out. The components of the different M and N matrices are defined as in Eq. (3.80).
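The recursion of Table 3.1 can be implemented directly. The sketch below is an illustration only: to keep M_{11} non-zero at every step it adds a small mass-like shift μ² to K by hand (anticipating the massive case of Chapter 4), thereby sidestepping the massless zero-mode problem discussed in the text. It also checks the invariant M + N = Ω̃ that follows from Eq. (3.82), since the rank-one terms cancel in the sum.

```python
import numpy as np

def integrate_out_first(M, N):
    """One step of Table 3.1: integrate out coordinate 0, returning (M', N') of Eq. (3.82)."""
    m_vec = M[1:, 0]
    outer = np.outer(m_vec, m_vec) / M[0, 0]
    return M[1:, 1:] - outer, N[1:, 1:] + outer

# Ring of oscillators with an ad-hoc mass shift mu^2 to make K positive definite.
Nosc, gamma, mu2 = 6, 1.0, 0.5
K = (2 * gamma + mu2) * np.eye(Nosc)
for j in range(Nosc):
    K[j, (j + 1) % Nosc] -= gamma
    K[j, (j - 1) % Nosc] -= gamma

# Omega = K^{1/2} via the eigendecomposition of the symmetric matrix K (m = 1).
lam, U = np.linalg.eigh(K)
Omega = U @ np.diag(np.sqrt(lam)) @ U.T

M, N = Omega.copy(), np.zeros_like(Omega)
for _ in range(Nosc - 2):        # integrate out all but two coordinates
    M, N = integrate_out_first(M, N)
```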

So the M and N matrices for a specific number of coordinates integrated out can be determined in an iterative fashion from the matrices for a smaller number of coordinates integrated out. The general expression for the reduced density matrix then is

\[
\rho_{red}(x, x') = \left(\frac{1}{L}\right)\left(\frac{m}{\pi\hbar}\right)^{\frac{n-1}{2}} \left(\frac{\omega^{N-1}}{\Omega_{11} M_{11} \ldots}\right)^{1/2} \exp\left[-\frac{m}{2\hbar}\, M\left(x^2 + x'^2\right) - \frac{m}{4\hbar}\left(x - x'\right) N \left(x - x'\right)\right], \qquad (3.83)
\]
where M and N are the matrices corresponding to a specific number n of coordinates that are left over, as shown in Table 3.1. The reduced density matrix we obtained is of the same form as Eq. (11) in the paper by Bombelli [7]. The normalisation is different, which is due to the fact that we studied the massless case and, therefore, had to treat the center of mass mode separately. Next, we want to calculate the entanglement entropy using our reduced density matrix of Eq. (3.83). In his paper, Bombelli performs a coordinate transformation to construct the composite matrix Λ = M^{-1} N. From the eigenvalues λ_i of Λ one can then easily calculate the entanglement entropy. This is where we run into a problem, because in the massless case we are studying, the dispersion relation is zero for k = 0, and therefore the Ω matrix has a zero eigenvalue. As a result, the M matrix has a zero eigenvalue as well, and hence the inverse of the M matrix does not exist. Therefore, the method of Bombelli does not work for the massless case. As we shall see in Chapter 4, in the massive case an extra parameter is added to the dispersion relation, which causes the M matrix to be positive definite, and then the inverse of M is well-defined. For now we try to avoid the issue by rewriting the reduced density matrix in a different form, which is inspired by the paper by Srednicki [8]. In this paper, he finds that a reduced density matrix of the form
\[
\rho_{red}(x, x') = \pi^{-1/2}\left(\gamma - \beta\right)^{1/2} \exp\left[-\frac{\gamma\left(x^2 + x'^2\right)}{2} + \beta x x'\right], \qquad (3.84)
\]
has eigenvalues and eigenfunctions
\[
p_n = \left(1 - \xi\right)\xi^n, \qquad (3.85)
\]
\[
f_n(x) = H_n\left(\alpha^{1/2} x\right) \exp\left[-\frac{\alpha}{2}\, x^2\right],
\]
where H_n is a Hermite polynomial, α = (γ² − β²)^{1/2}, and ξ = β/(γ + α). We will rewrite our reduced density matrix from the form of Bombelli in Eq. (3.83) to the form of Srednicki in Eq. (3.84). Then we can easily determine the eigenvalues of our reduced density matrix, from which we are able to calculate the entanglement entropy. Rewriting the reduced density matrix gives

\[
\rho_{red}(x, x') = \left(\frac{1}{L}\right)\left(\frac{m}{\pi\hbar}\right)^{\frac{n-1}{2}} \left(\frac{\omega^{N-1}}{\Omega_{11} M_{11} \ldots}\right)^{1/2} \exp\left[-\frac{m}{2\hbar}\left(M + \frac{N}{2}\right)\left(x^2 + x'^2\right) + \frac{m}{\hbar}\, x\left(\frac{N}{2}\right) x'\right]. \qquad (3.86)
\]
Note that in this expression we have two matrices instead of the scalars γ and β from Eq. (3.84). This is because Srednicki discusses the case where only one coordinate is left, while we consider an arbitrary number of coordinates. By diagonalising the matrices in the exponent we can obtain an expression of the form of Eq. (3.84) for each diagonal element of the matrices. However, our two matrices do not commute, and therefore we cannot diagonalise them simultaneously. Therefore, we first apply the following coordinate scaling
\[
x \to \left(M + \frac{N}{2}\right)^{-1/2} x, \qquad x' \to \left(M + \frac{N}{2}\right)^{-1/2} x', \qquad (3.87)
\]
so that the reduced density matrix becomes
\[
\rho_{red}(x, x') = \left(\frac{1}{L}\right)\left(\frac{m\omega}{\pi\hbar}\right)^{\frac{n-1}{2}} \left(\frac{1}{\Omega_{11} M_{11} \ldots}\right)^{1/2} \det\left(M + \frac{N}{2}\right)^{-1/2} \exp\left[-\frac{m\omega}{2\hbar}\left(x^2 + x'^2\right) + \frac{m\omega}{\hbar}\, x\left(\frac{N}{2}\right)\left(M + \frac{N}{2}\right)^{-1} x'\right], \qquad (3.88)
\]
where we took out the dimensions ω of the matrices in the exponent and the matrix components in the prefactor. Furthermore, the Jacobian of the transformation appears in front of the exponent. Technically it should be (M + N/2)^{-1/2}(N/2)(M + N/2)^{-1/2} in the exponent, but this is equivalent to what is written in Eq. (3.88), so we use this shorter notation. Here we note an important difference with the method of Bombelli: in our coordinate transformation we take the inverse of the matrix (M + N/2), where Bombelli used the inverse of M. As mentioned earlier, in the massless case the M matrix always has one zero eigenvalue, and therefore its inverse is not defined. This problem is now partially fixed, because the matrix (M + N/2) only has a zero eigenvalue when N has a zero eigenvalue for the same eigenvector as M does. This means that, in contrast to the method of Bombelli, there might still be cases where we can apply our method in the massless case as well. After the coordinate transformation, the two matrices in the exponent of the reduced density matrix are the unit matrix and the matrix (N/2)(M + N/2)^{-1}. Obviously these two commute, and therefore it is possible to diagonalise them simultaneously with a coordinate transformation. Doing this yields the following reduced density matrix

\[
\rho_{red}(x, x') = \left(\frac{1}{L}\right)\left(\frac{m\omega}{\pi\hbar}\right)^{\frac{n-1}{2}} \left(\frac{1}{\Omega_{11} M_{11} \ldots}\right)^{1/2} \det\left(M + \frac{N}{2}\right)^{-1/2} \exp\left[-\frac{m\omega}{2\hbar}\, I\left(x^2 + x'^2\right) + \frac{m\omega}{\hbar}\, x\, \Lambda\, x'\right], \qquad (3.89)
\]
where I is the unit matrix, and Λ is the diagonalised form of (N/2)(M + N/2)^{-1}. For each eigenvalue λ_j of Λ, we have a reduced density matrix of the form of Eq. (3.84), with γ = mω/ħ and β = (mω/ħ)λ_j. Therefore, we can write

\[
\rho_{red}(x_j, x_j') = \left(\frac{1}{L}\right) \prod_j \left(\frac{m\omega}{\pi\hbar}\right)^{1/2}\left(1 - \lambda_j\right)^{1/2} \exp\left[-\frac{m\omega}{2\hbar}\left(x_j^2 + x_j'^2\right) + \frac{m\omega}{\hbar}\,\lambda_j\, x_j x_j'\right], \qquad (3.90)
\]
where the prefactor satisfies
\[
\prod_j \left(\frac{m\omega}{\pi\hbar}\right)^{1/2}\left(1 - \lambda_j\right)^{1/2} = \left(\frac{m\omega}{\pi\hbar}\right)^{\frac{n-1}{2}} \left(\frac{1}{\Omega_{11} M_{11} \ldots}\right)^{1/2} \det\left(M + \frac{N}{2}\right)^{-1/2}, \qquad (3.91)
\]
as it should be, such that the reduced density matrix is properly normalised. Now that we have expressed the reduced density matrix in a form similar to that discussed by Srednicki, we can find the eigenvalues and calculate the entanglement entropy from them. However, the coordinate transformation we performed in Eq. (3.87) is not well-defined in many cases. As mentioned previously, the M matrix has a zero eigenvalue, which is a result of the fact that the massless dispersion is zero for zero momentum. It turns out that the N matrix has many zero eigenvalues as well. In fact, after integrating out one coordinate, it has only one non-zero eigenvalue. When the M and N matrices have a zero eigenvalue corresponding to the same eigenvector, the inverse of the matrix (M + N/2) is not defined. However, the more coordinates we integrate out, the fewer zero eigenvalues N has, and it is only after we integrate out N − 2 coordinates that we can consistently calculate the entanglement entropy. Then M and N are 2 × 2 matrices that commute, and thus are simultaneously diagonalisable. If we integrate out all but two coordinates, we get a reduced density matrix of the form

\[
\rho_{\rm red}(x_j,x_j') \propto \prod_j \left(\frac{m\omega}{\pi\hbar}\right)^{1/2}\left(m_j-n_j\right)^{1/2}\exp\left[-\frac{m\omega}{2\hbar}m_j\left(x_j^2+x_j'^2\right)+\frac{m\omega}{\hbar}n_j x_j x_j'\right], \tag{3.92}
\]
where m_j and n_j are the eigenvalues corresponding to the same eigenvector of the 2 × 2 matrices M and N, respectively. The calculation of the entanglement entropy from the reduced density matrix is the topic of the next section, for which we follow the method of Srednicki [8].
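The troublesome zero mode is easy to exhibit numerically. The sketch below is not part of the thesis; it builds the massless spring-constant matrix of a periodic chain in units where γ = 1 (an assumption made for illustration) and shows that the uniform displacement of all oscillators is annihilated by K, so that K, and with it the matrices built from it, inherits a zero eigenvalue:

```python
import numpy as np

# Massless spring-constant matrix of a periodic chain of N oscillators,
# in units where gamma = 1 (an assumption for this illustration).
N = 8
K = 2.0 * np.eye(N)
for i in range(N):
    K[i, (i + 1) % N] = -1.0
    K[i, (i - 1) % N] = -1.0

# Every row of K sums to zero, so the uniform displacement of all
# oscillators is an eigenvector with eigenvalue zero: K is singular.
uniform = np.ones(N)
print(np.allclose(K @ uniform, 0.0))   # True: the zero mode
print(np.min(np.linalg.eigvalsh(K)))   # smallest eigenvalue, numerically ~ 0
```

Because of this zero eigenvalue the matrix cannot be inverted, which is exactly the obstruction described above.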

3.2.4 Numerical results for the entanglement entropy

In order to calculate the entanglement entropy of the reduced density matrix of Eq. (3.92), we need its eigenvalues, which are found by solving the eigenvalue equation

\[
\int_{-\infty}^{\infty} dx_j'\, \rho_{\rm red}(x_j,x_j')\, f_n(x_j') = p_n f_n(x_j). \tag{3.93}
\]
As discussed in Eq. (3.85), this yields the eigenvalues

\[
p_n = (1-\xi)\,\xi^n, \tag{3.94}
\]

where ξ = n_j / (m_j + (m_j^2 − n_j^2)^{1/2}). In terms of the eigenvalues of the reduced density matrix, the entanglement entropy is given by

\[
S = -\sum_n p_n \log(p_n), \tag{3.95}
\]
where n runs from zero to infinity. Expressing this as a function of ξ, we have

\[
\begin{aligned}
S &= -\sum_{n=0}^{\infty}(1-\xi)\,\xi^n \log\big((1-\xi)\,\xi^n\big)\\
&= -\sum_{n=0}^{\infty}(1-\xi)\,\xi^n \log(1-\xi) - \sum_{n=0}^{\infty}(1-\xi)\,\xi^n \log(\xi^n)\\
&= -(1-\xi)\log(1-\xi)\sum_{n=0}^{\infty}\xi^n - (1-\xi)\log(\xi)\sum_{n=0}^{\infty} n\,\xi^n\\
&= -\log(1-\xi) - \frac{\xi}{1-\xi}\log(\xi),
\end{aligned} \tag{3.96}
\]
where we used the following two summation identities
\[
\sum_{n=0}^{\infty}\xi^n = \frac{1}{1-\xi}, \qquad \sum_{n=0}^{\infty} n\,\xi^n = \frac{\xi}{(1-\xi)^2}, \tag{3.97}
\]
which hold for ξ < 1, a condition that is always satisfied here. We are now ready to calculate the entanglement entropy as a function of ξ using the formula
\[
S(\xi) = -\log(1-\xi) - \frac{\xi}{1-\xi}\log(\xi). \tag{3.98}
\]
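The closed form of Eq. (3.98) can be checked against a direct truncation of the sum in Eq. (3.95). A minimal sketch (not part of the thesis):

```python
import numpy as np

def entropy_closed(xi):
    """Closed form of Eq. (3.98): S(xi) = -log(1 - xi) - xi/(1 - xi) log(xi)."""
    return -np.log(1.0 - xi) - xi / (1.0 - xi) * np.log(xi)

def entropy_series(xi, nmax=400):
    """Truncation of Eq. (3.95): S = -sum_n p_n log(p_n), p_n = (1 - xi) xi^n."""
    n = np.arange(nmax)
    p = (1.0 - xi) * xi**n
    p = p[p > 0.0]  # guard against underflow before taking the logarithm
    return float(-np.sum(p * np.log(p)))

# For 0 < xi < 1 the geometric series converges fast, and the truncated
# sum agrees with the closed form.
print(abs(entropy_closed(0.3) - entropy_series(0.3)))  # negligible (below 1e-12)
```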

Figure 3.2 – The entanglement entropy as a function of the number of oscillators N. For each value of N, all but two coordinates are integrated out.

The numerical results are shown in Figure 3.2. We calculated the entropy numerically in the range N ∈ [4, 200]. In Section 3.1, we found that the entanglement entropy for N = 3 with two coordinates left over is S = 1/2, and thus we added this point to the figure. The entropy quickly converges for large N.

For massless theories, it is possible to calculate the entanglement entropy of a quantum field exactly. Due to the absence of a mass term, the theory enjoys a conformal symmetry, and an exact expression for the entanglement entropy can be found from the study of conformal field theories. In their paper, Calabrese and Cardy provide a review of the conformal field theory approach to entanglement entropy in one dimension [18]. They mention that for a finite one-dimensional system of length L, with a subsystem of length l = na, in its ground state, and with periodic boundary conditions, the entanglement entropy is given by
\[
S_{\rm CFT} = \frac{c}{3}\log\left(\frac{N}{\pi}\sin\left(\frac{\pi n}{N}\right)\right) + c_1', \tag{3.99}
\]
where c is the central charge of the conformal field theory, and c_1' is a constant. For bosonic and fermionic conformal field theories, the central charges are c = 1 and c = 1/2, respectively. In the continuum limit, N → ∞, this reduces to
\[
S_{\rm CFT} = \frac{c}{3}\log(n) + c_1', \tag{3.100}
\]
which shows that the entanglement entropy of a one-dimensional massless scalar field scales with the logarithm of the number of coordinates that are left over after taking the partial trace. The fact that the entanglement entropy in one dimension is independent of the length of the subsystem makes sense, because the relevant area between the two subsystems is trivial, and thus the entropy can only depend on the number of coordinates. Since the area between the subsystems is trivial in one dimension, the one-dimensional equivalent of an area law would mean the entanglement entropy is proportional to a constant. However, from Eq.
(3.100) we see that the entanglement entropy of a massless scalar field is proportional to a constant times a logarithmic correction, suggesting that the entanglement entropy of a massless scalar field does not obey an area law in one dimension. In his paper, Srednicki studies the entanglement entropy of a massless scalar field and finds that in three dimensions the entanglement entropy scales with the area of the subsystem [8]. From numerical calculations, he finds that the entanglement entropy scales with the length of the subsystem in two dimensions, and that it scales with the logarithm of the number of coordinates of the subsystem in one dimension, in agreement with Eq. (3.100). From Eq. (3.100) we understand that for a constant value of n, the entropy as a function of N should converge to a constant value in the continuum limit. This explains the convergence of the entanglement entropy in Figure 3.2, which was plotted for n = 2.

We are interested in studying the entanglement entropy as a function of the number of coordinates, so that we can investigate whether the entanglement entropy obeys the logarithmic behaviour predicted by Eq. (3.100). However, due to the difficulties with the zero eigenvalues of the M and N matrices, we are restricted to the case in which we have two coordinates left, so we are not able to study its dependence on n. In the next chapter, we introduce a mass parameter in the dispersion relation, which opens a gap in the dispersion at zero momentum. As a result, the zero eigenvalue of the M matrix disappears and we can straightforwardly calculate the entanglement entropy using the method described in this chapter. We obtain a numerical expression for the entanglement entropy as a function of the number of coordinates and the mass parameter. Taking the massless limit, we nevertheless manage to find an expression for the behaviour of the entanglement entropy of a massless scalar field as a function of the number of coordinates.
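As an illustration of how the finite-size prediction of Eq. (3.99) approaches the continuum result of Eq. (3.100), the sketch below evaluates both (setting the constant c_1' to zero is an arbitrary choice, made here only for illustration):

```python
import numpy as np

def s_cft_finite(n, N, c=1.0, c1=0.0):
    """Finite-size CFT prediction, Eq. (3.99), for periodic boundary conditions."""
    return (c / 3.0) * np.log((N / np.pi) * np.sin(np.pi * n / N)) + c1

def s_cft_continuum(n, c=1.0, c1=0.0):
    """Continuum limit N -> infinity, Eq. (3.100)."""
    return (c / 3.0) * np.log(n) + c1

# For fixed n the finite-size result converges to the continuum one,
# which is why the entropy in Figure 3.2 (plotted for n = 2) flattens
# out at large N.
for N in (10, 100, 10000):
    print(N, s_cft_finite(2, N) - s_cft_continuum(2))
```

Note also that Eq. (3.99) is symmetric under n → N − n, the same complementarity symmetry that reappears for the massive chain in the next chapter.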

Chapter 4

Massive scalar field

After stumbling on an issue in the massless case in the previous chapter, we will now study the massive case, where the chain of oscillators describes a massive scalar field in the continuum limit. The dispersion relation is given by

\[
\omega_k = \sqrt{\frac{4\gamma}{m}\sin^2\left(\frac{ka}{2}\right) + \omega_0^2}, \tag{4.1}
\]

where ω_0^2 = M^2 c^4/ℏ^2, with M some mass parameter, and c^2 ≡ γa^2/m. Note that taking the limit ω_0^2 → 0 corresponds to taking the massless limit M → 0, for which we recover the massless dispersion relation. The big difference compared to the massless case is the inclusion of this ω_0^2 parameter in our dispersion relation. It creates a gap at zero momentum, due to which the M matrix no longer has a zero eigenvalue, and the procedure discussed in the previous chapter can be carried out without problems. We will follow the same steps as in the previous chapter, starting once again from the ground state wave function. Next, we construct the density matrix of the ground state and integrate out an arbitrary number of coordinates to obtain the reduced density matrix. From the eigenvalues of the reduced density matrix we can then calculate the entanglement entropy. In this chapter, we mostly provide only the important expressions because, apart from the normalisation, the calculation is identical to that of Section 3.2, where the details can be found.

4.1 Ground state wave function and density matrix

In the massive case, the dispersion relation is given by Eq. (4.1). Due to the constant term ω_0^2, the dispersion is no longer zero for k = 0, and therefore we do not have to treat this case separately. The Hamiltonian is the same as in Eq. (3.62), where the spring constant matrix is now given by

\[
K = \begin{pmatrix}
2\gamma & -\gamma & 0 & \cdots & -\gamma \\
-\gamma & 2\gamma & -\gamma & & 0 \\
0 & -\gamma & 2\gamma & \ddots & \vdots \\
\vdots & & \ddots & \ddots & -\gamma \\
-\gamma & 0 & \cdots & -\gamma & 2\gamma
\end{pmatrix}
+
\begin{pmatrix}
m\omega_0^2 & 0 & \cdots & 0 \\
0 & m\omega_0^2 & & \vdots \\
\vdots & & \ddots & 0 \\
0 & \cdots & 0 & m\omega_0^2
\end{pmatrix}. \tag{4.2}
\]
The ground state wave function is given by
\[
\psi_0(x) = \det\left(\frac{m\Omega}{\pi\hbar}\right)^{1/4}\exp\left(-\frac{m}{2\hbar}\,x\cdot\Omega\cdot x\right), \tag{4.3}
\]
where, in contrast to the wave function of the massless case in Eq. (3.67), we have the determinant of Ω in the normalisation. The density matrix of the ground state is then simply given by

\[
\rho_0(x,x') = \psi_0(x)\,\psi_0^*(x') = \det\left(\frac{m\Omega}{\pi\hbar}\right)^{1/2}\exp\left(-\frac{m}{2\hbar}\left(x\cdot\Omega\cdot x + x'\cdot\Omega\cdot x'\right)\right), \tag{4.4}
\]
which, apart from the normalisation, is the same as the density matrix of Eq. (3.69) in the massless case. Therefore, the integration procedure to find the reduced density matrix is identical to the one discussed in Section 3.2, and thus the general expressions for M and N of Table 3.1 still hold.
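As a consistency check (a sketch under the assumed units γ = m = a = 1, not taken from the thesis), the eigenvalues of K/m for the periodic chain of Eq. (4.2) reproduce the squared dispersion relation of Eq. (4.1) at the quantised momenta k = 2πj/(Na):

```python
import numpy as np

# Number of oscillators and omega_0^2, in assumed units gamma = m = a = 1.
N, w0sq = 6, 0.5

# Spring-constant matrix of Eq. (4.2): periodic coupling plus mass term.
K = (2.0 + w0sq) * np.eye(N)
for i in range(N):
    K[i, (i + 1) % N] = -1.0
    K[i, (i - 1) % N] = -1.0

# Dispersion of Eq. (4.1), omega_k^2 = 4 sin^2(ka/2) + omega_0^2,
# evaluated at the quantised momenta k = 2 pi j / N.
j = np.arange(N)
omega_k_sq = 4.0 * np.sin(np.pi * j / N) ** 2 + w0sq

# The eigenvalues of K/m are exactly the squared dispersion.
print(np.allclose(np.sort(np.linalg.eigvalsh(K)), np.sort(omega_k_sq)))  # True
```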

4.2 Rewriting the reduced density matrix

The reduced density matrix for an arbitrary number of coordinates is of the form

\[
\rho_{\rm red}(x,x') = \left(\frac{m}{\pi\hbar}\right)^{n/2}\det(\Omega)^{1/2}\left(\frac{1}{\Omega_{11}M_{11}\cdots}\right)^{1/2}\exp\left[-\frac{m}{2\hbar}\left(x\cdot M\cdot x + x'\cdot M\cdot x'\right) - \frac{m}{4\hbar}\left(x-x'\right)\cdot N\cdot\left(x-x'\right)\right], \tag{4.5}
\]
where n is the number of coordinates left over after integrating, and the prefactor gains a factor with each integration. It turns out that
\[
\frac{\det(\Omega)}{\Omega_{11}M_{11}\cdots} = \det(M) \tag{4.6}
\]
for any number of integrations, and therefore we can write

\[
\rho_{\rm red}(x,x') = \det\left(\frac{mM}{\pi\hbar}\right)^{1/2}\exp\left[-\frac{m}{2\hbar}\left(x\cdot M\cdot x + x'\cdot M\cdot x'\right) - \frac{m}{4\hbar}\left(x-x'\right)\cdot N\cdot\left(x-x'\right)\right], \tag{4.7}
\]
which is exactly of the same form as Eq. (11) in the paper by Bombelli [7]. Now that the inverse of M is well defined, we could calculate the entanglement entropy using the method proposed by Bombelli. However, we will again rewrite our density matrix and calculate the entanglement entropy using the results of Srednicki instead, so that we can better compare with our previous results. After rewriting, the density matrix becomes

\[
\rho_{\rm red}(x,x') = \det\left(\frac{m\omega M}{\pi\hbar}\right)^{1/2}\exp\left[-\frac{m\omega}{2\hbar}\left(x\cdot\left(M+\frac{N}{2}\right)\cdot x + x'\cdot\left(M+\frac{N}{2}\right)\cdot x'\right) + \frac{m\omega}{\hbar}\,x\cdot\frac{N}{2}\cdot x'\right], \tag{4.8}
\]

( ) = ‹ ̵  − ̵ ‹ +  ( + ) + ̵ ‹   4.3. Numerical results for the entanglement entropy 33 where we took out the dimensions ω of the matrices in the exponent and the prefactor. Next, we apply the coordinate transformation from Eq. (3.87) and diagonalise the matrix in the exponent, resulting in the same expression for the reduced density matrix as before

\[
\rho_{\rm red}(x_j,x_j') = \prod_j \left(\frac{m\omega}{\pi\hbar}\right)^{1/2}\left(1-\lambda_j\right)^{1/2}\exp\left[-\frac{m\omega}{2\hbar}\left(x_j^2+x_j'^2\right) + \frac{m\omega}{\hbar}\lambda_j x_j x_j'\right], \tag{4.9}
\]
where the λ_j are the eigenvalues of the matrix (N/2)(M + N/2)^{-1}, and the prefactor is given by

\[
\prod_j \left(\frac{m\omega}{\pi\hbar}\right)^{1/2}\left(1-\lambda_j\right)^{1/2} = \left(\frac{m\omega}{\pi\hbar}\right)^{n/2}\det\left(M+\frac{N}{2}\right)^{-1/2}\det(M)^{1/2}, \tag{4.10}
\]
so the reduced density matrix is properly normalised. For each factor in the product of Eq. (4.9), we can calculate the entanglement entropy as a function of ξ using the formula derived in the previous chapter

\[
S(\xi) = -\log(1-\xi) - \frac{\xi}{1-\xi}\log(\xi), \tag{4.11}
\]
where ξ is given in terms of λ_j as

\[
\xi = \frac{\lambda_j}{1+\alpha}, \qquad \alpha = \left(1-\lambda_j^2\right)^{1/2}. \tag{4.12}
\]
The entanglement entropy of the total reduced density matrix of Eq. (4.9) can then be calculated by summing over all the separate parts as follows

\[
S = \sum_j S(\xi_j). \tag{4.13}
\]
The calculation of the entanglement entropy is done numerically, and the discussion of the results is the topic of the next section.
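The chain of steps summarised above — the ground-state matrix Ω, the partial trace producing M and N, the eigenvalues λ_j, and the sum of Eq. (4.13) — can be sketched in a few lines of NumPy. This is not the thesis's actual Python 3.7 script; the one-step reduction below (N = BᵀA⁻¹B and M = C − N, for Ω partitioned into a traced-out block A and a leftover block C) is assumed to be equivalent to iterating the procedure of Table 3.1, and the units γ = m = a = ℏ = 1 are an assumption made for illustration:

```python
import numpy as np

def entanglement_entropy(N, n, w0sq):
    """Entropy of n leftover oscillators out of N; w0sq is omega_0^2."""
    # Spring-constant matrix of Eq. (4.2) for a periodic chain.
    K = (2.0 + w0sq) * np.eye(N)
    for i in range(N):
        K[i, (i + 1) % N] = -1.0
        K[i, (i - 1) % N] = -1.0
    # Omega = sqrt(K), via the eigendecomposition of the symmetric matrix K.
    evals, U = np.linalg.eigh(K)
    Omega = U @ np.diag(np.sqrt(evals)) @ U.T
    # Partition Omega and integrate out the first N - n coordinates (block A).
    k = N - n
    A, B, C = Omega[:k, :k], Omega[:k, k:], Omega[k:, k:]
    N_mat = B.T @ np.linalg.solve(A, B)   # the matrix N
    M_mat = C - N_mat                     # the matrix M
    # Eigenvalues lambda_j of (N/2)(M + N/2)^{-1}, then xi_j and Eq. (4.13).
    lam = np.real(np.linalg.eigvals((N_mat / 2.0) @
                                    np.linalg.inv(M_mat + N_mat / 2.0)))
    lam = np.clip(lam, 0.0, 1.0 - 1e-12)  # guard against round-off
    xi = lam / (1.0 + np.sqrt(1.0 - lam**2))
    xi = xi[xi > 1e-14]                   # modes with xi -> 0 carry no entropy
    return float(np.sum(-np.log(1.0 - xi) - xi / (1.0 - xi) * np.log(xi)))

# Complementary subsystems of a pure state carry equal entropy,
# which is the symmetry visible in Figure 4.2.
print(entanglement_entropy(40, 10, 0.001))
print(entanglement_entropy(40, 30, 0.001))
```

The equality of the two printed values provides a simple internal check of the implementation.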

4.3 Numerical results for the entanglement entropy

The numerical analysis is done in Python 3.7. The Python script builds the Ω matrix for a specific number of oscillators and a given value of the mass parameter. From that, the M and N matrices are composed following the iterative procedure shown in Table 3.1, which corresponds to integrating out a given number of coordinates. Finally, the coordinate transformations are applied, and using the eigenvalues λ_j of the matrix (N/2)(M + N/2)^{-1}, the entanglement entropy is calculated with the formula of Eq. (4.13).

In Figure 4.1 the entanglement entropy is plotted as a function of the number of harmonic oscillators N, for different values of the parameter ω_0^2. The number of leftover coordinates after integrating is n = 50 for all lines. The entanglement entropy tends to converge for large N. The rate of convergence depends on the value of the mass parameter: the lower the value of ω_0^2, the slower the entanglement entropy reaches its final value. The convergence of the entanglement entropy is a signal that we are in the continuum limit N → ∞. This is an important limit, because in this limit our model describes a scalar field. For the four highest values of ω_0^2, the entanglement entropy has fully converged, whereas the lines for ω_0^2 = 0.0001 and ω_0^2 = 0.00001 have not yet fully converged for N = 200. Furthermore, the value to which the entropy converges increases for decreasing values of ω_0^2, implying that the value of the entanglement entropy of a massive scalar field is higher if the mass of the theory is small.

Figure 4.1 – The entanglement entropy as a function of the number of oscillators N. Different lines correspond to different values of the mass parameter ω_0^2, and the leftover number of coordinates is n = 50 for all lines.

Figure 4.2 shows the entanglement entropy as a function of the number of leftover coordinates n, for different values of ω_0^2, and a constant number of oscillators N = 200. One immediately notes that the figure is symmetric. This is exactly what we expect, because integrating out either n or N − n coordinates results in one subsystem of size n and one of size N − n, and therefore the entanglement entropy is the same in both cases. For n = N/2 the bipartition is most balanced, which corresponds to the largest loss of information, and thus the entanglement entropy is maximal there.

For the larger values of ω_0^2, we see that the entanglement entropy reaches a plateau at some maximal value. Whether the entanglement entropy of the system reaches this plateau or not is related to the correlation length of the system. As we will discuss in the next section, the correlation length is inversely proportional to the mass of the theory, so ξ ∝ 1/ω_0, and hence the correlation length of the system grows as ω_0^2 becomes smaller. We can see this in Figure 4.2, as the value of n for which the entropy has converged, which essentially is the correlation length, shifts to the right as ω_0^2 decreases. For small values of ω_0^2 the entanglement entropy does not converge at all, which means that the correlation length is larger than N/2. In the massless limit, ω_0^2 → 0, the correlation length goes to infinity and

the entropy will never converge.

Figure 4.2 – The entanglement entropy as a function of the leftover number of coordinates n. The different plots are for varying values of the mass parameter ω_0^2, with a constant number of harmonic oscillators N = 200.

In Section 3.2.4 we already discussed the analytical result for the entanglement entropy in the continuum limit from conformal field theory

\[
S_{\rm CFT} = \frac{c}{3}\log(n) + c_1', \tag{4.14}
\]
which shows that the entanglement entropy of a massless scalar field scales with the logarithm of the number of coordinates that are left over after taking the partial trace. To compare this expression with the data in Figure 4.2, we only consider the domain n ∈ [1, 100] of the figure, because this is the relevant domain in the continuum limit. From the expression of Eq. (4.14), we expect that the entanglement entropy behaves as S ∝ log(n) in the massless limit, which is in agreement with the observation that the correlation length goes to infinity in the massless limit and, thus, the entropy never converges. In Figure 4.2 we can already see this happening: for small ω_0^2, the correlation length grows, and the logarithmic behaviour for small n takes over in the domain n ∈ [1, 100].

On the other hand, for more general, massive quantum field theories, the conformal invariance is broken and therefore the conformal field theory approach does not apply. In principle, one could still calculate the entanglement entropy of the massive quantum field theory exactly, though this has not yet been done for any massive theory. However, from studying simple cases of general 1+1-dimensional quantum field theories, there are expectations for the behaviour of the entanglement entropy in the limiting regimes of the important length scales ξ = aω/ω_0 and l = na [19]:
\[
S_{\rm QFT} \approx \frac{c}{3}\log\left(\frac{l}{a}\right) \quad (l \ll \xi), \qquad
S_{\rm QFT} \approx \frac{c}{3}\log\left(\frac{\xi}{a}\right) \quad (l \gg \xi), \tag{4.15}
\]
where a is the ultraviolet cut-off, corresponding to the lattice spacing, which is introduced to avoid UV divergences in the theory. Recall that from the convergence of the entanglement entropy in Figure 4.1 we know that for the four highest values of the mass parameter we are in the continuum limit for N = 200. Therefore, according to the expressions of Eq.
(4.15), the entanglement entropy for the four highest values of the mass parameter shown in Figure 4.2 is proportional to log(n) for l ≪ ξ, and converges to a value proportional to log(ξ) for l ≫ ξ. The limit l ≪ ξ is essentially the massless limit, so the fact that the entropy goes as log(n) in this limit is in agreement with the result from conformal field theory in Eq. (4.14). On the other hand, the convergence of the entropy in the limit l ≫ ξ differs from massless theories, where the entanglement entropy is not bounded. In massive theories, the entropy converges to a constant value that is independent of the size of the subsystem and is determined by the mass parameter. Therefore, this can be interpreted as the one-dimensional analogue of an area law, since in one dimension the area between the subsystems is also independent of the size of the subsystem [18]. So, in contrast to the massless case, we do find area-law behaviour in the entanglement entropy of a massive scalar field.

As mentioned, the behaviour of the entanglement entropy in the cross-over regime between the limits is not known exactly. However, we can try to bridge the gap by using our numerical results for the entanglement entropy to estimate a cross-over function. Before we do this, we analyse the parameters on which the entropy depends, so that we know what kind of function we are looking for.

4.4 Dimensional analysis

To know what parameters we have to fit, we first study what parameters influence the behaviour of the entanglement entropy. In the reduced density matrix of Eq. (4.9), the coordinates x and x' still have units of length. Of course, the argument of the exponent must be dimensionless and, therefore, there must be some factor with units of inverse length squared. Indeed, there is an inverse squared factor of the natural length scale of the harmonic oscillator in front, l_{h.o.}^{-2} = mω/ℏ, which compensates for the dimensions of x and x'. We can make the position coordinates dimensionless by scaling them with the natural length scale of the harmonic oscillator as follows

\[
x \to \sqrt{\frac{m\omega}{\hbar}}\,x, \qquad x' \to \sqrt{\frac{m\omega}{\hbar}}\,x'. \tag{4.16}
\]

Taking into account the Jacobian of this scaling, the reduced density matrix takes the following form

\[
\rho_{\rm red}(x_j,x_j') = \prod_j \left(\frac{1}{\pi}\right)^{1/2}\left(1-\lambda_j\right)^{1/2}\exp\left[-\frac{1}{2}\left(x_j^2+x_j'^2\right)+\lambda_j x_j x_j'\right], \tag{4.17}
\]
which is now clearly independent of the length scale, and therefore only depends on the number of coordinates. Specifically, it is only the eigenvalues λ_j that determine on which variables the entanglement entropy depends. Recall that the λ_j are the eigenvalues of the matrix (N/2)(M + N/2)^{-1}. The exact form of this matrix, and hence its eigenvalues, is completely determined by the dispersion relation
\[
\omega_k = \sqrt{4\omega^2\sin^2\left(\frac{ka}{2}\right) + \omega_0^2}, \tag{4.18}
\]
where ω^2 ≡ γ/m. Since the quantised momenta are given by k = 2πn/(Na), the a-dependence drops out completely, and the dispersion only depends on the number of coordinates and the parameters ω^2 and ω_0^2. From this we conclude that the eigenvalues λ_j also depend only on these three variables. The correlation length of the system can be determined from the dispersion relation. For small k, the dispersion relation is approximated by

\[
\omega_k \approx \sqrt{\omega^2 k^2 a^2 + \omega_0^2}. \tag{4.19}
\]
Replacing k → 1/ξ, and setting the two terms under the square root equal to each other, one finds the correlation length
\[
\xi = a\,\frac{\omega}{\omega_0}. \tag{4.20}
\]
As we have just concluded, the eigenvalues λ_j, and hence also the reduced density matrix, do not depend on a. Thus, they can only depend on the dimensionless ratio
\[
\xi_0 \equiv \frac{\omega}{\omega_0}. \tag{4.21}
\]
Since we calculate the entanglement entropy directly from the reduced density matrix, we conclude that the entanglement entropy is a function of only the number of coordinates n that is left after integrating and the dimensionless correlation length ξ_0. The important message is that the entanglement entropy is independent of the lattice spacing a. This is also reflected in both expressions of Eq. (4.15), where, at first sight, it might seem that the entropy does depend on the length a. However, the a-dependence drops out, since l = na and ξ = aω/ω_0, and so the entanglement entropy only depends on the dimensionless parameters n and ξ_0.
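The conclusion of this section — that the λ_j, and hence the entropy, depend on the microscopic parameters only through the ratio ω/ω_0 — can be spot-checked numerically. In the sketch below (an illustration under the assumed units m = a = 1, not the thesis's code), scaling γ and ω_0^2 by the same factor leaves the spectrum of (N/2)(M + N/2)^{-1} unchanged:

```python
import numpy as np

def lambdas(N, n, gamma, w0sq):
    """Eigenvalues lambda_j of (N/2)(M + N/2)^{-1}, in units m = a = 1."""
    # Spring-constant matrix of Eq. (4.2): K = gamma * (circulant) + m w0sq * I.
    K = (2.0 * gamma + w0sq) * np.eye(N)
    for i in range(N):
        K[i, (i + 1) % N] = -gamma
        K[i, (i - 1) % N] = -gamma
    evals, U = np.linalg.eigh(K)
    Omega = U @ np.diag(np.sqrt(evals)) @ U.T
    # Trace out the first N - n coordinates via a one-step reduction.
    k = N - n
    A, B, C = Omega[:k, :k], Omega[:k, k:], Omega[k:, k:]
    N_mat = B.T @ np.linalg.solve(A, B)
    M_mat = C - N_mat
    lam = np.linalg.eigvals((N_mat / 2.0) @ np.linalg.inv(M_mat + N_mat / 2.0))
    return np.sort(np.real(lam))

# gamma -> 4 gamma and omega_0^2 -> 4 omega_0^2 leave omega/omega_0 fixed,
# so the lambda_j, and with them the entanglement entropy, are unchanged.
print(np.allclose(lambdas(30, 8, 1.0, 0.01), lambdas(30, 8, 4.0, 0.04)))  # True
```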

4.5 Determining a cross-over function

We would like to find a cross-over function that correctly describes the behaviour of the entanglement entropy in the intermediate regime between the limiting cases of Eq. (4.15). This function must correctly preserve the behaviour in both these limits. In the previous section we concluded that the only variables on which the entanglement entropy, and hence our cross-over function, depends are n and ξ_0. A simple initial guess for a cross-over function is
\[
S(n,\xi_0) = -\frac{c}{3}\log\left(\frac{A}{n} + \frac{B}{\xi_0}\right), \tag{4.22}
\]
where A and B are fit parameters. Note that this function indeed behaves as dictated by Eq. (4.15). Of course, it is possible to fit this function directly. However, we choose to first rewrite the expression slightly, so that we can fit a simpler function. Taking out a factor of 1/n in the logarithm, we get

\[
S(n,\xi_0) = -\frac{c}{3}\log\left(\frac{1}{n}\,f\left(\frac{n}{\xi_0}\right)\right), \tag{4.23}
\]
where we defined the function
\[
f\left(\frac{n}{\xi_0}\right) \equiv A + B\,\frac{n}{\xi_0}. \tag{4.24}
\]
Inverting Eq. (4.23), we obtain the following expression
\[
f\left(\frac{n}{\xi_0}\right) = n\exp\left(-\frac{3}{c}\,S(n,\xi_0)\right). \tag{4.25}
\]

‹  = − ( ) 2 In Figure 4.3, this expression is plotted as a function of n ξ0 for several values of ω0. For all 2 2 values of ω0, the functions behave linearly, as expected. For values of about ω0 0.01 the 2~ lines overlap, signaling that for small enough values of ω0 we can describe the entanglement 2 ≤ entropy with a single function of n ξ0. Physically, this means that for ω0 0.01, we are in the limit where the answers of field theory are valid. ~ ≤ To find this general cross-over function we consider one specific case more closely. As 2 we have just concluded, we must take a value of ω0 0.01 to ensure the data overlaps. Furthermore, we must choose a case where the entanglement entropy has fully converged ≤ as N , which corresponds to the continuum limit. With this in mind, let us take ω2 0.001, for which the entanglement entropy has converged for N 200, as we know 0 → ∞ from Figure 4.1. Since we consider the continuum limit, only the domain n 1, 100 is = = relevant. ∈ [ ] First we check if our numerical data behaves as expected from the analytical results. In 2 Figure 4.4 the entanglement entropy for ω0 0.001 is shown again, now together with the limiting behaviour of Eq. (4.15). Since our model of harmonic oscillators describes a = bosonic field in the continuum limit, the central charge is taken to be c 1. Furthermore, a constant is added to the analytical expressions to overlap them with the numerical data. = Clearly, the behaviour of the data is in agreement with the analytical expressions of Eq. (4.15). 4.5. Determining a cross-over function 39

Figure 4.3 – The function f(n/ξ_0) from Eq. (4.25), plotted against n/ξ_0 for different values of ω_0^2. Note that the lines plotted for ω_0^2 = 0.1 and ω_0^2 = 1 show only part of their data. However, since they are straight lines, no information about their behaviour is lost. The range is shortened to ensure that the overlap of the data for small ω_0^2 is clearly visible.

Figure 4.4 – The entanglement entropy as a function of n for ω_0^2 = 0.001. The behaviour of the entanglement entropy in the limiting cases, as expected from theory from Eq. (4.15), is shown alongside the numerical data.

Figure 4.5 – The function f(n/ξ_0) = n exp(−(3/c)S(n, ξ_0)) plotted against n/ξ_0 for ω_0^2 = 0.001.

In Figure 4.5 the function f(n/ξ_0) is plotted against n/ξ_0. The data are seemingly linear, with a notable exception for small values of n/ξ_0. In the next section, we fit these data in several ways using Wolfram Mathematica 12.0 to find a numerical expression for the function f(n/ξ_0). From this expression we can then determine the cross-over function using Eq. (4.23).

4.5.1 First order fit

The most straightforward approximation of our data is a linear fit with two fit parameters. This yields
\[
f\left(\frac{n}{\xi_0}\right) = 0.11 + 0.95\,\frac{n}{\xi_0}, \tag{4.26}
\]
resulting in the cross-over function
\[
S(n,\xi_0) = -\frac{c}{3}\log\left(\frac{0.11}{n} + \frac{0.95}{\xi_0}\right). \tag{4.27}
\]
The cross-over function is compared to the original data in Figure 4.6. Though it is not a terrible fit, it deviates significantly from the original data, especially for lower values of n. This is likely a consequence of the non-linear behaviour for small n seen in Figure 4.5. The linear fit fails to properly incorporate this behaviour, resulting in the deviations from the numerical results.

Figure 4.6 – The cross-over function of Eq. (4.27), obtained from the linear fit, plotted against the numerical data for the entanglement entropy with ω_0^2 = 0.001 and c = 1.


4.5.2 Padé fit

Logically, the next step is to fit the data using more fit parameters to try to enhance the fit. However, we must make sure that the chosen fit correctly obeys the behaviour that we expect. From Figure 4.3, we observe that for large n/ξ_0 the function behaves linearly. Therefore, we cannot simply use a fit of the form A + B(n/ξ_0) + C(n/ξ_0)^2, because this behaves quadratically for large n/ξ_0. The fit we use is inspired by an approximation technique developed by Henri Padé. This method approximates a function by a rational function, i.e. a ratio of polynomials. By fitting our function with a polynomial of order N + 1 as the numerator and a polynomial of order N as the denominator, the linear behaviour for large n/ξ_0 is preserved. After the linear fit, the next simplest fit of the Padé form is
\[
f\left(\frac{n}{\xi_0}\right) = \frac{A + B\,\frac{n}{\xi_0} + C\left(\frac{n}{\xi_0}\right)^2}{1 + D\,\frac{n}{\xi_0}}. \tag{4.28}
\]
Fitting this function to the data of Figure 4.5 gives

\[
f\left(\frac{n}{\xi_0}\right) = \frac{0.15 + 0.88\,\frac{n}{\xi_0} + 0.22\left(\frac{n}{\xi_0}\right)^2}{1 + 0.20\,\frac{n}{\xi_0}}, \tag{4.29}
\]
and its corresponding cross-over function

\[
S(n,\xi_0) = -\frac{c}{3}\log\left(\frac{1}{n}\,\frac{0.15 + 0.88\,\frac{n}{\xi_0} + 0.22\left(\frac{n}{\xi_0}\right)^2}{1 + 0.20\,\frac{n}{\xi_0}}\right), \tag{4.30}
\]
which is compared to the numerical data in Figure 4.7. The accuracy of the fit has improved greatly compared to the linear fit.

Figure 4.7 – The cross-over function of Eq. (4.30), obtained from the Padé fit, plotted against the numerical data for the entanglement entropy with ω_0^2 = 0.001 and c = 1.


Especially for large n the fit agrees with the data perfectly. For smaller values of n there are minor differences and the fit slightly overshoots the data. This might be explained by the fact that here we are close to the massless limit, where the entropy goes as log(n), which diverges to minus infinity for small n. All in all, the cross-over function of Eq. (4.30) describes the behaviour of the entanglement entropy very well.
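Using the fitted coefficients quoted in Eq. (4.30), one can verify numerically that the cross-over function reproduces both limits of Eq. (4.15): S → (c/3) log(n/0.15) for n ≪ ξ_0, and S → (c/3) log(ξ_0/1.1) for n ≫ ξ_0, where the constant 1.1 = 0.22/0.20 follows from the fitted coefficients. A sketch (not part of the thesis):

```python
import numpy as np

def s_crossover(n, xi0, c=1.0):
    """Cross-over function of Eq. (4.30) with the fitted Pade coefficients."""
    z = n / xi0
    f = (0.15 + 0.88 * z + 0.22 * z**2) / (1.0 + 0.20 * z)
    return -(c / 3.0) * np.log(f / n)

# n << xi0 (massless regime): S approaches (c/3) log(n / 0.15).
print(s_crossover(10, 1e8) - (1.0 / 3.0) * np.log(10 / 0.15))

# n >> xi0 (area-law regime): S approaches (c/3) log(xi0 / 1.1),
# independent of n, which is the one-dimensional area law.
print(s_crossover(1e7, 10.0) - (1.0 / 3.0) * np.log(10.0 / 1.1))
```

Both printed differences are small, confirming that the fitted form interpolates between the two analytical regimes.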

4.6 Testing the cross-over function

From the Padé fit we have found the following cross-over function

\[
S(n,\xi_0) = -\frac{c}{3}\log\left(\frac{1}{n}\,\frac{0.15 + 0.88\,\frac{n}{\xi_0} + 0.22\left(\frac{n}{\xi_0}\right)^2}{1 + 0.20\,\frac{n}{\xi_0}}\right). \tag{4.31}
\]
As indicated by the overlap of the data for small ω_0^2 in Figure 4.3, the cross-over function also describes the entropy for other values of ω_0^2 ≤ 0.01. In Figure 4.8 the cross-over function is plotted together with the numerical data of the entanglement entropy. For each value of ω_0^2, the cross-over function is plotted for the corresponding value of ξ_0 = ω/ω_0. For smaller values of the mass parameter, the cross-over function approximates the data better, which is likely due to the fact that for smaller values of ω_0^2 we get closer to the field theory limit. This is also what was illustrated in Figure 4.3, from which we concluded that the mass parameter must be small enough for the entropy to be describable by a single function.

Figure 4.8 – The numerical data of the entanglement entropy, plotted for different values of ω_0^2. For each line, the cross-over function of Eq. (4.31) is plotted for the corresponding value of ξ_0, with c = 1.


4.7 The massless limit

Originally, we were interested in the case of a massless scalar field. However, we encountered a problem when calculating the entanglement entropy, which we managed to avoid by studying the massive case. Now that we have found a function that describes the entanglement entropy as a function of the number of coordinates and the dimensionless correlation length, we can take the conformal limit ω_0^2 → 0 in our cross-over function to find its behaviour in the massless case. Since ξ_0 = ω/ω_0, this corresponds to taking ξ_0 → ∞ in the cross-over function of Eq. (4.31), which yields
\[
S(n,\infty) = \frac{c}{3}\log\left(\frac{n}{0.15}\right). \tag{4.32}
\]
Thus we find that the entanglement entropy of a massless scalar field scales with the logarithm of the number of leftover coordinates, which is in agreement with the analytical answer from conformal field theory
\[
S_{\rm CFT} = \frac{c}{3}\log(n) + c_1', \tag{4.33}
\]
where in our case c_1' = −(c/3) log(0.15) ≈ 0.63.

Chapter 5

Discussion and outlook

This thesis was motivated by the paper by Liao et al., in which they propose an experimental setup of a two-dimensional Schwarzschild black hole analogue consisting of a Bose-Einstein condensate. Since its ground state is known, it is possible to calculate the exact entanglement entropy of this analogue black hole. Studying the entanglement entropy of this analogue black hole might help us develop a better understanding of the area law of the entropy of astronomical black holes. To describe the black hole analogue, one needs a model of a two-dimensional scalar field on a curved background. In this thesis, we made a start on this by studying a simpler model consisting of a chain of coupled harmonic oscillators, which describes a one-dimensional scalar field in the continuum limit. Depending on the dispersion relation, the scalar field is either massless or massive.

5.1 Discussion

Initially, we studied the case of a massless scalar field. The corresponding dispersion relation vanishes for zero momentum, resulting in the presence of a zero eigenvalue in the dispersion matrix. This caused a problem when calculating the entanglement entropy, and we were only able to calculate the entanglement entropy in the case where we had two coordinates left after integrating. For this particular case, we found that the entanglement entropy as a function of the number of oscillators N converges to a constant value in the continuum limit N → ∞. This is in agreement with the expression of conformal field theory, according to which the entanglement entropy of a one-dimensional massless scalar field in the continuum limit is proportional to log(n), which is constant as a function of N for a constant value of n. Because we are restricted to the n = 2 case, we were unable to study the entanglement entropy as a function of the number of coordinates n following this approach.

Next, we discussed the case of a massive scalar field by introducing a mass parameter into the dispersion relation. This parameter creates a gap in the dispersion relation at zero momentum, thus removing the zero eigenvalue of the dispersion matrix. This enabled us to study the entanglement entropy of a massive scalar field as a function of both N and n. Plotting the entanglement entropy as a function of the number of harmonic oscillators N for a constant number of coordinates n = 50, we found that the entanglement entropy converges for large N, similar to the result of the massless case. Next, we studied the entanglement entropy as a function of the number of coordinates n, which is expected to obey an area law. We found that for large n the entanglement entropy of a one-dimensional massive scalar field converges to a value proportional to the logarithm of the dimensionless correlation length ξ_0 = ω/ω_0.
This is a consequence of the inclusion of a mass in our theory, ω0 which introduces a finite correlation length. As discussed by Calabrese and Cardy, this = can be interpreted as a one-dimensional area law, since in one dimension the area between the two subsystems is trivial, and thus is a constant which is independent of the subsystem size [18]. Therefore, we can conclude that the entanglement entropy of a massive scalar field indeed obeys an area law in one dimension, hinting that this might also be true for higher dimensions. Our data suggests that for small enough values of the mass parameter, such that the answers from field theory are valid, it is possible to describe the behaviour of the entangle- ment entropy of the massive scalar field using a single expression. We made a numerical fit to our data to obtain a general expression that describes the entanglement entropy as a function of both the mass parameter and the number of coordinates not only in the lim- iting cases but also in the intermediate regime. Comparing the function to the numerical data for different values of the mass parameter, we found that the function agrees with 2 2 the data especially well for small values of ω0. This is explained from the fact that ω0 must be taken small enough such that the expressions from field theory hold. The goal of this thesis was to study if the entanglement entropy of a massless scalar field obeys an area law. Taking the massless limit in our cross-over function, we found that the entanglement entropy depends logarithmically on the number of leftover coordinates. Thus the entanglement entropy scales with a constant times a logarithmic correction, and therefore we must conclude that the entanglement entropy of a massless one-dimensional scalar field does not obey an area law. All in all, we conclude that in one dimension the entanglement entropy of a massive scalar field obeys an area law and that the area law is violated for a massless scalar field. 
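The kind of numerical computation summarised above can be sketched with the standard covariance-matrix method for the Gaussian ground state of coupled oscillators. The sketch below is our own illustration, not the code used in this thesis; the function name, the NumPy implementation, and the unit choice $\hbar = 1$ are assumptions, with $m$, $\gamma$, and $\omega_0$ playing the roles of the mass, spring constant, and mass parameter of the chain.

```python
import numpy as np

def entanglement_entropy(N, n, m=1.0, gamma=1.0, omega0=0.5):
    """Von Neumann entropy of the first n oscillators of a periodic chain of N
    coupled harmonic oscillators with mass parameter omega0 (units hbar = 1)."""
    # Frequency-squared matrix: omega_k^2 = omega0^2 + (4*gamma/m) sin^2(ka/2)
    K = np.zeros((N, N))
    for j in range(N):
        K[j, j] = omega0**2 + 2.0 * gamma / m
        K[j, (j + 1) % N] -= gamma / m
        K[(j + 1) % N, j] -= gamma / m

    # Omega = K^{1/2} via eigendecomposition (all eigenvalues > 0 when omega0 > 0,
    # which is exactly why the massive case avoids the zero-mode problem)
    evals, U = np.linalg.eigh(K)
    Omega = U @ np.diag(np.sqrt(evals)) @ U.T

    # Ground-state correlators <x_i x_j> and <p_i p_j>
    X = np.linalg.inv(Omega) / (2.0 * m)
    P = m * Omega / 2.0

    # Symplectic eigenvalues nu >= 1/2 of the reduced Gaussian state
    sub = np.ix_(range(n), range(n))
    nu = np.sqrt(np.linalg.eigvals(X[sub] @ P[sub]).real)
    nu = np.maximum(nu, 0.5 + 1e-15)  # guard against rounding below 1/2

    # Entropy of the reduced (thermal-like) Gaussian state
    return float(np.sum((nu + 0.5) * np.log(nu + 0.5)
                        - (nu - 0.5) * np.log(nu - 0.5)))

# Illustrative call: a block of 10 sites out of a chain of 60
S = entanglement_entropy(60, 10, omega0=0.5)
```

Because the ground state is pure, the entropy of a block must equal that of its complement, which makes a convenient consistency check on any implementation of this method.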
However, according to Srednicki the entanglement entropy of a massless scalar field does obey an area law in both two and three dimensions [8]. Because of this, we expect that by extending our calculations to a two-dimensional scalar field, we will find an area law for the massless case as well. Though it is still uncertain whether the area law holds in the curved spacetime of the analogue black hole, the results of this thesis together with those of Srednicki provide a promising outlook. In the future, if we can confirm that the entanglement entropy of the analogue black hole obeys an area law, it might help us to develop a better understanding of astronomical black holes.

5.2 Outlook

Ultimately, it would be interesting to study the entanglement entropy of the Schwarzschild black-hole analogue proposed by Liao et al. To do this, a model of a two-dimensional scalar field on a curved background is needed, since the geometry of the Bose-Einstein condensate is characterised by a Schwarzschild metric. The calculations in this thesis are restricted to the simple model of a one-dimensional scalar field in flat spacetime. A natural next step would be to study the chain of harmonic oscillators on a curved background, which can be accomplished by introducing position dependence to the spring constant. In the discrete system of oscillators, this can be done by introducing hopping within the model. A discussion of such models is given in a recent paper by Yang et al. [20].

In this thesis, we studied the entanglement entropy of a chain of coupled harmonic oscillators, which models a scalar field in the continuum limit. It is also possible to study the entanglement entropy of a completely different model. For example, one might wonder whether it is also possible to find area laws for fermionic fields. In his paper, Wolf studies the entanglement entropy of an infinite, translationally invariant fermionic system in $d$ dimensions [21]. He finds that the entanglement entropy of such a system scales with the surface area of the subsystem times a logarithmic correction, from which he concludes that the area law is violated for fermions. An extensive overview of the current state of area laws of entanglement entropy in a variety of systems and dimensions is given in [22].

The focus of this thesis was to find an area law in the von Neumann entanglement entropy. A different, more general notion of entropy that is also defined in terms of the reduced density matrix is the Rényi entropy, which is characterised by the Rényi index $\alpha$. Different definitions of entropy are associated with specific values of the Rényi index.
For example, in the limit $\alpha \to 1$ the Rényi entropy reduces to the von Neumann entanglement entropy, which was studied in this thesis. It might be interesting to study whether the Rényi entropy also obeys an area law for other values of the Rényi index, which could lead to new ways of studying area laws.
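The $\alpha \to 1$ limit mentioned here can be made concrete with a few lines of code. The sketch below is purely illustrative (the function name and the three-level example spectrum are our own, not taken from the thesis): it evaluates $S_\alpha = \log\left(\sum_i p_i^\alpha\right)/(1-\alpha)$ for the eigenvalue spectrum $p_i$ of a reduced density matrix, and shows numerically that it approaches $-\sum_i p_i \log p_i$ as $\alpha \to 1$.

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy S_alpha = log(sum_i p_i^alpha) / (1 - alpha) of a
    probability spectrum p (eigenvalues of a reduced density matrix)."""
    p = np.asarray(p, dtype=float)
    if np.isclose(alpha, 1.0):
        # alpha -> 1 limit: the von Neumann entropy -sum_i p_i log p_i
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# A hypothetical spectrum of a reduced density matrix
p = [0.7, 0.2, 0.1]
von_neumann = renyi_entropy(p, 1.0)
# S_alpha approaches the von Neumann value as alpha -> 1
near_limit = renyi_entropy(p, 1.0 + 1e-4)
```

A useful property visible in such experiments is that $S_\alpha$ is non-increasing in $\alpha$, so the von Neumann entropy sits between, e.g., $S_{1/2}$ and $S_2$.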

Acknowledgements

First and foremost, I would like to thank Henk Stoof, who has supervised me with great enthusiasm throughout the past year. He was available for questions and discussions at all times, and I have learned a lot from his impressive physical insight. Secondly, I would like to thank my fellow students Daan Brinkhof and Stijn Claerhoudt for studying together, our numerous discussions, and interesting games of chess from time to time. Lastly, I thank my family and Franka for supporting me, and I specifically thank my brothers Daan and Lars for supplying me with constructive feedback.

Appendix A

Proof of dispersion relation product

In the massless case, where the dispersion relation is given by
\[
\omega_k = \sqrt{\frac{4\gamma}{m}\sin^2\!\left(\frac{ka}{2}\right)}, \tag{A.1}
\]
the normalised wave function for a system of $N$ oscillators is given by

\[
\psi_0(x) = \left(\frac{1}{\sqrt{N}\,L}\right)^{\frac{1}{2}} \left(\frac{m}{\pi\hbar}\right)^{\frac{N-1}{4}} \left(\omega_1\omega_2\cdots\omega_{N-1}\right)^{\frac{1}{4}} \exp\!\left(-\frac{m}{2\hbar}\, x \cdot \Omega \cdot x\right). \tag{A.2}
\]
As claimed in Section 3.2, we can rewrite the normalisation factor such that the general wave function becomes
\[
\psi_0(x) = \left(\frac{1}{L}\right)^{\frac{1}{2}} \left(\frac{m\omega}{\pi\hbar}\right)^{\frac{N-1}{4}} \exp\!\left(-\frac{m}{2\hbar}\, x \cdot \Omega \cdot x\right), \tag{A.3}
\]
where we expressed $\omega_k$ in terms of $\omega = \sqrt{\gamma/m}$. To see this, we will show that
\[
\prod_{k=1}^{N-1} \omega_k = N\omega^{N-1}, \tag{A.4}
\]
such that
\[
\left(\omega_1\omega_2\cdots\omega_{N-1}\right)^{\frac{1}{4}} = \left(N\omega^{N-1}\right)^{\frac{1}{4}}, \tag{A.5}
\]
and the two preceding expressions for the wave function are equal. From Eq. (A.1) we see that the dispersion relation in terms of $\omega$ is given by
\[
\omega_k = 2\omega \left|\sin\!\left(\frac{ka}{2}\right)\right|, \tag{A.6}
\]
where the momenta are given by $k = 2\pi n/Na$, with $n$ an integer. In general, $n$ takes values in the range $\left[\lfloor -\frac{N-1}{2} \rfloor, \lfloor \frac{N-1}{2} \rfloor\right]$, where $\lfloor \cdot \rfloor$ is the floor function. Rewriting the product over momenta as a product over the nonzero integers $n$ in this range, we have
\[
\prod_{k=1}^{N-1} \omega_k = \prod_{\substack{n=\lfloor -\frac{N-1}{2} \rfloor \\ n \neq 0}}^{\lfloor \frac{N-1}{2} \rfloor} 2\omega \left|\sin\!\left(\frac{\pi n}{N}\right)\right| = 2^{N-1}\omega^{N-1} \prod_{\substack{n=\lfloor -\frac{N-1}{2} \rfloor \\ n \neq 0}}^{\lfloor \frac{N-1}{2} \rfloor} \left|\sin\!\left(\frac{\pi n}{N}\right)\right|. \tag{A.7}
\]

The product of the absolute value of the sine can be rewritten as a product of the square of the sine over only positive values, which we can compute

\[
\prod_{\substack{n=\lfloor -\frac{N-1}{2} \rfloor \\ n \neq 0}}^{\lfloor \frac{N-1}{2} \rfloor} \left|\sin\!\left(\frac{\pi n}{N}\right)\right| = \prod_{n=1}^{\lfloor \frac{N-1}{2} \rfloor} \sin^2\!\left(\frac{\pi n}{N}\right) = 2^{-(N-1)}N. \tag{A.8}
\]
Substituting this result in our previous expression, we find that

\[
\prod_{k=1}^{N-1} \omega_k = N\omega^{N-1}, \tag{A.9}
\]
as expected, and thus the general expression for the wave function is given by Eq. (A.3).
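The identity (A.4) is equivalent to the classic product formula $\prod_{n=1}^{N-1} 2\sin(\pi n/N) = N$, since the symmetric range of nonzero $n$ contributes the same $N-1$ factors of $|\sin(\pi n/N)|$. A quick numerical sanity check (our own illustration, not part of the proof) confirms it:

```python
import math

def dispersion_product(N, omega=1.0):
    # Product over the N-1 nonzero modes of omega_k = 2*omega*|sin(pi*n/N)|;
    # the symmetric n-range gives the same factors as n = 1, ..., N-1.
    prod = 1.0
    for n in range(1, N):
        prod *= 2.0 * omega * abs(math.sin(math.pi * n / N))
    return prod

for N in (2, 5, 10, 50):
    # dispersion_product(N) should equal N * omega**(N-1), i.e. N for omega = 1
    assert abs(dispersion_product(N) - N) < 1e-9 * N
```

The agreement to machine precision for a range of $N$, both even and odd, matches Eq. (A.9).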

References

[1] S. W. Hawking, Communications in Mathematical Physics 25, 152 (1972).

[2] J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973).

[3] J. D. Bekenstein, Physical Review D 9, 3292 (1974).

[4] S. W. Hawking, Nature 248, 30 (1974).

[5] L. Susskind, Journal of Mathematical Physics 36, 6377–6396 (1995).

[6] E. Martín-Martínez, L. J. Garay, and J. León, Phys. Rev. D 82, 064028 (2010).

[7] L. Bombelli, R. K. Koul, J. Lee, and R. D. Sorkin, Phys. Rev. D 34, 373 (1986).

[8] M. Srednicki, Physical Review Letters 71, 666–669 (1993).

[9] W. G. Unruh, Phys. Rev. Lett. 46, 1351 (1981).

[10] T. A. Jacobson and G. E. Volovik, Phys. Rev. D 58, 064021 (1998).

[11] U. Leonhardt and P. Piwnicki, Phys. Rev. Lett. 84, 822 (2000).

[12] L. J. Garay, J. R. Anglin, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 85, 4643 (2000).

[13] R. Schützhold and W. G. Unruh, Phys. Rev. D 66, 044019 (2002).

[14] R. Schützhold and W. G. Unruh, Phys. Rev. Lett. 95, 031301 (2005).

[15] G. E. Volovik, JETP Letters 104, 645–648 (2016).

[16] A. Roldán-Molina, A. S. Nunez, and R. A. Duine, Phys. Rev. Lett. 118, 061301 (2017).

[17] L. Liao, E. C. I. van der Wurff, D. van Oosten, and H. T. C. Stoof, Physical Review A 99 (2019), 10.1103/physreva.99.023850.

[18] P. Calabrese and J. Cardy, Journal of Physics A: Mathematical and Theoretical 42, 504005 (2009).

[19] M. Headrick, “Lectures on entanglement entropy in field theory and holography,” (2019), arXiv:1907.08126 [hep-th].

[20] R.-Q. Yang, H. Liu, S. Zhu, L. Luo, and R.-G. Cai, Phys. Rev. Res. 2, 023107 (2020), arXiv:1906.01927 [gr-qc].

[21] M. M. Wolf, Phys. Rev. Lett. 96, 010404 (2006).

[22] J. Eisert, M. Cramer, and M. B. Plenio, Reviews of Modern Physics 82, 277–306 (2010).