<<

ABSTRACT

SLOW AND STOPPED WITH MANY ATOMS, THE ANISOTROPIC RABI MODEL AND A COUNTING EXPERIMENT ON A DISSIPATIVE OPTICAL LATTICE

by Tyler Thurtell

First, we study electromagnetically induced transparency (EIT) in the case that interactions between the atoms are allowed. Under the right circum- stances EIT can lead to a dramatic slowing or even stopping of a pulse. We consider EIT in the case of incoherent collective emission. We discuss how collective emission can be used to enhance the similarity between the input and output pulse in stopped light. Next, we introduce coherent, ’spin flip’ interactions that allow the atoms to ’trade’ excitations. For coupling on the probe transition we find that the minimum speed to which the light can slowed depends on how homogeneous the behavior of the atoms is. In the case of probe transition coupling, we find that EIT may not even be possible and if it is a frequency shift of the atomic resonance will occur. We then move on to discuss some generalizations of the Jaynes-Cummings model. The Jaynes-Cummings model is actually a simplification of the Rabi model. We review these two models and then discuss the anisotropic Rabi model in which co-rotating and counter-rotating terms may have distinct coupling strengths. We find eigenstates of the interaction Hamiltonian with zero eigenvalue for the two couplings not equal and investigate the behavior as the couplings become more similar. Finally, we discuss the beginnings of a photon counting experiment on a dissipative optical lattice. The photon counting apparatus will be used to study the well depths of an optical lat- tice. Unfortunately, due to time constraints, we were unable to complete the experiment and so we will discuss only the experimental equipment and data analysis techniques. SLOW AND STOPPED LIGHT WITH MANY ATOMS, THE ANISOTROPIC RABI MODEL AND A PHOTON COUNTING EXPERIMENT ON A DISSIPATIVE OPTICAL LATTICE

A Thesis

Submitted to the Faculty of Miami University in partial fulfillment of the requirements for the degree of Master of Science Department of Physics by Tyler Thurtell Miami University Oxford, Ohio 2018

Advisor: (Perry Rice)

Reader: (Samir Bali)

Reader: (Carlo Samson) Contents

List of Figures v

1 Introduction 1

2 Open Quantum Systems 6

3 Numerical Techniques 19

4 Electromagnetically Induced Transparency 24 4.1 Fundamental Processes Approach ...... 24 4.2 Adiabatic Elimination ...... 27 4.3 Dark States ...... 28 4.4 Electromagnetically Induced Transparency ...... 31 4.5 Slow Light ...... 34 4.6 Stopped Light ...... 35

5 Superradiance, Subradiance, and Selective Radiance 37 5.1 Standard Formalism ...... 39 5.2 Green’s Function Approach ...... 44

6 EIT and Slow Light with Spin Flip Interactions 50 6.1 Control Transition Interactions ...... 50 6.2 Probe Transition Interactions ...... 53

ii 7 The Jaynes-Cummings Model and its Generalizations 56 7.1 Cavity QED Basics ...... 56 7.2 The Jaynes-Cummings Model ...... 58 7.3 The Rabi Model ...... 60 7.4 The Anisotropic Rabi Model ...... 60

8 Photon Counting in a Dissipative Optical Lattice 66 8.1 Theory of Optical Lattices ...... 67 8.2 Numerical Simulations ...... 70 8.3 Experimental Techniques ...... 76 8.4 Data Analysis ...... 79

9 Summary 85

A Properties of the Density Operator 88

B Alternative Derivation of the Master Equation 90

C Guide to the Use and Abuse of QuTiP 95

D Green’s Functions 109

E Pulse Propagation Programs 112

F Jaynes-Cummings Model Generalization Program 122

G Semi-Classical Diffusion Programs 125

H Photon Counting Experimental Apparatus Details 134 H.1 Single Photon Counting Module Details ...... 134 H.2 FPGA Multichannel Acquisition Board ...... 135

I Proof of the Cross Correlation Theorem 138

iii J Photon Counting Data Analysis Programs 140 J.1 Cross Correlation Theorem Program ...... 140

K Bibliography 153

iv List of Figures

3.1 A flow chart indicating the algorithm at work in quantum trajectory theory...... 22

4.1 A lambda system with three decay channels interacting with two near resonant laser beams...... 25 4.2 A diagrammatic representation of the sum over processes in- volved in EIT...... 26 4.3 (a) A plot of Re[α(ω)] versus δ/Γ for an EIT atom. (b) A plot of Im[α(ω)] versus ∆/Γ for an EIT atom. Both (a) and (b) are for the case of Raman resonance. (c) A plot of Re[α(ω)] versus δ/Γ as predicted by Lorentz oscillator theory for comparison with (a). (d) A plot of Im[α(ω)] versus δ/Γ as predicted by Lorentz oscillator theory for comparison with (b)...... 33

5.1 Heat plots of the probe Rabi frequency for a pulse propagat- ing through a gas of EIT atoms with time on the x-axis and 0 distance on the z-axis for the case Γ1 = Γ = 0. From left to right the storage times are 0.6T , 1.2T , 1.6T , 2.0T where

T = 1/Γr for the top row. (a) The case where Γr = 10Γ3. (b)

The case enhanced by selective radiance where Γr = 20Γ3. . . 38

v 5.2 The relationship of the spin and the magnetic field at (a) the beginning of the radiative process, (b) halfway through the radiative process when the intensity is the greatest, and (c) at the end of the process...... 43 5.3 Illustration of atoms coupled to a tapered optical nanofiber. . 48

7.1 An illustration of the situation studied in Cavity QED. . . . . 57 7.2 The atomic inversion vs time in the Jaynes-Cummings model. 59 7.3 The atomic inversion (blue curves) and photon number (green curves) vs time in the Rabi mode. In (a) the atom is initially in the excited state and the initial photon number is 3. In (b) the atom is initially in the ground state and the initial photon number is 0...... 61 7.4 The atomic inversion (blue curves) and photon(green curves)

number vs time in the anisotropic Rabi model for g1 = ω = ωeg

with (a) g2 = 0.01g1, (b) g2 = 0.1g1, (c) g2 = 0.5g1, (d)

g2 = 2g1, (e) g2 = 5g1 and (f) g2 = 10g1. Notice that for g2 ≈ 0

Jaynes-Cummings dynamics are recovered, for g2 = g1 the

Rabi model dynamics recovered, and for g2 >> g1 a distinct beating appears...... 63

8.1 Illustration of the lattice created by two linearly polarized beams perpendicular to each other and propagating in oppo- site directions. The result is periodic net right and left circu- lar polarization. In between the lattice sites the polarization varies continuously between these two extremes. The atoms are trapped in the locations of pure circular polarization and oscillate about these points as shown, with a characteristic

vibrational frequency denoted by ωv...... 67

8.2 Energy level diagram of a six level,Fg = 1/2 → Fe = 3/2, atom. 67

vi 8.3 Illustration of the five possible processes for an atom in the +1/2 ground state. A blue arrow signifies that the process

involves a right circularly polarized photon, σ+, red arrow indicates a linearly polarized photon, π, and a green arrow

indicates a left circularly polarized photon, σ−. (a) A σ+ is

absorbed, another σ+ is emitted and the atom remains in the same state. (b) A π is absorbed, another π is emitted and the

atom stays in the same state. (c) A π is absorbed, a σ− is

emitted and the atom changes state. (d) A σ− is absorbed, a

π is emitted and the atom changes state. (e) A σ− is absorbed,

a σ− is emitted and the atom changes state...... 68 8.4 Examples of position vs. time and momentum vs. time be- havior in an optical lattice as predicted by the semi-classical algorithm for an atom beginning at rest at origin in the -1/2 ¯h2k2 state with (a) U0 = 20ER and (b) U0 = ER where ER = m2 is the photon recoil energy...... 73 8.5 Examples of position vs. time and momentum vs. time be- havior in an optical lattice as predicted by the modified semi- classical algorithm for an atom beginning at rest at origin in

the -1/2 state with (a) U0 = 20ER and (b) U0 = ER where ¯h2k2 ER = m2 is the photon recoil energy...... 75 (2) 8.6 An example of the σ+ − σ+ g (τ) predicted by the modified semi-classical algorithm...... 76 8.7 Example of position vs. time and momentum vs. time be- havior in an optical lattice as predicted by the modified semi- classical algorithm for an atom beginning at rest at origin in

the -1/2 state with U0 = ER allowed to run for long enough for the atom the change lattice sites many times...... 77

vii 8.8 An example of position vs time and momentum vs time be- havior with a constant applied force as simulated based on the modified semi-classical algorithm...... 78 8.9 A picture of the single photon counting module. Connected to the left side of the detector is multimode optical fiber. Con- nected to the right side is the 5V DC power supply and two BNCs. One of the BNC carries the TTL output pulse when a photon is detected. The other BNC is a gating input. When the gate has a TTL high pulse it should turn the detector off. 79 8.10 A picture of the (a) outside and (b) inside of the FPGA mod- ule. The BNC ports on the front of the FPGA module are well labeled. The output from the photon counting module should be connected to one of the ports labeled detector. The other important port is the one labeled external clock. It may be used to have the FPGA module keep track of shorter timescales. 80 8.11 The averaged white light data. It should show a flat line at g(2)(τ) = 1 but it shows a distinct decay...... 81 8.12 The unaveraged white light data for (a) 6446 counts and (b) 64460 counts. In case (a) th value is clearly elevated above one. In case (b) it is not so obviously above one but it most likely is...... 82 8.13 The unaveraged white light data for (a) 6446 counts and (b) 64460 counts as calculated by the shift register program. Again, in case (a) th value is clearly elevated above one. In case (b) it is not so obviously above one but it most likely is...... 84

C.1 The atomic inversion vs time in the Jaynes-Cummings model. 99 C.2 The atomic inversion vs time in the Jaynes-Cummings model for an initial state that is mixed...... 101 C.3 The atomic inversion vs time in the damped Jaynes-Cummings model for an initial state that is mixed...... 102

viii C.4 The atomic inversion vs time in the damped Jaynes-Cummings model for a coupling with a sinusoidal time dependence for an initial state that is mixed...... 103 C.5 The atomic inversion vs time in the damped Jaynes-Cummings model for a coupling with a sinusoidal time dependence for an initial state that is mixed produced using quantum trajec- tories. In (a) only one trajectory was used and in (b) 100 trajectories were averaged over...... 104 C.6 The steady state atomic inversion vs coupling strength in the driven Jaynes-Cummings model...... 106 C.7 The steady state population of level |3i vs detuning for a lambda system showing the famous EIT dip...... 108

H.1 The light tighting apparatus from the (a) front and (b) back. The opening in the front is covered with lens tissue and marked with pencil. Even with the light tighting the flashlight was not shown directly at the apparatus but off at an angle. The laser light was shown at the very edge of the opening with a current of 90mA...... 136

ix ACKNOWLEDGEMENTS

Foremost, I would like to thank my thesis advisor, Dr. Perry Rice for all the important lessons he has taught me and his constant encouragement. I must also acknowledge the many great teachers I have add, including my other research advisor Dr. James Clemens and committee members Dr. Samir Bali and Dr. Carlo Samson for teaching me the methods and joys of quantum mechanics. I also must thank Dr. Xiaodong Qi and Dr. Ivan Deutsch from the University of New Mexico who helpfully supplied the code used to solve for the decay rates and frequency shifts of atoms near an optical nanofiber. I am also indebted to my friends and fellow students, including but not limited to: Nazar Al-Aayedi, Zeeshan Ali, Dharma Raj Basaula, Subhash Bhatt, Ben Blankartz, Patrick Carroll, Lyndon Cayayan, Jijun Chen, Ken DeRose, Ajithamithra Dharmasiri, Billy Drake, Arkan Hassan, Martin Heidelman, Patrick Janovick, Daniel King, Eitan Lees, Mitch Mazzei, Anthony Rapp, Jayson Rook, Micheal Saaranen, Joshua Schussler, Dinesh Wagle, Anthony Young, and Sara Zanfardino for many helpful discussions and for making the past two years an enjoyable experience. Above all, I must thank my family and loved ones, especially, my parents and Elizabeth, without whom I would never have achieved this.

x Chapter 1

Introduction

First, we will examine electromagnetically induced transparency (EIT)[1][2][3][4][5][6][7] in the case that additional interactions are allowed between the atoms. We will examine EIT in the case that ’spin flip’ interactions are allowed and in the case that the atoms radiate collectively. By ’spin flip’ interactions we essentially mean interactions in which an excitation in one atom can become an excitation in another atom with negligible loss. When atoms radiate col- lectively they may effectively radiate more quickly or more slowly than an isolated atom would. These questions are of particular interest because of their applications to quantum information processing. Many quantum information processing devices will require some physical hardware to function as a memory. However, this is difficult since measuring a single quantum state does not reveal the exact state due to the proba- bilistic nature of measurement in quantum mechanics. In other words, this is a challenge because quantum tomography, the process of determining the quantum state based on repeated measurements on an ensemble of identi- cally prepared states, is an open and difficult problem[8] when only a finite number of samples is available. Even if good procedures for quantum state determination were available it could take many classical bits to record a single qubit of information. The alternative solution is to develop hardware

1 that can store a qubit directly. One physical realization of a qubit is the polarization state of a photon. A memory that can store a photon state is call a photonic memory. One proposed photonic memory is based on the phenomenon of stopped light or light storage[4]. In this phenomenon the information about the state of the photon is transfered to the matter where it is held until some later retrieval time. Specifically, it is transfered to atoms exhibiting EIT. EIT is a non- perturbative effect in which the presence of a strong control laser beam can make a medium transparent to a weak probe beam it was previously opaque to. This effect occurs in three level atoms and is a result of coherent interfer- ence between the levels. Associated with this effect is a rapid change of the index of of the medium with frequency and therefore a low for the light propagating through the medium. This is referred to as slow light[4][6][7][9]. If the control beam is manipulated properly, the group velocity of the light can be reduced to zero resulting in stopped light[4][6][10]. One problem with this photonic memory is that the fidelity of the stor- age is limited by the spontaneous emission rate of the atoms[11]. However, while the spontaneous emission rate of isolated atoms is determined solely by properties of the atom the spontaneous emission rate of a collection can be modified by coherent interference between the atoms. When this inter- ference increases the spontaneous emission rate, it is referred to as suppradi- ance and when it decreases the spontaneous emission rate it is referred to as subradiance[11]. The phenomena of superradiance was first investigated by Dicke in 1954[12]. Since then it has been extensively studied[6][13][14]. These effects usually only play a role when the atomic separation is less than the wavelength of the emitted light[13]. However, recent advances in nanotech- nology have made nanophotonic waveguides available. Since the field inside the waveguide does not decay, or decays much more slowly, the atoms can un- dergo the same interference effects when separated by distances significantly greater than a wavelength. For example, this effect has been used for appli-

2 cations in quantum metrology[15][16]. This suggests that when the modes of a nanophotonic waveguide are properly engineered, three level atoms around the waveguide can serve as a higher fidelity photonic memory[11]. However, consideration of this problem also leads one to consider the problem of EIT with spin flip interactions since any situation in which the atoms may radiate collectively is likely to be a situation in which they may trade excitations because the atoms will likely either be close together or be linked by some dielectric. Rather than examine particular atomic geometries we will introduce an effective coupling between the atoms, the form of which would have to be derived from first principles for any particular circumstance. Next, we study the anisotropic Rabi model which may be viewed as a gen- eralization of the Jaynes-Cummings model. The Jaynes-Cummings model is the central model of cavity quantum electrodynamics (QED)[7][17][18][19]. It models the interaction of a two level atom with a single field mode. This model is quite general and emerges outside of atomic physics in a number of situations. In atomic physics, it actually emerges as a simplification of the Rabi model which includes so called non-energy conserving terms[20][21]. The Rabi model contains some new phenomena. For example, as the cou- pling strength is tuned, this system exhibits a phase transition[22]. We re- view these models and then discuss the anisotropic Rabi model in which the relative coupling of the energy conserving and energy non-conserving terms becomes arbitrary[23]. This model has been physically realized, for example in strongly coupled superconducting circuits[24]. In particular, in the in- teraction picture, we find Hamiltonian eigenstates with zero eigenvalues for non-equal couplings. Chapter 8 is conceptually separate from the rest of this manuscript. It discusses an experiment preformed under the supervision of Dr. Samir Bali on photon counting in a dissipative optical lattice. Optical lattices, which allow neutral atoms, usually alkalis, to be trapped at temperatures below the Doppler limit in specific locations known as lattice sites, were first discussed

3 by C. Cohen-Tannoudji in 1989[25]. Optical lattices are interesting in their own right but have also found wide applications, for example in quantum information[26]. Dr. Bali’s group is interested in using the optical lattice to produce thermally directed motion of the atoms. That is, to create a ratchet. Such cold atom ratchets have been studied previously[27]. Dr. Bali’s group hopes to use the ratchet to simulate biomolecular motors in an effort to understand the origin of their surprisingly high efficiency[28]. Correlations between the types of emitted from the lattice carries important information about the lattice[29]. To understand how this works think of each lattice site as a potential well the atom may become trapped in. When trapped in some of the wells the atom will emit only right circularly polarized, σ+, light. When the atom is trapped in an adjacent site, it will radiate only left circularly polarized, σ−, light. So that if the observation of a

σ+ photon is only correlated with the observation of a σ+ photon some ’long’ time later it indicates that the atoms are well trapped. On the other hand, if the observation of a σ+ is correlated with the observation of a σ− photon a short time later this indicates that the atoms are not well trapped. These types of experiments have been preformed before but many details remain to be worked out[29][30][31]. For example, the behavior in the limit of shallow wells has not been extensively studied. These ideas will be explored further and the experimental techniques will be discussed in Chapter 8. The layout of the manuscript is as follows: In Chapter 2 the density opera- tor is introduced, the master equation is derived in a manner that emphasizes tracing over the reservoir degrees of freedom, and quantum trajectory theory is discussed. In Chapter 3 the numerical techniques used throughout are briefly discussed with a special emphasis on the use of the Quantum Tool- box in Python (QuTiP)[32][33], although readers interested in the details of any particular simulation will have to refer to the appropriate appendix. Chapters 4 though 6 discuss EIT in the situations mentioned above. Chap- ter 7 discusses the generalizations of the Jaynes-Cummings model. Finally,

4 chapter 8 discusses photon counting in a dissipative optical lattice.

5 Chapter 2

Open Quantum Systems

In the normal formulation of quantum mechanics[34], the state of a system is described by an element of a Hilbert space and is denoted by

|ψi. (2.1)

These elements are referred to as kets. Properties of the Hilbert space imme- diately provide an alternative representation as elements of the dual vector space referred to as bras. These are denoted by hψ|

hψ| = |ψi†, (2.2) where the dagger indicates Hermitian conjugation. Except for in the case of measurement, the time evolution of the kets is unitary and given by the Schrodinger equation i |ψ˙ i = H|ψi, (2.3) h¯ whereh ¯ is Planck’s constant divided by 2π and H is the Hamiltonian of the system. The time evolution of the bra representation is given by the Hermitian conjugate of this equation. Unfortunately, these representations have some issues. Notably, they cannot handle statistical uncertainty in the

6 state. As we shall see this also means they cannot handle open quantum systems[19][35]. If we have perfect knowledge of a quantum system we assign it a state exactly as above. The state is then referred to as a pure state. If we have imperfect knowledge of a quantum system we require a new way to assign it a state that reflects this lack of information. Let’s review how this is done in classical mechanics[36]. If we have perfect knowledge of a classical system, its canonical coordinates and momenta are assigned exact values and their time evolution is given by Hamilton’s equations

q˙i = (qi,H) (2.4)

p˙i = (pi,H), (2.5) where the qi are the canonical coordinates, the pi are the canonical momenta, and the () are the Poisson brackets. If we have imperfect knowledge of the system it is described by Liouville’s equation of motion

∂ D = (H,D), (2.6) ∂t where D is a probability distribution on phase space. There is a representation is called the density operator, which can be appropriately generalized[34]. It is given by

X ρ = Pi|ψiihψi|, (2.7) i where Pi is the probability that the system is in the state represented by

|ψii. To understand why this is a good generalization of the probability den- sity consider the function of a probability density in calculating expectation values Z hOi = DOdV, (2.8) V

7 where O is an observable and V is the volume of phase space. For a pure quantum state, expectation values are calculated according to

hOi = hψ|O|ψi. (2.9)

However, since the trace of a scalar is simply the original scalar this may be written as

T r (hOi) = T r (hψ|O|ψi) = T r (|ψihψ|O) = T r(ρO), (2.10) where the cyclic property of the trace has been used. This equation is of the same form as the classical equation if the integral generalizes to the trace and D generalizes to ρ. This calculation has also shown how to generalize the calculation of expectation values from pure states to mixed states. The time evolution of the ρ follows from the Schrodinger equation

X h   i i X ρ˙ = P |ψ˙ i hψ| + |ψi hψ˙ | = − P (H|ψihψ| − |ψihψ|H) . i h¯ i (2.11) This may be written compactly as

i ρ˙ = − [H, ρ]. (2.12) h¯

This equation is usually referred to simply as the Schrodinger equation but it is sometimes called the von Neumann equation or the Liouville equation. Notice, there is an important physical distinction between coherent su- perpositions and statistical mixtures[7][34]. The former is a fundamental property of quantum mechanics. It is responsible for the famous double slit experiment and electromagnetically induced transparency which will be dis- cussed below. The latter is not fundamental but rather arises out of our ignorance about the state of the system. The two kinds of uncertainty can also lead to different results when calculating expectation values. Consider

8 the following two states in the density operator representation

1 ρ = (|1ih1| + |1ih2| + |2ih1| + |2ih2|) (2.13) sup 2 1 ρ = (|1ih1| + |2ih2|) . (2.14) mix 2 The first is a coherent superposition of the states |1i and |2i while the latter is a statistical mixture. They both give the same probabilities for measuring the system to be in states |1i or |2i but they do not give the same expectation value for all observables. As an example, consider the observable

O = |1ih2| + |2ih1|. (2.15)

The expectation values of this observable for the two states above are

hOisup = T r(ρsupO) = 1 (2.16)

hOimix = T r(ρmixO) = 0. (2.17)

Some important properties of the density operator are discussed in more detail in Appendix A. This discussion is important for our purposes because open quantum sys- tems will naturally become mixed even if they are initially pure. To un- derstand why this happens consider a classical system of interest coupled to an environment that has many more degrees of freedom than the classical system. Clearly the environment can affect the system but any energy that flows from the system to the environment will be spread out among all of the many degrees of freedom. This means two things. First, the environment will not be appreciably affected by the system. Second, energy and information that flow into the environment will not flow back to the system. A system that has this property is described as Markovian[35]. As concrete example consider an ice cube placed in the ocean. The ocean will cause the ice cube

9 to melt but the ice cube will not appreciably decrease the temperature of the ocean. The ice cube will also never spontaneously reassemble itself in part or in whole. Clearly, if we only keep track of the system degrees of freedom in- formation about the state of the system will be lost in time. In the quantum generalization, the system becomes entangled with the environment. Since the environment degrees of freedom are not kept track of, this leads to a loss of information and system initially in a pure state evolving to a mixed state. The appropriate generalization of the Schrodinger equation was worked out in the early 1960s[37][38]. To see how it arises we follow a well trodden path[39][40], consider the combined state of a system of interest and a reser- voir that the system interacts with described by a density operator ρ and assume that initially the system and the reservoir are not entangled so we may write

ρ = ρs ⊗ R, (2.18) where ρs depends only on the state of the system and

X R = Pn|nihn|, (2.19) n describes the state of the reservoir. In order to find a description that only references the state of the system we trace over the reservoir degrees of free- dom. The system density operator at a time t is then related to the initial system density operator by

† X X p † p ρs(t) = T rR(Uρs ⊗ Pn|nihn|U ) = Pnhµ|U|niρs(0)hn|U |µi Pn, µ n (2.20) where the |µi span the reservoir section of the Hilbert space. Defining the Kraus[41] operators to be

Mµ,n(t) = Mν(t) = hµ|U|ni, (2.21)

10 where the first equal sign is a relabeling. The system density operator at time t is then given by

X † ρs(t) = Mν(t)ρsMν (t) ≡ a(t)[ρs(0)] (2.22) ν

This map is referred to as completely positive since it will always map a positive operator to a positive operator even if there are other subsystems present on which it must act as the identity. An important property of this map is that it is trace preserving.

! ! X † X † X T rs (ρs(t)) = T rs Mµ,nρs(0)Mµ,n = T rs Pnhn|U |µihµ|U|niρs(0) = T rs (ρs(0)) . µ,n n µ (2.23) It is important to note that this property held true because

X † Mν Mν = 1s. (2.24) ν

This is sometimes referred to as the completeness property of the Kraus operators. One subtlety here is that the map is only trace preserving if the |ni and the |µi are capable of forming resolutions of the identity in some section of the Hilbert space. To first order in dt we may write one of the Kraus operators as

M0 = 1s + G(t)dt, (2.25)

and define a new set of operators Lν(t) by √ Mν(t) = Lν(t) dt. (2.26)

11 The completeness property of the Kraus operators implies

† X † M0 M0 = 1s − Mν Mν. (2.27) ν=1

To first order in dt this implies

†  X † 1s + G(t) + G (t) dt = 1s − Lν(t)Lν(t)dt (2.28) ν=1

This implies that

1 X G(t) = − L† (t)L (t) + A(t), (2.29) 2 ν ν ν where A(t) is anti-Hermitian. Plugging this into the map above and differ- entiating with respect to t gives

X † † † ρ˙s = [A, ρs] + (2LiρLi − Li Liρ − ρLi Li). (2.30) i

This suggests that a reasonable guess for A(t) is

i A(t) = − H. (2.31) h¯

This turns out to be correct as will be shown below. The differential equation for ρs then becomes

i X ρ˙ = − [H, ρ ] + (2L ρL† − L†L ρ − ρL†L ), (2.32) s h¯ s i i i i i i i where the Li are called Lindblad or jump operators. This equation is usually referred to as the master equation in diagonal Lindblad form or simply the master equation. Sometimes it is also called the Lindbladian or the Gorini- Kossakowski-Sudarshan-Lindblad (GKSL) equation after a group of scientists

12 who developed this formalism based on semigroups[42][43]. In atomic and p γ optical physics applications the Li are usually operators of the form 2 |iihj| where the γ are decay rates and |ji is a state of higher energy than |ii. Strictly speaking however, this is only true at zero temperature because at nonzero temperatures thermal photons may cause jumps to higher energy levels. For example, the master equation that describes the evolution of two level atom interacting with the reservoir of electromagnetic field modes is

ω γ γ ρ˙ = −i [σ , ρ]+ (¯n+1)(2σ ρσ −σ σ ρ−ρσ σ )+ n¯(2σ ρσ −σ σ ρ−ρσ σ ), 2 z 2 − + + − + − 2 + − − + − + (2.33) where ω is the light shifted frequency difference between the ground and ex- cited states, γ is the atomic decay rate,n ¯ is the thermal photon number and

σz and σ± are the inversion and atomic raising/lowering operators, respec- tively. In the second term the lowering operators play the roll of the jump operators representing spontaneous and stimulated emission. In the third term the raising operators play the roll of the jump operators representing absorption. Since this is an important example in atomic physics we should note that at optical frequencies and room temperaturen ¯ ≈ 0, so we can make the zero temperature approximation and the master equation reduces to

ω γ ρ˙ = −i [σ , ρ] + (2σ ρσ − σ σ ρ − ρσ σ ). (2.34) 2 z 2 − + + − + −

The approach most commonly taken to derive the form of A(t) and the jump operators Li, and therefore to obtain the master equation from first principles, is to the write the Schrdinger equation as an integro-differential equation and then trace over the reservoir. This path is pursued in Appendix B. Here, we will, instead, explicitly write down the Kraus operators using a particular basis to preform the trace, plug these into the map and then differentiate. To proceed, we must specify the state of the reservoir. By far the most common choice is for the reservoir to be a collection of harmonic

13 oscillators in a thermal state[35]

O X eβn¯hωk R = |n(ω )ihn(ω )|, (2.35) Z k k k n

where ωk is the frequency of oscillator labeled by k, the |n(ωk)i are eigen-

states of the harmonic oscillator Hamiltonian, β = kBT , where kB is Boltz- mann’s constant and T is the temperature, and Z is the partition function. We will also use these number states as a basis for the partial trace. Work- ing in the interaction picture and to second order in dt the Kraus operators become

−βn¯hω/2   Z  2 ZZ e i i 0 0 0 Mµ,n = √ h[µ(ωk)]|1s+ − dtHint(t)+ − dtdt Hint(t)Hint(t )|[n(ω )]i, Z h¯ h¯ k (2.36)

where the brackets, [n(ωk)], indicate that the occupation numbers of all os- cillators must be specified. The interaction Hamiltonian will be taken to be of the form X † ∗ † Hint =h ¯ gij(ωk)sijak + gij(ωk)sijak (2.37) k,i,j,i6=j Since each term of the interaction Hamiltonian only involves one oscillator, k, it is convenient to split the Kraus operators up so that each one is associated with only one oscillator. There are three possible forms for these operators.

First, if |n(ωk)i = |µ(ωk)i, then

" # −βn¯hωk/2 ZZ e 0 2 X  † †  M0,n,ωk = √ 1s − dtdt |g(ωk)| (n(ωk) + 1)sij(ω, t)sij(ω, t) + n(ωk)sij(ω, t)sij(ω, t) , Z i,j,i6=j (2.38)

where the sij(ω, t) are system operators that connect state i to state j. If

14 |n(ωk)i = |µ + 1(ωk)i, then

−βn¯hωk/2 Z e ∗ p X M+,n,ω = −i √ g (ωk) n(ωk) + 1 dt sij(ω, t). (2.39) Z i,j,i6=

Finally, if |n, ω0i = |µ − 1, ωi

−βn¯hωk Z e /2 p X † M−,n,ωk = −i √ g(ωk) n(ωk) dt sij(ω, t). (2.40) Z i,j,i6=j

Technically, terms that involve sij(ω, t)sij(ω, t) are possible but they are al- ready second order in dt and we wish to eventually deal with products of Kraus operators so these will not remain second order. Accordingly, they have been dropped. At this point we could use these Kraus operators to write down jump operators and A(t) but it is more enlightening to plug them back into the map and study the master equation as a whole. When plugging these back into the map we keep terms only to second order in dt. We must now add up all the terms associated with various values of n(ωk) and k. Two important e−βnhω¯ things happen. First, we will get sums of the form Z (n + 1) =n ¯ + 1. The effect is to replace all of the n withn ¯. There will also be time integrals of the form τ Z 0 dt0e−i(ω−ωij )(t −t), (2.41) t where ωij is the frequency difference between system states since in the in-

−iωij t teraction picture the operators are of the form sije where sij are the Schrodinger picture operators. Integrals of this form may be familiar since they emerge in the Wigner-Wiesskopf theory of spontaneous emission[39]. This integral can be evaluated by making the Markoff approximation. That 0 is, if reservoir correlations ha†ae−i(ω−ω0)(t −t)i are sharply peaked around t = t0

15 then we may extend the integral to infinity and write the integral as

τ   Z 0 1 0 −i(ω−ωij )(t −t) lim dt e = πδ(ω − ωij) − iP , (2.42) τ→∞ t ω − ωij where P denotes Cauchy’s principle part. The next step is to replace the sum over k with an integral over ω

X Z → dωD(ω), (2.43) k where D(ω) is the density of states. After this coarse graining is preformed, the remaining integrals are of the form.

∞ Z Z 0 Γ i 2 0 −i(ω−ωij )(t −t) ij dω|g(ω)| D(ω)(¯n+1) dt e = (¯n+1) − δE+,ij, (2.44) t 2 h¯ where 2 Γij = 2π|g(ωij)| D(ωij), (2.45) and Z D(ω)(¯n(ω) + 1)|g(ω)|2 δE+,ij = P dω. (2.46) ω − ωij

Similar integrals appear for then ¯ terms. In atomic physics the Γij usually ar decay rates and the δE+,ij are light shifts such as the partial Lamb shift. The light shifts group with the system Hamiltonian to provide terms of the form

Z 2 Z 2 X † D(ω)¯n(ω)|g(ω)| † D(ω)(¯n(ω) + 1)|g(ω)| δH =h ¯ sijsijP dω+sijsijP dω ω − ωij ω − ωij i,j,i6=j (2.47) Plugging all of this back into the map and differentiating as above gives

i X Γij Γij ρ˙ = − [δH, ρ ]+ (¯n+1)(2s ρs† −s† s ρ−ρs† s )+ n¯(2s† ρs −s s† ρ−ρs s† ). s h¯ s 2 ij ij ij ij ij ij 2 ij ij ij ij ij ij i,j,i6=j (2.48)

16 This is the expected form of the master equation in the interaction picture.

Note that we could’ve explicitly written down the form of the Lµ(t) and A(t) but it was not necessary. It is somewhat more common to work with the master equation in the Schrodinger picture

i X Γij Γij ρ˙ = − [H +δH, ρ ]+ (¯n+1)(2s ρs† −s† s ρ−ρs† s )+ n¯(2s† ρs −s s† ρ−ρs s† ). s h¯ s s 2 ij ij ij ij ij ij 2 ij ij ij ij ij ij i,j,i6=j (2.49) As mentioned above, even in the time evolution of pure states there is one aspect that is not unitary and destroys coherent superpositions. This is measurement. This suggests that the interaction of a system with its environment can be regarded as measuring the system but not telling us about the result. This turns out to be a very useful way of thinking about this interaction. It leads to a way of studying open quantum system time evolution called quantum trajectory theory that was independently developed in the late 1980s and early 1990s by Howard Carmichael[44]. If a system is continu- ously measured, it will remain in a pure state. An alternative method of analyzing open quantum system time evolution is then to allow the sys- tem to be represented by a ket and undergo time evolution according to the Schrodinger with a non-Hermitian Hamiltonian punctuated by quantum jumps. In atomic physics, the jumps usually correspond to the atom inco- herently transitioning to a lower energy state and emitting a photon. The Hamiltonian must be non-Hermitian in order to account for the possibility that the absence of a jump indicates that the atom was already in the ground state. This perspective is usually referred to as quantum trajectory theory. As discussed above the evolution of the system is given by the following map

i i X ρ(t+dt) = a(dt)[ρ(t)] = (1− H dt)|ψ(t)ihψ(t)|(1+ H† )+dt L |ψ(t)ihψ(t)|L† , h¯ eff h¯ eff µ µ µ=1 (2.50)

17 where i X H = H − L† L , (2.51) eff h¯ µ µ µ is an effective Hamiltonian that contains both unitary time evolution and decay. This equation may be interpreted stochastically as follows. In time dt the system jumps with probability

† Pµ = dthψ(t)|LµLµ|ψ(t)i, (2.52) according to Lµ|ψ(t) |ψ(t)i → † , (2.53) hψ(t)|LµLµ|ψ(t)i where the denominator is present to ensure that the state remains normalized. Alternatively, with probability

X X i i P = 1− P = 1−hψ(t)| L† L |ψ(t)idt = hψ(t)|(1+ H† dt)(1− H )|ψ(t)i, 0 µ µ µ h¯ eff h¯ eff µ (2.54) where in the last equality only terms first order in dt were kept, the system does not jump. Instead it evolves according to

i −iHeff dt/¯h (1 − ¯h )|ψ(t)i e |ψ(t)i |ψ(t)i → i i ≈ † iH t¯h −iH dt/¯h hψ(t)|(1 + ¯h )(1 − ¯h |ψ(t)i) hψ(t)|e eff e eff |ψ(t)i (2.55) This point of view suggests that a Monte Carlo algorithm may be used to analyze this time evolution. This algorithm is discussed in detail in the next section.

18 Chapter 3

Numerical Techniques

In most cases of interest, it is difficult or impossible to find closed form solutions to the master equation. In such situations, we must turn to a numerical approach. This manuscript makes extensive use of the Quantum Toolbox in Python (QuTiP)[32][33]. QuTiP is a module in Python that greatly simplifies the process of obtaining numerical solutions to quantum mechanics problems. The properties and uses of QuTiP are reviewed briefly here and discussed extensively in Appendix C. QuTiP includes an important class of object referred to as a Qobj (short for quantum object). These objects are somewhat like arrays but QuTiP keeps track of some extra information about them. The most notable piece of extra information QuTiP keeps track of is the tensor product structure of the Hilbert space in which the object lives. For example, one might be interested in the dynamics of two two-level quantum systems (two qubits). The Hilbert space needed to describe this system is of dimension 2 ⊗ 2. An operator that might be needed is

(2) σ+ = σ+ ⊗ 12, (3.1)

(2) where σ+ is the atom that moves the first qubit from its |1i state to its |2i

19 state and 12 is the two dimensional identity operator. This is an acceptable operator because it is constructed from the tensor product of two operators from two dimensional Hilbert spaces in the correct order. One might also need the identity and think to construct it as

1 = 14. (3.2)

However, QuTiP will reject this operator as it does not have the correct tensor product structure. The identity must be constructed as

1 = 12 ⊗ 12. (3.3)

This feature of QuTiP is occasionally frustrating but mostly it serves the important task of keeping you honest. It can also be useful when trying to debug a program. QuTiP is capable of performing various types of time evolution for quan- tum systems. The most commonly used is the master equation solver called mesolve. The master equation solver function takes as inputs the Hamilto- nian of the system, the initial state of the system, a list of times, a list of collapse operators, and list of operators to take the expectation values of. It returns either the state of the system at the list of times or the expectation values of the specified operators at the specified times. If the states are re- quested, expectation values may be calculated from the list of states with the QuTiP function expect. Of course, it is also straight forward to calculate the expectation values from the formula in the preceding chapter but the expect function avoids the need to explicitly loop through the list of states every time. QuTiP’s master equation solver actually performs the time evolution us- ing Scipy’s ode integrator. Scipy’s integrator is already straight forward to use so you might be led to think that QuTiP is an unnecessary intermedi- ary. However, QuTiP makes life considerably easier by organizing the master

20 equation into a form that the integrator will accept, calculating expectation values at the specified times, dealing with time-dependent Hamiltonians, and performing many other tasks automatically. As mentioned above, the use of the quantum trajectory theory to solve problems in quantum mechanics inherently involves a numerical approach. The algorithm that must be used is outlined nicely in Figure 3.1. The algo- rithm may be summarized as follows. The systems begins, and remains, in a pure state. A random number is selected and compared to the probabilities of different jumps occurring to determine which jump occurs if any occur at all. If no jump occurs, the system is evolved according to a non-hermitian Hamil- tonian. The importance of it being non-hermitian can be seen immediately. The idea is to update the state we assign to the system at each time step based on the new information gained during that time step. If a jump occurs the information gain is obvious. The system is now in whatever state results from the action of that particular collapse operator. In the case of no jump the increase in information is more subtle but still present. If no jump occurs maybe we were wrong about the system being in a state that could collapse at all. That is, if we don’t observe a jump, we should decrease the ampli- tude of the state that is in any state that can collapse. The non-hermitian Hamiltonian ensures this happens. A jump amounts to a measurement and a non-hermitian Hamiltonian results in a non-unitary time evolution operator so either way the state must be normalized each time step. The time is then advanced one step and then another random number is drawn. In addition to the the standard functions of QuTiP we also make use of the module MaxwellBloch. This module is an expansion of QuTiP that allows pulse propagation to be easily studied. In particular, it has a function called MBSolve which allows the Maxwell-Bloch equations to be solved nu- merically. An example of the use of this function is given in Appendix E. The algorithm at work can be summarized simply. The algorithm discretizes the space through which the pulse is to propagate. At the first point in space

21 Figure 3.1: A flow chart indicating the algorithm at work in quantum tra- jectory theory.

22 it uses a QuTiP time evolution function to solve for the matter dynamics at that point for all times. Then it uses that information to produce the time-dependent Hamiltonian for the next space point then uses QuTiP to solve the dynamics at that point. This process is repeated until the end of the space is reached.

23 Chapter 4

Electromagnetically Induced Transparency

This chapter discusses the basics of electromagnetically induced transparency (EIT)[1][2][4][5][6][7]. First, a fundamental processes perspective is taken to demonstrate how this phenomenon emerges as a result of the interference be- tween amplitudes for different quantum processes. Then an adiabatic elimi- nation approach is used to obtain the index of refraction of an EIT medium.

4.1 Fundamental Processes Approach

Consider an atom interacting with two laser fields that are resonant to the transition between two separate states, |1i and |2i ,to a common third state, |3i. Such a system is referred to as a three level atom. There are three possible types. If |3i has the highest energy it is referred to as lambda-type system, if its energy is in between that of the other two states it is referred to as a ladder-type system, and if it has the lowest energy of the three levels it is referred to as a v-type system. The situation where Γ1 is the rate at which the excitation is radiated from |3i out of the three levels, Γ2 is the rate at which |3i decays to |1i, and Γ3 is the rate at which |3i decays to |2i is

24 Figure 4.1: A lambda system with three decay channels interacting with two near resonant laser beams. shown schematically in Figure 4.1. The distinction between the decay rates is usually not important so it is convenient to introduce the definition

Γ = Γ1 + Γ2 + Γ3. (4.1)

We consider the case that the atom begins in state |1i and wish to cal- culate the probability for the atom to be found some time later in the state |3i. We simply have to add up the amplitudes for all the different ways the process could possibly occur. These are the amplitude for the atom to go from state |1i to state |3i (denoted 1 → 3 )and stay there, the amplitude for the atom to go from |1i to |3i to |2i to |3i ((1 → 3)(3 → 2)(2 → 3)), etc. That is, we wish to calculate h3|U(t, 0|1i) = 1 → 3+(1 → 3)(3 → 2)(2 → 3)+(1 → 3)(3 → 1)(1 → 3)+... (4.2) This may be diagrammatically represented as in Figure 4.2. We need to know the form of the amplitudes 1 → 3. They can be calculated from

25 Figure 4.2: A diagrammatic representation of the sum over processes involved in EIT.

Fermi’s golden rule or by expanding the time evolution operator in h3|U|1i, but they have a very intuitive form

t 1 → 3 = 3 → 1 = −i Ω , (4.3) 2 p where t is the interaction time and Ωp is the probe Rabi frequency so we won’t do that here. Similarly,

t 2 → 3 = 3 → 2 = −i Ω , (4.4) 2 c where Ωc is the control Rabi frequency. The amplitudes may be combined according to

1  t 3 (1 → 3)(3 → 2)(2 → 3) = −i Ω1Ω2, (4.5) 3! 2 p c where the initial factorial is necessary because otherwise there is no algebraic difference between (1 → 3)(3 → 2)(2 → 3) and (1 → 3)(2 → 3)(3 → 2) but the second of these is clearly unphysical so we must divide by the number of permutations. The total amplitude is then

t 1  t 3 h3|U(t, 0)|1i = −i Ω + −i (Ω3 + Ω2Ω ) + .... (4.6) 2 p 3! 2 p p c

26 If the control beam is much stronger than the probe beam,

Ωc >> Ωp, (4.7)

then we can keep only terms that are first order in Ωp. This simplifies the amplitude to ! t 1  t 3 t 1  t 2 h3|U(t, 0)|1i = −i Ω + −i Ω Ω2+.... = −i Ω 1 + −i Ω +..., 2 p 3! 2 p c 2 p 3! 2 c (4.8) Collecting the terms in this series gives

∞ 2n t X (−iΩct/2) Ωp h3|U(t, 0)|1i = −i Ω = − sin(Ω t/2), (4.9) 2 p (2n + 1)! Ω c n=0 c which becomes arbitrarily small as Ωp becomes arbitrarily small. This is Ωc what is meant when it is said that EIT is the result of coherent interference between the amplitudes for different processes.

4.2 Adiabatic Elimination

We will now use the method of adiabatic elimination to obtain quantitative results about EIT. The idea behind adiabatic elimination is simple. If

Γ ∆ + i >> Ωc, Ωp, (4.10) 2 then after transient phenomena have died out the dynamics will be such that

ρ33 = ρ31 = ρ32 = 0. (4.11)

To understand this intuitively recall that the amount of time an atom spends 1 in state |3i is determined by Γ and, via the time-energy uncertainty principle,

27 1 whereas the time scales on which the atomic dynamics play out are 1 and ∆ Ωc 1 . This means that the atom spends only a negligible amount of time in |3i Ωp compared to the time scale over which significant dynamics play out. This technique is often used to eliminate state |3i from the three-level dynamics thus simplifying the problem. This is similar to how we will use this technique but since we eventually wish to examine dynamics near resonance the condition for adiabatic elimination becomes

Γ >> Ω , Ω . (4.12) 2 c p

This condition will be assumed throughout the next section.

4.3 Dark States

Proceeding along a standard route[3], the effective Hamiltonian of such a system is

 Γ  h¯Ω h¯Ω H =h ¯ −∆ − i 1 |3ih3|−hδ¯ |2ih2|+ p |3ih1|+|1ih3| c |3ih2|+|2ih3|, eff 2 2 2 (4.13) where ∆ is the detuning of the probe transition, δ is the Raman detuning, Ωp is the Rabi frequency of the probe transition, and Ωc is the Rabi frequency of the control transition. The jump operators associated with this system are

r Γ L = 2 |1ih3| (4.14) 1 2 p L2 = Γ32|2ih3|. (4.15)

Based on the master equation in Lindblad form the coherences between the levels obey

iΩ iΩ Γ ρ˙ = i∆ρ + p (ρ − ρ ) − c ρ − ρ (4.16) 31 31 2 33 11 2 21 2 31

28 iΩ iΩ Γ ρ˙ = i(∆ − δ)ρ + c (ρ − ρ ) − p ρ − ρ (4.17) 32 32 2 33 22 2 12 2 32 iΩ iΩ ρ˙ = iδρ + p ρ − c ρ . (4.18) 21 21 2 23 2 31 In steady state the population of |3i is adiabatically eliminated. The solution of the first equation is 1 Ω ρ + Ω ρ ρ = p 11 c 21 . (4.19) 31 2 Γ ∆ + i( 2 ) This coherence is also zero implying

Ω ρ = − c ρ . (4.20) 11 Ωp 21

The steady state solution of theρ ˙32 equation under the same conditions is

1 Ω ρ + Ω ρ ρ = p 12 c 22 . (4.21) 32 2 Γ (∆ − δ) + i( 2 )

This coherence also vanishes implying

Ωc ρ12 = ρ21 = − ρ22, (4.22) Ωp where the fact that ρ is Hermitian has been used. Combining this with equation (4.18) gives 2 Ωc ρ11 = 2 ρ22. (4.23) ΩP Normalization requires 2 Ωc ρ11 = 2 2 (4.24) Ωp + Ωc 2 Ωp ρ22 = 2 2 . (4.25) Ωp + Ωc

29 Plugging this back in gives the value of the non-zero coherences,

ΩcΩp ρ12 = ρ21 = − 2 2 . (4.26) Ωc + Ωp

Thus a state has been specified with zero population in |3i and zero coherence between |3i and any other level. This state does not radiate and will not for all time. Accordingly, it is referred to as the dark state. It’s worth pausing to examine some of the properties of the dark state. First, note that the dark state is a pure state as can be checked by computing

4 4 2 2 2 Ωc + Ωp + Ωc Ωp T r(ρ ) = 2 2 2 = 1. (4.27) (Ωc + Ωp)

This will be discussed further below in the section on stopped light. It is also important to understand that the populations in the dark state depend on the relative intensities of the probe and control beams. Most notably, if there is no probe beam, then the dark state is simply |1i. This is intuitive since there is nothing to pump the population out of |1i. This means that if a system is allowed to reach steady state with the probe beam turned off and a weak probe beam is subsequently turned on, the system will be in a state that is near (but not identical to) the dark state. This is the situation in which EIT is usually considered and will be very important in what follows. Γ In summary, we have seen that as long as the adiabatic condition ( 2 >> Ωc, Ωp) is met, the three level atom will be driven into a state that does not interact with the electromagnetic field. This is somewhat intuitive since if at any point the atom enters a state that doesn’t interact with the electro- magnetic field it will clearly remain there. This phenomenon is referred to as coherent population trapping[7].

30 4.4 Electromagnetically Induced Transparency

Now consider a situation discussed briefly above where the probe beam is initially off and the control beam has pumped all of the population into the dark state, |1i. The probe beam is then turned on such that

Ωc >> Γ2 >> Γ1 >> Ωp >> Γ3. (4.28)

The Rabi frequencies are ordered in this way so that the system remains near the dark state. Γ3 must be very small as it carries population into |2i and the system away from the dark state. Γ1 should be smaller than Γ2 as

Γ1 carries population out of the three-level subspace. It’s important to note that this is no longer the adiabatic elimination condition since Ωc is large.

In this case the steady state solution to theρ ˙21 equation implies

Ω ρ = c ρ . (4.29) 21 2δ 31

Plugging this into the equation for ρ31 and using the fact that ρ11 ≈ 1 gives

−d31/2¯h ρ31 = 2 · Ep, (4.30) Ωc Γ ∆ − 2δ + i( 2 ) where Ep is the electric field of the probe beam, and we have used the defi- nition of the Rabi frequency

h3|d · E |1i Ω = 31 p . (4.31) p h¯

If the medium is taken to be linear, then the expectation value of the dipole moment in the direction of the probe field is

2 −|d31| /2¯h hd31i = 2 Ep = α(ω)Ep, (4.32) Ωc Γ ∆ − 2δ + i( 2 )

31 where α(ω) is the complex polarizability, given by

2 |d31| /2¯h α(ω) = 2 . (4.33) Ωc Γ ∆ − 2δ + i( 2 )

The imaginary part of the complex polarizability is given by

2 |d31| Γ/2 Im[α(ω)] = 2 (4.34) 2¯h Ωc 2 Γ 2 (∆ − 2δ ) + ( 2 )

The real part of α(ω) given by

2 2 Ωc |d31| ∆ − 2δ Re[α(ω)] = − 2 (4.35) 2¯h Ωc 2 Γ 2 (∆ − 2δ ) + ( 2 )

The real and imaginary parts of the polarizability are plotted in Figure 4.3 along with familiar plots of the real and imaginary parts of the polarizabil- ity predicted by Lorentz oscillator theory[7][14]. The imaginary part of the complex polarizability is related to the absorption in the medium. Lorentz oscillator theory predicts that it should be a Lorentzian function of the de- tuning peaked at resonance. The EIT polarizability has this shape far from resonance but dips to zero at resonance. This dip is the defining charac- teristic of EIT[7]. It indicates that the medium becomes transparent at a frequency it was previously opaque to. The real part of the polarizability is related to the dispersive response of the medium. The derivative of the real part of the polarizability with respect to the detuning determines the propagation speed of the light. In both cases the derivatives have a large magnitude at resonance but for atoms that obey Lorentz oscillator theory, the change in propagation speed is overshadowed by the strong absorption near resonance. An EIT atom can greatly modify the propagation speed of a pulse since absorption is small near resonance.

32 (a) (b)

(c) (d) Figure 4.3: (a) A plot of Re[α(ω)] versus δ/Γ for an EIT atom. (b) A plot of Im[α(ω)] versus ∆/Γ for an EIT atom. Both (a) and (b) are for the case of Raman resonance. (c) A plot of Re[α(ω)] versus δ/Γ as predicted by Lorentz oscillator theory for comparison with (a). (d) A plot of Im[α(ω)] versus δ/Γ as predicted by Lorentz oscillator theory for comparison with (b).

33 4.5 Slow Light

The complex index of refraction is related to the polarizability by

n(ω) = p1 + 4πNα(ω) ≈ 1 + 2πNα(ω), (4.36) where N is the density of the gas and the binomial approximation was used in the last step. The use of this approximation is valid near resonance where both Re[α(ω)] and Im[α(ω)] are small. The real part of the index of refrac- tion is related to the magnitude of the wave vector of the light by

ω ω |k| = Re[n(ω)] ≈ (1 + 2πNRe[α(ω)]) . (4.37) c c

The group velocity of the light is given by

d|k| v = ( )−1. (4.38) g dω

This derivative is given by d|k| 1  d  1  d  = 1 + 2πNRe[α(ω)] + 2πωN Re[α(ω)] ≈ 1 + 2πωN Re[α(ω)] , dω c dω c dω (4.39) where the last step is justified near resonance. The group velocity is then

c vg = d . (4.40) 1 + 2πωN dω Re[α(ω)]

d Within the transparency window dω Re[α(ω)] is large and the group velocity is greatly reduced. This is the phenomena of slow light. The derivative in this expression can be evaluated as

34 d |d |2 (4δ∆ + 2δ2 − Ω2) [(2δ∆ − Ω2)2 + δ2Γ2] − 4(2δ2∆ − δΩ2) [(2δ∆ − Ω2)(δ + ∆) + δΓ2] Re[α(ω)] = 31 c c c c . 2 2 2 2 2 dω h¯ [(2δ∆ − Ωc ) + δ Γ ] (4.41) After some algebra, the group velocity can be written as

 hA¯  vg = c 2 , (4.42) hA¯ + 2πωN|d31| [B − C]

where  2 2 2 22 A = (2δ∆ − Ωc ) + δ Γ , (4.43)

2 2  2 2 2 2 B = (4δ∆ + 2δ − Ωc ) (2δ∆ − Ωc ) + δ Γ , (4.44)

2 2  2 2 C = 4(2δ ∆ − δΩc ) (2δ∆ − Ωc )(δ + ∆) + δΓ . (4.45) When both beams are on resonance this reduces to

 2  h¯Ωc vg = c 2 2 . (4.46) h¯Ωc + 2πωN|d31|

Thus EIT can be used to greatly reduce the propagation speed of a pulse provided the number density of the gas can be made sufficiently large.

4.6 Stopped Light

On resonance, the Hamiltonian of this system is

h¯Ω h¯Ω H = p |3ih1| + |1ih3| + c |3ih2| + |2ih3|. (4.47) 2 2

By solving the characteristic equation it can be seen that the dark state

1 |ψ i = Ω |1i − Ω |2i, (4.48) dark p 2 2 c p Ωc + Ωp

35 is an eigenstate of this Hamiltonian with eigenvalue zero. The other two eigenstates are ! 1 Ω Ω |±i = √ p |1i + c |2i ± |3i , (4.49) p 2 2 p 2 2 2 Ωc + Ωp Ωc + Ωp and they have associated energies

h¯ q E = ± Ω2 + Ω2. (4.50) ± 2 c p

The adiabatic theorem[34][45] then implies that if Ωc is decreased to zero the atom will remain in the dark state and the group velocity will be reduced to zero as long as the time scale over which this reduction takes place is much larger than all other time scales in the problem.This is stopped light. If Ωc is later adiabatically increased the pulse will be released. This is the type of photonic memory that will be discussed here. For completeness, it should be noted that since ∂ψdark hψdark| i = 0, (4.51) ∂Ωc no geometric phase is acquired during this process.

Chapter 5

Superradiance, Subradiance, and Selective Radiance

The rate at which EIT atoms radiate affects how well a stopped pulse can be recovered. In order to study this it is convenient to write,

\[ \Gamma_2 = \Gamma_r + \Gamma', \tag{5.1} \]

where Γr is the emission rate on the probe transition into a desired continuum of modes and Γ′ is the emission rate on the probe transition into all other modes. It has been shown[46] that the error in such a memory process is given by the error in photon retrieval, which is

\[ \epsilon = \int_{t_s}^{t_r}\left(\Gamma_1(t') + \Gamma'(t') + \Gamma_3(t')\right)dt', \tag{5.2} \]
where ts is the time at which the photon is stored and tr is the time at which the photon is retrieved. Thus, it is clearly desirable to minimize Γ1, Γ′, and

Γ3 and maximize Γr. This increase of some emission rates and suppression of others is known as selective radiance. The benefit of this type of selective radiance is shown in Figure 5.1, which illustrates the dynamics of a photonic

Figure 5.1: Heat plots of the probe Rabi frequency for a pulse propagating through a gas of EIT atoms with time on the x-axis and distance on the z-axis for the case Γ1 = Γ′ = 0. From left to right the storage times are 0.6T, 1.2T, 1.6T, 2.0T where T = 1/Γr for the top row. (a) The case where Γr = 10Γ3. (b) The case enhanced by selective radiance where Γr = 20Γ3.

memory. Figure 5.1(a) shows the probe Rabi frequency versus distance and time in the case of normal emission. Figure 5.1(b) shows the same plots but for the case of selective emission. The Rabi frequency, and therefore the intensity, of the output pulses can be seen to be qualitatively greater in the case of selective emission. These plots were created using the Maxwell-Bloch module in python. Another measure of the efficiency of the photon storage is the ratio of the output beam area to the input beam area given by

\[ A = \frac{\int_{(t_r - t_s)/2}^{t_r} |E_{\rm out}(t')|\,dt'}{\int_{t_s}^{(t_r - t_s)/2} |E_{\rm in}(t')|\,dt'} = \frac{\int_{(t_r - t_s)/2}^{t_r} \Omega_{c;{\rm out}}(t')\,dt'}{\int_{t_s}^{(t_r - t_s)/2} \Omega_{c;{\rm in}}(t')\,dt'}. \tag{5.3} \]

The beam area ratios associated with Figure 5.1 are shown in Table 5.1. These results demonstrate that selective radiance can be used to improve the efficiency of a photonic memory. The next step is to determine how to prepare selectively radiant states.

Table 5.1: The beam area ratios for the normal and enhanced cases shown above for the storage times shown as well as for several other storage times.

          0.2T    0.4T    0.6T    0.8T    T       1.4T    1.8T    2.0T
normal    0.7731  0.7218  0.6705  0.6207  0.5793  0.5099  0.4802  0.4726
enhanced  0.8430  0.8106  0.7746  0.7429  0.7152  0.6707  0.6622  0.6662

If a collection of atoms is present in the problem, their radiative properties can be modified if the separation of the atoms is very small or if they are in the presence of specially designed dielectrics. This is known as collective emission. There are two important formalisms for discussing collective emission. The standard formalism is based on time dependent perturbation theory. Alternatively, a formalism based on the Green's function for the system can be used to calculate the decay rates[11]. The fundamental ideas are more easily seen in the traditional formalism. However, this formalism is not effective in the presence of complex dielectric structures.

5.1 Standard Formalism

Following the standard approach[6], consider a gas of noninteracting hydrogenic atoms that begin in some excited state that decays strongly into a stable ground state and no other state. The amplitude for each atom to be in any other state is then nearly zero. Such an atom is referred to as a two level atom. The rate at which light is emitted by the gas is given by

\[ R(t) = \Gamma e^{-\Gamma t}, \tag{5.4} \]
where Γ is the natural linewidth of the transition and t is the time since the atoms were definitely in the excited state. The probability for the atoms to

be in the ground state is given by

\[ P_g(t) = \int_0^t R(t')\,dt' = 1 - e^{-\Gamma t}. \tag{5.5} \]

Now consider that the gas consists of pairs of atoms separated by less than a wavelength of the emitted light that share one excitation. Since the atoms are separated by less than a wavelength the physical state must be unchanged under the interchange of the atoms. In this case there are two possible initial states which will be suggestively referred to as

\[ |{\rm Superradiant}\rangle = \frac{1}{\sqrt{2}}\left(|ge\rangle + |eg\rangle\right), \tag{5.6} \]
and
\[ |{\rm Subradiant}\rangle = \frac{1}{\sqrt{2}}\left(|ge\rangle - |eg\rangle\right). \tag{5.7} \]
Since the distance between the two atoms is small compared to the wavelength of the emitted light, the atoms may be considered to be at the same location and the interaction Hamiltonian is

Hint = −d1 · E(r, t) − d2 · E(r, t) = −D · E(r, t), (5.8)

where di is the dipole moment operator for atom i, E(r, t) is the electric field operator at point r and time t, and

\[ \mathbf{D} = \mathbf{d}_1 + \mathbf{d}_2 \tag{5.9} \]
is called the collective dipole operator. This can be rewritten as
\[ H_{\rm int} = \hbar g(\sigma_1^+ + \sigma_1^-)(a + a^\dagger) + \hbar g(\sigma_2^+ + \sigma_2^-)(a + a^\dagger) = \hbar g(\Sigma^+ + \Sigma^-)(a + a^\dagger), \tag{5.10} \]
where $\sigma_i^- = (\sigma_i^+)^\dagger$ is the lowering operator for atom i, a is the lowering

operator for the electric field, and
\[ \Sigma^\pm = \sum_{i=1}^{2}\sigma_i^\pm \tag{5.11} \]
is called the collective atomic raising or lowering operator. In these situations the matrix elements linking the excited state to the ground state are
\[ \langle gg|H_{\rm int}|{\rm Superradiant}\rangle = \sqrt{2}\,\hbar g, \tag{5.12} \]
and

\[ \langle gg|H_{\rm int}|{\rm Subradiant}\rangle = 0. \tag{5.13} \]

Fermi's golden rule states that the transition rate is proportional to the square of the matrix element, so the superradiant states decay twice as fast and the subradiant states do not decay at all. Since all two dimensional Hilbert spaces are isomorphic to each other, this phenomenon can be understood in terms of two spin-1/2 particles in a magnetic field. For example, |ee⟩ is identified with |↑↑⟩ or, using the total angular momentum basis, |S = 1, M = 1⟩, and Σ− is identified with S−. Now the physical states that are unchanged under the interchange of the particles are the familiar singlet and triplet states. Since there are no terms in the interaction Hamiltonian that change the total spin of the system, only the z-component, the singlet and triplet states are completely decoupled.

Now consider N spin-1/2 particles in a magnetic field. This is the phenomenon of nuclear magnetic resonance (NMR)[6]. They behave like one spin-1/2 particle with a dipole moment N times as large. This "super spin" undergoes Larmor precession and radiates. The power radiated is proportional to the dipole moment squared and therefore to N². This phenomenon is analogous to superradiance in optical systems but is much more familiar because it is more common for the sample size to be smaller than the radiated wavelength in NMR than in optical systems.
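The two-atom matrix elements of Eqs. (5.12) and (5.13) are simple to confirm numerically. The sketch below is only an illustration of that check, using the collective lowering operator defined above; the basis labels are the ones introduced in this section.

```python
import numpy as np

# Verify that the symmetric state couples to |gg> with amplitude sqrt(2)*g
# while the antisymmetric state is dark, Eqs. (5.12) and (5.13).
g = 1.0
sm = np.array([[0, 0], [1, 0]])                  # |g><e| with |e> = (1,0), |g> = (0,1)
I2 = np.eye(2)
Sigma_m = np.kron(sm, I2) + np.kron(I2, sm)      # collective lowering operator

eg = np.kron([1, 0], [0, 1])
ge = np.kron([0, 1], [1, 0])
gg = np.kron([0, 1], [0, 1])
sup = (ge + eg) / np.sqrt(2)
sub = (ge - eg) / np.sqrt(2)

print(g * gg @ Sigma_m @ sup)    # sqrt(2) * g, the superradiant matrix element
print(g * gg @ Sigma_m @ sub)    # 0, the subradiant state does not decay
```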

To make this more quantitative consider one spin in a magnetic field. The matrix element associated with the spin's z-component of angular momentum decreasing by one is proportional to

\[ \langle S, M-1|S^-|S, M\rangle = \sqrt{(S - M + 1)(S + M)}. \tag{5.14} \]

The intensity of the radiation is proportional to the square of the matrix element I ∝ (S − M + 1)(S + M). (5.15)

In the case of a spin-1/2 particle that is initially spin up this is

\[ \sqrt{(S - M + 1)(S + M)} = 1. \tag{5.16} \]

Now consider a spin with S = N/2. When M = N/2,

I ∝ N, (5.17) indicating that the spins radiate independently. When M = 1,

\[ I \propto \frac{N}{2}\left(\frac{N}{2} - 1\right) \propto N^2, \tag{5.18} \]
the intensity is increased by a factor of N. This is an important characteristic of superradiance. The intensity is largest at this point because this is when the dipole moment is rotating in a plane perpendicular to the magnetic field. When M = −S + 1,
\[ I \propto N, \tag{5.19} \]
indicating that the spins again radiate independently. This process is illustrated in Figure 5.2. The analogous phenomenon in an optical system is that if all the atoms begin excited, they initially emit independently, then after half of the excitations have decayed, the intensity has an N² enhancement,

Figure 5.2: The relationship of the spin and the magnetic field at (a) the beginning of the radiative process, (b) halfway through the radiative process when the intensity is the greatest, and (c) at the end of the process.

and finally when only one excitation remains the atoms again emit independently. So far the discussion has been entirely for emission, but the process of absorption is proportional to the same matrix element and therefore experiences the same enhancement. In the above discussion the atoms were all treated as if they were located at a single point. This assumption will need to be relaxed in order to model realistic systems. The effect of having the atoms be separated is that they acquire a phase factor. For example, in the case of two atoms the superradiant state becomes
\[ |{\rm Superradiant}\rangle = \frac{1}{\sqrt{2}}\left(e^{i\boldsymbol{\phi}\cdot\mathbf{r}_1}|ge\rangle + e^{i\boldsymbol{\phi}\cdot\mathbf{r}_2}|eg\rangle\right). \tag{5.20} \]

If the interaction Hamiltonian is also a function of space such that

If the interaction Hamiltonian is also a function of space such that
\[ H_{\rm int} = \sum_{i=1}^{2}\mathbf{d}_i\cdot\mathbf{E}(\mathbf{r}_i, t) = \sum_{i=1}^{2}\mathbf{d}_i\cdot\mathbf{E}_0 e^{-i\boldsymbol{\phi}\cdot\mathbf{r}_i}, \tag{5.21} \]
super and subradiance are still possible. If this condition is only approximately met, some enhancement can still occur. One way to achieve such a situation is to couple the atoms to a waveguide.

5.2 Green's Function Approach

In electrostatics, a Green's function is often used as part of the solution to Poisson's equation. In this case, the Green's function is a scalar because both the field (the scalar potential) and the sources (electric monopoles) are scalars. If instead the field of interest is the electric field itself and the sources are electric dipoles, a Green's function that is a rank-2 tensor, Gij with i, j = 1, 2, 3, must be used. Since atoms behave approximately like electric dipoles it is logical that this approach will be more successful. It is often referred to as the dyadic Green's function[11]. In essence, the Green's function must be a tensor in order to take into account the vector nature of the field and sources. Component i,j of the Green's function represents the field's response in direction i due to the oscillation of the dipole in direction j. The development here is standard[14]. The behavior of the Green's function for the electromagnetic field follows from the wave equation for the electromagnetic field. First, Faraday's Law is

\[ \nabla\times\mathbf{E} = -\frac{1}{c}\dot{\mathbf{B}} = -\frac{1}{c}\dot{\mathbf{H}}, \tag{5.22} \]
where it has been assumed that no magnetic materials are present, and Ampere's Law is
\[ \nabla\times\mathbf{H} = \frac{1}{c}\left(\dot{\mathbf{D}} + 4\pi\mathbf{j}\right). \tag{5.23} \]
Here B is the magnetic induction, H is the magnetic field, and j is the electric current density. In the frequency domain, these equations read

\[ \nabla\times\mathbf{E}(\mathbf{x},\omega) = \frac{i\omega}{c}\mathbf{H}(\mathbf{x},\omega), \tag{5.24} \]
and
\[ \nabla\times\mathbf{H}(\mathbf{x},\omega) = \frac{1}{c}\left(-i\omega\mathbf{D}(\mathbf{x},\omega) + 4\pi\mathbf{j}(\mathbf{x},\omega)\right). \tag{5.25} \]

For a linear dispersive medium

\[ \mathbf{D} = \epsilon(\mathbf{x},\omega)\mathbf{E}(\mathbf{x},\omega), \tag{5.26} \]
where ε is the dielectric constant of the material. Taking the curl of Faraday's Law gives
\[ \nabla\times\nabla\times\mathbf{E}(\mathbf{x},\omega) = \frac{i\omega}{c}\nabla\times\mathbf{H}. \tag{5.27} \]
Combining this with Ampere's equation gives

\[ \nabla\times\nabla\times\mathbf{E}(\mathbf{x},\omega) - \frac{\omega^2\epsilon(\mathbf{x},\omega)}{c^2}\mathbf{E}(\mathbf{x},\omega) = \frac{4\pi i\omega}{c^2}\mathbf{j}(\mathbf{x},\omega). \tag{5.28} \]

Therefore, the classical electromagnetic Green's function, G(x, x′, ω), may be chosen to obey

\[ \nabla\times\nabla\times G(\mathbf{x},\mathbf{x}',\omega) - \frac{\omega^2}{c^2}\epsilon(\mathbf{x},\omega)G(\mathbf{x},\mathbf{x}',\omega) = \frac{\omega^2}{c^2}\delta^{(3)}(\mathbf{x} - \mathbf{x}'), \tag{5.29} \]
where x denotes the point at which the field is measured and x′ denotes the location of a source of the field. The electric field can then be calculated from the Green's function as

\[ \mathbf{E}(\mathbf{x},\omega) = \frac{i}{\omega}\int d^3x'\, G(\mathbf{x},\mathbf{x}',\omega)\cdot\mathbf{j}(\mathbf{x}',\omega). \tag{5.30} \]

The multipole expansion for the charge density is

\[ \rho(\mathbf{x},\omega) = \left(\int d^3x'\,\rho(\mathbf{x}',\omega)\right)\delta^{(3)}(\mathbf{x}) + \left(\int d^3x'\, x'_\alpha\rho(\mathbf{x}',\omega)\right)\partial_\alpha\delta^{(3)}(\mathbf{x}) + \left(\int d^3x'\, x'_\alpha x'_\beta\rho(\mathbf{x}',\omega)\right)\partial_\alpha\partial_\beta\delta^{(3)}(\mathbf{x}) + \ldots, \tag{5.31} \]
where the Einstein summation convention is in use. The first term is the monopole contribution, the second term is the dipole contribution, the third

term is the quadrupole contribution, etc. A localized current density satisfies

\[ \frac{i\omega}{c^2}\,\mathbf{j} = \frac{e\omega^2}{c^2}\,\delta^{(3)}(\mathbf{x}-\mathbf{x}')\,\hat{\boldsymbol{\epsilon}}\,d\ell, \tag{5.32} \]
where $\hat{\boldsymbol{\epsilon}}$ is the direction the current moves in and dℓ is the small length the current moves. In the frequency domain the continuity equation is

\[ \nabla\cdot\mathbf{j}(\mathbf{x},\omega) = i\omega\rho(\mathbf{x},\omega). \tag{5.33} \]

Combining these equations allows the charge density to be written as

\[ \rho(\mathbf{x},\omega) = -e\,\hat{\boldsymbol{\epsilon}}\cdot\nabla\delta^{(3)}(\mathbf{x}-\mathbf{x}')\,d\ell. \tag{5.34} \]

This implies that the monopole contribution must vanish. The dipole moment may be calculated as
\[ d_\alpha(\omega) = \int d^3x'\,x'_\alpha\rho(\mathbf{x}',\omega) = -e\,d\ell\int d^3x'\,x'_\alpha\,\hat{\boldsymbol{\epsilon}}\cdot\nabla'\delta^{(3)}(\mathbf{x}'-\mathbf{x}'') = e\,d\ell\,\hat{\epsilon}_\alpha\int d^3x'\,\delta^{(3)}(\mathbf{x}'-\mathbf{x}'') = e\,d\ell\,\hat{\epsilon}_\alpha. \tag{5.35} \]
The other terms in the expansion contribute a constant, which is zero if x′ is chosen to be the origin. The electric field of a localized current density is then just the electric field of an oscillating dipole. For example, the Green's function that will produce this electric field and is subject to free-space boundary conditions is
\[ G^{(0)}_{\alpha\beta}(\mathbf{x},0,\omega) = \left\{\left[3\hat{x}_\alpha\hat{x}_\beta - \delta_{\alpha\beta}\right]\left(\frac{1}{x^3} - \frac{ik}{x^2}\right) - \left[\hat{x}_\alpha\hat{x}_\beta - \delta_{\alpha\beta}\right]\frac{k^2}{x}\right\}e^{i\mathbf{k}\cdot\mathbf{x}}, \tag{5.36} \]
where k is the wavevector. For spherically symmetric problems the origin may be moved by simply replacing x with |x − x′|. Once the Green's function is known, the electric field can be calculated

46 according to

\[ \mathbf{E}(\mathbf{r},\omega) = \mathbf{E}_p(\mathbf{r},\omega) + \mu_0\omega^2\sum_{i=1}^{N} G(\mathbf{r},\mathbf{r}_i,\omega)\cdot\mathbf{d}_i(\omega), \tag{5.37} \]
where Ep is the field from sources other than the dipoles, μ0 is the permeability of free space, N is the number of dipoles, ri is the location of dipole i, and di(ω) is the dipole moment of dipole i. Since the quantized electric field has the same spatial behavior as the classical electric field, the above equation may be valid quantum mechanically. In fact, the quantum fluctuations of the field follow from the quantum nature of the sources. Thus, if the dipoles are atoms then the di(ω) are operators. The field scattered by individual atoms is always a quantum mechanical field, so that if Ep(r, ω) is an operator, E(r, ω) will be an operator and equation 5.37 is a valid quantum mechanical equation. Atoms also have a very weak response away from resonance so the

ω dependence can be replaced with the resonance frequency ω0. The rate of coherent interaction between atoms i and j is given by

\[ J^{ij} = -\frac{\mu_0\omega_0^2}{\hbar}\,\mathbf{d}_{ge}\cdot\mathrm{Re}[G(\mathbf{r}_i,\mathbf{r}_j,\omega_0)]\cdot\mathbf{d}_{eg}, \tag{5.38} \]
where d_eg = ⟨e|d|g⟩. The rate at which these two atoms collectively decay is given by
\[ \Gamma^{ij} = \frac{2\mu_0\omega_0^2}{\hbar}\,\mathbf{d}_{ge}\cdot\mathrm{Im}[G(\mathbf{r}_i,\mathbf{r}_j,\omega_0)]\cdot\mathbf{d}_{eg}. \tag{5.39} \]

These values can be repackaged into an N × N matrix, g_ij, that will be referred to as the coupling matrix. This matrix has components

\[ g_{ij} = J^{ij} - \frac{i\Gamma^{ij}}{2}. \tag{5.40} \]

Often instead of considering the oscillations of the original N dipoles it is easier to decompose the collective oscillation into contributions from the normal modes of oscillation. These normal modes also behave as N oscillating

Figure 5.3: Illustration of atoms coupled to a tapered optical nanofiber.

dipoles but they are uncoupled. This is accomplished by diagonalizing g_ij. This formalism has been used to show that if a collection of EIT atoms has no decay on the control transition, is superradiant into the guided mode of a waveguide on the probe transition, and is subradiant into all free space modes on the probe transition, the photon storage error is decreased[11]. As an example, the elements of the coupling matrix for atoms in a 1D cavity are
\[ g^{ij} = g(\omega_p)\cos(k_c x_i)\cos(k_c x_j), \tag{5.41} \]
where k_c is the wave number of standing waves in the cavity near resonance with the atoms and x_i is the location of atom i. Diagonalizing this matrix gives only one nonzero eigenvalue, indicating that there is only one normal mode of atom oscillation, the one corresponding to the standing wave in the cavity. For the case of N atoms coupled to a waveguide, the coupling matrix takes the form
\[ g^{ij} = i\frac{\Gamma}{2}e^{ik_p|x_i - x_j|}. \tag{5.42} \]
Such a coupling can be achieved by trapping the atoms around a tapered optical nanofiber like the one illustrated in Figure 5.3. In this case there will be N normal modes that all decay at a different rate. So far, the average of all these decay rates appears to always be equal to the individual atomic decay rate. Some of these rates will be lower than the individual atomic decay rate and some of them will be greater than the individual rate. These are the super and subradiant states. The decay rates are shown in Table 5.2 for the cases of even atomic spacings of one wavelength and one-half wavelengths.

Table 5.2: Decay rates of the 5 normal modes associated with 5 evenly spaced atoms interacting with a waveguide for atomic separations of a wavelength and half a wavelength, as a fraction of the individual atomic decay rate. These are calculated with a program in Appendix E.

Half-Wavelength   1.3821   2.9251   0.6084   0.0749   0.0096
Wavelength        2.5375   2.4316   0.0297   0.0012   0.0

If the pulse is stored in the lower decay rate modes it will be more effectively stored.
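The diagonalization step itself is easy to sketch. The following is not the Appendix E program (whose conventions and parameters produce Table 5.2); it only builds the waveguide coupling matrix of Eq. (5.42) for evenly spaced atoms, reads collective decay rates off the imaginary parts of its eigenvalues, and confirms that they average to the single-atom rate as noted above.

```python
import numpy as np

# Build g^{ij} = i (Gamma/2) exp(i k_p |x_i - x_j|) for N evenly spaced atoms
# and extract normal-mode decay rates from the eigenvalues.
Gamma = 1.0
N = 5
spacing = 0.5                          # atomic separation in units of the wavelength
kp = 2 * np.pi                         # wave number for that wavelength

x = np.arange(N) * spacing
g = 1j * (Gamma / 2) * np.exp(1j * kp * np.abs(x[:, None] - x[None, :]))

rates = 2 * np.imag(np.linalg.eigvals(g))
print(np.sort(rates)[::-1])            # super- and subradiant decay rates
print(rates.mean())                    # ~ Gamma: the rates average to the single-atom rate
```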

Chapter 6

EIT and Slow Light with Spin Flip Interactions

Let us now complicate the situation a little by allowing for a spin flip interaction between the atoms. There are two possibilities, a |1⟩ − |3⟩ interaction and a |2⟩ − |3⟩ interaction. For simplicity, in each case we will examine two important limits: first, the limit in which only one atom is required to experience EIT, and then the opposite limit in which all atoms must experience EIT.

6.1 Control Transition Interactions

First, consider the case where the atoms in a gas of m atoms are coupled on the |3⟩ − |2⟩ transition but no entanglement is allowed to form between the atoms. If the other atoms may be in any state they may drive Rabi oscillations in the atom of interest. The effective Hamiltonian describing one atom is

\[ H_{\rm eff} = \hbar\left(-\Delta - i\frac{\Gamma_1}{2}\right)|3\rangle\langle 3| - \hbar\delta|2\rangle\langle 2| + \frac{\hbar\Omega_p}{2}\left(|3\rangle\langle 1| + |1\rangle\langle 3|\right) + \left(\frac{\hbar\Omega_c}{2} + \hbar g\right)\left(|3\rangle\langle 2| + |2\rangle\langle 3|\right), \tag{6.1} \]

where g is the effective coupling of the atom of interest to all the other atoms. The rest of the calculation then follows right away. The dark state is specified by
\[ \rho_{\rm dark} = \begin{pmatrix} \frac{(\Omega_c + 2g)^2}{\Omega_p^2 + (\Omega_c + 2g)^2} & -\frac{(\Omega_c + 2g)\Omega_p}{(\Omega_c + 2g)^2 + \Omega_p^2} & 0 \\ -\frac{(\Omega_c + 2g)\Omega_p}{(\Omega_c + 2g)^2 + \Omega_p^2} & \frac{\Omega_p^2}{\Omega_p^2 + (\Omega_c + 2g)^2} & 0 \\ 0 & 0 & 0 \end{pmatrix}, \tag{6.2} \]
the polarizability is

\[ \alpha(\omega) = \frac{|d_{31}|^2/2\hbar}{\Delta - \frac{(\Omega_c + 2g)^2}{2\delta} + i\left(\frac{\Gamma_1 + \Gamma_2 + \Gamma_3}{2}\right)}, \tag{6.3} \]

and on resonance the group velocity is
\[ v_g = c\left(\frac{\hbar(\Omega_c + 2g)^2}{\hbar(\Omega_c + 2g)^2 + 2\pi\omega N|d_{31}|^2}\right). \tag{6.4} \]

The adiabaticity condition becomes

\[ \frac{df}{dt} \ll \frac{\sqrt{(\Omega_c + 2g)^2 + \Omega_p^2}}{2}. \tag{6.5} \]

An important distinction in this case is that if any of the atoms are in either state |2⟩ or |3⟩, and they are not all in that state, then completely stopped light will not be possible since these excitations will drive oscillations. If the dynamics of all the atoms is important, then the appropriate Hamiltonian is

\[ H_{\rm eff} = \sum_{i}^{m}\left[\hbar\left(-\Delta - i\frac{\Gamma_1}{2}\right)|3\rangle_i{}_i\langle 3| - \hbar\delta|2\rangle_i{}_i\langle 2| + \frac{\hbar\Omega_p}{2}\left(|3\rangle_i{}_i\langle 1| + |1\rangle_i{}_i\langle 3|\right) + \frac{\hbar\Omega_c}{2}\left(|3\rangle_i{}_i\langle 2| + |2\rangle_i{}_i\langle 3|\right)\right] \tag{6.6} \]
\[ \qquad + \sum_{i,j,\,i\neq j}\hbar g\left(|3\rangle_i{}_j\langle 2| + |2\rangle_i{}_j\langle 3|\right). \tag{6.7} \]

If all of the atoms are required to be near the dark state, then the calculated results will actually be identical to the single atom case. To see this, consider the following equations for the density matrix elements for the |3⟩ − |1⟩ coherence of atom one and the populations of the other atoms.

\[
\begin{aligned}
\dot\rho_{(31)\ldots(11)\ldots(22)\ldots(33)\ldots} ={}& \frac{i\Omega_p}{2}\left(\rho_{(31)\ldots(13)\ldots(22)\ldots(33)\ldots} - \rho_{(31)\ldots(31)\ldots(22)\ldots(33)\ldots} + \rho_{(31)\ldots(11)\ldots(22)\ldots(31)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(13)\ldots} + \ldots\right) \\
&+ \frac{i\Omega_c}{2}\left(\rho_{(31)\ldots(11)\ldots(23)\ldots(33)\ldots} - \rho_{(31)\ldots(11)\ldots(32)\ldots(33)\ldots} + \rho_{(31)\ldots(11)\ldots(22)\ldots(32)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(23)\ldots} + \ldots\right) \\
&+ i\left(g_{ij}\left(\rho_{(31)\ldots(11)\ldots(23)\ldots(32)\ldots} - \rho_{(31)\ldots(11)\ldots(32)\ldots(23)\ldots}\right) + \ldots\right) \\
&+ i\left(g_{1i}\,\rho_{(21)\ldots(11)\ldots(32)\ldots(33)\ldots} + \ldots\right) \\
&+ \frac{\Gamma_2}{2}\left(\rho_{(31)\ldots(33)\ldots(22)\ldots(33)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(33)\ldots} + \ldots\right) \\
&+ \frac{\Gamma_3}{2}\left(\rho_{(31)\ldots(11)\ldots(33)\ldots(33)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(33)\ldots}\right) \\
&+ \dot\rho_{31_1},
\end{aligned} \tag{6.8}
\]

where $\dot\rho_{31_1}$ indicates terms that only involve dynamics of atom one. The strategy at this point is to compute all the terms of this form, permuting the type of population for all atoms other than atom one, and then sum the resulting equations. Clearly, almost all of the terms that appear above will cancel, with the exception of $\dot\rho_{31_1}$. The only other term that does not cancel is the one containing $g_{1i}$. However, if the system is near the dark state it is negligible. The result of the sum is then

\[ \dot\rho_{31} \equiv p\,\dot\rho_{31_1}, \tag{6.9} \]
where p is the number of permutations. This result only involves the dynamics of the first atom. To see this recall that density matrix elements are the

products of density matrices of individual atoms,

\[ \rho_{(31)\ldots(11)\ldots(22)\ldots(33)\ldots} = \rho^{(1)}_{(31)\ldots}\,\rho^{(n)}_{(11)\ldots}\,\rho^{(x)}_{(22)\ldots}\,\rho^{(y)}_{(33)\ldots}. \tag{6.10} \]

This calculation can now be performed with the density matrix elements involving the |2⟩ − |1⟩ coherence for atom one. The result is the same, except that the term containing $g_{1i}$ is replaced with terms of the form $-ig_{1i}\,\rho_{(31)\ldots(11)\ldots(23)\ldots(33)\ldots}$, which are also negligible near the dark state. Therefore, the calculation goes exactly as the above calculation and the group velocity is unchanged. One notable subtlety is that now
\[ \hat{d}_{31} \equiv d_{31}|3\rangle\langle 1| \otimes \mathbb{1}_{3^{\,m-1}}. \tag{6.11} \]

6.2 Probe Transition Interactions

If instead the spin flip is allowed on the |1⟩ − |3⟩ transition and the other atoms provide a background driving, the appropriate Hamiltonian is
\[ H_{\rm eff} = \hbar\left(-\Delta - i\frac{\Gamma_1}{2}\right)|3\rangle\langle 3| - \hbar\delta|2\rangle\langle 2| + \frac{\hbar\Omega_c}{2}\left(|3\rangle\langle 2| + |2\rangle\langle 3|\right) + \left(\frac{\hbar\Omega_p}{2} + \hbar g\right)\left(|3\rangle\langle 1| + |1\rangle\langle 3|\right). \tag{6.12} \]
In this case the calculation breaks down at
\[ \rho_{31} = \frac{-\frac{\mathbf{d}_{31}\cdot\mathbf{E}_p}{\hbar} + g}{\Delta - \frac{\Omega_c^2}{2\delta} + i\frac{\Gamma}{2}}, \tag{6.13} \]
which indicates that

if there are too many excitations in the gas and the coupling on the probe transition is strong enough, then the dipole moment in the atom will not primarily be due to the probe beam. Therefore EIT will not be possible unless

g ≈ 0. It is more interesting to examine the case where the atoms must all be near the dark state. In this case we must consider the Hamiltonian

\[ H_{\rm eff} = \sum_{i}^{m}\left[\hbar\left(-\Delta - i\frac{\Gamma_1}{2}\right)|3\rangle_i{}_i\langle 3| - \hbar\delta|2\rangle_i{}_i\langle 2| + \frac{\hbar\Omega_p}{2}\left(|3\rangle_i{}_i\langle 1| + |1\rangle_i{}_i\langle 3|\right) + \frac{\hbar\Omega_c}{2}\left(|3\rangle_i{}_i\langle 2| + |2\rangle_i{}_i\langle 3|\right)\right] \tag{6.14} \]
\[ \qquad + \sum_{i,j,\,i\neq j}\hbar g\left(|3\rangle_i{}_j\langle 1| + |1\rangle_i{}_j\langle 3|\right). \tag{6.15} \]

The density matrix elements of the |3⟩ − |1⟩ coherence of atom one are of the form

\[
\begin{aligned}
\dot\rho_{(31)\ldots(11)\ldots(22)\ldots(33)\ldots} ={}& \frac{i\Omega_p}{2}\left(\rho_{(31)\ldots(13)\ldots(22)\ldots(33)\ldots} - \rho_{(31)\ldots(31)\ldots(22)\ldots(33)\ldots} + \rho_{(31)\ldots(11)\ldots(22)\ldots(31)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(13)\ldots} + \ldots\right) \\
&+ \frac{i\Omega_c}{2}\left(\rho_{(31)\ldots(11)\ldots(23)\ldots(33)\ldots} - \rho_{(31)\ldots(11)\ldots(32)\ldots(33)\ldots} + \rho_{(31)\ldots(11)\ldots(22)\ldots(32)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(23)\ldots} + \ldots\right) \\
&+ i\left(g_{ij}\left(\rho_{(31)\ldots(13)\ldots(22)\ldots(31)\ldots} - \rho_{(31)\ldots(31)\ldots(22)\ldots(13)\ldots}\right) + \ldots\right) \\
&+ i\left(g_{1i}\,\rho_{(11)\ldots(11)\ldots(22)\ldots(13)\ldots} + \ldots\right) \\
&+ \frac{\Gamma_2}{2}\left(\rho_{(31)\ldots(33)\ldots(22)\ldots(33)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(33)\ldots} + \ldots\right) \\
&+ \frac{\Gamma_3}{2}\left(\rho_{(31)\ldots(11)\ldots(33)\ldots(33)\ldots} - \rho_{(31)\ldots(11)\ldots(22)\ldots(33)\ldots}\right) \\
&+ \dot\rho_{31_\mu}.
\end{aligned} \tag{6.16}
\]

This time dropping the negligible terms and summing over permutations as above leaves

\[ \dot{\tilde\rho}_{31} \equiv \sum_{\mu}^{p}\left[\dot\rho_{31} - i\left(g_{ij}\,\rho_{(11)\ldots(31)\ldots(22)\ldots(33)\ldots} + \ldots\right)\right]. \tag{6.17} \]

Performing the sums over permutations and assuming that the coupling between all the atoms is identical,

\[ \dot{\tilde\rho}_{31} = \dot\rho_{31} - ig\sum_{i,j,\,j\neq i=1}^{m}\rho^{(i)}_{(31)}\rho^{(j)}_{(11)}. \tag{6.18} \]

That is, the dynamics of one atom cannot be separated from the dynamics of all the other atoms. Performing this calculation for each atom, assuming the density matrix elements for each atom are identical, and summing the results gives

\[ P \equiv \sum_{\mu}^{m}\dot{\tilde\rho}_{31} = \dot\rho_{(31),\rm total} - igm\,\rho_{31,\rm total}. \tag{6.19} \]

The whole calculation then carries through with ∆ → ∆ + mg. That is, the effect of having the atoms coupled in this way is to shift the location of resonance by an amount that depends on the coupling strength and the number of atoms.

Chapter 7

The Jaynes-Cummings Model and its Generalizations

The purpose of this chapter is to review the Jaynes-Cummings model[17][18] and discuss some of the dynamics of two of its generalizations. In particular we will discuss the Rabi model that generalizes the Jaynes-Cummings model to allow non-energy conserving terms and the anisotropic Rabi model that allows the coupling strengths of energy conserving and energy non-conserving terms to be arbitrary.

7.1 Cavity QED Basics

To begin, let us review the basics of cavity quantum electrodynamics[7][19]. The main idea is to consider a two-level atom in a resonant cavity interacting with the light in the cavity. This situation is illustrated in Figure 7.1. If there is an excitation in either the atom or the single field mode, the atom will undergo Rabi oscillations. The Hamiltonian that describes the interaction can be derived from first principles to be

\[ H = \frac{\hbar\omega_{eg}}{2}\sigma_z + \hbar\omega a^\dagger a - \mathbf{d}\cdot\mathbf{E}. \tag{7.1} \]

Figure 7.1: An illustration of the situation studied in Cavity QED.

The dipole moment operator may be written as

d = degσ+ + dgeσ−, (7.2)

where $\mathbf{d}_{eg} = \mathbf{d}_{ge}^{\dagger}$. The electric field in the cavity mode may be written as

\[ \mathbf{E} = \sqrt{\frac{2\pi\hbar\omega}{V}}\,\mathbf{u}\,a, \tag{7.3} \]
where V is the quantization volume and u is the mode function at the atomic position. The Hamiltonian then becomes

\[ H = \frac{\hbar\omega_{eg}}{2}\sigma_z + \hbar\omega a^\dagger a + \hbar\left(ga\sigma_+ + g^*a^\dagger\sigma_- + ga\sigma_- + g^*a^\dagger\sigma_+\right), \tag{7.4} \]
where
\[ g = -\mathbf{d}_{eg}\cdot\mathbf{u}\sqrt{\frac{2\pi\hbar\omega}{V}}. \tag{7.5} \]
This is actually the Rabi model. Each term in the interaction part of the Hamiltonian can be thought of as emerging from a particular process. The

first, aσ₊, corresponds to the absorption of a photon. The second, a†σ₋, to the emission of a photon. These are known as the co-rotating terms. The third term, aσ₋, corresponds to a photon being annihilated and the atom moving from the excited state to the ground state. The final term, a†σ₊, corresponds to the creation of a photon and the atom moving from the ground state to the excited state. These processes do not conserve energy. Accordingly, these terms are known as the energy non-conserving terms or the counter-rotating terms. They are of course physical; they are the origin of light shifts, such as the Lamb shift. In the context of cavity QED it is usual to make one further approximation which will reduce the problem to the Jaynes-Cummings model. To justify the next step we consider the interaction Hamiltonian in the interaction picture,
\[ \tilde{H}_{\rm int} = \hbar\left(ga\sigma_+e^{-i(\omega-\omega_{eg})t} + g^*a^\dagger\sigma_-e^{i(\omega-\omega_{eg})t} + ga\sigma_-e^{-i(\omega+\omega_{eg})t} + g^*a^\dagger\sigma_+e^{i(\omega+\omega_{eg})t}\right). \tag{7.6} \]
The last two terms rotate much faster than the other terms and so average to zero on the time scale of interest. They are therefore usually dropped, leaving
\[ H = \frac{\hbar\omega_{eg}}{2}\sigma_z + \hbar\omega a^\dagger a + \hbar g\left(a\sigma_+ + a^\dagger\sigma_-\right), \tag{7.7} \]
where g has been taken to be real. This is the Jaynes-Cummings model Hamiltonian.

7.2 The Jaynes-Cummings Model

Although the Jaynes-Cummings model arose in the study of light matter interactions, this is far from its only area of application. Notably, it has found application in the field of superconducting circuits, which is often called circuit QED[18][47]. It is therefore worthwhile to sometimes think of the Hamiltonian as the starting point instead of thinking of electric fields

Figure 7.2: The atomic inversion vs time in the Jaynes-Cummings model.

and atomic dipole moments. The Jaynes-Cummings model is exactly solvable, although we will not repeat the calculation here. As an example, if the cavity is exactly on resonance with the atom and the atom begins in the excited state with n photons in the cavity, then the state some time t later is
\[ |\psi\rangle = \cos(\sqrt{n+1}\,gt)|e,n\rangle + \sin(\sqrt{n+1}\,gt)|g,n+1\rangle. \tag{7.8} \]

The properties of the Jaynes-Cummings model can also be easily investigated numerically as discussed in Appendix C. The result of a numerical calculation of the atomic inversion vs time is shown in Figure 7.2. This Figure, as with all the figures in this section, can be produced with the code discussed in Appendix E.
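In the spirit of Appendix C, a minimal QuTiP sketch of such a calculation is given below; it is not the thesis code, and the coupling strength and the initial photon number are simply illustrative choices. It computes the atomic inversion for the resonant Jaynes-Cummings model in the interaction picture.

```python
import numpy as np
import qutip as qt

# Minimal sketch of the resonant Jaynes-Cummings dynamics (Figure 7.2 style).
N = 20                                         # Fock-space cutoff (assumed adequate)
g = 1.0                                        # coupling (arbitrary units)
a = qt.tensor(qt.destroy(N), qt.qeye(2))
sm = qt.tensor(qt.qeye(N), qt.destroy(2))      # atomic lowering operator

H = g * (a * sm.dag() + a.dag() * sm)          # interaction picture, on resonance
psi0 = qt.tensor(qt.basis(N, 3), qt.basis(2, 1))   # 3 photons, atom excited (assumed)

times = np.linspace(0, 20, 400)
result = qt.sesolve(H, psi0, times, e_ops=[sm.dag() * sm - sm * sm.dag()])
inversion = result.expect[0]                   # atomic inversion versus time
```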

7.3 The Rabi Model

As discussed above the Rabi model Hamiltonian is

\[ H = \frac{\hbar\omega_{eg}}{2}\sigma_z + \hbar\omega a^\dagger a + \hbar g\left(a\sigma_+ + a^\dagger\sigma_- + a\sigma_- + a^\dagger\sigma_+\right). \tag{7.9} \]

This is simply the Jaynes-Cummings Hamiltonian before the rotating wave approximation is made. This model is also exactly solvable but the technical difficulties here become considerable, so we will restrict ourselves to a numerical discussion. The numerically predicted photon number and atomic inversion versus time are shown in Figure 7.3. The green curves denote the photon number and the blue curves denote the atomic inversion. Figure 7.3(b) illustrates an important fact about the Rabi model alluded to above; the Rabi model does not conserve excitation number. As expected it continues to show Rabi oscillations, however, they are not as smooth and there is some beating between the different oscillatory processes. It is also worth noting that the Rabi model exhibits a phase transition at the coupling strength

\[ g_c = \sqrt{1 + \frac{\kappa^2}{\omega^2}}, \tag{7.10} \]
where κ is the cavity decay rate[22].

7.4 The Anisotropic Rabi Model

In this section we will explore a model which generalizes the Rabi model to allow for different coupling strengths for excitation conserving and excitation non-conserving processes. The Hamiltonian of interest is

\[ H = \frac{\hbar\omega_{eg}}{2}\sigma_z + \hbar\omega a^\dagger a + \hbar g_1\left(a\sigma_+ + a^\dagger\sigma_-\right) + \hbar g_2\left(a\sigma_- + a^\dagger\sigma_+\right), \tag{7.11} \]

Figure 7.3: The atomic inversion (blue curves) and photon number (green curves) vs time in the Rabi model. In (a) the atom is initially in the excited state and the initial photon number is 3. In (b) the atom is initially in the ground state and the initial photon number is 0.

where g₁ ≠ g₂ in general. The interaction Hamiltonian of this model is given by
\[ H_{\rm int} = \hbar g_1\left(\sigma_+a + \sigma_-a^\dagger\right) + \hbar g_2\left(\sigma_+a^\dagger + \sigma_-a\right). \tag{7.12} \]

If the couplings are taken to be time dependent, this may actually be regarded as simply the Rabi model Hamiltonian where g₂ oscillates very quickly compared to g₁. However, it can also be viewed in a more general sense and we will usually take the couplings to be constant. The dynamics of this model, obtained numerically, for a variety of values of g₁ and g₂ are shown in Figure 7.4. In the low g₂ limit the dynamics are essentially those of the

Jaynes-Cummings model. In the high g₂ limit the beating that appeared in the Rabi model becomes more pronounced. This system has some interesting properties that depend on the relationship of g₁ and g₂. For example, in the interaction picture the dynamics generated by this Hamiltonian naturally fall into two groups,
\[ i\hbar\dot{C}_{g,0} = \hbar g_2 C_{e,1}, \tag{7.13} \]
\[ i\hbar\dot{C}_{e,1} = \hbar g_1\sqrt{2}\,C_{g,2} + \hbar g_2 C_{g,0}, \tag{7.14} \]
\[ i\hbar\dot{C}_{g,2} = \hbar g_1\sqrt{2}\,C_{e,1} + \hbar g_2\sqrt{3}\,C_{e,3}, \tag{7.15} \]
\[ i\hbar\dot{C}_{e,3} = \hbar g_1\sqrt{4}\,C_{g,4} + \hbar g_2\sqrt{3}\,C_{g,2}, \tag{7.16} \]
etc., and
\[ i\hbar\dot{C}_{e,0} = \hbar g_1 C_{g,1}, \tag{7.17} \]
\[ i\hbar\dot{C}_{g,1} = \hbar g_1 C_{e,0} + \hbar g_2\sqrt{2}\,C_{e,2}, \tag{7.18} \]
\[ i\hbar\dot{C}_{e,2} = \hbar g_1\sqrt{3}\,C_{g,3} + \hbar g_2 C_{g,1}, \tag{7.19} \]
\[ i\hbar\dot{C}_{g,3} = \hbar g_1\sqrt{3}\,C_{e,2} + \hbar g_2\sqrt{4}\,C_{e,4}, \tag{7.20} \]
etc. If we consider the case where all of the amplitudes are constant we obtain

Figure 7.4: The atomic inversion (blue curves) and photon number (green curves) vs time in the anisotropic Rabi model for g1 = ω = ωeg with (a) g2 = 0.01g1, (b) g2 = 0.1g1, (c) g2 = 0.5g1, (d) g2 = 2g1, (e) g2 = 5g1, and (f) g2 = 10g1. Notice that for g2 ≈ 0 the Jaynes-Cummings dynamics are recovered, for g2 = g1 the Rabi model dynamics are recovered, and for g2 >> g1 a distinct beating appears.

\[ C_{g,n} = C_{e,n} = 0, \tag{7.21} \]
if n is odd, and
\[ C_{g,n} = \sqrt{\frac{(n-1)!!}{n!!}}\left(-\frac{g_2}{g_1}\right)^{n/2}C_{g,0}, \tag{7.22} \]
\[ C_{e,n} = \sqrt{\frac{(n-1)!!}{n!!}}\left(-\frac{g_1}{g_2}\right)^{n/2}C_{e,0}, \tag{7.23} \]
for even n. That is, there are two states that decouple from all other states. In one of these states the atom is in the excited state and in the other the atom is in the ground state. In both states only even photon numbers are allowed. However, for a given set of parameters at most one of these can be normalized. If g₁ > g₂ then the state with the atom in the ground state is normalizable as

\[ |g\rangle = \sqrt{\frac{\delta g}{g_1}}\sum_{n=0}^{\infty}\sqrt{\frac{(2n-1)!!}{(2n)!!}}\left(-\frac{g_2}{g_1}\right)^{n}|g,2n\rangle, \tag{7.24} \]

where $\delta g \equiv \sqrt{g_1^2 - g_2^2}$. On the other hand, if g₂ > g₁, the normalizable state is
\[ |e\rangle = \sqrt{\frac{i\,\delta g}{g_2}}\sum_{n=0}^{\infty}\sqrt{\frac{(2n-1)!!}{(2n)!!}}\left(-\frac{g_1}{g_2}\right)^{n}|e,2n\rangle. \tag{7.25} \]

In the Rabi model limit, g₁ = g₂, neither of the states is normalizable. It is easy to see from the two states above where the problem lies. For g₁ = g₂ both of the above states become identically zero since δg = 0. The photon number expectation value may be calculated in each of these states. In the case of the atom being in the ground state it is

\[ \langle g|a^\dagger a|g\rangle = \frac{1}{2}\frac{g_2^2}{\delta g^2} = \frac{g_2^2}{2(g_1 + g_2)(g_1 - g_2)}. \tag{7.26} \]

The second form demonstrates how the photon number diverges as g₁ → g₂.

In the other case the expectation value of the photon number is

\[ \langle e|a^\dagger a|e\rangle = -\frac{1}{2}\frac{g_1^2}{\delta g^2} = \frac{g_1^2}{2(g_2 + g_1)(g_2 - g_1)}. \tag{7.27} \]

Again, the second form shows how the photon number diverges as g₂ → g₁.

It is possible that the behavior of this system around g1 = g2 is indicative of a phase transition. A dissipative phase transition has been discussed in the open anisotropic Rabi model[48].
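The zero-eigenvalue states discussed above can also be checked numerically. The sketch below is not the program from the appendices; it builds the interaction Hamiltonian of Eq. (7.12) in a truncated Fock space (cutoff and coupling values are assumptions), picks out the eigenstate with eigenvalue closest to zero, and compares its photon number with Eq. (7.26). The comparison is only meaningful when g₂ is small enough that the truncation does not matter.

```python
import numpy as np
import qutip as qt

# Minimal check of the anisotropic Rabi dark state, Eqs. (7.22)-(7.26).
N = 60                       # Fock-space cutoff (assumed large enough for g2 << g1)
g1, g2 = 1.0, 0.2            # co- and counter-rotating couplings (arbitrary units)

a = qt.tensor(qt.destroy(N), qt.qeye(2))
sm = qt.tensor(qt.qeye(N), qt.destroy(2))
sp = sm.dag()

H_int = g1 * (sp * a + sm * a.dag()) + g2 * (sp * a.dag() + sm * a)

evals, ekets = H_int.eigenstates()
dark = ekets[np.argmin(np.abs(evals))]        # eigenstate with eigenvalue nearest zero

n_numeric = qt.expect(a.dag() * a, dark)
n_analytic = g2**2 / (2 * (g1 + g2) * (g1 - g2))
print(n_numeric, n_analytic)                  # should agree when the cutoff is adequate
```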

Chapter 8

Photon Counting in a Dissipative Optical Lattice

This chapter is somewhat disconnected from the flow of the rest of this manuscript. It outlines an experiment that was conducted under the supervision of Dr. Samir Bali. The goal of the experiment was to use an avalanche photodiode (APD) photon detector and a field programmable gate array (FPGA) to obtain a record of σ+ (right circularly polarized) and

σ− (left circularly polarized) emission events in the light scattered from cold atoms contained in a dissipative optical lattice. These records can then be used to calculate the g(2)(τ) correlations between events. The correlations contain important information about the optical lattice. In particular, the correlation functions can be used to calculate the dwell time inside a particular well and the cross-over time between adjacent wells. The dwell time, in other words, is how long, on average, an atom remains at a particular lattice site and the cross-over time is how long, on average, the atom takes to move to an adjacent lattice site. The layout is as follows: First, some theoretical background surrounding dissipative optical lattices and g(2)(τ) correlations is provided. Second, some numerical results obtained with a semi-classical approach are presented.

Figure 8.1: Illustration of the lattice created by two linearly polarized beams perpendicular to each other and propagating in opposite directions. The result is periodic net right and left circular polarization. In between the lattice sites the polarization varies continuously between these two extremes. The atoms are trapped in the locations of pure circular polarization and oscillate about these points as shown, with a characteristic vibrational frequency denoted by ωv.

Figure 8.2: Energy level diagram of a six level, Fg = 1/2 → Fe = 3/2, atom.

Third, the experimental techniques and apparatus are discussed. Finally, the data analysis techniques are presented.

8.1 Theory of Optical Lattices

In the 1980s the technique of Sisyphus cooling was developed to allow for cooling trapped atoms below the Doppler limit[25]. This technique can be used to trap alkali atoms in lattice sites. To see how this is possible consider the situation shown in Figure 8.1, where the atoms are trapped near the sites of perfect circular polarization as described below. In particular, consider a six level atom whose levels are shown in Figure 8.2. A similar configuration can be obtained in the hyperfine structure

Figure 8.3: Illustration of the five possible processes for an atom in the +1/2 ground state. A blue arrow signifies that the process involves a right circularly polarized photon, σ+, a red arrow indicates a linearly polarized photon, π, and a green arrow indicates a left circularly polarized photon, σ−. (a) A σ+ is absorbed, another σ+ is emitted and the atom remains in the same state. (b) A π is absorbed, another π is emitted and the atom stays in the same state. (c) A π is absorbed, a σ− is emitted and the atom changes state. (d) A σ− is absorbed, a π is emitted and the atom changes state. (e) A σ− is absorbed, a σ− is emitted and the atom changes state.

of alkali atoms such as rubidium. Although the actual structure of real atoms is usually more complicated, this is sufficient to explain the main features of Sisyphus cooling. The atom will absorb and emit photons, but the timescale over which this happens is much shorter than the timescale of atomic transport, so we may simply consider the atom transitioning between the two ground states. This is another example of adiabatic elimination as discussed in Chapter 4. For an atom in the +1/2 ground state there are five possible processes, which are illustrated in Figure 8.3. Clearly, an atom in the presence of σ+, or nearly σ+, light will rapidly be driven to the +1/2 ground state, and an atom in the presence of σ− light will be optically pumped into the -1/2 ground state.

There is one further concept that must be appreciated in order to understand Sisyphus cooling. The light shift gives rise to a force on the atomic center of mass that depends on the internal state of the atom and the driven polarization of the light at the atomic position. In a 1D lattice with the origin chosen to be a σ− lattice site, the potential energy experienced by the atom is given by
\[ U_\pm = \frac{U_0}{2}\left(-2 \pm \cos 2kz\right), \tag{8.1} \]
where the ± are associated with the +1/2 and -1/2 ground states respectively,

U0 is the well depth, k is the wavenumber, and z is the atomic position. This potential gives rise to a force

F = ±U0k sin 2kz. (8.2)

This indicates that a six level atom moving through this light will essentially always feel a force pushing against it. It is through this force that the internal atomic degrees of freedom affect the external atomic degrees of freedom.

For instance, an atom in the +1/2 ground state near a σ+ lattice site but moving away from it will feel a force pulling it back toward the σ+ lattice site. If it happens to overcome this force and reach a σ− lattice site it will rapidly be driven to the -1/2 ground state. If it continues to move in the same direction it will now be pulled back toward the σ− lattice site. Thus, the atom is always moving up a potential hill. This is why the method is named Sisyphus cooling, after the Greek myth of Sisyphus who as punishment for his sins had to push a boulder up the same hill over and over again. This process continues until the atom does not have enough kinetic energy to make it to the next lattice site. Of course, even after the cooling is complete interesting dynamics will still occur. The photon emission process is stochastic in nature, so that even a cooled atom will occasionally receive a series of photon kicks large enough to propel it into the next lattice site. This is the type of dynamics we are

primarily concerned with here. The primary quantity of interest is the second order correlation function, g(2)(τ), between the σ+ and σ− emissions, which is given by

\[ g^{(2)}(\tau) = \frac{\langle I_+(t)I_-(t+\tau)\rangle}{\langle I_+\rangle\langle I_-\rangle}, \tag{8.3} \]
where I± is the intensity of σ± light, and τ is referred to as the delay time. In the single photon limit, which we are interested in, this is more precisely written as
\[ g^{(2)}(\tau) = \frac{\langle a_+^\dagger(t)a_+(t)\,a_-^\dagger(t+\tau)a_-(t+\tau)\rangle}{\langle a_+^\dagger a_+\rangle\langle a_-^\dagger a_-\rangle}, \tag{8.4} \]
where a± is the photon annihilation operator for σ± light. The importance of the g(2)(τ) function is given by its relationship to the lattice diffusion constant, D,[29][49]
\[ g^{(2)}(\tau) \propto e^{-2\delta k^2 D\tau}, \tag{8.5} \]
where δk = k(1 − cos θ) and θ is the angle at which the photodetector is elevated above the horizontal.
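In practice, Eq. (8.5) means that a straight-line fit to log g(2)(τ) versus τ yields the diffusion constant. The sketch below only illustrates that step; the wave number, the detector angle, and the synthetic data are all assumptions, not measured values from the experiment.

```python
import numpy as np

# Extract D from an exponential g2 decay, Eq. (8.5), using synthetic data.
k = 2 * np.pi / 780e-9               # probe wave number (assumed, Rb D2 line)
theta = np.pi / 4                    # detector elevation angle (assumed)
dk = k * (1 - np.cos(theta))

tau = np.linspace(0, 2e-4, 200)      # delay times in seconds
D_true = 1e-9                        # made-up diffusion constant, m^2/s
g2 = np.exp(-2 * dk**2 * D_true * tau)

slope = np.polyfit(tau, np.log(g2), 1)[0]
D_fit = -slope / (2 * dk**2)
print(D_fit)                         # recovers D_true
```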

8.2 Numerical Simulations

In this section a semi-classical description, which treats the external atomic degrees of freedom classically and the internal atomic degrees of freedom quantum mechanically, is used as the basis of a Monte-Carlo algorithm as described in [27]. The main idea is to stochastically implement one of the processes in Figure 8.3 above. Actually, the processes which do not change the internal state are grouped together and the processes which do change the internal state are grouped together. The first step in the simulation is to calculate the probability that the atom changes internal state. If the atom is in the +1/2 ground state the probability of it transitioning to the -1/2 ground state in a time ∆t is given

by
\[ \gamma_{+-}(z)\Delta t = \frac{2}{9}\Gamma'\cos^2(kz)\,\Delta t, \tag{8.6} \]
where γ+−(z) is the transition rate from the +1/2 state to the -1/2 state, Γ′ = s₀Γ is the light-shifted atomic decay rate, and Γ is the unmodified atomic decay rate. For completeness,

\[ s_0 = \frac{2\Omega_0^2}{4\delta^2 + \Gamma^2}, \tag{8.7} \]
where δ = ωL − ωA is the laser detuning (ωL is the laser frequency and ωA is the frequency difference between the ground and excited atomic states) and Ω₀ is the Rabi frequency in the case that the light is polarized exactly for a particular transition. Similarly, the probability of the atom transitioning from the -1/2 state to the +1/2 state is given by

\[ \gamma_{-+}\Delta t = \frac{2}{9}\Gamma'\sin^2(kz)\,\Delta t. \tag{8.8} \]

Usually, when performing these simulations, it is only necessary to specify a value of Γ′, but it is good to be aware of exactly what Γ′ is. Whether or not a transition occurs is determined by comparing this value with a random number between zero and one; this is why the algorithm is referred to as a Monte-Carlo algorithm. The next step is to calculate the deterministic force from the light shift and the stochastic diffusion force arising from random photon kicks. The deterministic force is calculated according to the formula in Equation 8.2. The diffusion constant has a different form depending on which of the processes in Figure 8.3 occurs. If the state does not change, the diffusion constant is given by
\[ D_\pm(z) = \frac{\hbar^2k^2\Gamma'}{90}\left(35 \pm 7\cos 2kz\right), \tag{8.9} \]
with ± corresponding to the atom being in the +1/2 or -1/2 state respectively. If the atom changes from the +1/2 to the -1/2 state the diffusion

constant is given by

\[ D_{+-}(z) = \frac{\hbar^2k^2\Gamma'}{90}\left(6 - \cos 2kz\right). \tag{8.10} \]

If the reverse process occurs, the diffusion constant is given by

\[ D_{-+}(z) = \frac{\hbar^2k^2\Gamma'}{90}\left(6 + \cos 2kz\right). \tag{8.11} \]

If the atom does not change state, then the stochastic force is given by

\[ f = \sqrt{2D_\pm(z)\,\delta t}\;N(0,1), \tag{8.12} \]
where N(0,1) is a random number drawn from a Gaussian distribution with zero mean and unit variance. If the atom changes from the +1/2 to the -1/2 state the stochastic force is
\[ f = \sqrt{\frac{2D_{+-}(z)}{\gamma_{+-}(z)}}\;N(0,1), \tag{8.13} \]
and if the reverse process occurs it is

\[ f = \sqrt{\frac{2D_{-+}(z)}{\gamma_{-+}(z)}}\;N(0,1). \tag{8.14} \]

From here the change in momentum and position are simply calculated according to
\[ \Delta p = (F + f)\Delta t, \tag{8.15} \]
\[ \Delta z = \frac{p}{m}\Delta t. \tag{8.16} \]
A program written in Python which implements this algorithm to plot the position and momentum of an atom versus time is shown in Appendix F. Examples of its output for different well depths are shown in Figure 8.4.
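For orientation, a single update of the algorithm can be sketched as below. This is not the appendix program; it simply follows the update rules exactly as written in Eqs. (8.6)–(8.16), and the physical constants and step size are assumed to be supplied by the caller.

```python
import numpy as np

# One step of the semi-classical Monte-Carlo algorithm, Eqs. (8.6)-(8.16).
def step(z, p, state, hbar, k, m, U0, Gamma_p, dt):
    """Advance (position, momentum, internal state); state is +1 or -1; Gamma_p is Gamma'."""
    # Optical-pumping probability, Eqs. (8.6) and (8.8)
    if state == +1:
        prob = (2.0 / 9.0) * Gamma_p * np.cos(k * z) ** 2 * dt
    else:
        prob = (2.0 / 9.0) * Gamma_p * np.sin(k * z) ** 2 * dt
    flipped = np.random.rand() < prob

    # Deterministic light-shift force, Eq. (8.2)
    F = state * U0 * k * np.sin(2 * k * z)

    # Stochastic force from photon kicks, Eqs. (8.9)-(8.14)
    if not flipped:
        D = hbar**2 * k**2 * Gamma_p * (35 + 7 * state * np.cos(2 * k * z)) / 90
        f = np.sqrt(2 * D * dt) * np.random.randn()
    else:
        D = hbar**2 * k**2 * Gamma_p * (6 - state * np.cos(2 * k * z)) / 90
        f = np.sqrt(2 * D / (prob / dt)) * np.random.randn()
        state = -state

    # Momentum and position updates, Eqs. (8.15) and (8.16)
    p = p + (F + f) * dt
    z = z + (p / m) * dt
    return z, p, state
```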

Figure 8.4: Examples of position vs. time and momentum vs. time behavior in an optical lattice as predicted by the semi-classical algorithm for an atom beginning at rest at the origin in the -1/2 state with (a) U0 = 20ER and (b) U0 = ER, where ER = ħ²k²/(2m) is the photon recoil energy.

The graphs clearly demonstrate two regimes: one in which the atom is well trapped in a particular lattice site and thus oscillates about that site, and one in which the atom is not well trapped and thus moves rather freely through the lattice. These correspond to deep and shallow wells respectively.

This simple algorithm is able to effectively reproduce the qualitative features of diffusion, but it sometimes predicts nonphysically large momentum jumps. More importantly for our purposes, it does not seem to be able to reproduce the expected g(2)(τ) curves. For this reason, a slightly modified algorithm is used. The modified algorithm makes two major changes. First, the force felt by the atom is evaluated in two steps. Second, the atom only feels the stochastic diffusion force if the atom does not change internal state. An intermediate position is calculated based on the force felt at the beginning of the time step. The force felt at the intermediate position is calculated and the average of the two forces is used to calculate the new position. An example of the diffusion predicted by this algorithm is shown in Figure 8.5.

The output of this algorithm can be used to calculate the expected g(2)(τ) behavior. All that is needed is to keep track of what type of photon is emitted at each time-step in two lists. One list has a "1" if a σ+ photon is emitted and a "0" otherwise, and the other does the same for σ− photons. A shift register algorithm is then applied to the resulting lists. The shift register algorithm is simple. For example, in order to construct the g(2)(τ) correlation of σ+ photons with σ+ photons, the first step is to create a list that will become the g(2)(τ) values. Next, begin at the first element of the σ+ events and check each element until a σ+ emission event is found. Once an emission event is found, check the next elements of the list for a second event up to a time delay τ. If an event is found at a delay time t, then the t-th value of the g(2)(τ) list is increased by one. Then go back to looking for an initial event to correlate with. One important subtlety is that any event that will be correlated with must occur at least a time τ before the end of the data set, so that it has enough list entries to correlate with.

Figure 8.5: Examples of position vs. time and momentum vs. time behavior in an optical lattice as predicted by the modified semi-classical algorithm for an atom beginning at rest at the origin in the -1/2 state with (a) U0 = 20ER and (b) U0 = ER, where ER = ħ²k²/(2m) is the photon recoil energy.

Figure 8.6: An example of the σ+ − σ+ g(2)(τ) predicted by the modified semi-classical algorithm.

An example of the predicted g(2)(τ) is shown in Figure 8.6. When calculating g(2)(τ) one should take precautions to ensure that the atom has been able to move between lattice sites several times. This means that the position and momentum plots should look something like those shown in Figure 8.7. Both of the algorithms described above may be easily modified to simulate motion through a ratchet rather than through a lattice by the inclusion of a time and position dependent force. For example, a constant force should lead to the behavior shown in Figure 8.8.
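A minimal sketch of the shift register correlation described above is given below. It is not the thesis analysis program; the function name and the normalization choice (dividing by the value expected for uncorrelated counts, so that white light gives g(2)(τ) ≈ 1) are assumptions made for illustration.

```python
import numpy as np

def g2_shift_register(events, max_tau):
    """Shift-register auto-correlation of a 0/1 detection record up to delay max_tau bins."""
    events = np.asarray(events)
    counts = np.zeros(max_tau)
    # Only use start events at least max_tau bins from the end, so each start
    # has a full window of later bins to correlate with.
    starts = np.nonzero(events[:len(events) - max_tau])[0]
    for t0 in starts:
        counts += events[t0 + 1 : t0 + 1 + max_tau]
    # Normalize by the expectation for uncorrelated (Poissonian) counts.
    norm = len(starts) * events.mean()
    return counts / norm

# Example: uncorrelated "white light" clicks should give a flat g2 near one.
clicks = (np.random.rand(200000) < 0.01).astype(int)
g2 = g2_shift_register(clicks, max_tau=100)
```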

8.3 Experimental Techniques

The photons are detected using the Perkin Elmer SPCM-AQR Series photon counting module, which uses a silicon avalanche photodiode. A picture of

the single photon counting module is shown in Figure 8.9. The practical functioning of the photon counting module can be summarized simply. A photon enters a multimode fiber that is connected to the module. If the module detects the photon it outputs a transistor-transistor logic (TTL) signal through a BNC. More details about the operation of the single photon counting module can be found in [50] and in Appendix G. The TTL pulse output from the detector is sent to the field programmable gate array (FPGA) multichannel acquisition board. The board is shown in Figure 8.10. A USB cable connects the back of the board to the computer. The board is run from a LabView program on the computer. Further details about the operation of the board can be found in Appendix G and in the manual[51].

Figure 8.9: A picture of the single photon counting module. Connected to the left side of the detector is a multimode optical fiber. Connected to the right side are the 5V DC power supply and two BNCs. One of the BNCs carries the TTL output pulse when a photon is detected. The other BNC is a gating input. When the gate has a TTL high pulse it should turn the detector off.

Figure 8.7: Example of position vs. time and momentum vs. time behavior in an optical lattice as predicted by the modified semi-classical algorithm for an atom beginning at rest at the origin in the -1/2 state with U0 = ER, allowed to run for long enough for the atom to change lattice sites many times.

Figure 8.8: An example of position vs. time and momentum vs. time behavior with a constant applied force as simulated based on the modified semi-classical algorithm.

8.4 Data Analysis

The first step in analyzing the data is to turn the pseudo-binary saved by the FPGA board into a list of ones and zeros with a one corresponding to a time increment in which a pulse was detected. Technically, one list is required for

Figure 8.10: A picture of the (a) outside and (b) inside of the FPGA module. The BNC ports on the front of the FPGA module are well labeled. The output from the photon counting module should be connected to one of the ports labeled detector. The other important port is the one labeled external clock. It may be used to have the FPGA module keep track of shorter timescales.

each board input that was detecting pulses. Once this is accomplished, there are two distinct routes that may be taken to calculate g(2)(τ). One is based on the cross correlation theorem discussed in Appendix H. This method writes g(2)(τ) in terms of the Fourier transform of the lists discussed above. The other method performs a shift register algorithm, as discussed in Section 8.2, on the lists. The programs used to calculate these are discussed in Appendix I.

Figure 8.11: The averaged white light data. It should show a flat line at g(2)(τ) = 1 but it shows a distinct decay.

Unfortunately, problems persist in the analysis of the data. For example, as shown in Figure 8.11 the averaged white light g(2)(τ), which should be flat at one, shows a decay with τ. At the moment, this decay is not understood. It is possible that this issue arises from not including enough counts to correlate. If fewer counts are used, then the calculated g(2)(τ) tends to be above one. Examples of unaveraged g(2)(τ) with different numbers of counts are shown in Figure 8.12. This could be the origin of the apparent decay. Unfortunately, using more counts in a single run requires more memory than MatLab can allocate. In order to overcome this we began working on a C++ data analysis program based on the shift register algorithm. Unfortunately, it is not yet complete in the sense that it still requires the MatLab program to translate the pseudo-binary output by the board. However, the section that implements the shift register algorithm itself does seem to be working.

Figure 8.12: The unaveraged white light data for (a) 6446 counts and (b) 64460 counts. In case (a) the value is clearly elevated above one. In case (b) it is not so obviously above one but it most likely is.

The output of the C++ program for the two data sets shown in Figure 8.12 is shown in Figure 8.13.

Figure 8.13: The unaveraged white light data for (a) 6446 counts and (b) 64460 counts as calculated by the shift register program. Again, in case (a) the value is clearly elevated above one. In case (b) it is not so obviously above one but it most likely is.

Chapter 9

Summary

We have reviewed the physics of EIT and stopped light. In section 4.1, an amplitude based toy model was used to demonstrate some of the qualitative features of EIT. It is likely possible, and would be interesting, to extend this model to calculate some of the more quantitative features of EIT and compare the results to the more standard approaches to the subject. In chapter 5, we discussed how stopped light is modified if collective emission is allowed. This especially has applications for the development of photonic memories. We found that making all modes, other than the mode of the pulse being stored, subradiant increased the beam area of the output beam; this would lead to an improved quantum memory. It may be possible to experimentally realize such an improvement with atoms trapped around an optical nanofiber. It would be interesting to continue this analysis by calculating the decay rates for various specific geometries. It would also be interesting to analyze the effects of different types of control field dynamics. Next, in chapter 6, we analyzed stopped light in the presence of coherent interactions between the atoms. For interactions on the control transition, we found that if there are excitations present in the atomic cloud a minimum effective pulse speed will be set. If all the atoms are required to be in the dark state, then this interaction will have no effect on stopped light. For interactions on the probe transition, we found

that if there are excitations present, total transparency of the medium may not even be possible as the coupling effectively rotates the dark state. If all the atoms are required to be in the dark state, we found that a frequency shift results that depends on the coupling between the atoms and the number of atoms. In the future, it would be worthwhile to obtain numerical confirmation of these results. It would also be interesting to examine a situation in which both collective radiance and coherent atomic interactions are present. In chapter 7, we reviewed the Jaynes-Cummings and Rabi models and then discussed the anisotropic Rabi model. Using numerical techniques, we show how both Jaynes-Cummings and Rabi dynamics can be recovered by the appropriate choice for the two couplings. We also show that neither the Rabi model nor the anisotropic Rabi model conserves excitation number. We then go on to find an eigenstate of the interaction Hamiltonian that is valid for g1 < g2 and another that is valid for g2 < g1; they both have eigenvalue 0. We also show that the photon number in both of these states diverges

as $\frac{g_1}{g_1 - g_2}$ as g1 → g2. This provides evidence of a phase transition. A good future project would be to further characterize this phase transition and compare the results to the work on the dissipative phase transition in the anisotropic Rabi model. Finally, we discussed the beginnings of a photon counting experiment on a dissipative optical lattice. We presented some numerical results obtained with a semi-classical Monte Carlo algorithm. In particular, we demonstrate that depending on the depth of the well sites a variety of different types of dynamics can be observed. These regimes require further exploration and characterization. We then discuss the testing of the experimental apparatus (the APD and FPGA) and data analysis. With the exception of the gating mechanism on the APD, the experimental apparatus appears to be functioning properly. There are deeper problems with the data analysis. The analysis program based on the cross correlation theorem applied to a white light source demonstrates an apparent decay in coherence that we do not currently

understand. It is possible that this issue could be resolved by ignoring the correlation results for very short and very long delay times. Alternatively, we began work on a program that is based on a shift register algorithm. While the shift register appears to be working, the program requires modification to allow for averaging over many data sets and to translate the output from the FPGA into something that the shift register can use.

Appendix A

Properties of the Density Operator

This appendix proves some mathematical properties of the density operator that will be useful in the text. First, applying normalization of the quantum state implies
\[ \mathrm{Tr}(\rho) = \mathrm{Tr}\left(\sum_i P_i|\psi_i\rangle\langle\psi_i|\right) = \sum_i P_i\langle\psi_i|\psi_i\rangle = \sum_i P_i = 1. \tag{A.1} \]

This is how normalization is applied to the density operator. It is also easy to see that the density operator is Hermitian

\[ \rho^\dagger = \left(\sum_i P_i|\psi_i\rangle\langle\psi_i|\right)^\dagger = \sum_i P_i|\psi_i\rangle\langle\psi_i| = \rho. \tag{A.2} \]

Since the density operator is Hermitian it also has a unique diagonal decomposition
\[ \rho = \sum_j^d \lambda_j|j\rangle\langle j|, \tag{A.3} \]

where d is the dimension of the Hilbert space, the λj are the real eigenvalues of ρ, and the |j⟩ are the orthogonal eigenvectors. Actually, the λj are also positive. To see this, note that the diagonal elements must be the sum of products of a probability, a probability amplitude, and the complex conjugate of the probability amplitude. Each of these terms must be positive, so the sum must be positive. In fact, this is true of the diagonal elements in any basis, not just the one discussed above:

\[ \langle\phi|\rho|\phi\rangle \geq 0. \tag{A.4} \]

This is the statement that ρ is a positive operator. This is often denoted by

ρ ≥ 0. (A.5)

This result clearly has to be correct if the interpretation of the diagonal elements of the density operator as probabilities is going to hold up. It is sometimes useful to have a quick method for determining whether a state is pure or mixed. To that end, consider the trace of the square of the density operator,
\[ \mathrm{Tr}(\rho^2) = \mathrm{Tr}\left(\sum_{i,j}P_iP_j|\langle\psi_i|\psi_j\rangle|^2\right). \tag{A.6} \]

The inner products in the trace become Kronecker deltas and what remains is

\[ \mathrm{Tr}(\rho^2) = \sum_i P_i^2 \leq 1, \tag{A.7} \]
with equality and positivity implying that the sum has only one term, given by P1 = 1. In other words, equality holds if and only if the state is a pure state. This test is frequently used if the purity of a state is in doubt.
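The purity test is trivial to carry out numerically; a tiny illustration (with arbitrary example states) is

```python
import numpy as np

# Tr(rho^2) = 1 for a pure state and < 1 for a mixed state.
pure = np.outer([1, 0], [1, 0])                        # |0><0|
mixed = 0.5 * np.eye(2)                                # maximally mixed qubit
print(np.trace(pure @ pure), np.trace(mixed @ mixed))  # 1.0 and 0.5
```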

Appendix B

Alternative Derivation of the Master Equation

In this appendix, we present an alternative derivation of the master equation based on writing the Schrödinger equation as an integro-differential equation. Once again, consider a system interacting with some reservoir. It is convenient to further assume that the system and the reservoir are initially uncorrelated. In the interaction picture, the density operator evolves according to

\[ \frac{d\rho(t)}{dt} = -\frac{i}{\hbar}\left[H_{\rm int}(t),\rho(t)\right]. \tag{B.1} \]

This can be written as an integral equation as

\[ \rho(t + \Delta t) = \rho(t) - \frac{i}{\hbar}\int_t^{t+\Delta t}\left[H_{\rm int}(t'),\rho(t')\right]dt'. \tag{B.2} \]

Using the Dyson series this may be expanded as

Z t+∆t Z t+∆t Z t0 i 0 0 i 2 0 00 00 0 ρ(t+∆t) = ρ(t)− [Hint(t ), ρ(t)]dt +(− ) [Hint(t )[Hint(t ), ρ(t)]]dt dt +... h¯ t h¯ t t (B.3)

90 Keeping only terms up to second order and tracing over the reservoir gives

Z t+∆t i 0 0 ρs(t + ∆t) =ρ(t) − T rR ([Hint(t ), ρ(t)]) dt h¯ t  2 Z t+∆t Z t0 i 0 00 00 0 + − T rR ([Hint(t )[Hint(t ), ρ(t)]]) dt dt . h¯ t t

This can be rewritten as an integro-differential equation by taking the limit that ∆t → 0

Z t dρs(t) i 1 0 0 = − [Hint(t), ρ(0)] − 2 T rR ([Hint(t), [Hint(t ), ρ(t)]]) dt , (B.4) dt h¯ h¯ t0 where t0 is the initial time. Now we coarse grain by assuming that the time scale of interest, ∆t, is much greater than the correlation time τc. If we further the reservoir begins in a stable equilibrium which it returns to after correlations with the system die out then the density operator may always be written as

ρ(t) = ρs(t) ⊗ ρR(0). (B.5)

This amounts to the statement that the information that flows to the reser- voir from the system has a negligible effect on the reservoir will never return to effect the system. This approximation, known as the Markov approxima- tion, is valid as long as the reservoir has many more degrees of freedom than the system. Usually, it is assumed that in the reservoir equilibrium state the reservoir operators have zero mean which means

[Hint(t), ρ(0)] = 0. (B.6)

The integro-differential equation above can then be written as

Z t dρs(t) 1 0 0 = − − 2 T rR ([Hint(t), [Hint(t ), ρ(t)]]) dt . (B.7) dt h¯ t0

91 Performing the commutators and using the partial trace to define an expec- tation value over the reservoir degrees of freedom as

hOiR ≡ T rR(ρRO), (B.8) puts the integro-differential equation in the form

Z t dρs(t) 1 0 0 = − 2 (hHint(t)Hint(t )ρs(t)iR + hρs(t)Hint(t )Hint(t)iR dt h¯ t0 0 0 0 − hHint(t)ρs(t)Hint(t )iR − hHint(t )ρs(t)Hint(ts)iR)dt .

At this point it is useful to recognize that the interaction Hamiltonian may be written as X † † Hint =h ¯ siΓ + si Γ , (B.9) i where the si are system operators and the Γi is called the reservoir noise operator. Plugging this form into the integro-differential equation above gives

Z t dρs(t) 1 X = − [(s s†ρ (t) − s†ρ (t)s )hΓ(t)Γ(t0)†i + (s s ρ (t) − s ρ (t)s )hΓ(t)Γ(t0)i dt h¯2 i j s j s i R i j s j s i R i,j t0 † † † 0 † 0 + (si sjρs(t) − sjρs(t)si )hΓ(t ) Γ(t)iR + (sisjρs(t) − sjρs(t)si)hΓ(t )Γ(t)iR + H.c.]

In order to proceed we will need more information about the reservoir. We will model the reservoir as a collection of harmonic oscillators. This choice is ubiquitous atomic physics where the reservoir is composed of electromagnetic field modes. The system noise operator may then be written as

X −i∆kij t Γ = gk,µak,µe , (B.10) k,µ,i,j,i6=j where ∆kij = ωk −ωij and ωij is the frequency difference between two system

92 states. If the collection of harmonic oscillators is in a thermal state then following important results hold

† † hak,µiR = hak,µak0,µ0 iR = hak,µak0,µ0 iR = 0, (B.11)

† hak,µak0,µ0 iR =n ¯(ωk)δkk0 δµµ0 (B.12)

† hak,µak0,µ0 iR = (¯n + 1)(ωk)δkk0 δµµ0 . (B.13) These then imply

† 0 † 0 † hΓ(t)iR = hΓ(t) iR = hΓ(t)Γ(t )iR = hΓ(t) Γ(t ) iR = 0, (B.14)

0 † 0 X 2 i∆kij (t−t ) hΓ(t) Γ(t )iR = |gk,µ| n¯(ωk)e (B.15) k,µ,i,j,i6=j

0 0 † X 2 −i∆kij (t−t ) hΓ(t)Γ(t ) iR = |gk,µ| (¯n(ωk) + 1)e . (B.16) k,µ,i,j,i6=j This sets half of the terms remaining in the master equation equal to zero. It is useful to coarse grain over the states by introducing a density of states,

D(ωk), and treating these as continuous integrals rather than discrete sums. Depending on the geometry of the reservoir this may be an exact step. The result is

∞ Z 0 † 0 2 i∆kij (t−t ) hΓ(t) Γ(t )iR = D(ωk)|g(ωk)| n¯(ωk)e dωk (B.17) 0

∞ Z 0 0 † 2 i∆kij (t−t ) hΓ(t)Γ(t ) iR = D(ωk)|g(ωk)| (¯n(ωk) + 1)e dωk (B.18) 0 The integral remaining in the master equation effects only the remaining reservoir correlation functions. These integrals are discussed in Chapter 2. Evaluating them gives the expected form of the master equation in the

93 Heisenberg picture

i X Γij Γij ρ˙ = − [δH, ρ ]+ (¯n+1)(2s ρs† −s† s ρ−ρs† s )+ n¯(2s† ρs −s s† ρ−ρs s† ). s h¯ s 2 ij ij ij ij ij ij 2 ij ij −ij ij ij ij i,j,i6=j (B.19)

94 Appendix C

Guide to the Use and Abuse of QuTiP

The Quantum Toolbox in Python (QuTiP) can make obtaining numerical solutions to problems in quantum mechanics where the degrees of freedom are discrete almost too easy. Most of the lines of code in a QuTiP code look very similar to an equation would be written down when trying to analytically describe a quantum system. Almost all QuTiP codes can be broken down to a few basic steps. First, define the parameters of that describe the system. These are things like the frequency difference between two levels of an atom. Second, define the operators that are needed to describe the system. QuTiP has a number of built in operators. For example, there are built in creation and annihilation operators for the harmonic oscillator (create(n) and destroy(n) respectively), Pauli operators(sigmax(), sigmay(), and sigmaz()) and identity operators (qeye(n)). The n in the function names of the cre- ation, annihilation, and identity operators indicates the size of the Hilbert space. The Hilbert space of a harmonic oscillator is of course infinite but must be truncated to make a numerical treatment possible. For problems with multiple degrees of freedom QuTiP has a built in tensor product func-

95 tion (tensor(operator1, operator2)) where operator1 and operator2 are as- sociated with different degrees of freedom. As noted above, if you wish to add two operators together they must have the same tensor product structure. That is, we cannot add op1 = tensor(create(5), sigmax()) and op2 = qeye(10). We can however add op1 and op3 = tensor(qeye(5), qeye(2)). Third, use the newly created operators to define the Hamiltonian. This step is straight forward and the result usually looks almost identical to the result of writing the Hamiltonian out by hand. Fourth, specify the initial state of the system. QuTiP has a number of built in states the most important of which are basis states (basis(n, m)) where n is once again the dimension of the Hilbert space and m is the state in that Hilbert space. Which physical state each an m value corresponds to depends on how the operators of the Hilbert space are defined. These states are flexible but are usually used as the energy eigenstates for some degree of freedom. Initial states of different degrees of freedom can be combined in the same way as the operators associated with the different degrees of freedom. Fifth, specify a list of times at which you wish to know the state and/or the expectation values of some list of observables. This list is easily created using numpy’s linspace function. Sixth, it is possible to specific a list of observable operators to take the exception value of. This step is optional, as long as the states are expect track of the expectation values can always be calculated after the fact (the need function in QuTiP is expect(operator, states) where states is either a state or be a list of states in which case expect returns a list containing the expectation value in each state in the list), but it is sometimes more convenient to calculate expectation values in this way. Seventh, specify a list of collapse operators if the system is an open quan- tum system. If the quantum system is closed simply pass a empty list or Schrdinger equation time evolution instead of master equation evolution.

96 Note that, as discussed above, in the quantum trajectories formalism, a non- hermitian effective Hamiltonian is constructed out of the Hamiltonian and the collapse operators. QuTiP will generate this effective Hamiltonian auto- matically so it need not be manually constructed. Eighth, specify a list of observable operators to take the expectation value of. This step is also optional and may also be skipped by passing an empty list. I usually skip this step since the if the states are available the expectation values can also be calculated later and if the master equation time evolution is used and expectation values are requested QuTiP will not return a list of states which defeats the point of working in the Schrodinger picture. Ninth, decide out the time evolution should be preformed. QuTiP has a number of built in choices. The two that are most commonly used are mas- ter equation evolution (mesolve(H, psi0, tlist, cops, exlist)) and Monte Carlo (quantum trajectories) evolution (mcsolve(H, psi0, tlist, cops, exlist, ntraj)). Notice that the Monte Carlo time evolution required one additional piece of information, the number of trajectories to run. If a number of trajectories is not specified the default is 500. The time evolution function is usually given a name. For example, result = mesolve(...). If expectation values were not requested, a list of states may be obtained with result.states. If expectation values were requested, result.expect is a list of lists of expectation values of the observables given in exlist. In other words, result.expect[0] is a list of the expectation value of the first observable in exlist at every time step. The best way to learn about QuTiP is to look at examples. Consider the following example numerically solving the closed Jaynes-Cummings model. import qutip as qt import numpy as np from matplotlib import pyplot as plt wc = 1.0 # cavity frequency

97 wa = 1.0 # qubit/atom frequency g = 0.1 # coupling strength

# cavity mode operator a = qt.tensor(qt.destroy(5), qt.qeye(2))

# qubit/atom operators sz = qt.tensor(qt.qeye(5), qt.sigmaz()) #sigma-z operator sigmam = 0.5 * (qt.sigmax() - 1j * qt.sigmay()) sm = qt.tensor(qt.qeye(5), sigmam) #sigma-minus operator

# the Jaynes-Cumming Hamiltonian H = wc * a.dag() * a + 0.5 * wa * sz + g * (a * sm.dag() + a.dag() * sm)

# initial state psi0 = qt.tensor(qt.basis(5, 3), qt.basis(2, 0))

#list of times for which the solver should store the state vector tlist = np.linspace(0, 100, 1000)

# time evolution result = qt.mesolve(H, psi0, tlist, [], [])

# plotting fig, axes = plt.subplots(1,1) axes.plot(tlist, qt.expect(sz, result.states)) axes.set_xlabel(r’$t$’, fontsize=20) axes.set_ylabel(r’$\left<\sigma_z\right>$’, fontsize=20)

98 Figure C.1: The atomic inversion vs time in the Jaynes-Cummings model.

plt.show()

The output is shown in Figure C.1. It displays Rabi oscillations as expected.

Notice that rather than trying to use the built in annihilation operator for the atom as destroy(2), I have by hand built an operator, sigmam, based on the equation 1 σ = (σ − iσ ). (C.1) − 2 x y This is because there is a disagreement in the representation convention used by QuTiP for harmonic oscillators and the one used for spins. For the har- monic oscillator basis(5, 4) is an excited state. In particular, it is the state fourth number state. For a spin on the other hand basis(2, 1) is spin down. This situation can be avoided by choosing to never use the built in spin oper- ators or by only using the built in built in σz operator and always multiplying

99 it by minus one. Like the tracking of tensor product structure this is occa- sionally inconvenient but I often find that it forces programs to be written in a very intentional and structured way which is usually a good thing. The initial state need not be a pure state. For example, in the program above to make the initial state a mixed state the following modification may be made.

# initial state # before we had psi0 = qt.tensor(qt.basis(5, 1), qt.basis(2, 0)) # create first pure state psi1 = qt.tensor(qt.basis(5, 3), qt.basis(2, 0)) rho1 = psi1 * psi1.dag()

# create second pure state psi2 = qt.tensor(qt.basis(5, 4), qt.basis(2, 1)) rho2 = psi2 * psi2.dag()

#create mixture psi0 = 0.75 * rho1 + 0.25 * rho2

The output if the initial state is a mixture is shown in Figure C.2. Again it shows Rabi oscillations but now with a reduced amplitude do to interference between the pure states that have been mixed. If open quantum system time evolution is desired then several modifica- tions are required. First, two new parameters, the atomic and cavity decay rates, are required. wc = 1.0 # cavity frequency wa = 1.0 # qubit/atom frequency g = 0.1 # coupling strength kappa = 0.01 # cavity decay rate gamma = 0.01 # atomic decay rate out of cavity

100 Figure C.2: The atomic inversion vs time in the Jaynes-Cummings model for an initial state that is mixed.

Second, a list of collapse operators must be generated

# A list of collapse operators c_ops = [np.sqrt(kappa) * a, np.sqrt(gamma) * sm]

Finally, the list of collapse operators must be passed to the time evolution function.

# before we had result = qt.mesolve(H, psi0, tlist, [], []) result = qt.mesolve(H, psi0, tlist, c_ops, [])

The output is shown in Figure C.3. The opening the system is to damp the Rabi oscillations. In to make the Hamiltonian time dependent the Hamiltonian must be slit into constant and time dependent parts. The coefficient of the time dependent parts should specified as below.

101 Figure C.3: The atomic inversion vs time in the damped Jaynes-Cummings model for an initial state that is mixed.

# the Jaynes-Cumming Hamiltonian H0 = wc * a.dag() * a + 0.5 * wa * sz def g_coeff(t, args): return g * np.cos(w * t) H1 = a * sm.dag() + a.dag() * sm

H = [H0, [H1, g_coeff]]

The output is shown in Figure C.4. The time dependence manifests itself in a variation of the Rabi frequency. The oscillation is still damped but the variation of the coupling strength has introduced an effect driving so that the steady state inversion is increased. If different parts of the Hamiltonian have different time dependent coefficients the Hamiltonian may be specified as below.

102 Figure C.4: The atomic inversion vs time in the damped Jaynes-Cummings model for a coupling with a sinusoidal time dependence for an initial state that is mixed.

103 (a) (b)

Figure C.5: The atomic inversion vs time in the damped Jaynes-Cummings model for a coupling with a sinusoidal time dependence for an initial state that is mixed produced using quantum trajectories. In (a) only one trajectory was used and in (b) 100 trajectories were averaged over.

H = [H0, [H1, g_coeff], [H2, q_coeff], ...]

As mentioned above, in order to use quantum trajectories time evolu- tion instead of master equation time evolution an additional parameter, the number of trajectories is necessary.

ntraj = 100 # number of trajectories to run

The line in the code that actually calls for the time evolution must also be modified to call mcsolve instead of mesolve.

# before we had result = qt.mesolve(H, psi0, tlist, c_ops, []) result = qt.mcsolve(H, psi0, tlist, c_ops, [sz], ntraj)

The output is shown in Figure C.5 for 1 trajectory and for 100 trajectories. When using quantum trajectories many trajectories must be averaged over in order to reproduce the results of master equation evolution. When using quantum trajectories time evolution it is usually simpler to request expec- tation values instead of states since there will now be a list of lists of states

104 (one list of state for every trajectory) and all of these trajectories must be averaged in order to accurately calculate an expectation value. This is why sz appears in the list that was previously empty. If a system has both driving and dissipation we would sometimes like to know how some steady state expectation value depends on a parameter that appears in the Hamiltonian. For example we might add to the system discussed above a cavity driving and wish to know how the steady state inversion depended on the atom cavity coupling strength. In this case a new parameter, the cavity driving strength, must be specified. Two new lists are required. First, a list of couplings to swept through and second, a list to store the inversions in each steady state. k = 0.1 # cavity driving strength glist = np.linspace(0.0, 10.0, 100) szlist = []

QuTiP has a built in function called steadystate which takes as input the Hamiltonian and a list of collapse operators. Usually it required that the Hamiltonian and this function be placed in a for loop in order to sweep through a range of value of some parameter as below. for i in glist: # the driven Jaynes-Cummings Hamiltonian H=wc*a.dag()*a+0.5*wa*sz+i*(a*sm.dag()+a.dag()*sm)+k*(a+a.dag())

# result = qt.mesolve(H, psi0, tlist, c_ops, []) result = qt.steadystate(H, c_ops) szlist.append(qt.expect(sz, result)) The result is shown in Figure C.6. The peak at coupling slightly less than 2 is likely related to a quantum phase transition that occurs in the driven Jaynes-Cummings model. If only the steady state is required the initial state

105 Figure C.6: The steady state atomic inversion vs coupling strength in the driven Jaynes-Cummings model. and a the list of times are no longer required. As a more relevant example the following program uses the steadystate function to plot the transparency window of EIT. import qutip as qt import matplotlib.pyplot as plt import numpy as np g = 5.0 #The decay rate of the probe transition G = 0.0*g #the decay rate of the control transition d = 0.0*g #The detuning of the control field c = 2.0*g #The Rabi frequency of the control transition p = 0.1*g #The Rabi frequency of the probe transition

#The are the number opperators for the bare states.

106 sig11 = qt.three_level_ops()[0] sig33 = qt.three_level_ops()[1] sig22 = qt.three_level_ops()[2]

#The lowering operators. sig13 = qt.three_level_ops()[3] sig23 = qt.three_level_ops()[4]

#creation of the lists that will eventually be plotted siglist = [] dlist = np.linspace(-15.0, 15.0, 100.0)

#this is the loop that sweeps through the detunings for D in dlist:

H1=D*(sig33-sig11) #the interaction hamiltonian H2=(p/2.0)*(sig13.dag()+sig13)+(c/2.0)*(sig23+sig23.dag()) H=H1+H2 #total hamiltonian

#solving for the steady state rho_ss=qt.steadystate(H,[np.sqrt(5.0)*sig13,np.sqrt(0.5)*sig23])

#here the expectation values are appended to the list to be siglist.append(qt.expect(sig33,rho_ss))m

#Change to units of lifetimes for i in dlist: i = i/g

107 Figure C.7: The steady state population of level |3i vs detuning for a lambda system showing the famous EIT dip.

#here the plots are made fig, axes = plt.subplots(1,1) axes.plot(dlist, siglist) axes.set_xlabel(r’$\Delta/\gamma$’, fontsize=20) axes.set_ylabel(r’$\left<\sigma_{33}\right>$’, fontsize=20) plt.show()

The output is shown in Fig C.7. This program also makes use of QuTiP’s built in three level operators. The labeling scheme used here for these is such that the one labeled sigij is the operators of the form |iihj.

108 Appendix D

Green’s Functions

Following [52] a, a linear differential equation for ψ(x) may be written in terms of a differential operator L as

Lψ(x) = f(x), (D.1) where f(x) is a source term. If the differential equation is the be satisfied in a region V subject to homogeneous boundary conditions, then Green’s function may be defined such that Z 3 ψ(x1) = d x2G(x1, x2)f(x2), (D.2) V where the Green’s function must also satisfy the boundary conditions. In words, this means that the Green’s function is the kernel of the integral operator that turns f(x1) into ψ(x1). Allowing the differential operator to act on this gives Z 3 Lψ(x1) = LG(x1, x2)f(x2)d x2. (D.3) V

This implies that 3 LG(x1, x2) = δ (x1 − x2). (D.4)

109 This equation says that the Green’s function is an impulse response func- tion. In the case of spherically symmetric boundary conditions at infinity the Green’s function simplifies to

G(x1, x2) = G(|x1 − x2|). (D.5)

As an example consider Poisson’s equation

52φ(x) = −4πρ(x), (D.6)

Where φ(x) is the electric potential and ρ(x) is the electric charge density. For a localized charge distribution in the absence of conductors of dielectrics, the boundary conditions are that φ(x) vanish as x → ∞. The Green’s function for this equation satisfies

2 3 0 51]G(|x1 − x2|) = δ (x − x ). (D.7)

It is helpful to make the substitution y = x1 − x3. The Fourier transform of the Green’s function is given by Z G˜(k) = d3yG(y)e−ik·y. (D.8)

Therefore, in momentum space, the Green’s function obeys

4π G˜(k) = . (D.9) k2

Taking the inverse Fourier transform of this gives

1 Z d3k G(y) = e+ik·y. (D.10) 2π2 k2

110 This integral can be evaluated to give

1 G(|x1 − x2|) = , (D.11) |x1 − x2| which is a familiar result usually obtained by more elementary means.

111 Appendix E

Pulse Propagation Programs

The program shown below uses the MaxwellBloch module to study the prop- agation of pulses through gases, especially stopped light. This program was used to generate the Figure 5.1.

mb_solve_json = """{ "ob_atom": { "decays": [ { "channels": [[0,1]], "rate": 0.0 }, { "channels": [[3,4]], "rate": 0.0 }, { "channels": [[1,2]], "rate": 0.0 }, { "channels": [[4,5]], "rate": 0.0 } ],

112 "energies": [], "fields": [ { "coupled_levels": [[0, 1]], "detuning": 0.0, "detuning_positive": false, "label": "probe1", "rabi_freq": 1.0e-3, "rabi_freq_t_args": { "ampl_1": 1.0, "centre_1": 0.0, "fwhm_1": 1.0 }, "rabi_freq_t_func": "gaussian_1" }, { "coupled_levels": [[3, 4]], "detuning": 0.0, "detuning_positive": false, "label": "probe2", "rabi_freq": 1.0e-3, "rabi_freq_t_args": { "ampl_1": 1.0, "centre_1": 0.0, "fwhm_1": 1.0 }, "rabi_freq_t_func": "gaussian_1" },

113 { "coupled_levels": [[1, 2]], "detuning": 0.0, "detuning_positive": false, "label": "coupling1", "rabi_freq": 5.0, "rabi_freq_t_args": { "ampl_2": 1.0, "fwhm_2": 0.2, "off_2": 4.0, "on_2": 6.0 }, "rabi_freq_t_func": "ramp_offon_2" }, { "coupled_levels": [[4,5]], "detuning": 0.0, "detuning_positive": false, "label": "coupling2", "rabi_freq": 5.0, "rabi_freq_t_args": { "ampl_2": 1.0, "fwhm_2": 0.2, "off_2": 4.0, "on_2": 6.0 }, "rabi_freq_t_func": "ramp_offon_2" }

114 ], "num_states": 6 }, "t_min": -2.0, "t_max": 12.0, "t_steps": 140, "z_min": -0.2, "z_max": 1.2, "z_steps": 140, "z_steps_inner": 50, "num_density_z_func": "square_1", "num_density_z_args": { "on_1": 0.0, "off_1": 1.0, "ampl_1": 1.0 }, "interaction_strengths": [1.0e3, 1.0e3, 1.0e3, 1.0e3], "velocity_classes": { "thermal_delta_min": -0.0, "thermal_delta_max": 0.0, "thermal_delta_steps": 0, "thermal_delta_inner_min": 0.0, "thermal_delta_inner_max": 0.0, "thermal_delta_inner_steps": 0, "thermal_width": 1.0 }, "method": "mesolve", "opts": {} } """

115 import numpy as np import matplotlib.pyplot as plt from maxwellbloch import mb_solve save_as = "test_2atoms" png_str = "images/" + save_as + ".png" csv_str1 = save_as + "zandt" + ".csv" csv_str2 = save_as + "omegas" + ".csv" mb_solve_00 = mb_solve.MBSolve().from_json_str(mb_solve_json)

Omegas_zt, states_zt = mb_solve_00.mbsolve(recalc=False) fig = plt.figure(1, figsize=(16, 12))

# Probe ax = fig.add_subplot(111) cmap_range = np.linspace(0.0, 1.0e-3, 11) cf = ax.contourf(mb_solve_00.tlist, mb_solve_00.zlist, np.abs(mb_solve_00.Omegas_zt[0]/(2*np.pi)), cmap_range, cmap=plt.cm.hot) ax.set_title(’Rabi Frequency ($\Gamma / 2\pi $)’, fontsize = 40) ax.set_xlabel(’Time ($1/\Gamma$)’, fontsize = 32) ax.set_ylabel(’Distance ($L$)’, fontsize = 32) ax.text(0.02, 0.95, ’Probe’, verticalalignment=’top’, horizontalalignment=’left’, transform=ax.transAxes, =’black’, fontsize=16) plt.colorbar(cf)

116 # Both for ax in fig.axes: for y in [0.0, 1.0]: ax.axhline(y, c=’grey’, lw=1.0, ls=’dotted’) plt.tight_layout()

plt.savefig(png_str)

with open(csv_str1, ’w’) as out_file: for i in range(len(mb_solve_00.tlist)): line = "" line += str(mb_solve_00.tlist[i]) line += "," + str(mb_solve_00.zlist[i]) line += "\n" out_file.write(line) for i in range(len(mb_solve_00.zlist)-len(mb_solve_00.tlist)): line = "" line += " " line += "," + str(mb_solve_00.zlist[len(mb_solve_00.tlist) + i]) line += "\n" out_file.write(line)

np.savetxt(csv_str2, np.abs(mb_solve_00.Omegas_zt[0]/(2*np.pi)), delimiter =’,’)

The following code was supplied by Dr. Xiaodong Qi and Dr. Ivan Deutsch from the University of New Mexico. This code was used to calculate the values shown in Table 5.2. % This code is written to study some properties of the traveling and % standing wave matrices.

117 % % By Xiaodong Qi ([email protected]), 2017-6-18.

%% Standing wave case. N=5; % Number of atoms. stepa=N; % Number of atom positions in the [0,2*pi] range. q=5*pi; % Characteristic spacing % If we treat the atoms’ positions are in a Gaussian distribution, % this is the width of the distribution function. sigma0=1.0e-3*pi; ax=linspace(0,q,stepa); % Evenly distributed x from 0 to 2*pi. % Uniformally and randomly distributed x from 0 to 2*pi. bx=rand(1,stepa)*q; % cx=TruncatedGaussian(sigma0,pi,[1,stepa])+pi; % Randomize the atom position around the periodic lattice points % with a Gaussian bias. cx=TruncatedGaussian(sigma0,(ax(2)-ax(1))/2,[1,stepa])+ax; % Generating a random number array for amplitude disturbance around some % positive value. sigmaa=0.2; am_detuning=ones(N,N); % No fluctuation on the field amplitude.

% Plotting parameters: lw=2; % Linewidth. fs=15; % Fontsize. % Initialize the g matrix for different atom position distribution % functions. Gamma_even_standing=zeros(N,N); Gamma_random_standing=zeros(N,N); Gamma_gaussian_standing=zeros(N,N);

118 Gamma_even_traveling=zeros(N,N); Gamma_random_traveling=zeros(N,N); Gamma_gaussian_traveling=zeros(N,N); for ii=1:N for jj=1:N Gamma_even_standing(ii,jj)=am_detuning(ii,jj)*cos(ax(ii))*cos(ax(jj)); Gamma_random_standing(ii,jj)=am_detuning(ii,jj)*cos(bx(ii))*cos(bx(jj)); Gamma_gaussian_standing(ii,jj)=am_detuning(ii,jj)*cos(cx(ii))*cos(cx(jj)); Gamma_even_traveling(ii,jj)=am_detuning(ii,jj)*1i*exp(1i*abs(ax(ii)-ax(jj))); Gamma_random_traveling(ii,jj)=am_detuning(ii,jj)*1i*exp(1i*abs(bx(ii)-bx(jj))); Gamma_gaussian_traveling(ii,jj)=am_detuning(ii,jj)*1i*exp(1i*abs(cx(ii)-cx(jj))); end end % Initialize the eigenvalues of the g matrices. D_Gamma_even_standing=zeros(1,N); D_Gamma_even_traveling=zeros(1,N); D_Gamma_random_standing=zeros(1,N); D_Gamma_random_traveling=zeros(1,N); D_Gamma_gaussian_standing=zeros(1,N); D_Gamma_gaussian_traveling=zeros(1,N); V_Gamma_even_standing=zeros(N,N); V_Gamma_even_traveling=zeros(N,N); V_Gamma_random_standing=zeros(N,N); V_Gamma_random_traveling=zeros(N,N); V_Gamma_gaussian_standing=zeros(N,N); V_Gamma_gaussian_traveling=zeros(N,N);

%% Plot out the position distribution. figure(100); subplot(3,1,1)

119 plot(ax/pi) legend([’evenly distributed’]) subplot(3,1,2) plot(bx/pi) ylabel(’x/\pi’) legend(’randomly distributed’) subplot(3,1,3) plot(cx/pi) xlabel(’n’) legend([’Gaussian distributed with \sigma=’,num2str(sigma0/pi),’\pi’]) figure(101) subplot(3,1,1) histfit((ax-ax)/pi) legend([’evenly distributed, dx/k=’,num2str((ax(2)-ax(1))/pi),’\pi’]) subplot(3,1,2) histfit((bx-ax)/pi) ylabel(’frequency’) legend(’randomly distributed’) subplot(3,1,3) histfit((cx-ax)/pi) legend([’Gaussian distributed, \sigma=’,num2str(sigma0/pi),’\pi’]) xlabel(’dx/k (\pi)’)

% Plot out the amplitude detuning distribution function. figure(103) histfit(TruncatedGaussian(sigmaa,1.0,[N,1])+1.0); xlabel(’relative amplitude fluctuation’); %% Calculate the eigenvector and eigenvalues for the standing wave case. % for aa=1:N

120 [Vx_Gamma_even_standing,Dx_Gamma_even_standing]=eig(squeeze(Gamma_even_standing(:,:))); [Vx_Gamma_random_standing,Dx_Gamma_random_standing]=eig(squeeze(Gamma_random_standing(:,:))); [Vx_Gamma_gaussian_standing,Dx_Gamma_gaussian_standing] =eig(squeeze(Gamma_gaussian_standing(:,:))); for nn=1:N D_Gamma_even_standing(1,nn)=2*Dx_Gamma_even_standing(nn,nn); D_Gamma_random_standing(1,nn)=2*Dx_Gamma_random_standing(nn,nn); D_Gamma_gaussian_standing(1,nn)=2*Dx_Gamma_gaussian_standing(nn,nn); V_Gamma_even_standing(:,nn)=Vx_Gamma_even_standing(:,nn); V_Gamma_random_standing(:,nn)=Vx_Gamma_random_standing(:,nn); V_Gamma_gaussian_standing(:,nn)=Vx_Gamma_gaussian_standing(:,nn); end

121 Appendix F

Jaynes-Cummings Model Generalization Program

The code shown below can be used to create all of the Figures in Chapter 7. It can simulate either the Jaynes-Cummings model, the Rabi model, or the two coupling Rabi model by choosing the appropriate values for g1 and g2.

import qutip as qt import numpy as np from matplotlib import pyplot as plt

wc = 1.0 # cavity frequency wa = 1.0 # qubit/atom frequency g1 = 1.0 # coupling strength g2 = 10.0 # coupling strength g = 1.0 kappa = 0.01 # cavity decay rate gamma = 0.01 # atomic decay rate out of cavity

# cavity mode operator a = qt.tensor(qt.destroy(5), qt.qeye(2))

122 # qubit/atom operators sz = qt.tensor(qt.qeye(5), qt.sigmaz()) #sigma-z operator sigmam = 0.5 * (qt.sigmax() - 1j * qt.sigmay()) sm = qt.tensor(qt.qeye(5), sigmam) #sigma-minus operator

# the Jaynes-Cumming Hamiltonian #H = wc * a.dag() * a + 0.5 * wa * sz + g * (a + a.dag()) * (sm + sm.dag()) H = wc * a.dag() * a + 0.5 * wa * sz + g1 * (a * sm.dag() + a.dag() * sm) + g2 * (a * sm + a.dag() * sm.dag())

# initial state psi0 = qt.tensor(qt.basis(5, 3), qt.basis(2, 0))

# A list of collapse operators #c_ops = [np.sqrt(kappa) * a, np.sqrt(gamma) * sm] c_ops = []

#list of times for which the solver should store the state vector tlist = np.linspace(0, 10*g1, 100)

# before we had result = qt.mesolve(H, psi0, tlist, [], []) result = qt.mesolve(H, psi0, tlist, c_ops, []) fig, axes = plt.subplots(1,1) axes.plot(tlist, qt.expect(sz, result.states), label=r’$\left<\sigma_z\right>$’) axes.plot(tlist, qt.expect(a.dag()*a, result.states), label=r’$\left$’) axes.set_xlabel(r’$tg_{1}$’, fontsize=20) #axes.set_ylabel(r’$\left<\sigma_z\right>$’, fontsize=20)

123 plt.legend() plt.show()

124 Appendix G

Semi-Classical Diffusion Programs

The programs shown in this section implement the semi-classical algorithms discussed in Chapter 7 part 2. The first uses the original algorithm to plot the position and momentum of an atom as it moves through an optical lattice. This program was used to produce Figure 8.4. The program can also simulate the motion of an atom in a ratchet with biharmonic driving defined by

F (t) = F0(Ad cos ωdt + Bd cos(2ωdt + φ)), (G.1) by setting the driving force amplitude to a value other than zero. This force simply appears as an additional force that is added to the light shift force and the stochastic diffusion force. import numpy as np from matplotlib import pyplot as plt

Uo = 1.0 # well depth k = 1.0 # wave number Gamma = 1.0 # atomic decay rate

125 m = 1.0 # atomic mass tmax = 10 # number of atomic lifetime to let the simulation run tstep = 100 # total number of time steps Fo = 1.0 # magnitude of the driving force A = 1.0 # relative strengths of the oscillations B = 1.0 wd = 1.0*Gamma # driving frequency phi = 1.0 # phase difference tlist = np.linspace(0, tmax*Gamma, tstep) # makes list of times dt = tlist[1] - tlist[0] # size of time steps p = 0.0 # sets initial momentum of the atom z = 0.0 # sets initial posiiton of the atom state = ’plus’ # sets the initial internal state of the atom plist = [p] # creates list to be plotted zlist = [z] # creates list to be plotted

# this loop does the time evolution for t in range(len(tlist)-1): f = Fo*(A*np.cos(wd*t)+B*np.cos(2*wd*t+phi)) # biharmonic driving # the program does one thing depending on the internal state of the atom if state == ’plus’: # the deterministic force felt by the atom F = k*Uo*np.sin(2*k*z) # the rate at which the atom switches internal states gamma = (2/9)*Gamma*(np.cos(k*z))**2 ] # randomly determines if the atom changes state if np.random.random_sample() < gamma*dt:

126 D = ((k**2*Gamma)/90)*(6-np.cos(2*k*z)) # the change in momentum this time step dp =F*dt+np.sqrt(2*D/gamma)*np.random.normal(0,1,1)[0] + f*dt # changes the internal state of the atom state == ’minus’ # if the internal state does not change else: D = ((k**2*Gamma)/90)*(35+7*np.cos(2*k*z)) dp =F*dt+np.sqrt(2*D*dt)*np.random.normal(0,1,1)[0] + f*dt # the same as above but for the other internal state this time step else: F = -k*Uo*np.sin(2*k*z) gamma = (2/9)*Gamma*(np.sin(k*z))**2 if np.random.random_sample() < gamma*dt: D = ((k**2*Gamma)/90)*(6+np.cos(2*k*z)) dp =F*dt+np.sqrt(2*D/gamma)*np.random.normal(0,1,1)[0] + f*dt state = ’plus’ else: D = ((k**2*Gamma)/90)*(35-7*np.cos(2*k*z)) dp =F*dt+np.sqrt(2*D*dt)*np.random.normal(0,1,1)[0] + f*dt p += dp # updates the momentum plist.append(p) z += (p/m)*dt # updates the position zlist.append(z)

# this loop changes the position and momentum units for i in range(len(zlist)): zlist[i] = zlist[i]*k plist[i] = plist[i]/k

127 # plotting plt.subplot(2,1,1) plt.plot(tlist, zlist) plt.xlabel(r’$t\Gamma$’, fontsize=20) plt.ylabel(r’$zk$’, fontsize=20) plt.subplot(2,1,2) plt.plot(tlist, plist) plt.xlabel(r’$t\Gamma$’, fontsize=20) plt.ylabel(r’$p/k$’, fontsize=20) plt.show()

A python program the modified semi-classical algorithm is shown below. Later in the program the shift register algorithm is used to calculate g(2)(τ). This program was used to produce Figures 8.4-8.7. import numpy as np from matplotlib import pyplot as plt

Uo = 1.0 # well depth k = 1.0 # wave number Gamma = 1.0 # atomic decay rate m = 1.0 # atomic mass tmax = 10000 # number of atomic lifetime to let the simulation run tstep = 100000 # total number of time steps delaymax = 1000 # max delay time Fo = 0.0 # magnitude of the driving force

128 A = 1.0 # relative strengths of oscillations B = 1.0 wd = 1.0*Gamma # driving frequency phi = 1.0 # phase difference tlist = np.linspace(0, tmax*Gamma, tstep) # makes list of times dt = tlist[1] - tlist[0] # size of time steps p = 0.0 # sets initial momentum of the atom z = 0.0 # sets initial posiiton of the atom state = ’plus’ # sets the initial internal state of the atom plist = [p] # creates list to be plotted zlist = [z]

# lists keep track of whether a sigma plus or sigma minus photon is emitted pluslist = np.zeros(tstep) minuslist = np.zeros(tstep)

# this loop does the time evolution for t in range(len(tlist)-1): f = Fo*(A*np.cos(wd*t)+B*np.cos(2*wd*t+phi)) # biharmonic driving # the program does one thing depending on the internal state of the atom if state == ’plus’: F = k*Uo*np.sin(2*k*z) # the deterministic force felt by the atom # the rate at which the atom switches internal states gamma = (2/9)*Gamma*(np.cos(k*z))**2 # randomly determines if the atom changes state if np.random.random_sample() < gamma*dt: p1 = p + F*dt

129 z1 = z + (p1/m)*dt F1 = k*Uo*np.sin(2*k*z1) dp =(1/2)*(F+F1+2*f)*dt # the change in momentum this time step state = ’minus’ # changes the internal state of the atom minuslist[t] = 1 # if the internal state does not change else: D = ((k**2*Gamma)/90)*(35+7*np.cos(2*k*z)) p1 = p +F*dt+np.sqrt(2*D*dt)*np.random.normal(0,1,1)[0] z1 = z + (p1/m)*dt F1 = k*Uo*np.sin(2*k*z1) dp =(1/2)*(F+F1+2*f)*dt+np.sqrt(2*D*dt)*np.random.normal(0,1,1)[0] pluslist[t] = 1 # the same as above but for the other internal state this time step else: F = -k*Uo*np.sin(2*k*z) gamma = (2/9)*Gamma*(np.sin(k*z))**2 if np.random.random_sample() < gamma*dt: p1 = p + F*dt z1 = z + (p1/m)*dt F1 = -k*Uo*np.sin(2*k*z1) dp =(1/2)*(F+F1+2*f)*dt state = ’plus’ pluslist[t] = 1 else: D = ((k**2*Gamma)/90)*(35-7*np.cos(2*k*z)) p1 = p +F*dt+np.sqrt(2*D*dt)*np.random.normal(0,1,1)[0] z1 = z + (p1/m)*dt F1 = -k*Uo*np.sin(2*k*z1) dp =(1/2)*(F+F1+2*f)*dt+np.sqrt(2*D*dt)*np.random.normal(0,1,1)[0]

130 minuslist[t] = 1 p += dp # updates the momentum plist.append(p) z += (p/m)*dt # updates the position zlist.append(z) totalcounts = 0.0 # this variable will keep track of the total number of counts

# creates a list of the appropriate length to store the g2 g2 = np.zeros(delaymax)

# this loop calculates the g2 and the total number of counts for i in range(len(pluslist)-delaymax): if pluslist[i] > 0: totalcounts += 1 for j in range(delaymax): if pluslist[i+j+1] > 0: g2[j] += 1 totalcountss = totalcounts**2 # total counts squared

# this loop normalizes g2 for i in range(len(g2)): g2[i] = (g2[i]/(totalcountss))*(len(pluslist)-delaymax)

# this creates a list of delay times for plotting delaylist = np.linspace(0, delaymax, delaymax)

# plotting figs = plt.figure(1)

131 axes = figs.add_subplot(111) axes.plot(delaylist, g2) axes.set_xlabel(r’$\tau$’, fontsize=20) axes.set_ylabel(r’$g^{(2)}(\tau)$’, fontsize=20) plt.show()

# this loop changes the position and momentum units for i in range(len(zlist)): zlist[i] = zlist[i]*k plist[i] = plist[i]/k

# plotting figs = plt.figure(2) axes = figs.add_subplot(211) axes.plot(tlist, zlist) axes.set_xlabel(r’$t\Gamma$’, fontsize=20) axes.set_ylabel(r’$zk$’, fontsize=20) axes2 = figs.add_subplot(212) axes2.plot(tlist, plist) axes2.set_xlabel(r’$t\Gamma$’, fontsize=20)

132 axes2.set_ylabel(r’$p/k$’, fontsize=20) plt.show()

133 Appendix H

Photon Counting Experimental Apparatus Details

The purpose of this appendix is to discuss important details of the operation of the single photon counting module and the FPGA which cannot be easily found in the manuals[50][51]. The single photon module is discussed first.

H.1 Single Photon Counting Module Details

There are two important things to remember when operating the module. The first is to be extremely careful when turning the module on. It should be turned on in as near complete darkness as possible. To this end the light tighting setup shown in Figure G.1 was built. Even with the light tighting the detector was always turned on with the turned off and the curtains around the table pulled. The detector is exposed to the light signal after giving the background count a few seconds to settle. The second important thing is to keep the count rate below 1MHz. To facilitate this the module should only be operated with the output connected to a pulse counter, FPGA, or other device that will display the count rate in real time. Once module is turned on and the count rate has been observed

134 to be at an acceptable level, the module may be exposed to more light. We found that usually with the light tighting the module may be operated with the room lights turned on and the table curtains open. For the flashlight test data, the flashlight was not shown directly at the fiber even with the light tighting as this sent the count rate dangerously high. For the laser test data, the laser was directed at the edge of the opening in the tighting which is marked in pencil on the lens paper on the front of the light tighting apparatus.

H.2 FPGA Multichannel Acquisition Board

The most important thing to remember when operating the board is to only ever use TTL input pulses. That is the input voltage should never exceed 5V and the to the greatest extent possible should always be positive. This can, and has, fried boards in the past. In order to prevent this, we have usually worked with any input to the board also being input to an oscilloscope, so that the voltage can be monitored in real time. The board is powered on whenever it is plugged into a a computer that is turned on. After every on off cycle the board must be reconfigured with the FPGAconf.exe file before it may be used. The target board must be set to Xylo-EM, the FX2 clock speed should be set tp 48MHz, the USB Driver should be set to CyUSB, the USB device should be set to 0 and the CyUSB GUID should be set to Cypress. In addition, a configuration file must be selected based on how the board is to be used. There are two default configurations of the board, counts mode and time tag mode. In counts mode, the board keeps track of the total number of counts on each channel and the coincidences between channels. Unfortunately, since it cannot be made to keep track of coincidences between channels with a delay, this mode is not every useful for constructing g(2)(τ). It is possible that this mode could be modified to that keep track of coincidences at a delay. It is

135 (a)

(b) Figure H.1: The light tighting apparatus from the (a) front and (b) back. The opening in the front is covered with lens tissue and marked with pencil. Even with the light tighting the flashlight was not shown directly at the apparatus but off at an angle. The laser light was shown at the very edge of the opening with a current of 90mA.

136 possible that it could then be used to calculate g(2)(τ) more directly. Instead of modifying the counts mode configuration files as discussed above, we have elected to use the default time tag mode. In this mode, the board records when counts arrive and on which channel they arrive. From this information it is possible to construct g(2)(τ) in post analysis. Both modes, are operated from similar LabView interfaces. These inter- faces allow for real time monitoring of the count rate recorded by the board. The count rate reported by the board has typically been observed to be somewhat lower than the count rate reported by the pulse counter. In order to save the collected data two steps must be taken in the LabView interface before it is run. The file to save to must be specified and the button so save the data must be pushed. The file that is actually consists of pseudo-binary that is discussed in[51]. In the time tag mode, the information contained in this file is essentially a list of times, as measured in clock increments since the beginning of the current clock cycle, when an event occurred along with which type of event occurred at each time. The default clock increment time is 20.83ns. It is possible to lower this time through the use of an external clock but we have not found this to be necessary. There are two types of events recorded, clock cycle resets, so that the total observation time may be reconstructed, and pulses detections. When the board detects a pulse is also indicates which input it was detected on.

137 Appendix I

Proof of the Cross Correlation Theorem

The purpose of this appendix is to provide a proof of the cross correlation theorem which is used in one of the two methods to calculate g(2)(τ). The first step is to recall that the correlation between two complex functions f(t) and g(t) is Z ∞ f ∗(t)g(t + τ)dt, (I.1) −∞ divided by the total observation time, T . Now preform a change of variables

x ≡ t + τ, (I.2) which leaves

Z ∞ Z ∞ f ∗(t)g(t + τ)dt = f ∗(x − τ)g(x)dx. (I.3) −∞ −∞

Now define a new function, h(x), according to

h(x) = f ∗(−x), (I.4)

138 and Z ∞ Z ∞ f ∗(x − τ)g(x)dx = h(τ − x)g(x)dx. (I.5) −∞ −∞ Now applying the convolution theorem gives

Z ∞ Z ∞ h(τ − x)g(x)dx = H(ω)G(ω)e−iωxdω, (I.6) −∞ −∞ where H(ω) and G(ω) are the Fourier transforms of h(τ − x) and g(x) re- spectively. However, it is a property of the Fourier transform that for any well behaved function, k(x)

Z ∞ k∗(−x)eiωxdx = K∗(ω). (I.7) −∞

Applying this to f(x) gives the desired result

Z ∞ Z ∞ f ∗(t)g(t + τ)dt = F ∗(ω)G(ω)e−iω(t+τ)dω. (I.8) −∞ −∞

This is the cross correlation theorem. In words this says that the correlation is proportional to the inverse Fourier transform of the complex conjugate of the Fourier transform of the first signal times the Fourier transform of the section signal. Of course, this integrals cannot strictly be extended to infinity but as long as they extend to some time significantly longer than the correlation time τc the theorems may be approximately applied. A note on normalization is necessary. Since g(2)(τ) is an averaged corre- lation the above expression must be divided by the total observation time, T , in addition to hIi2, where I is the intensity.

139 Appendix J

Photon Counting Data Analysis Programs

This appendix discusses the programs used the analyze the photon counting data.

J.1 Cross Correlation Theorem Program

The program that uses the cross correlation theorem to calculate g(2)(τ) is written in MatLab and is shown below. It is a function that takes five argu- ments. The first, Data, is a string containing the file name of pseudo-binary output by the FPGA board. The second, N, is the number of sections the data should be broken up into. Currently, the data is divided up such that each section has the same number of events (clock cycles and pulse detec- tions). We have considered modifying the program such that each section instead covers the same amount of time. The third, T, indicates how time increments should be combined into the time increment used when calculat- ing g(2)(τ). The fourth, Single, is a boolean variable. If Single is set to true, then the loop will only be run through once and an average over the whole data set will not be generated. The fifth, signal, is a string which should be

140 the name of the file that the list of ones and zeros corresponding to pulse detections will be written to. This program was used to make Figure 8.11 and 8.12. function[] = CorrelationCumulative(Data,N, T, Single, signal)

%THIS IS THE PRIMARY CODE %Everything else will run from this

%This function will use DataSep and subsequently BinSort to compare two %different channels of data (1 and 4) and cross correlate them using fast %Fourier transforms. The paper of reference that we use for this is Dr. %Karthik’s: %Sensors 2015, "High Frequency Sampling of TTL Pulses on a Raspberry Pi for %Diffusive Correlation Spectroscopy Applications" %N is the number of individual runs to add together %T is the ratio of the total bin time to the FPGA clock cycle (20.83ns) %Single is a boolean variable. If it is true the code breaks out of the %loop after the first iteration.

%First, let’s pull the data that we’re going to need

%[Times,Chan1,Chan2,Chan3,Chan4] = DataSep(Data); [A,h] = BinSort(Data); m = floor(h/N); %Number of data points in each individual run %m = 100000; for i = (1:N) %This Makes the time and signal matrices for the segment of data we are %looking at this iteration BinTime = A((i-1)*m+1:(i)*m,6:32);

141 Sig1 = A((i-1)*m+1:(i)*m,5); Sig2 = A((i-1)*m+1:(i)*m,4); Sig3 = A((i-1)*m+1:(i)*m,3); Sig4 = A((i-1)*m+1:(i)*m,2);

%This number will increase each time the internal clock resets Offset = 0; DecTime = zeros(m,1); %This is only a factor that tells the offset to activate n = 0;

%% This section will convert the binary time into decimals, and into a useable %% array that will be passed to the next program.

% A message should appear if the time is getting out of order, i.e., the % data is broken

%PropTime is going to be set so that the initial time is 0, not a bunch of %extras before the data starts later for j = (1:m) %Convert the time into something we can actually understand DecTime(j) = bin2dec(BinTime(j,1:27)); %We want to check if the internal clock reset on this count %We’ll call the actualy time proper time, and what the clock reads is %going to be DecTime. This makes sure we don’t change what we don’t %need to, since we’re only looking for resets. if j == 1 PropTime(j) = 0;

142 elseif DecTime(j) < DecTime(j - 1) Offset = Offset + 1; %This next line compensates for the lack of more digits for time PropTime(j) = DecTime(j) + (2^(26 + Offset)) - DecTime(1); %Add to the offset for the next clock reset

n = 1; %Offset is now active

else %This just makes sure offset doesn’t increase, but it still needs %to be present after each reset PropTime(j) = DecTime(j) + n*(2^(26 + Offset)) - DecTime(1);

if PropTime(j) < PropTime(j-1) display(’FUCK’); end end end

%% Quick Preallocating for speed and ease

Chan1 = zeros(1,m); Chan2 = zeros(1,m); Chan3 = zeros(1,m); Chan4 = zeros(1,m);

%% Convert all of the signals to an array for easier use for j = (1:m-1) Chan1(j) = str2num(Sig1(j));

143 Chan2(j) = str2num(Sig2(j)); Chan3(j) = str2num(Sig3(j)); Chan4(j) = str2num(Sig4(j)); end %I leave small parts at the end of my code as a break point for checking %the variables at the end and to make sure that each function is playing %nice with the others t = 3;

%To perform these functions, we need to make sure that each data point is %equally spaced

%We deem channel 1 - W, channel 2 - X, channel 3 - Y, channel 4 - Z %This part adjusts the bin sizes by combining the singals from adjacent bins W=zeros(ceil(max(PropTime)/T)+1,1); X=zeros(ceil(max(PropTime)/T)+1,1); Y=zeros(ceil(max(PropTime)/T)+1,1); Z=zeros(ceil(max(PropTime)/T)+1,1); %Now we print the collected data points onto the array of zeros for j = (1:length(Chan1)) W(ceil(PropTime(j)/T)+1) = W(ceil(PropTime(j)/T)+1) + 1; X(ceil(PropTime(j)/T)+1) = X(ceil(PropTime(j)/T)+1) + 1; Y(ceil(PropTime(j)/T)+1) = Y(ceil(PropTime(j)/T)+1) + 1; Z(ceil(PropTime(j)/T)+1) = Z(ceil(PropTime(j)/T)+1) + 1; end fileID =fopen(signal, ’w’); fprintf(fileID,’%10.10f\r\n’,W);

144 fclose(fileID); q = 0; for e= (1:length(W)) if W(e) == 1 q = q + 1; end end numcounts=q

%Add an intensity average for normalizing the g2 function later AvgIntW = mean(W); AvgIntW2 = mean(W.^2); AvgIntZ = mean(Z); %%Introduce Fourier Transform and find g2

FW = fft(W); IFW = conj(FW);

Autog2 = ifft(FW.*IFW)/(length(FW)*AvgIntW.^2); %length(W) %Autog2=shift(W,W);

%This portion will compare two differen channels and cross correlate %FW = fft(W); %FZ = fft(Z);

%Crossg2 = ifft(FW.*FZ)/(AvgIntW.*AvgIntZ);

145 %This part is nessicary because the maximum value in proptime is %not the same every iteration. It rounds the maximum value to 1 %significant figure. It throws away some data if it rounds down and %fills in with zeros if it rounds up.

%This section makes sure that all iterations through this loop produce a g2 %list of the same length so that they may be averaged if i == 1 len=int64(round(length(W),-(numel(num2str(length(W)))-1))); autofinal=zeros(len,1); %crossfinal=zeros(len,1); end if len > length(W) Autog2 = [Autog2;zeros(len-length(W),1)]; %Crossg2 = [Crossg2;zeros(len-length(W),1)]; end

Autog2=Autog2(1:len,1); %Crossg2=Crossg2(1:len,1);

%Now plug everything in to solve for g2

%Then plug into our equation %Here the total correlations are calculated by adding the %correlions calculated from each iteration size(autofinal); size(Autog2); autofinal = autofinal + Autog2; %crossfinal = crossfinal + Crossg2;

146 if Single == true break end end if Single == false autofinal=autofinal/N; end

%length(autofinal)

%%plot just for fun %figure; %subplot(2,1,1) %plot(w,’r’); hold on; %plot(x,’o’); hold on; %plot(y,’g’); hold on; %plot(z,’b’); hold on; %title(’Plot Channel Signals’);

%subplot(2,1,2) %plot(crossfinal); %title(’Plot of g2’); %size(autofinal) figure; plot(autofinal); %plot(AvgIntW*ones(len,1)); %plot(ifft(FW.*IFW));

147 xlim([30000, 40000]) title(’Auto Correlation of Channel 1’);

In order to run this program it is also necessary to have the following program which reads the initial input file and splits it into the detection events on each channel and the time these events occurred. This program was used to create Figure 8.13. function[BinStream,PointNumber] = BinSort(FirstFile)

%We need some sort of program that sorts out a stream of binary into 4 %different channels at particular times

%Unpack the file and save the binary to BinStream

UnpackedData = fopen(FirstFile); DecStream = fread(UnpackedData,’ubit32’); BinStream = dec2bin(DecStream);

%Was using ’bit1’ option, now trying ’ubit32’ because the last iteration %would crash my computer

%Each data point will be represented by 32 bits from the binary % Bits 0 - 26 will be the time stamp of when the signal came in %Bit 27 is Channel 1 %Bit 30 is Channel 4 %Bit 31 should be zero

%Let’s pack this into an array of each data point

%Old problem solved! Well, side stepped. The ’bit1’ option used earlier

148 %means that we’re already in an array with no commas or spaces. Only %downside is that the order is reversed

PointNumber = length(BinStream); Row = 32; %Time Stamp + Signals + clear channel %SortedData = zeros(Row,PointNumber); %j = 1; %For the row number

%Data points in order, bits are all backwards though.

Test = 4;

The other program that may be used to calculate g(2)(τ) is written in C++ and is shown below. It was written in C++ out of concern for speed. However, it operates based on the shift register algorithm which is slower than the using the cross correlation theorem so we are not sure which program will run faster for large data sets. The string on line 12 must be set to the file containing the list of ones and zeros corresponding to photon detections generated in the above program. It would be a useful future project to generate this list in C++. The list containing g(2)(τ) is written to the file names as a string on line 76. This may then be plotted.

#include #include #include using namespace std; int main() { //Create a dynamic array to hold the values vector originaldata;

149 //Create an input file stream ifstream in("lightsignal2.txt",ios::in);

/* As long as we haven’t reached the end of the file, keep reading entries. */ double number; //Variable to hold each number as it is read

//Read number using the extraction (>>) operator while (in >> number) { //Add the number to the end of the array originaldata.push_back(number); } //Close the file stream in.close();

/* Now, the vector object "originaldata" contains both the array of numbers, and its length (the number count from the file). */ int numrows = originaldata.size(); int numrowsplus1 = numrows + 1; double numbins = 10000; int numrowsminusnumbins = numrowsplus1 - numbins; double totalcounts = 0; double g2[10001]; double g2norm[10001]; double binsize = 1;

150 //Find total counts for (int kk = 0; kk < numrowsminusnumbins; kk++) if (originaldata[kk] > 0) totalcounts = totalcounts+1; cout << totalcounts; //Run through current data sets, row by row, to find coincidences for(int k = 0; k < numrowsminusnumbins; k++) { //Determine if the current row and column has a count in it if(originaldata[k] > 0) { /*When a count is found, correlate it with counts numbins after it. This is done by looping through the data a number of bins "numbins" after the count looking for subsequent counts*/ for(int ii = 0; ii < numbins; ii++) { /*If another count is found within numbins, place it properly in g2 array*/ if (originaldata[k+ii+1] >0) g2[ii] = g2[ii] + 1; } } } //Square intensity for normalization double totalcountssquare = totalcounts*totalcounts;

/*Normalize coincidence count by dividing each array element by total

151 intensity and multiplying by total number of bins*/ for (int p = 0; p < numbins; p++) g2norm[p] = g2[p]/totalcountssquare*numrowsminusnumbins;

ofstream myfile; myfile.open ("lightg2.txt"); for(int i = 0; i < numbins; i++) { myfile << g2norm[i]; myfile << "\n"; } myfile.close();

return 0; }

152 Appendix K

Bibliography

[1] S. E. Harris. Lasers without inversion: Interference of lifetime broahened resonances. Physical Review Letters, 62:1033, 1989.

[2] S. E. Harris. Electromagnetically induced transparency. Physics Today, 50:7, 1997.

[3] Peter W. Milonni and Joseph H. Eberly. Laser Physics. Wiley, first edition, 2010.

[4] J. Marangos M. Fleischhauer, A. Imamoglu. Electromagnetically in- duced transparency: Optics in coherent media. Reviews of Modern Physics, 77:633–673, 2005.

[5] Thomas Jenkins. Simulation of electromagnetically induced trans- parency and absorption. diploma thesis, Miami University, 2013.

[6] W. Ketterle. Atomic and optical physics 1, 2014. https://ocw.mit.edu/courses/physics/8-421-atomic-and-optical- physics-i-spring-2014/index.htm.

[7] I. Deutsch. Quantum optics 1. 2017.

153 [8] Marcus Cramer Konrad Banaszek and David Gross. Focus on quantum tomography. New Journal of Physics, Vol. 15, 2013.

[9] Z. Dutton L. V. Hau, S. E. Harris and C. H. Behroozi. Light speed reduction to 17 meters per second in an ultracold atomic gas. Nature, 397:594–598, 1999.

[10] C. Behroozi C. Liu, Z. Dutton and L. V. Hau. Observation of coher- ent optical information storage in an atomic medium using halted light pulses. Nature, 409:490–493, 2001.

[11] D. E. Chang A. Asenho-Garcia, J. D. Hood and H. J. Kimble. Atom- light interactions in quasi-one-dimensional nanostructures: A green’s- function prespective. Physical Review A, 95:0338181, 2017.

[12] R. H. Dicke. Coherence in spontaneous radiative processes. Physical Review, 93:99–110, 1953.

[13] M. Gross and S. Haroche. Superradiance: An essay on the theory of collective spontaneous emission. Physics Reports, 93:303–396, 1982.

[14] D. Steck. Atom and quantum optics. 2015.

[15] P. S. Jessen X. Qi, B. Q. Baragiola and I. H. Deutsch. Dispersive re- sponse of atoms trapped near the surface of an optical nanofiber with applications to quantum nondemolition measurement and spin squeez- ing. Physical Review A, 93:023817, 2016.

[16] Xiaodong Qi. Dispersive Quantum Inteface with Atoms and Nanopho- tonic Waveguides. PhD thesis, The University of New Mexico, 2018.

[17] E. T. Jaynes and F. W. Cummings. Comparison of quantum and semi- classical radiation theories with application to the beam maser. Pro- ceedings of the IEEE, 51:89–109, 1963.

154 [18] Jens Koch Andrew D. Greentree and Jonas Larson. Fifty years of jaynes- cummings physics. Journal of Physics B: Atomic, Molecular and Optical Physcis, Vol. 46, Num. 22, 2013.

[19] P. Rice. Introduction to quantum optics. 2007.

[20] I. I. Rabi. On the process of space quantization. Physical Review, 49:324–328, 1936.

[21] Q. Xie, H. Zhong, M. T. Batchelor, and C. Lee. The quantum Rabi model: solution and dynamics. Journal of Physics A: Mathematical and Theoretical, 50(11).

[22] M.-J. Hwang, P. Rabl, and M. B. Plenio. Dissipative phase transition in the open quantum Rabi model. Physical Review A, 97:013825, 2018.

[23] F. T. Hioe. Phase transitions in some generalized Dicke models of superradiance. Physical Review A, 8.

[24] P. Forn-Diaz, J. Lisenfeld, D. Marcos, J. J. Garcia-Ripoll, E. Solano, C. J. P. M. Harmans, and J. E. Mooij. Observation of the Bloch-Siegert shift in a qubit-oscillator system in the ultrastrong coupling regime. Physical Review Letters, 105:237001, 2010.

[25] J. Dalibard and C. Cohen-Tannoudji. Laser cooling below the Doppler limit by polarization gradients: simple theoretical models. Journal of the Optical Society of America B, 6(11).

[26] G. Brennen, C. Caves, P. Jessen, and I. Deutsch. Quantum logic gates in optical lattices. Physical Review Letters, 82:1060, 1999.

[27] Martin Brown. Monte Carlo simulations of cold atom ratchets. PhD thesis, University College London, 2008.

[28] P. Hanggi and F. Marchesoni. Artificial Brownian motors: Controlling transport on the nanoscale. Reviews of Modern Physics, 81:387–442, 2009.

[29] C. Jurczak, B. Desruelle, K. Sengstock, J.-Y. Courtois, C. I. Westbrook, and A. Aspect. Atomic transport in an optical lattice: An investigation through polarization-selective intensity correlations. Physical Review Letters, 77(9).

[30] S. Bali, D. Hoffmann, J. Siman, and T. Walker. Measurements of intensity correlations of scattered light from laser-cooled atoms. Physical Review A, 53(5).

[31] R. Stites, M. Beeler, L. Feeney, S. Kim, and S. Bali. Sensitive measurement of radiation trapping in cold-atom clouds by intensity correlation detection. Optics Letters, 29(23).

[32] J. R. Johansson, P. D. Nation, and F. Nori. QuTiP: An open-source Python framework for the dynamics of open quantum systems. Computer Physics Communications, 183:1760–1772, 2012.

[33] J. R. Johansson, P. D. Nation, and F. Nori. QuTiP 2: A Python framework for the dynamics of open quantum systems. Computer Physics Communications, 184:1234–1240, 2013.

[34] J. J. Sakurai and J. Napolitano. Modern Quantum Mechanics. Pearson, second edition, 2011.

[35] H. J. Carmichael. Statistical Methods in Quantum Optics 1: Master Equations and Fokker-Planck Equations. Springer, first edition, 2002.

[36] Herbert Goldstein, Charles P. Poole, and John L. Safko. Classical Mechanics. Addison Wesley, third edition, 2000.

[37] I. R. Senitzky. Dissipation in quantum mechanics. The harmonic oscillator. Physical Review, 119(2).

[38] I. R. Senitzky. Dissipation in quantum mechanics. The harmonic oscillator. II. Physical Review, 124(3).

[39] I. Deutsch. Quantum optics 2. 2017.

[40] John Preskill. Quantum computation. 2018.

[41] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, anniversary edition, 2011.

[42] A. Kossakowski. On quantum statistical mechanics of non-Hamiltonian systems. Reports on Mathematical Physics, 3(4):247–274, 1972.

[43] G. Lindblad. On the generators of quantum dynamical semigroups. Communications in Mathematical Physics, 48:119–130, 1976.

[44] Howard Carmichael. An Open Systems Approach to Quantum Optics. Springer, first edition, 1991.

[45] D. Griffiths. Introduction to Quantum Mechanics, chapter 10, pages 368–393. Pearson Prentice Hall.

[46] A. Gorshkov, A. Andre, M. Fleischhauer, A. Sorensen, and M. Lukin. Universal approach to optimal photon storage in atomic media. Physical Review Letters, 98:123601, 2007.

[47] A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf. Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation. Physical Review A, 69:062320, 2004.

[48] A. Gutierrez-Jauregui and H. J. Carmichael. Dissipative quantum phase transitions of light in a generalized Jaynes-Cummings-Rabi model. Physical Review A, 98:023804, 2018.

[49] Ethan Clements. Characterization of optical lattices using pump-probe spectroscopy and fluorescence imaging. diploma thesis, Miami University, 2016.

[50] Single Photon Counting Module: SPCM-AQR Series.

[51] Sergey Polyakov and Joffrey Peters. Simple and Inexpensive FPGA-based Fast Multichannel Acquisition Board, 2015.

[52] George B. Arfken, Hans J. Weber, and Frank E. Harris. Mathematical Methods for Physicists. Academic Press, seventh edition, 2012.
