© 2020 Michelle Victora

NEW OPPORTUNITIES FOR PHOTON STORAGE AND DETECTION: AN EXPLORATION OF A HIGH-EFFICIENCY OPTICAL QUANTUM MEMORY AND THE QUANTUM CAPABILITIES OF THE HUMAN EYE

BY

MICHELLE VICTORA

DISSERTATION

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Physics in the Graduate College of the University of Illinois at Urbana-Champaign, 2020

Urbana, Illinois

Doctoral Committee:

Professor Bryce Gadway, Chair
Professor Paul G. Kwiat, Director of Research
Professor Yann Chemla
Professor Ranxiao Frances Wang

Abstract

The field of quantum information has grown in recent years, due to tremendous technological advancements toward quantum networking and quantum computation. Nevertheless, there is still a great need for creative research that explores possibilities for new capabilities. In particular, we look towards research to develop new ways of manipulating and detecting photons.

Here, we discuss our efforts toward developing two separate quantum optics experiments that can provide great insight into the development of quantum devices. We begin by discussing our work to investigate the lower threshold of human vision and the eye's potential as a single-photon detector, using a custom-built single-photon source and a novel two-alternative forced-choice experimental design. Our preliminary findings show promising data that support previous results found from a similar experiment using a somewhat different approach.

We then discuss our second project, where we have developed a robust reconfigurable optical delay line quantum memory that compares favorably with competing methods. Our memory is capable of photon storage with an unprecedentedly high time-bandwidth product, high free-space transmission over the range of 10 µs, and high fidelity. These attributes, plus the memory's capability for multi-mode storage, make this system a strong candidate for a critical component in the large-scale heterogeneous quantum networks we hope to see developed in the next ten years.

To Mom, Dad, Denise, Mazin.

Acknowledgments

This project would not have been possible without the support of many people. Many thanks to my adviser, Paul G. Kwiat, who introduced me to quantum information and quantum optics, and guided my development into an experienced researcher. He pushed me to succeed in challenging and new ways, and opened so many doors for me, for which I am truly grateful. Thanks also to my committee members, Bryce Gadway, Frances Wang, and Yann Chemla, who offered guidance and support. Thanks in particular to Frances Wang, who acted as a second mentor, teaching me the methods necessary to run psychology experiments, providing lab space, and patiently answering all of my questions. Thanks also to Bryce Gadway, who served as my committee chair and supported me in regular meetings, encouraging me to seek opportunities, as well as to start my thesis earlier rather than later. I also want to extend a very warm thank you to Lance Cooper and Wendy Wimmer. I've found great comfort in knowing I can always come to the graduate office for help with all needs, from administrative to emotional. The safety and support of the university physics department has allowed me to excel and truly be proud of the work I have done during my PhD. I would also like to thank the funding agency that made this work possible. This research was funded by NSF grants PHYS 1519407 and 1521110, as well as an AMO Departmental fellowship from 2014-2015.

There are two people who had an extremely important role in the vision component of my thesis work.

First of all, Rebecca Holmes developed the vision experiment, building the single-photon source as well as the program that ran automated trials for observers. She allowed me to help take data for the temporal integration experiment and taught me how to operate and enhance the single-photon source. She was also one of my closest friends in the lab group, and helped me develop my confidence as a woman in science. In addition, Julia Spina helped me prepare the setup for the single-photon experiments and played a pivotal role in data-taking (and provided endless enthusiasm and support).

In addition, there are so many people who have worked on the quantum memory experiment, but I'd like to thank four in particular who I worked with directly. First, Professor Michael Goggin from Truman State University had a hand in the memory work right from the beginning (over a decade ago), when the group was just starting to design the Herriott cell mirrors. He was the first person to suggest the necessity for stabilization (which at the time meant building a box around the entire setup), and he spent a summer aiding me with getting our first fiber-coupled measurements. Second, Jia Jun Wong was the first student to successfully produce the pattern on our Modified Herriott cell, and he did an incredible job of handing off the experiment to me in my first year as a graduate student. Third, Fedor Bergmann provided us a year of his time and numerous gifts including Pockels cell drivers and delay generators, thereby enabling the complex triggering scheme necessary for operating all loops in synchrony with each other. He took the transmission measurements in Chapter 6. Finally, Spencer Johnson took the orbital angular momentum measurements discussed in Chapter 7 and proved to be an excellent resource in all things theory.

I’d like to thank my family for being an endless source of support through my time in the PhD. I am so lucky to have a dad who has a physics PhD and could provide knowledge and guidance to me as I was just entering the field. I am also so thankful for my sister Denise (and her husband Victor) for supporting the family in Minnesota while I pursued six years of academic research. I also cannot thank her enough for teaching the family how to video chat so we could all stay in contact during the COVID-19 pandemic.

Thank you to my mom who listened to all my tears and struggles whether they be academic or just the trials of still growing up. Mom, it is incredible how flexible you were with communication even when I would drop off the face of the earth for long stretches at a time. I promise I was just in lab. Of course, a big thank you in addition to my extended family who always asked me questions about my research even if I didn’t always answer them clearly.

Thank you to the members of Kwiat lab (and Kwiat lab alumni) for all the support on my projects. Thank you to Alex, who was my role model for balancing multiple tough projects. Thanks to Trent, Aditya, Michael, Colin, Dalton, Kristina, Annika, Joe, Samantha, Chandana, and Andrew, and everyone who was so kind and so supportive no matter how many times I hid the wavefront sensor. Specifically, thank you to Nathan and Sam, who stepped in at the last moment to take over my unfinished work – I wish you the best of luck in your grad school careers and I hope you both get publications from the work you're inheriting! Most importantly, thank you to Chris, who answered all my late-night texts about group meeting papers, who looked at all the potentially broken lab equipment I freaked out about, and who provided constant support whenever I doubted myself in the lab (or group meeting).

I really don't have enough space in this thesis to thank the incredible friends I have made at the University of Illinois. The saddest part of graduating is knowing that I will no longer live in such a rare and supportive community. Thank you to Laura, Karmela, and Ashwathi, who served as honest, amazing roommates during our early years. Thanks in particular to Laura, who worked right down the hall from me and received all my fiber-coupling and Zemax questions with patience and enthusiasm. Thank you to Ashwathi, who has been one of my closest friends since Day One on the Wild Bohrs, when neither of us made it to a base. Thank you to Alex, Paul, Matt, Carolee, Allycia, Cristina, Akshat, Brian, Lily, Astha, and Avery for all the board game nights, festivals, Bunny's, Zoomgas, friendsgivings, etc. In particular, a thank you to Lily and Paul for being the main organizers of events and making an effort to be friends with everyone. Thank you to John for sharing a year of movie-making together – I'm sorry we never finished all our projects. Thank you to Angela for being an incredible roommate during some of my final years in graduate school. Thank you to Wooyoung for all the shared Ambar and Basil Thai, and for also being one of my closest friends and voices of support.

Finally, thank you to Mazin, who was always there for me and who I hope to always be there for. Thank you for being there through my lows and my highs, for making me laugh, for reminding me to take breaks, for encouraging me and having confidence in me even when I didn't have confidence in myself. Everything I could say in a format like this will inevitably sound cheesy, but I don't know how I got so lucky to meet someone so special during graduate school. I'm glad I did, though, and I cannot wait to see where the next few years take us.

Table of Contents

List of Tables ...... x

List of Figures ...... xi

Chapter 1 An Introduction to Quantum Optics and Quantum Information ...... 1
1.1 An introduction to quantum optics ...... 1
1.1.1 Theory of quantization ...... 2
1.1.2 Quantum optics experiments ...... 3
1.1.3 Quantum biology ...... 5
1.2 An introduction to quantum information ...... 6
1.2.1 Qubits ...... 6
1.2.2 Quantum communication ...... 8
1.2.3 ...... 10
1.2.4 Quantum networks ...... 10
1.3 Conclusion ...... 12

Chapter 2 Introduction to Vision Project ...... 13
2.1 Motivation ...... 13
2.2 What is a photon anyway? ...... 14
2.3 Are we sure humans can even see one? ...... 15
2.3.1 Visual phototransduction ...... 15
2.3.2 Previous Studies ...... 18
2.4 How do we measure and send just one photon? ...... 21
2.4.1 Producing single photons ...... 22
2.4.2 Detecting single photons ...... 22
2.5 Conclusions ...... 22

Chapter 3 Single-Photon Source and Observation Station ...... 23
3.1 Single-Photon Source ...... 23
3.1.1 Spontaneous parametric downconversion ...... 23
3.1.2 Experimental Implementation ...... 25
3.1.3 Alignment ...... 27
3.1.4 Pockels cell switch ...... 28
3.1.5 Performance of the source ...... 29
3.2 Observation station ...... 33
3.2.1 Spatial Configuration ...... 34
3.2.2 Previous work with the source ...... 37

Chapter 4 A Single-Photon Test ...... 40
4.1 How many trials are needed for a single photon experiment? ...... 40
4.1.1 Initial power analysis ...... 40
4.1.2 Tinsley experiment ...... 42
4.2 Replication ...... 43
4.2.1 Training ...... 44
4.2.2 Experiment ...... 46
4.3 Preliminary conclusions ...... 48

Chapter 5 Introduction to Quantum Memory ...... 49
5.1 Popular forms of quantum memory ...... 50
5.1.1 Atomic ensembles ...... 50
5.1.2 Rare-earth ions ...... 52
5.1.3 Optical delay line ...... 52
5.1.4 Hybrid memory ...... 54
5.2 Applications ...... 55
5.3 Performance criteria ...... 55
5.3.1 Fidelity ...... 56
5.3.2 Efficiency and storage time ...... 56
5.3.3 Multimode capacity ...... 57
5.3.4 Bandwidth ...... 57
5.4 Summary ...... 60

Chapter 6 Building a Memory in Free-Space ...... 61
6.1 Building blocks for the memory ...... 61
6.1.1 Optical delay lines ...... 61
6.1.2 Pockels cells ...... 66
6.1.3 Herriott cells ...... 67
6.2 Operation ...... 69
6.2.1 Stabilization and ray tracing ...... 69
6.2.2 Triggering the memory ...... 72
6.2.3 Operating bandwidth ...... 72
6.2.4 Summary ...... 72

Chapter 7 Qubit State Storage ...... 74
7.1 Transducer ...... 74
7.1.1 Fast optical pulse ...... 76
7.2 Mode-matching in the quantum memory ...... 76
7.2.1 Modeling ...... 77
7.2.2 Experimental implementation ...... 79
7.3 Memory Fidelity ...... 80
7.3.1 Quantum process tomography ...... 80
7.3.2 Results ...... 80
7.4 Additional work and future directions ...... 81
7.4.1 Storage of orbital angular momentum modes ...... 81
7.4.2 Storage of hyperentanglement ...... 83
7.4.3 Summary and conclusions ...... 84

Chapter 8 Conclusions ...... 85

Appendix A Viewing Station Alignment Procedure ...... 87

Appendix B Memory Setup ...... 88

Appendix C Triggering Pockels cell for Optical Memory ...... 90

Appendix D Transfer Matrices for Modeling Quantum Memory ...... 92
D.1 Loop 1 ...... 92
D.2 Loop 2 ...... 92
D.3 Loop 3 ...... 93

Appendix E Sample Beam Waist Measurements for Optimizing Lens Placement ...... 94

Appendix F Standard Process Tomography ...... 98

Appendix G Fast Pulse Circuit ...... 101

References ...... 102

List of Tables

1.1 Different qubits and various attributes ...... 9

3.1 Pulse width, repetition rate, and average output power for UV ...... 25

4.1 Expected photon counts for LED at different voltages...... 45

5.1 Comparison of different memories...... 56

B.1 Components in quantum memory ...... 89

List of Figures

1.1 Interference Experiments...... 4
1.2 Bloch sphere representation of a qubit...... 8
1.3 Diagram of a quantum repeater...... 11

2.1 Poissonian distribution for three different values of the mean photon number ⟨n⟩...... 15
2.2 Schematic of the eye...... 16
2.3 Spectral ranges of the three cone subtypes ...... 17
2.4 Angular distribution of photoreceptor cells on retina...... 18
2.5 Diagram of rod photocurrent measurements in vitro ...... 19
2.6 Data from Hecht, Schlaer, and Pirenne ...... 20

3.1 Phase matching in SPDC ...... 24
3.2 Variable attenuator and spatial filter...... 25
3.3 Optical setup for collecting photon pairs...... 26
3.4 Filter spectra for 505 and 562 nm...... 27
3.5 Fast optical switch...... 28
3.6 Schematic of the single-photon source...... 30
3.7 Measured g^(2)(0) vs. herald detections per pulse...... 32
3.8 Switch to control output to observation station...... 33
3.9 A visual field and top-down schematic of the observer viewing station...... 34
3.10 The effect of pre-exposure to light on dark adaption time...... 36
3.11 Steps of an experimental trial...... 37
3.12 Group results from vision experiment...... 38

4.1 Required sample size for a power of 0.90 to conclude single-photon vision ...... 41
4.2 Schematic of Tinsley apparatus ...... 42
4.3 High-confidence training trial accuracy over session number...... 46
4.4 Single-photon perception for different confidence ratings...... 47

5.1 EIT Λ state...... 51
5.2 Simplified schematic of optical memory...... 53
5.3 Simulated transmission of a high-performing digital quantum memory...... 54
5.4 Diagram of a wave plate rotating a linearly polarized state of light...... 58
5.5 Mirror reflectivity and cavity transmission plots against varying wavelength ...... 59

6.1 Diagram of 1st loop in optical delay line...... 62
6.2 Diagram of 2nd loop in optical delay line...... 63
6.3 Diagram of 3rd loop in optical delay line...... 64
6.4 Diagram and transmission data of entire quantum memory...... 65
6.5 Front and top-down view of two passes through PC2...... 67
6.6 Model of Herriott cell used in 2nd loop of optical delay line...... 68
6.7 Model of modified Herriott cell used in 3rd loop of optical delay line...... 68

6.8 Simplified side view of modified Herriott cell...... 69
6.9 Diagram of ray transfer...... 71
6.10 Schematic of stabilization setup...... 71
6.11 Simulation of beam position with and without stabilization...... 72

7.1 Simplified schematic of polarization-to-time-bin transducer...... 74
7.2 Detailed diagram of polarization-to-time-bin transducer...... 75
7.3 Profile and trajectory of a Gaussian beam ...... 78
7.4 Fiber-coupled transmission measurements for optical delay line...... 80
7.5 Process tomography matrices for quantum memory...... 82
7.6 Poincaré sphere for orbital angular momentum modes...... 83

B.1 Optics table with all used components for memory...... 88

C.1 Trigger scheme for memory operation...... 91

E.1 1st loop beam waist output for X1 = 0.889 and X2 = 1.997...... 94
E.2 1st loop beam waist output for X1 = 0.889 and X2 = 1.987...... 95
E.3 1st loop beam waist output for X1 = 0.839 and X2 = 1.960...... 96
E.4 2nd loop beam waist output for X1 = 0.792 and X2 = 0.9345...... 96
E.5 Gaussian fits and CCD camera images for first loop...... 97

F.1 Reconstructed process matrix for 1 cycle through 1st loop...... 99
F.2 Reconstructed process matrix for 5 cycles through 1st loop...... 99
F.3 Reconstructed process matrix for 10 cycles through 1st loop...... 99
F.4 Reconstructed process matrix for 1 cycle through 2nd loop...... 100
F.5 Reconstructed process matrix for 5 cycles through 2nd loop...... 100
F.6 Reconstructed process matrix for 10 cycles through 2nd loop...... 100

G.1 Design of the Fast-Pulse board for short pulse generation...... 101

Chapter 1

An Introduction to Quantum Optics and Quantum Information

In this chapter, we give a broad overview of the fields of quantum optics and quantum information. We discuss how fundamental work in quantum optics has led to the birth and advancement of quantum information as its own field. Finally, we touch on how the work detailed in this thesis contributes to advances in both fields.

1.1 An introduction to quantum optics

Quantum optics as a field was born when Einstein proposed that light was made of particles [1]. Up through the second half of the nineteenth century, Maxwell's equations governed the study of light as waves, which seemed to explain most electromagnetic physical phenomena [2]. However, the photoelectric effect and black-body radiation served as two examples where "quantized" definitions of light seemed required: while Planck proposed that radiation was absorbed and emitted in discrete quanta to explain thermal-body spectra, Einstein generalized this idea to the light itself being quantized, rather than the processes of absorption and emission. Since then, many experiments have demonstrated that classical phenomena such as interference and diffraction also occur at the single-photon level [3], thus providing evidence that light obeys a wave-particle duality. De Broglie then extended this observation to declare that all matter exhibits wave and particle properties, which led to the foundations of quantum mechanics itself.

A plethora of optics experiments have demonstrated this wave-particle duality. "Quantum eraser" experiments demonstrate that indistinguishable single-photon amplitudes can interfere with each other, resulting in measurable interference fringes (wave-like behavior), while distinguishable amplitudes cannot, resulting in identifiable photon trajectories (particle-like behavior) [4]. Many other non-classical quantum optics phenomena have been explored as well, including entanglement [5] and squeezed light [6]. It should be noted that any interference experiment could be completely described by classical electromagnetic theory if we rely on a coherent source of light. If we truly want to describe these phenomena as "quantum," we need to ensure that we perform these experiments with true single photons (rather than a dim pulse of light).

However, a single-photon source is not trivial to build, in some cases due to the fundamentally probabilistic nature of quantum mechanics, in other cases due to the technical difficulty associated with exciting single emitters and capturing their photons efficiently. The development of single-photon sources has often gone hand-in-hand with research in quantum optics. One of the most common sources builds on the phenomenon of spontaneous parametric down-conversion (SPDC) and uses one photon from a photon pair to "herald" the presence of the other. This is the type of source we have developed and use in our vision experiment, which is detailed in the first half of this thesis. While our single-photon source manages to achieve a relatively low probability of producing more than one photon, we do suffer from a heralding efficiency below 50%. This efficiency can be improved with the aid of a quantum memory, such as the one discussed in the second half of this thesis. Quantum memories can be used in a variety of other applications as well, including as a repeater to enhance communication in a quantum network, or to synchronize quantum gates in a quantum computer.

1.1.1 Theory of quantization

To understand wave-particle duality, we can use the example of a photon as a quantization of a single-mode electric field. To this end, we can think of a radiation field confined to a one-dimensional cavity of length L along the z-axis. This field has a classical field energy prescribed to it, as

\[
H = \frac{1}{2}\int dV \left[ \epsilon_0 E^2(\mathbf{r},t) + \frac{1}{\mu_0} B^2(\mathbf{r},t) \right], \tag{1.1}
\]

where H refers to the Hamiltonian of the field, E(r,t) refers to the electric field, and B(r,t) refers to the magnetic field. We can designate the electric field to be polarized along the x-direction, E(r,t) = e_x E_x(z,t), where e_x is a unit polarization vector. Analogously, we can define B(r,t) = e_y B_y(z,t). Using Maxwell's equations, the solution is given by

\[
\begin{aligned}
E_x(z,t) &= \left(\frac{2\omega^2}{V\epsilon_0}\right)^{1/2} q(t)\,\sin(kz), \\
B_y(z,t) &= \frac{\mu_0\epsilon_0}{k}\left(\frac{2\omega^2}{V\epsilon_0}\right)^{1/2} p(t)\,\cos(kz),
\end{aligned} \tag{1.2}
\]

where q(t) is a time-dependent amplitude for the electric field, p(t) = q̇(t), and the wavenumber k = 2π/λ satisfies the constraint that an integer number of wavelengths fit in the cavity, i.e., λ = L/n. From the equations in 1.2, we can show that a single-mode field is formally equivalent to a harmonic oscillator of unit mass [7], where

\[
H = \frac{1}{2}\left(p^2 + \omega^2 q^2\right), \tag{1.3}
\]

and q(t) and p(t) are analogous to position and momentum. In actuality, q(t) and p(t) are the in-phase and out-of-phase components of the electric field amplitude. Converting p and q into their operator equivalents, q̂ and p̂ obey the commutation relation [q̂, p̂] = iħ. It is traditional (and convenient) at this point to introduce the non-Hermitian annihilation (â) and creation (â†) operators through the combinations

\[
\begin{aligned}
\hat{a} &= (2\hbar\omega)^{-1/2}\,(\omega\hat{q} + i\hat{p}), \\
\hat{a}^\dagger &= (2\hbar\omega)^{-1/2}\,(\omega\hat{q} - i\hat{p}).
\end{aligned} \tag{1.4}
\]

As a result, the Hamiltonian operator takes the form

\[
\hat{H} = \hbar\omega\left(\hat{a}^\dagger\hat{a} + \tfrac{1}{2}\right), \tag{1.5}
\]

and we can define |n⟩ as an energy eigenstate of a single-mode field with energy eigenvalue E_n such that

\[
\hat{H}\,|n\rangle = \hbar\omega\left(\hat{a}^\dagger\hat{a} + \tfrac{1}{2}\right)|n\rangle = E_n\,|n\rangle. \tag{1.6}
\]

Finally, we start to build an intuition for a quantized single-mode state with an energy that varies discretely with the number of photons. It is important to note here that even if no photons exist (n = 0), we still have a non-zero eigenvalue, also known as the zero-point energy. This concept describes quantum fluctuations, where the mean electric field is zero, but the variance is non-zero. These quantum fluctuations have an important contribution to spontaneous parametric down-conversion, the process through which we create pairs of entangled photons, which will be described in Chapter 3.
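The ladder-operator algebra above can be checked numerically. The sketch below is not part of the original derivation: it builds â as a matrix in a truncated Fock basis, in units where ħ = ω = 1, and confirms that the eigenvalues of the Hamiltonian in Eq. (1.5) are ħω(n + 1/2), including the non-zero zero-point energy at n = 0.

```python
import numpy as np

# Truncated annihilation operator in the Fock (number) basis:
# a|n> = sqrt(n)|n-1>, represented as an N x N matrix.
N = 20                       # truncation dimension, basis |0>..|N-1>
hbar, omega = 1.0, 1.0       # work in units where hbar = omega = 1
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# Hamiltonian H = hbar*omega*(a^dag a + 1/2), as in Eq. (1.5)
H = hbar * omega * (a.conj().T @ a + 0.5 * np.eye(N))

# Eigenvalues should be hbar*omega*(n + 1/2): 0.5, 1.5, 2.5, ...
evals = np.linalg.eigvalsh(H)
print(evals[:4])   # close to [0.5 1.5 2.5 3.5]
```

Note that the smallest eigenvalue is ħω/2 rather than zero, which is exactly the zero-point energy discussed above.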

1.1.2 Quantum optics experiments

Many quantum optics experiments have been developed to explore the properties of single photons. As early as the 19th century, scientists were observing the photoelectric effect as one of the first indications of quantized photons [8]. In this experiment, light is used to irradiate a material, and the emission of electrons indirectly demonstrated a presence of photons. If light were simply wavelike, then an increased intensity of the light would result in a proportional increase in electron emission. However, it was found that low-frequency light could not generate electron emission, no matter how intense the source. This suggests that light might actually consist of discrete energy wave-packets (consistent with the theory of harmonic-oscillator energy eigenstates) that cannot excite an electron unless each particle meets a threshold energy.

Figure 1.1: a) A diagram of Young's double-slit experiment. One wave is split approximately into two point sources that interfere with each other and produce a sinusoidal pattern on a measurement screen. Even when we reduce our electric field down to that of a single photon, this interference pattern still appears after repeating the experiment many times. b) Michelson interferometer. Two electric fields are made to interfere with themselves through the use of a beamsplitter. Fringes are observed by changing the path in one arm and observing how counts through one port of the beamsplitter change.
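This threshold behavior is easy to illustrate numerically: emission depends only on whether a single photon's energy hc/λ exceeds the material's work function, never on intensity. The 2.3 eV work function below is an illustrative value (roughly that of sodium), not a quantity from this thesis.

```python
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def can_eject(wavelength_nm, work_function_eV):
    """Photoemission occurs only if one photon's energy h*c/lambda
    exceeds the work function -- intensity is irrelevant."""
    E_photon = h * c / (wavelength_nm * 1e-9)
    return E_photon > work_function_eV * eV

print(can_eject(400, 2.3))   # violet light (~3.1 eV/photon): True
print(can_eject(700, 2.3))   # red light (~1.8 eV/photon): False, however bright
```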

While light can indeed be quantized, these light quanta still demonstrate wave-like properties. A number of quantum optics experiments explored this by interfering an attenuated source of light with itself. The light is attenuated to the point where most likely only one photon is in the system at a time¹. Young's double-slit experiment demonstrated interference of such a source, producing light having equal probability of passing through two separate slits on its path to a final "measurement" screen. Even when the source is attenuated to the one-photon-at-a-time level, the interference fringes on the screen still suggest that a particle taking one path interferes with the same particle taking another path. Essentially, the particle itself is in a superposition state of taking both paths, resulting in an interference pattern arising from constructive and destructive interference.
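The fringe build-up can be mimicked with a simple Monte Carlo sketch: each photon arrives at a single point drawn from the far-field double-slit probability density P(x) ∝ cos²(πdx/λL), and the fringes emerge only in the accumulated statistics. The slit separation, wavelength, and screen distance below are illustrative values, not parameters of any apparatus in this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative far-field parameters (all hypothetical)
lam = 500e-9    # wavelength, m
d = 50e-6       # slit separation, m
L = 1.0         # slits-to-screen distance, m

# Single-photon detection probability density on the screen,
# P(x) proportional to cos^2(pi * d * x / (lam * L))
x = np.linspace(-40e-3, 40e-3, 2000)
p = np.cos(np.pi * d * x / (lam * L)) ** 2
p /= p.sum()

# Each photon lands at one point; fringes appear only after many trials.
arrivals = rng.choice(x, size=100_000, p=p)
counts, edges = np.histogram(arrivals, bins=80)
print(counts.max(), counts.min())  # bins at the cos^2 maxima fill fastest
```

Plotting `counts` shows the familiar fringe pattern with period λL/d (10 mm for these numbers), even though every individual detection is a single localized click.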

It is interesting to note the role of measurement in all these experiments. For the photoelectric effect, we are essentially measuring the presence of light by measuring the number of electrons emitted from a material. In Young's double-slit experiment, our measurement tool is a screen that reflects how much light is hitting the screen. The double-slit measurement can be made more controllable by replacing the two slits with a Michelson interferometer and the measurement screen with a solid-state photodetector. This photodetector allows us to effectively "count" photons, as each one triggers a short but measurable avalanche current on a device capable of distinguishing photon arrival times. In this way, we can feel more confident that interference is happening at the single-photon level, as we can examine the counts while we deterministically vary the path difference. The human eye is another type of photodetector, one that converts light impulses to electrical signals that are passed from retinal nerve endings to the brain. The first half of this thesis is devoted to exploring the potential for the human eye to act as a single-photon detector.

¹It should be noted that energy considerations alone cannot determine the number of photons at any given time. A laser source produces photons randomly, so even when attenuated there is at least some chance that there may be more than one photon. To that end, true single-photon experiments should use a better approximation to a single-photon source (such as the one used in our vision experiment!) than just an attenuated light beam.

1.1.3 Quantum biology

The study of quantum biology is the study of applications of quantum mechanics to biological objects and problems. Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. One of the more popular processes of study is that of photosynthesis – the process organisms utilize to convert light energy into chemical energy that a cell can use. This conversion is performed through electron excitation that must be transferred to a reaction site, where the separation of charge can later be converted into chemical energy. This transport process must happen quickly, though, or the energy will be lost to fluorescence or thermal vibrational motion. Zigmantas et al. performed a study showing that the transport efficiency of the FMO complex in green sulfur bacteria reaches well above 99% [9]. This efficiency cannot be explained by classical models such as a diffusion model; however, multiple studies have claimed the identification of long-lived electronic quantum coherence, meaning a quantum wavefunction is maintained even amidst all the potential sources of decoherence [10, 11].

The long coherence times have sparked a great deal of research exploring the hypothesis that nature has devised a way to protect quantum coherence in order to enhance the efficiency of photosynthesis. One proposal suggests the electron proceeds to its reaction site via quantum walks, where the electron can be in a superposition of two places at once during its travels [12]; such coherent evolution is known to lead to a much more rapid spreading than standard diffusion, where the root-mean-square distance diffused varies only as the square root of the time. Another proposal suggests that electron tunneling (another quantum phenomenon) creates an energy sink that moves the electron to the reaction site quickly [13]. Many more proposals suggest many different mechanisms [14, 15], and much disagreement pervades the conversation, but it is one of the more popular areas of study in quantum biology.
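The contrast between coherent and diffusive spreading can be illustrated with a generic discrete-time (Hadamard) quantum walk on a line. This is a textbook toy model, not a model of the FMO complex itself: the RMS position of the walker grows linearly with the number of steps, whereas a classical random walk grows only as the square root.

```python
import numpy as np

def quantum_walk_sigma(steps):
    """RMS position of a Hadamard quantum walk on a line after `steps` steps.

    Coherent evolution spreads ballistically (sigma ~ steps), while
    classical diffusion gives only sigma = sqrt(steps)."""
    n = 2 * steps + 1                        # positions -steps .. +steps
    amp = np.zeros((n, 2), dtype=complex)    # amplitudes[position, coin]
    amp[steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]   # symmetric start at 0
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin
    for _ in range(steps):
        amp = amp @ H.T                      # flip the coin at every site
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]         # coin state 0 steps right
        shifted[:-1, 1] = amp[1:, 1]         # coin state 1 steps left
        amp = shifted
    prob = (np.abs(amp) ** 2).sum(axis=1)
    pos = np.arange(-steps, steps + 1)
    return np.sqrt((prob * pos ** 2).sum())

for t in (25, 100):
    # quantum spread grows ~linearly in t; diffusive spread only as sqrt(t)
    print(t, round(quantum_walk_sigma(t), 2), round(np.sqrt(t), 2))
```

Quadrupling the number of steps roughly quadruples the quantum walker's spread but only doubles the diffusive one, which is the sense in which coherent transport can outrun diffusion.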

Another study in quantum biology is that of the vibration theory of olfaction [16]. Olfaction refers to the sense of smell, and the vibration theory refers to how we as humans are able to detect odorants. This theory suggests that olfactory receptors detect the vibration of the molecules, with the different “smells” due to different vibrational frequencies. This theory is inspired by quantum particles having discrete frequencies that define different energy states. There have been multiple controversial results [17, 18] in efforts to experimentally prove the effect, but it has not been disproven.

Finally, as stated in the previous section, human vision relies on quantized energy in order to convert light signals to an action potential, through a process called phototransduction. Experiments have shown that the sensors in the retina of the human eye are sensitive enough to detect a single photon. This is not the same, however, as determining that a human can reliably detect a single photon, since the signal of the retina sensor from one photon might not be enough to register in human consciousness. If we can determine reliable single-photon detection by humans, though, it could open up the path to new single-photon detector technologies, fashioned after a single-photon detector that nature has created. Moreover, it could enable exploration of phenomena where a human directly observes a quantum superposition. In this case, according to standard quantum mechanics, either the human is directly responsible for "collapsing" the quantum mechanical wavefunction via the photodetection event, or the human herself is placed into a quantum superposition, and possibly even entangled with whatever produced the photon in the first place. Both of these are intriguing possibilities that may someday be amenable to experimental exploration.

1.2 An introduction to quantum information

1.2.1 Qubits

In the previous section, we described how interesting physics phenomena can be explored through quantum optics research. A sister field of quantum optics is quantum information, the study of how information can be stored and processed in a quantum system. This field can be built up from an understanding of the quantum bit, or qubit, as the fundamental unit of information in a quantum system. While a classical bit exists in a binary state (either 1 or 0), a quantum bit can exist as a superposition of two or more states [19]. A measurement of the superposition state can actually destroy the superposition and change the state of the qubit. In addition, two or more particles can be entangled in such a way that measurements on them then display nonclassical correlations, no matter their spatial separation [20].

These unusual characteristics allow for a range of interesting quantum information applications. For example, since random measurement is guaranteed to affect the state of a qubit, communication protocols can use qubits in place of classical bits so as to detect the presence of an eavesdropper. An eavesdropper would necessarily have to "measure" the state of the qubit that party A is transmitting to party B in order to gain information; the eavesdropper's measurement will then affect the qubit in a way that the communicating parties can detect. The research of secure communication protocols comprises some of the work being done in quantum information. Beyond that, though, the mere characteristic of superposition allows for qubits to be used in novel computational methods. In particular, Shor's algorithm [21] describes how quantum bits could be used to exponentially speed up the time it takes to factor large numbers. Accordingly, many nations are in a race to have the first quantum computer capable of running Shor's algorithm on a large number of qubits (which would allow for the breaking of nearly all existing codes that protect national, industrial, and private secrets).

Theoretical Form

The general state of a qubit can be represented by a linear superposition of two orthonormal basis states, usually denoted as |0⟩ = (1, 0)ᵀ and |1⟩ = (0, 1)ᵀ. These two orthonormal basis states are known as the computational basis and are said to span the two-dimensional linear vector space of the qubit. Any operation on these qubits can be written as a matrix acting on the vector itself. Some common operations are the Pauli and identity matrices:

\[
\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad
\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad
\sigma_I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad (1.7)
\]

We can see from the above matrices that the σx and σy matrices “flip” the state in the computational basis: |0⟩ ↔ |1⟩. We can also see that the σy and σz matrices multiply the two basis states by amplitudes of equal magnitude but opposite sign.
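The action of these operators is easy to check numerically. The following is a minimal sketch of ours (not from the text), representing qubit states as two-component complex vectors:

```python
# Illustrative sketch (not from the thesis): qubit basis states as
# two-component vectors, Pauli operators as 2x2 matrices.

def apply(m, v):
    """Multiply a 2x2 matrix by a two-component column vector."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

ket0 = [1, 0]                  # |0>
ket1 = [0, 1]                  # |1>
sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]
sigma_z = [[1, 0], [0, -1]]

print(apply(sigma_x, ket0))    # [0, 1]: sigma_x flips |0> to |1>
print(apply(sigma_z, ket1))    # [0, -1]: |1> picks up a relative sign
```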

As we mentioned above, a qubit state can be a superposition of the basis states |0⟩ and |1⟩:

|ψ⟩ = α|0⟩ + β|1⟩ ,    (1.8)

where α and β are probability amplitudes and can both be complex numbers (see Fig. 1.2). Measuring the qubit in the computational basis will result in the outcome |0⟩ with probability |α|² and the outcome |1⟩ with probability |β|², i.e., α and β are constrained by |α|² + |β|² = 1. Note that the measurement outcomes never take values in between 0 and 1; each measurement truly yields either 0 or 1. Of course, the measurement could have instead been performed in, say, the σx basis ((|0⟩ + |1⟩)/√2, (|0⟩ − |1⟩)/√2). In this case, the outcomes would be σx eigenstates with the appropriate probabilities. It is also important to emphasize that the act of measurement effectively collapses the wavefunction into the measured state: if |0⟩ is measured in the σx basis, its new state will be an eigenfunction of σx.
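The Born rule above can be made concrete with a small Monte Carlo sketch (our own illustration; the function name and shot count are ours):

```python
import random

# Illustration of the Born rule: measuring |psi> = alpha|0> + beta|1> in the
# computational basis yields |0> with probability |alpha|^2 (after normalizing).

def measure_many(alpha, beta, shots=100_000, seed=0):
    """Return the observed fraction of |0> outcomes over many measurements."""
    norm = abs(alpha) ** 2 + abs(beta) ** 2
    p0 = abs(alpha) ** 2 / norm
    rng = random.Random(seed)
    return sum(rng.random() < p0 for _ in range(shots)) / shots

# Equal superposition (|0> + |1>)/sqrt(2): roughly half the outcomes are |0>.
print(measure_many(2 ** -0.5, 2 ** -0.5))
```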

Practical Form

Classical bits, e.g., in a computer or cell phone, are often encoded as a low DC voltage on a circuit readout: if the voltage is above a certain threshold, the bit reads 1; below the threshold, it reads 0. Quantum bits, on the other hand, can be encoded in many different degrees of freedom, depending on the physical system used. For example, our qubit might be an electron confined in silicon, which can be in a superposition of spin-up (|1⟩) and spin-down (|0⟩) states. Electron transport measurements are able to resolve the two basis states [22].

Figure 1.2: Bloch sphere representation of a qubit. The probability amplitudes of a superposition state |ψ⟩ = α|0⟩ + β|1⟩ are given by α = cos(θ/2) and β = e^{iφ} sin(θ/2). Taken from https://commons.wikimedia.org/wiki/File:Bloch_sphere.svg (Creative Commons License).

In our case, the qubit might be a photon with horizontal polarization (|0⟩) or vertical polarization (|1⟩). Measurement in this basis can be made by placing a polarizing beam splitter in front of two photon detectors: a click on the horizontal port represents a measurement of |0⟩, and a click on the vertical port represents |1⟩.

In both of the above cases, the quantum bit can be in a superposition of states that will collapse upon measurement. To generalize, any two-level quantum-mechanical system can in principle be used as a qubit, though obviously there will be practical differences in terms of ease of control and measurement, and robustness to decoherence. Multilevel systems can also be used (such as in atomic systems), if they possess two states (e.g., the ground state and the first excited state) that can be effectively decoupled from the rest. While our chosen form of quantum bit is the photon, an incomplete list of physical implementations of qubits is shown in Table 1.1.

Superconducting qubits and trapped ions are currently the most popular systems for quantum computing, due to controllable interactions between qubits and potentially large scalability. However, the photon still wins as the qubit of choice for information transmission, including quantum key distribution and teleportation.

1.2.2 Quantum communication

Photons are ideal for applications that focus on communication, since photons are relatively interaction-free and therefore less subject to state decay. Furthermore, photons have many different degrees of freedom in which information can be stored (see Table 1.1). Bennett and Brassard proposed the idea of quantum cryptography [23] in 1984. By the next decade, the field saw ideas for quantum communication schemes such as quantum teleportation [24] and quantum dense coding [25] that could demonstrate enhanced security over classical communication, due to the detectable measurement effect an eavesdropper would necessarily have on a state.

Physical Qubit    Information Encoding        |0⟩                         |1⟩
Photon            Polarization                Horizontal                  Vertical
                  Number of photons           Vacuum                      Single-photon state
                  Time-bin encoding           Early                       Late
                  Orbital angular momentum    l = 1                       l = −1
                  Frequency                   ω₁                          ω₂
Electron          Electron number             No electron                 Single electron
Superconducting   Charge                      Q = 0                       Q = 2e
qubit             Flux                        Counter-clockwise current   Clockwise current
Ion               Electronic state            Ground state                Excited state
N-V center        Spin state                  Spin down                   Spin up

Table 1.1: This table lists some of the more popular qubits and their various attributes that can be used to encode information.

While quantum bits allow for increased security, they also obey some limitations that increase the difficulty of information retention and transmission:

1. The No-Teleportation Theorem states that a qubit cannot be wholly converted into classical bits or fully “read.”²

2. The No-Cloning Theorem states that an arbitrary qubit cannot be copied.

3. The No-Deleting Theorem states that an arbitrary qubit cannot be deleted.³

Due to these limitations, quantum communication protocols usually do not focus on the sending of sensitive information that cannot be corrupted or lost. Instead, protocols are developed for the sharing of a quantum “key” that can later be used to decrypt encoded messages. This type of protocol is called “quantum key distribution” (QKD), with the first protocol invented in 1984 (aptly named BB84 after its inventors, Charles Bennett and Gilles Brassard) [23]. In May 2019, Hong Guo et al. reported field tests of a QKD system through commercial fiber networks over distances of more than 30 km [27], and in the past twenty years QKD has begun to have a real presence in the industrial world, with commercial QKD systems offered by companies such as ID Quantique.

²No-teleportation is a bit of a misnomer, since quantum teleportation is a real protocol. However, quantum teleportation refers to a transfer of a qubit state to another location; it does not mean that the state can be fully “read.”
³The no-deleting theorem is the time dual of the no-cloning theorem. Specifically, if we have two copies of an unknown quantum state, we cannot arbitrarily make one blank. However, this can be “overcome” through the use of erasure channels, where the state is “erased” by swapping it with another state in the environment [26].

1.2.3 Quantum computing

Quantum computers are devices that encode and compute data using qubits rather than bits. They can be understood schematically in a similar way to classical computers: input qubits are fed into a circuit, a series of quantum “gates” that are analogous to classical gates. Unlike many classical logic gates, though, these operations must be reversible. These gates include the Pauli gates (whose operation is identical to the Pauli matrices shown in Section 1.2.1), as well as a variety of other single- and two-qubit gates (e.g., gates that perform an operation on one qubit only if another qubit registers as |1⟩).
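As a toy sketch of ours (not a circuit from the text), the controlled-NOT gate restricted to computational basis states illustrates the reversibility requirement:

```python
# Toy sketch (ours): the controlled-NOT gate acting on computational basis
# states, illustrating that quantum gates must be reversible.

def cnot(control, target):
    """Flip the target bit only when the control bit is 1."""
    return (control, target ^ control)

# Applying CNOT twice returns every basis state to itself (reversibility).
for c in (0, 1):
    for t in (0, 1):
        assert cnot(*cnot(c, t)) == (c, t)

print(cnot(1, 0))  # -> (1, 1): the target flipped because the control was 1
```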

By harnessing the power of quantum bits, a quantum computer may be able to run certain algorithms with computational speed that far surpasses classical processors. Deutsch wrote one of the first proposals for a quantum computer in 1985 [28], and soon the field saw ideas for quantum algorithms that could surpass classical computers [29]. It is important to note that a quantum computer is not universally faster than classical computers; in fact, much theoretical work has gone into discovering just which algorithms a quantum computer can excel at. For rote addition, subtraction, and general algebraic methods, a classical computer will surely always be the mechanism of choice. Algorithms that exhibit a quantum advantage, however, include Shor’s algorithm [21], which reduces the factorization of an integer N from super-polynomial to polynomial time in the number of digits of N, and Grover’s algorithm [29], which accelerates unstructured searches from O(M) to O(√M), where M is the size of the function’s domain. Recent years have also seen development in quantum simulation and optimization protocols, with promising applications [30, 31].

1.2.4 Quantum networks

Having introduced quantum computing and quantum communication, we can now motivate the need for large quantum networks. Quantum networks facilitate the transmission of qubits between physically separated quantum processors through the use of communication lines, with a primary goal being to extend across large distances and act as a sort of “quantum internet.” However, the larger the quantum network, the more lossy the communication – transmission in an optical fiber scales as e^(−αL), where L is the length of the fiber and α is the attenuation coefficient. Since the no-cloning theorem prevents us from copying qubits, we cannot simply use a quantum amplifier

to counteract this loss.

Figure 1.3: Quantum repeaters are made up of quantum memories that store one photon from an entangled pair until a Bell state measurement can be made. This diagram depicts a network that is connected in two steps – first through Bell state measurements on the outer edges, and finally a Bell state measurement to bring the two newly formed links together into one.
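The severity of the e^(−αL) scaling is worth making concrete. This back-of-the-envelope sketch assumes an attenuation of 0.2 dB/km, a typical figure for telecom fiber near 1550 nm (the number is our assumption, not from the text):

```python
import math

# Assumed attenuation: ~0.2 dB/km for standard telecom fiber at 1550 nm.
ALPHA_PER_KM = 0.2 * math.log(10) / 10   # convert dB/km to alpha in e^(-alpha*L)

def transmission(length_km, alpha=ALPHA_PER_KM):
    """Fraction of photons surviving a fiber of the given length."""
    return math.exp(-alpha * length_km)

for length in (10, 100, 500, 1000):
    print(f"{length:5d} km: transmission = {transmission(length):.2e}")
```

At 1000 km the direct transmission probability is of order 10⁻²⁰, which is why repeaters are essential.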

A quantum repeater can overcome the transmission loss by breaking up the distance L into N segments, where the loss over each segment is small. Entangled photon pairs are generated over each segment, with one photon sent to a memory at Alice/Bob and the other to a Bell state measurement (BSM) circuit at an adjacent node. Adjacent to that node, another entangled pair is generated, with one photon sent to the next node and one photon sent to the BSM circuit. The BSM projects the photons into an entangled state, thereby entangling Alice’s (or Bob’s) memory with a qubit two nodes over. Now, we can work our way towards the middle as we repeatedly generate entanglement links and Bell state measurements between adjacent nodes until Alice and Bob are entangled with each other. Finally, we can use this entanglement link to perform quantum teleportation: Alice performs a BSM between the qubit she wants to send and her half of the entangled link. With adequate classical communication, Bob can perform the appropriate gate to reconstruct the state on his own end, thereby receiving the quantum data Alice would like to send. The act of this transmission exhausts the entanglement link, and another link must be built up in order to transmit more data.

This protocol relies heavily on memories that are able to maintain coherence for the time it takes to generate entanglement across all links. These memories are used to synchronize the otherwise probabilistic entanglement distribution. In this way, the links do not all have to succeed simultaneously – that would do no better than simply trying to send a single photon directly across the entire network; instead, each link tries repeatedly to connect, and once the memories of every node share entanglement with their adjacent nodes, the protocol proceeds. To this end, much work has been done to develop improved quantum memories, which will be discussed more in Chapter 5.
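The synchronization advantage that memories provide can be illustrated with a toy simulation (our own construction; the link count and success probability are made up):

```python
import random

# Toy model (not from the thesis): 3 links, each succeeding with
# probability p = 0.25 per attempt.

def attempts_without_memory(n_links, p, rng):
    """All links must succeed in the SAME attempt (no synchronization)."""
    attempts = 1
    while not all(rng.random() < p for _ in range(n_links)):
        attempts += 1
    return attempts

def attempts_with_memory(n_links, p, rng):
    """Each link keeps its entanglement once made; wait for the slowest link."""
    slowest = 0
    for _ in range(n_links):
        attempts = 1
        while rng.random() >= p:
            attempts += 1
        slowest = max(slowest, attempts)
    return slowest

rng = random.Random(1)
trials = 2000
no_mem = sum(attempts_without_memory(3, 0.25, rng) for _ in range(trials)) / trials
with_mem = sum(attempts_with_memory(3, 0.25, rng) for _ in range(trials)) / trials
print(f"mean attempts without memory: {no_mem:.1f}, with memory: {with_mem:.1f}")
```

Without memories the expected number of attempts grows as 1/p^N (exponentially in the number of links); with memories it grows only as the expected maximum of N independent waits.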

1.3 Conclusion

Here we have given an introduction to the separate but related fields of quantum optics and quantum information. The following sections will detail the work that has been done towards two separate areas in this field. The first half of the thesis details our work done towards determining whether or not humans are capable of detecting single photons. This work falls in the field of general quantum optics research, as the applications of such a discovery have yet to be determined. Success of single-photon detection by humans could potentially enable us to explore how human photoreception might behave differently from man-made photon detectors. The goal of our research, however, is to discover rather than to engineer. In Chapter 2, we provide a review of existing evidence for single-photon vision. In Chapter 3, we describe the single-photon source we have built and the two-alternative forced-choice experimental design that we have implemented in conducting our studies on human subjects. Finally, in Chapter 4 we present our preliminary findings from our study, comparing them with a similar experiment with an altered observational setup.

In the second half of the thesis, we discuss our development of a high-bandwidth, high-fidelity quantum memory system that serves as a prototype for a fundamental component in quantum computers and quantum networks. Many of the fundamental building blocks of the memory (optical loops, fast optical switches, and polarization-to-time-bin transducers) have been demonstrated in previous works; however, our prototype surpasses the time-bandwidth capabilities of these previous demonstrations. In Chapter 5, we provide an overview of quantum memories with a comparison of methods with respect to important memory characteristics. In Chapter 6, we describe the memory we have built and its free-space transmission over a long range of storage times. In Chapter 7, we demonstrate our ability to store polarization qubits with high fidelity and discuss our potential for multimode storage.

Chapter 2

Introduction to Vision Project

2.1 Motivation

Many studies have investigated the sensitivity of the human eye. In 1995 it was confirmed that rods – the photoreceptors responsible for low-light vision – produce a photovoltage response to a stimulus at the single-photon level [32]. Even before that, though, behavioral studies attempted to determine the visual threshold. Hecht et al. [33] asked observers to indicate whether or not they detected a short flash of 510-nm light. By modeling photon absorption as a Poisson process and making some assumptions about the quantum efficiency of the eye, they determined a threshold of 5-7 photons. A similar experiment conducted by Van der Velden [34] reached a similar conclusion, though encouragement of a higher rate of false-positive responses led to a threshold closer to 2 photons.

One inherent fault of these experiments is their potential susceptibility to observer bias against false-positive responses: when an observer is simply asked to state whether or not she has seen a stimulus, she may be reluctant to make a positive response unless she is quite certain she saw something, possibly leading to an artificially high threshold. Another fault of these experiments is the use of a classical stimulus rather than a true single-photon source; any classical light source with a significant probability of producing one photon will necessarily have a non-negligible likelihood of producing more than one in the same spectral-temporal mode.

Our work seeks to improve on these past experiments by using a two-alternative forced-choice decision test to overcome the bias against false-positive responses (explained further in Section 2.3.2). We have also developed a source that more closely approximates a true single-photon source. The following section will explain the mechanics of this source, and why it is necessary for a confident analysis of single-photon vision. We will also discuss the human visual transduction system, and why we are tentatively optimistic about a human’s capability to detect single photons, given the right environment.

2.2 What is a photon anyway?

Light is quantized in particles called photons [35], and a single photon would be the minimum amount of light that could be detected, whether by a human eye or some other type of single-photon detector. Specifically, a quantum mechanical definition would classify a photon as a quantum state of the electromagnetic field with photon number n = 1. In order to determine whether or not a person can see a single photon, we must ensure two properties of our experiment: a) that there is a high probability of one photon in a single trial, and b) that there is a very low probability of more than one photon in a single trial. We will go into more detail on the difficulties of this in Chapter 3. In any case, the state we are working to produce is called a photon number state (or a Fock state), and is the only kind of state that serves our needs.

Most light sources we are familiar with – the Sun, lasers, LEDs, etc. – do not produce light in a Fock state. For example, lasers produce coherent light, which means that all the emitters in the gain medium of the laser produce electromagnetic radiation that is in phase. The time of each photon emission is completely independent of the photons before and after it, which means we have no way to predict when exactly the next photon will come. This is known as a Poissonian process, and the variance of the process depends on the intensity of the light. A coherent state of light is described by the following equation [36]:

\[
|\alpha\rangle = e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\,|n\rangle , \qquad (2.1)
\]

where |n⟩ is a state with a particular number of photons n, and |α|² = ⟨n⟩ is the mean number of photons present in some time interval t (or in our case, the mean number of photons in a pulse). The state is a superposition of all possible photon numbers, but some are much more likely to be measured. The probability of measuring a particular photon number in one of our pulses is:

\[
P(n, \langle n\rangle) = |\langle n|\alpha\rangle|^2 = e^{-\langle n\rangle}\,\frac{\langle n\rangle^n}{n!} , \qquad (2.2)
\]

i.e., the probability mass function of a Poissonian distribution with mean ⟨n⟩. Fig. 2.1 shows example probabilities for three different mean photon numbers. Note that, even if our coherent state had a mean photon number of one, there would still be a significant probability of zero, two, and three photons in a pulse; this is very different from a single-photon Fock state, which should have a mean photon number of one and zero variance. Consequently, a classical light source is nearly useless for answering the question of whether or not a person can see a single photon.
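Plugging numbers into Eq. (2.2) makes the problem concrete (a quick sketch of ours):

```python
import math

# Photon-number statistics of a coherent state, Eq. (2.2): Poissonian with
# mean <n>. Even at <n> = 1, multi-photon pulses are common.

def p_n(n, mean):
    """Probability of finding exactly n photons in a pulse with mean <n>."""
    return math.exp(-mean) * mean ** n / math.factorial(n)

for n in range(4):
    print(f"P(n = {n} | <n> = 1) = {p_n(n, 1.0):.3f}")
```

At ⟨n⟩ = 1 the two-photon probability is roughly half the one-photon probability, which is exactly what makes a weak classical pulse unsuitable for our purposes.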

Figure 2.1: Poissonian distribution for three different values of the mean photon number ⟨n⟩.

2.3 Are we sure humans can even see one?

The human eye is an extraordinarily sensitive and versatile sensor that can function across a wide range of brightness levels, adapting over a relatively short period of time. The detectors that enable this are two types of photoreceptor cells, which convert light to electrical signals in the retina, as shown in Fig. 2.2. The cone cells function in the brighter range of intensity and provide high sharpness of vision. Together, the signals from three distinct subtypes of cones combine to produce color vision (Fig. 2.3). The rod cells are used for night vision and do not distinguish colors. However, they are extremely sensitive, and in vitro studies have shown that they are capable of producing electrical signals in response to single photons [37]. Our work is to determine whether these rod signals make it through the rest of the visual pathway and result in human perception.

Tinsley et al. published recent work claiming observation of this human perception [38] – we aim to replicate this experiment with a higher-efficiency source and therefore greater statistical power.

The rods and cones are not uniformly distributed across the retina. The cone cells are highly concentrated in the fovea centralis, an area of the retina only about 0.3 mm in diameter at the center of the visual field (Fig. 2.4). The rod cell density peaks about 20° to the left or right of the fovea, and there are no rods in the fovea itself.

2.3.1 Visual phototransduction

Visual phototransduction is the conversion of photons into electrical signals by the photoreceptor cells, and is important to discuss in order to comment on the capability of humans to detect single photons. A photosensitive molecule, rhodopsin, is stacked in membranes in the outer segment of the rod. In darkness, a “dark current” of ions constantly circulates between the rod cell and the salty fluid that surrounds it, and

this fluid causes a voltage drop of about −40 mV across the cell membrane [39]. In the presence of light, a rhodopsin molecule can absorb a photon, change its shape, and catalyze a chemical cascade that decreases the concentration of cGMP (cyclic guanosine monophosphate), a chemical messenger found in the outer segment of the rod. This results in the closure of many of the ion channels and a reduction in the “dark current.” The resulting voltage difference alters the production of a neurotransmitter chemical, which passes the signal on to other cells.

Figure 2.2: Schematic of the eye. Light enters through the pupil and is focused by the cornea and the lens onto the retina. Detailed cross-section of the retina: incident light passes through layers of transparent nerve cells before reaching the rod and cone photoreceptor cells. The layers of nerve cells all help process signals from the photoreceptors. Figure adapted from https://commons.wikimedia.org/wiki/File:Retina-diagram.avg (Creative Commons license).

This process is able to amplify the presence of a single photon into a macroscopic photovoltage: just one activated rhodopsin can catalyze at least 1,000 reactions in a second [40]. A blue-green photon (∼500 nm) that hits a rod cell head-on has about a 50% probability of being absorbed, and subsequently about a 60% probability of activating a rhodopsin molecule and initiating a cascade. The resulting total rod quantum efficiency of about 30% has been confirmed in vitro with both a classical source [41] and a single-photon source (similar to the one used in our work)[37]. The total efficiency in the living eye is difficult to measure accurately, but the cornea is thought to have at least a 4% reflection loss, with an additional 50% loss in the vitreous humor of the eye at 500 nm, and another 30-60% loss coming from the rod density fill factors in the retina. This brings the total efficiency of the eye down to 5-10% in the periphery [33, 42].
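Multiplying out the loss budget quoted above (using only the estimates cited in the text) recovers the 5-10% figure:

```python
# Multiplying the loss estimates quoted in the text (no new measurements here).
cornea = 1 - 0.04                   # ~4% reflection loss at the cornea
ocular_media = 0.50                 # ~50% loss in the vitreous humor at 500 nm
fill_factor_range = (0.40, 0.70)    # 30-60% loss from rod density fill factors
rod_qe = 0.30                       # ~30% total rod quantum efficiency

low = cornea * ocular_media * fill_factor_range[0] * rod_qe
high = cornea * ocular_media * fill_factor_range[1] * rod_qe
print(f"overall efficiency: {low:.1%} to {high:.1%}")   # roughly 6% to 10%
```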

To recover from the light response, guanylate cyclase (GC) synthesizes cGMP inside the rod to restore cGMP levels in the outer segment and reopen the ion channels. For dim flashes, the entire rise and recovery time sums to about 300 ms [41]. Early in this experiment, we used our apparatus to explore this temporal

response with regard to the integration window [43]. The sensitivity of the eye is vastly increased after 5-10 minutes in darkness, and full dark adaptation takes 30-40 minutes, depending on the initial light conditions¹.

Figure 2.3: Spectral ranges of the three cone subtypes (short-, medium-, and long-wavelength) and the rods. Taken from https://commons.wikimedia.org/wiki/File:1416_Color_Sensitivity.jpg (Creative Commons license).

The photocurrent of a single rod has been previously measured directly by holding a living cell in a suction pipette connected to a measurement circuit (Fig. 2.5). Both toad [45] and monkey [32] rods were explored using this method, showing occasional photocurrent pulses in response to very weak flashes of light.

The statistics of the response were consistent with detection of single photons; single-photon sensitivity of toad rods was later verified using a true single-photon source [37].

Rod cells will sometimes also register “dark counts” – responses that are not triggered by a real photon but are otherwise indistinguishable from real detections. These can occur when a rhodopsin is activated by random thermal fluctuations, which happens about once every 300 years; however, since rod cells each contain about 10⁸ rhodopsins, the total thermal activation rate per rod cell is about once every 90 seconds [46]. Since the human eye contains about 120 million rods, we see quite a lot of “dark counts” once the eye is fully dark adapted. It is very possible that this could prevent humans from detecting single photons, due to the difficulty of distinguishing the photon signal from general noise [47]. As a remedy, the human visual system could require a coincident signal from multiple photoreceptors, i.e., a few co-located rods might have to fire within a short time window to generate a detectable (to the observer) neural signal.
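The dark-count arithmetic above is easy to verify (all numbers are the text’s own estimates):

```python
# Checking the dark-count arithmetic with the estimates quoted in the text.
SECONDS_PER_YEAR = 3.156e7
per_rhodopsin = 1 / (300 * SECONDS_PER_YEAR)   # thermal activations per second
per_rod = per_rhodopsin * 1e8                  # ~1e8 rhodopsins per rod
per_eye = per_rod * 1.2e8                      # ~120 million rods per eye

print(f"one thermal event per rod every ~{1 / per_rod:.0f} s")
print(f"thermal events across the retina: ~{per_eye:.1e} per second")
```

One event per rod every ~90 s sounds rare, but summed over the whole retina it amounts to over a million dark events per second.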

¹The age of the observer can also affect the amount of time it takes to dark adapt. Older people generally need longer to achieve full dark adaptation [44].

Figure 2.4: Typical angular distribution of photoreceptor cells on the retina. The cone cells are concentrated in the fovea, and the density of the rod cells peaks about 20° to the right and the left of the fovea. The blind spot, where the optic nerve passes through the retina, is located about 12-15° in the temporal retina, about 1.5° below the horizontal. The exact location of the blind spot and distribution of photoreceptors can vary significantly between individuals. Taken from Foundations of Vision at https://foundationsofvision.stanford.edu/wp-content/uploads/2012/02/rod.cone_.distribution2.png.

2.3.2 Previous Studies

Hendrik Lorentz conducted one of the first human vision threshold studies when he calculated the number of photons in a flash of light that was considered at the time to be “just visible” – that number was found to be about 100 [48]. Since a significant fraction of photons was known to be lost in transmission, with the remaining photons spread out over hundreds of rod cells, it seemed likely that individual rod cells were capable of detecting single photons, which suggested that perhaps one photon was enough to cause the perception of light.

Next, Hecht, Schlaer, and Pirenne conducted a study where each of them sat in a dark room and received very dim flashes of light, with mean photon numbers between 50 and 400. After each flash, the observer had to report whether they saw the flash or not. For interpreting the data, the authors assumed that the number of photons detected by the visual system in each trial was a random value drawn from a Poissonian distribution. They then assumed a threshold number of photons k required for perception and created a model for how often observers should see flashes with particular mean photon numbers for particular values of the threshold. The probability of any flash exceeding the threshold k is

\[
P(\langle n\rangle, k) = \sum_{n=k}^{\infty} P(n, \rho\langle n\rangle) = \sum_{n=k}^{\infty} e^{-\rho\langle n\rangle}\,\frac{(\rho\langle n\rangle)^n}{n!} , \qquad (2.3)
\]

where ρ is the probability that a photon incident on the cornea is detected (so ⟨n⟩ is the mean photon number at the cornea, but n is the number of photons actually detected in each pulse). Experimentally, the value of k can be determined by finding the best fit to the observed frequency of “yes” responses for several mean photon numbers. Hecht et al. determined that their data were best described by models with k = 6, k = 7, and k = 5 for each observer, respectively (Fig. 2.6).

Figure 2.5: (a) Isolated toad rod, drawn by suction into a tight-fitting glass electrode. (b) Measuring the photocurrent of a single rod cell: the light-induced current flows through an electrode placed in the surrounding electrolyte bath and can be measured by an amplifier. (c) Photocurrent from a monkey rod in response to brief flashes at t = 0. Traces correspond to flashes with a range from n = 1 to n = 500. (d) Photocurrents produced by a series of dim flashes with different amplitudes, representing absorption of either 1 or 2 photons. Figures from [41] (copyright by the American Physical Society, used with permission).
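Eq. (2.3) is easy to evaluate numerically. In this sketch (ours), the efficiency ρ and threshold k are illustrative values in the range Hecht et al. considered:

```python
import math

# Frequency-of-seeing model, Eq. (2.3): probability that at least k photons
# are detected from a flash with mean corneal photon number <n>, given an
# overall detection probability rho. rho = 0.06 and k = 6 are illustrative.

def p_seen(mean_n, k, rho):
    lam = rho * mean_n   # mean number of DETECTED photons
    p_below = sum(math.exp(-lam) * lam ** n / math.factorial(n) for n in range(k))
    return 1 - p_below

for mean_n in (50, 100, 200, 400):
    print(f"<n> = {mean_n:3d}: P(seen) = {p_seen(mean_n, 6, 0.06):.2f}")
```

Sweeping ⟨n⟩ traces out the sigmoidal frequency-of-seeing curve whose shape is fit to determine k.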

Since the flashes covered an area of about 500 rods and the threshold was shown to be only 5-7 photons, the experiment seems to show that the rods are detecting the photons individually. However, we need to address a few problems with the assumptions and experimental design which reduce the accuracy of their estimate. First of all, asking observers to simply report whether a dim flash was seen or not relies on accurate judgment from people who have uncertain information [49]. We previously discussed the amount of noise in the visual system that results in an occasional signal when no photons are actually present. The authors/observers may have compensated for this noise by increasing their threshold for a “yes” response, or they might have responded “yes” to a signal that was simply a dark count. The former would falsely increase the threshold, and the latter would falsely decrease it, resulting in large error bars on the reported accuracy. In a similar experiment, Van der Velden (1944) [34] encouraged observers to allow a higher rate of false-positive responses, pushing his measured threshold down to 1-2 photons.

Sakitt (1972) [42] tried a new approach, where she instructed observers to rate the visibility of dim flashes on a scale from 0-6. She found that at least some observers’ ratings followed a Poisson distribution, with the mean of the distribution increasing linearly with the mean photon number of the flashes. This suggests that

the observers were registering and counting the number of rod signals in each flash and assigning ratings accordingly. She saw that some observers began counting at about 2 or 3 rod signals, whereas Sakitt herself started her count at one rod signal. She argued that while a single-photon detection threshold may not be the optimal strategy for the visual system normally, observers were perhaps consciously capable of adjusting their own threshold. Unfortunately, the study had a small sample size and relied on the assumption that about 3% of photons on the cornea were detected in the area of the retina that Sakitt studied. Since efficiencies vary between individuals, any single-photon vision study which relies on an assumption for this value is likely to be inconclusive.

Figure 2.6: Data from Hecht, Schlaer, and Pirenne [33]. The values shown for n are what we referred to above as k, the threshold for a “yes” response. (Creative Commons license).

To overcome the issue of observers having a natural bias against false-positive responses, we use a two-alternative forced-choice (2AFC) design that does not ask the observer whether or not a stimulus was detected. It instead asks a question which is more likely to be answered correctly if the stimulus was detected. We show the observer a stimulus that appears randomly on the top or bottom of their viewing station, then ask which side the stimulus appeared on. If the observer can see the flashes, even in only a few trials, they should be able to choose the correct side more often on average than they would with random guessing (50%). Of course, if the chance of detection is low, then we need a large number of trials for a statistically significant result.
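The statistics behind this requirement can be sketched as a one-sided binomial test (our own illustration; the trial counts below are made-up examples): under the null hypothesis of pure guessing, each trial is a fair coin flip.

```python
import math

# Under the null hypothesis (pure guessing), each 2AFC trial is a fair coin
# flip, so the one-sided binomial tail tells us how surprising an observed
# accuracy is. Trial counts below are made-up examples.

def binomial_tail(n_correct, n_trials, p_chance=0.5):
    """P(at least n_correct correct out of n_trials) under pure guessing."""
    return sum(math.comb(n_trials, k) * p_chance ** k * (1 - p_chance) ** (n_trials - k)
               for k in range(n_correct, n_trials + 1))

# The same 60% accuracy is far more convincing with more trials:
print(f"150/250 correct:  p = {binomial_tail(150, 250):.1e}")
print(f"600/1000 correct: p = {binomial_tail(600, 1000):.1e}")
```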

Tinsley et al. (2016) [38] proclaimed the first direct detection of a single photon by humans with an

experiment very similar to ours, using a 2AFC protocol where viewers choose when they saw a photon,

rather than if they saw a photon. This overcomes the false positive threshold by changing the question. For

blank trials, viewers will answer correctly only 50% of the time. However, if they are capable of detecting

single photons, they should be able to see the single photons >50% of the time with statistical significance.

Tinsley et al. saw an accuracy of 60% with ∼250 single-photon trials that were rated with high confidence (R3) by observers. However, on further examination, some self-contradictory results and a lack of statistical power led to a strong motivation for us to replicate their study with our own apparatus. Interestingly, high-confidence (R3) trials were used to optimistically select for trials where a single photon actually produced a signal the subject observed, leading to the increased overall accuracy. If this selection worked as intended, though, we would expect single-photon trials to have a higher percentage rated as R3 than blank trials (hopefully no one who is not receiving a stimulus will be confident that they are). In the actual experiment, however, ∼10% of both blank and single-photon trials were rated high-confidence, suggesting that the single-photon stimulus did not rise much above the dark counts produced by the eye itself. Additionally, due to an imbalanced experimental apparatus, the subject received 10× more blank trials than single-photon trials.

This is problematic because smaller sample sizes are less resilient to statistical fluctuations. The statistical uncertainty (standard error) of their ∼250 single-photon trials is ∼3× that of their ∼2,500 blank trials, so it is not statistically sound to compare the two distributions on equal footing.
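To put numbers on this, the standard error of an observed success fraction p over N binomial trials scales as √(p(1−p)/N); with 10× fewer trials, the uncertainty grows by √10 ≈ 3.2. A minimal sketch using the trial counts quoted above:

```python
import math

def binomial_std_err(p: float, n: int) -> float:
    """Standard error of a success fraction p estimated from n binomial trials."""
    return math.sqrt(p * (1 - p) / n)

# Trial counts quoted above: ~250 single-photon trials vs. ~2,500 blank trials.
se_photon = binomial_std_err(0.5, 250)
se_blank = binomial_std_err(0.5, 2500)

# With 10x fewer trials, the standard error is sqrt(10) ~ 3.2x larger,
# consistent with the ~3x figure quoted in the text.
print(round(se_photon / se_blank, 2))  # 3.16
```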

All in all, there is very tempting evidence suggesting that single-photon vision may be possible, but previous results simply motivate more investigation. Our experiment has the most promising setup so far to strongly demonstrate single-photon detection by humans.

2.4 How do we measure and send just one photon?

To quantify the probability of emitting two or more photons, we use the second-order correlation function

(with zero time delay) called the g(2)(0):

g(2)(0) = ⟨(a†)²a²⟩ / ⟨a†a⟩², (2.4)

where a† and a are the creation and annihilation operators which represent changes in photon number (a†|0⟩ = |1⟩). The g(2)(0) is equivalent to the probability of emitting two photons at the same time, divided by the probability of getting two photons at the same time from a random (Poissonian) source. An ideal single-photon source should have a zero probability of emitting two photons at a time, so g(2)(0) = 0. We can also see this when we write g(2)(0) in terms of the variance, V_N, and mean photon number, ⟨n⟩:

g(2)(0) = (V_N − ⟨n⟩)/⟨n⟩² + 1. (2.5)

The variance of an ideal single-photon source is zero and ⟨n⟩ = 1, so the numerator is −1, the denominator is 1, and the sum is equal to zero. In practice, we measure the g(2)(0) by splitting photons from the source into two beams and measuring both beams with single-photon detectors to determine how often two photons are detected within a short coincidence window (30 ns).
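As a quick numerical check of Eq. (2.5), we can evaluate it for the three textbook cases: a Fock state (zero variance), a coherent state (variance equal to the mean), and a thermal state (V_N = ⟨n⟩² + ⟨n⟩). A minimal sketch:

```python
def g2_from_stats(var_n: float, mean_n: float) -> float:
    """Eq. (2.5): g2(0) from the photon-number variance and mean."""
    return (var_n - mean_n) / mean_n**2 + 1

# Ideal single-photon (n = 1 Fock) state: zero variance gives g2(0) = 0.
assert g2_from_stats(0.0, 1.0) == 0.0

# Coherent (Poissonian) light: variance equals mean, so g2(0) = 1 for any mean.
assert abs(g2_from_stats(2.5, 2.5) - 1.0) < 1e-12

# Thermal light: variance of <n>^2 + <n> gives g2(0) = 2.
n = 0.7
assert abs(g2_from_stats(n**2 + n, n) - 2.0) < 1e-12
```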

2.4.1 Producing single photons

To produce our single photons, we can make an extremely good approximation to the n = 1 Fock state by using a process that produces a random stream of photons but can “herald” when one photon has been emitted and can then control how many photons reach our observer. Such heralded single-photon sources rely on nonlinear optical effects, which produce photons in pairs: one photon from each pair can be detected and destroyed, heralding the presence of its partner, the “signal” photon. These sources can be bright, and can achieve high efficiency limited primarily by optical loss and fiber coupling. The single-photon source used in this work is a heralded source based on spontaneous parametric downconversion, and will be described in more detail in Chapter 3.

2.4.2 Detecting single photons

We need to use single-photon detectors that operate differently than classical intensity detectors that measure average optical power. These detectors come in a range of types, including semiconductor devices such as single-photon avalanche photodiodes (SPADs) and visible-light photon counters (VLPCs), and superconducting devices such as nanowire and transition-edge sensors [50]. We use SPADs, which have detection efficiencies from about 45-75% depending on wavelength, sub-nanosecond rise times, and some intrinsic noise in the form of “dark counts,” resulting from the high gain needed to amplify the electrical response to single photons. They also typically have a “dead time” of about 50 ns after a photon is detected, during which no photons can be detected (though in a process known as “twilighting,” an incident photon can be absorbed during the dead time, producing a count immediately at the end of the dead time [51]). Finally, SPADs cannot distinguish between the simultaneous detection of one photon and of several photons.

2.5 Conclusions

In this introduction we discussed what single photons are and why humans might be able to see them. In

Chapter 3, we will describe in detail the single-photon source that we have built, while in Chapter 4, we will present the results of our work so far to observe single-photon detection by humans.

Chapter 3

Single-Photon Source and Observation Station

3.1 Single-Photon Source

3.1.1 Spontaneous parametric downconversion

Spontaneous parametric downconversion (SPDC) is a process that splits one “pump” photon into a pair of lower-energy “daughter” photons [52]. It is a nonlinear optical effect, which means it depends on a higher-order term in the dielectric polarization P of a material in an electric field. The optical response can be expressed as a power series:

P(t) = χ(1)E(t) + χ(2)E²(t) + χ(3)E³(t) + ... (3.1)

Crystals that lack inversion symmetry have a nonzero χ(2) term [53], and this makes SPDC possible. Any time-varying electric field of light that contains two frequencies ω_1 and ω_2 can be written as

E(t) = E_1 e^(−iω_1 t) + E_2 e^(−iω_2 t) + c.c., (3.2)

where c.c. is the complex conjugate. Then the χ(2) term in the expansion of P(t) contains terms like

E²(t) = E_1 E_2 e^(−i(ω_1+ω_2)t) + ... (3.3)

and so on. The above term represents two frequencies combining to make a wave with higher frequency, commonly known as “sum-frequency generation,” where two photons combine to make a higher-energy photon. However, SPDC is effectively the reverse of this process – vacuum fluctuations stimulate the conversion, subject to energy conservation, of one high-frequency photon into two lower-frequency “daughter” photons, historically called “signal” and “idler” photons. The chance of a split happening is about one in 10⁹ for each pump photon in a bulk crystal (waveguides and cavities can lead to increased conversion efficiencies), and the process obeys the statistics of a random (thermal) process.

Figure 3.1: Phase matching in non-degenerate, non-collinear spontaneous parametric downconversion. a) Phase matching inside a nonlinear crystal, showing the cone of downconverted photon pairs. b) Momentum conservation. c) Energy conservation.

SPDC also obeys conservation of momentum, enforced by the laws of phase matching (Fig. 3.1). In the

non-collinear case shown in the figure (where the daughter photons are emitted at angles θ_s and θ_i on either side of the pump), conservation of momentum requires k_p = k_s + k_i, with longitudinal and transverse components

k_p = k_s cos θ_s + k_i cos θ_i, (3.4a)

k_s sin θ_s = k_i sin θ_i, (3.4b)

where k_j = n(ω_j)ω_j/c and n(ω) is the frequency-dependent index of refraction in the material. In order to solve these equations, we need a birefringent crystal, which has different values of n for different polarizations along certain axes. This way, the phase-matching condition can be satisfied by at least one of the daughter photons having a different polarization than the pump. For Type I phase matching, both signal and idler have the same polarization and thus experience the same refractive index; for Type II phase matching, the signal and idler photons have orthogonal polarizations, so that one experiences the crystal’s ordinary index of refraction while the other experiences the extraordinary index.

To control where the daughter photons emerge, we can cut the crystal at an angle relative to its optic axis. This will produce a spectrum of energies and momenta that obey phase-matching, so we need filters to select our desired signal and idler wavelengths. Experimental details are discussed in Subsection 3.1.2.
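Energy conservation (Fig. 3.1c) fixes 1/λ_p = 1/λ_s + 1/λ_i between the pump and daughter wavelengths. As a quick check with the wavelengths used in this work (a 266-nm pump producing 505-nm and 562-nm daughters, detailed in Subsection 3.1.2):

```python
# Energy conservation in SPDC: omega_p = omega_s + omega_i,
# equivalently 1/lambda_p = 1/lambda_s + 1/lambda_i.
lambda_s, lambda_i = 505.0, 562.0  # daughter wavelengths (nm)

lambda_p = 1.0 / (1.0 / lambda_s + 1.0 / lambda_i)
print(round(lambda_p))  # 266 (nm), matching the frequency-quadrupled Nd:YAG pump
```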

Since the photons are always produced in pairs, counting n idler photons accurately indicates the presence of n signal photons [54]. To make a practical single-photon source based on SPDC, we count the idler, or

“herald” photons, with a single-photon detector, blocking any unheralded signal photons (in principle there


Figure 3.2: a) Variable attenuator (VA) consisting of a motorized half-wave plate followed by a polarizing beam splitter (PBS). The photodiode is used in coincidence with our herald photons to signal the presence of a signal photon, and thereby avoid triggering on background or dark counts. b) Spatial filter for the pump laser. The focusing lens has a focal length of 150 mm. The collimating lens has a focal length of 35 mm. The beam diameter (divergence) of the laser at the aperture for the x/y axis is 0.98 mm (0.76 mrad)/0.72 mm (0.83 mrad). The pinhole diameter was nominally 40 µm.

Repetition Rate (kHz)   Pulse Width (ns)   Average Output Power (mW)   Pulse Energy (µJ)
100                     20                 24                          0.24
80                      15                 40                          0.5
50                      11                 80                          1.6
40                      10                 90                          2.25

Table 3.1: Pulse width, repetition rate, average output power, and pulse energy for Crystalaser Model QL266-050. Pulse energy is calculated as average output power divided by repetition rate.

are none – signal and idler photons are always created in pairs – but loss and finite detection efficiency can

yield unwanted signal “orphans”). In the next section, we describe our experimental implementation in more

detail.

3.1.2 Experimental Implementation

Pump Laser

We use a 266-nm UV pump laser (frequency-quadrupled Nd:YAG, Crystalaser Model QL266-050) to produce

signal and idler/herald wavelengths at 505 and 562 nm, respectively. The pump is pulsed with a variable

repetition rate, typically around 50-55 kHz, and has a 5- to 20-ns pulse width. Measured pulse width, average output power, and pulse energy are shown in Table 3.1 for four different repetition rates. The average output power degraded over the life of the laser, declining from 40 mW to ∼0.1 mW over the course of five years.
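Each row of Table 3.1 can be checked by dividing the average output power by the repetition rate. A one-loop sketch:

```python
# (repetition rate in kHz, average output power in mW, pulse energy in uJ), from Table 3.1
rows = [(100, 24, 0.24), (80, 40, 0.5), (50, 80, 1.6), (40, 90, 2.25)]

for rep_khz, avg_mw, energy_uj in rows:
    # Pulse energy (uJ) = average power (mW) / repetition rate (kHz),
    # since mW / kHz = uJ.
    assert abs(avg_mw / rep_khz - energy_uj) < 1e-9
```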

A higher repetition rate is desirable to minimize g(2), which is proportional to the pulse energy. However,

the average power of our laser also decreases at higher repetition rates. We chose 50 kHz as a reasonable

balance between average power and pulse energy for our single-photon experiment. We further control the

laser power incident on the SPDC crystal with a half-wave plate and a PBS. A photodiode at the PBS output

is used to conditionally pick out signal photon counts coincident with the signal of the laser (discussed in the next section).

Figure 3.3: Optical setup for collecting SPDC photon pairs at 505 nm and 562 nm into single-mode fiber (SMF). Bandpass filters (BP) are used to select the correct wavelengths from the broad spectrum that satisfies the phase-matching condition.

Nonlinear crystals are particularly susceptible to damage by the UV laser over time, resulting in degra-

dation of spatial mode and loss of power. In order to remedy this, a pinhole was necessary to select out a

Gaussian pump mode for our downconversion process. After the PBS, we focus our beam onto a pinhole

spatial filter to “clean” our spatial mode (see Figure 3.2) – we focused our beam using a 150-mm focal

length lens placed one focal length from a 40-µm diameter diamond pinhole from the vendor Fort Wayne Wire Die. The lens was placed approximately 50 cm from the laser aperture. After the pinhole, we recollimated the beam with a 35-mm lens placed one focal length away.

Downconversion and Heralding

A schematic of our source is shown in Fig. 3.6. After spatial filtering, the pump beam is focused to a spot with a 500-µm beam waist on a crystal with a χ(2) nonlinearity. We use a β-barium borate (BBO) crystal

cut for Type-II phase-matching with a 266-nm pump. The selected modes emerge at 3◦ relative to the pump

beam, and are then collimated with two 250-mm focal length lenses placed one focal length from the crystal

at the correct locations (see Fig. 3.3).

Approximately 400 mm after the collimation lenses, we collect (with 11-mm focal-length collection lenses)

downconverted photons at 505 and 562 nm into two single-mode 460 HP optical fibers. The two fibers are

AR coated to reduce Fresnel reflection losses to < 0.15%. Bandpass interference filters (Semrock 563/9-25

and 504/12-25) are used to select the signal and herald wavelength, with filter spectra shown in Figure 3.4.

These filters are also used on the output fibers to the observation station, to ensure no additional noise gets

26 Figure 3.4: Spectra of the filters used to select the herald and signal wavelengths, measured with a spec- trophotometer. The heralded spectrum is calculated from energy conservation and the measured herald spectrum. Figures taken from Rebecca Holmes’ thesis [55, 56]. through to the observer and distracts from the original signal.

3.1.3 Alignment

In order to achieve high heralding efficiency, we prioritize the heralding efficiency of the “trigger photons”

(the idlers) for the signal photons that we send to the observer, i.e., we maximize the efficiency for collecting the 505-nm photons, conditional on detecting a herald photon at 562 nm. A wonderful outline of the alignment procedure is included in the previous graduate student’s thesis [56]. It should be noted that the trigger photons were counted in coincidence with a photodiode triggered on a small diverted fraction of the pump laser in order to eliminate background counts. The heralding efficiency, η_H, is defined as:

η_H = c_012 / (s_01 η_D2), (3.5)

where c_012 are the coincidences between signal (1) and idler (2) and s_01 are the singles counts of the idler, all in coincidence with the photodiode (0). η_D2 is the detection efficiency of the signal detector (the idler detection efficiency cancels from numerator and denominator). The effective spectral transmission efficiency of the 505-nm bandpass filter was measured to be 0.952 by comparing the coincidence rate with and without the filter. The spatial heralding efficiency (the probability that we herald the same spatial mode that is selected by the single-mode fiber in the signal arm) was measured to be 0.816 by comparing the coincidence rate with a single-mode fiber in the signal arm to the rate with a multi-mode fiber (which collects hundreds of spatial modes). All fibers had a custom anti-reflection coating applied to both ends to reduce Fresnel reflection losses, specified to be less than 0.0015 by Evaporated Coatings, Inc. With this information, we would predict a heralding efficiency of 0.952 × 0.816 × 0.75 × 0.97 = 0.562, which is close to our measured value of 0.55. However, the pump rate and heralding rate were not recorded for these measurements, and it has subsequently been found that heralding efficiency depends significantly on the number of pairs per pulse (a function of these two numbers). This will be discussed further in Section 3.1.5.

Figure 3.5: Fast optical switch – when the Pockels cell (PC) is off, vertically polarized signal photons are rotated to horizontal by the half-wave plate (HWP) and are transmitted through the second polarizing beam splitter (PBS). When triggered on a herald photon detection, the PC rotates signal photons back to vertical polarization, and they are reflected at the second PBS. The first PBS is placed to clean up any polarization drift that could have occurred during fiber transmission; reduced overall transmission (99.8%) from the clean-up PBS is preferable to additional accidental photon transmission in a single-photon source.
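Applying Eq. (3.5) to raw counts is a one-line calculation. In the sketch below the count values are hypothetical (chosen only to reproduce the 0.55 figure), while η_D2 = 0.45 matches the signal-detector efficiency quoted in the text:

```python
def heralding_efficiency(c_012: int, s_01: int, eta_d2: float) -> float:
    """Eq. (3.5): photodiode-gated coincidences over herald singles,
    corrected for the signal detector's efficiency."""
    return c_012 / (s_01 * eta_d2)

# Hypothetical counts for illustration (not measured data):
eta_h = heralding_efficiency(c_012=2475, s_01=10_000, eta_d2=0.45)
print(eta_h)  # 0.55
```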

In order to measure this heralding efficiency we used imperfect detectors, and then scaled up the efficiency by 1/ηd where ηd is the detector efficiency of the signal detector. We use fiber-coupled SPAD detectors, with the “trigger photon” detector optimized for the blue-green region of the visible spectrum (Laser Components

COUNT BLUE series, model 50B-FC) with an efficiency >0.70 at λ = 562 nm. For aligning our signal arm, we used a Perkin Elmer Single Photon Counting Module (250-FC) with an efficiency of about 0.45 (this choice was made simply because one of our COUNT BLUE detectors broke).

3.1.4 Pockels cell switch

In order to make a true single-photon source, we need to ensure that no extra signal photons arrive at the eye outside of what is heralded by the detection of an idler photon. In order to implement this, we make a “switch” from a Pockels cell and a polarizing beam splitter (Figure 3.5). Each trigger detection heralds the presence of one signal photon and activates a Pockels cell, which is used as part of a fast optical switch that allows only heralded 505-nm photons to pass. The Pockels cell is a crystal arranged so that an applied voltage causes it to act as a voltage-controlled wave plate with a variable retardance. (It does this through the use of the Pockels effect, which is an induced birefringence in certain crystals in the presence of a strong electric field.) We use a BBO Pockels cell (Gooch & Housego Lightgate 3 with custom anti-reflection

coating for 505 nm) with a measured half-wave voltage of 2.31 kV.¹ The measured extinction ratio of the

Pockels cell switch (ratio of transmitted photons through PBS2 when the switch is off versus when the switch is continuously on) is >800:1 and its transmission is 98.5%.

While the rest of our setup uses waveplates set in automated rotation stages (which are cheaper and less dangerous), using the Pockels cell as an optical shutter is necessary to achieve the speed we need to maintain a high-quality source. Specifically, the time delay between a herald detection and the optical response of the

Pockels cell is about 125 nanoseconds, while the response and rotation time of a waveplate is on the order of seconds. Even with the increased speed of the Pockels cell, we still need to initially delay our photons with a 25-m-long single-mode 460 HP optical fiber, which allows a time delay sufficient for activating the Pockels cell. While the downconversion photons are themselves highly polarized, optical fibers can cause polarization transformations over time; to remedy this, we place a clean-up PBS immediately before the Pockels cell and use fiber polarization paddles on the delay fiber of the signal photons to maximize transmission.

3.1.5 Performance of the source

A schematic of the full single-photon source is shown in Fig. 3.6. The basic operation goes in the follow- ing order: (1) the pump laser is activated, (2) photon pairs are created and herald photons (detected in coincidence with the pump photon) are counted by an FPGA, (3) the Pockels cell is triggered, transmitting photons through PBS2, and (4) when the desired herald count (usually one, in our case) is reached, the pump laser is shut down before another pulse is emitted (additional FPGA details in the following section).

The herald photons are counted in coincidence with the pump photodiode to ensure triggering on a genuine herald photon and not a background photon or dark count. Without the coincidence circuit, dark counts of

25 counts per second would dilute the number of our true herald photons with noise that does not correlate to a signal photon. Since we typically operate at a herald rate of 80 counts per second for our single-photon experiments, our maximum heralding efficiency would be reduced to 0.37 × 80/(80 + 25) = 0.28 if we did not condition our counts on the pump pulse.
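The dilution argument above is just η_eff = η_max × R_herald/(R_herald + R_dark). Checking with the rates quoted in the text:

```python
# Heralding efficiency diluted by uncorrelated dark counts when we do not
# gate the herald detector on the pump photodiode:
eta_max, herald_cps, dark_cps = 0.37, 80, 25

eta_eff = eta_max * herald_cps / (herald_cps + dark_cps)
print(round(eta_eff, 2))  # 0.28, as quoted in the text
```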

FPGA

To produce heralded photons in real time, we must be able to count heralding photons quickly and shut the pump laser down in time. With the pump operated at repetition rates around 50 kHz (i.e., 20 µs between pulses), we require faster processing than can typically be implemented with an office computer.

An FPGA is essentially a programmable circuit that is programmed to do a single task with direct access

¹The half-wave voltage is the voltage at which our Pockels cell behaves as a half-wave plate, rotating polarization from vertical to horizontal and vice versa.

Figure 3.6: Schematic of the single-photon source. The pump laser power is controlled by a variable attenuator (VA). It produces pairs of photons at 562 and 505 nm inside a nonlinear BBO crystal. The 562-nm herald photons are then sent to a single-photon avalanche photodiode (SPAD) and counted (in coincidence with a photodiode triggered on the pump pulse) by an FPGA. The 505-nm photons are delayed through a 25-m fiber; a fiber polarization controller (FPC) is used to match their polarization to a polarizing beamsplitter (PBS1). Herald detections trigger a Pockels cell (PC) switch which allows heralded 505-nm photons to be reflected by PBS2. Once the desired number of photons is detected, the pump laser is shut off. The 505-nm signal photons are finally directed into one of two output fibers using a computer-controlled motorized half-wave plate (HWP) and PBS.

to input and output signals. Because of its simplicity and direct connections, it can be extremely fast.

In particular, we use a Xilinx Spartan-3 FPGA Starter Kit board with a dedicated 50 MHz series clock

oscillator source. Our FPGA program (1) reads a target herald count from the source control software, (2)

activates the pump laser via a TTL gate input, (3) counts pulses from the herald single-photon detector, and

(4) deactivates the pump immediately upon reaching the target count. In a test where ten herald photons

were produced and counted over 20,000 times, there were no extra (or missing) herald photon detections.

Heralding efficiency

The total heralding efficiency at the eye for a source properly calibrated for single-photon generation was

found to be 0.31 ± 0.2, compared to 0.55 into single-mode fiber before any other optics. We can derive

this by examining transmission measurements of each of our optics and correcting for multi-photon counts

inherent in our higher herald rates. The 25-m optical delay fiber has a transmission of 88%, the Pockels cell

transmission is 98.5%, three polarizing beam splitters each have 99.8% efficiency in the reflected port, and

two custom half-wave plates for 505 nm each have a transmission of 99.5%. We collect our photons into a few-mode (9-µm core) fiber with a collection efficiency of 82%, which gives us a predicted final heralding efficiency of 0.55 × 0.88 × 0.985 × (0.998)³ × (0.995)² × 0.82 = 0.385, somewhat higher than the measured value of 0.31 ± 0.2. As mentioned in Section 3.1.3, our heralding efficiency scales with the number of pairs incident in a single pulse. This makes sense, as the greater the number of pairs, the higher the chance of multi-photon events occurring. Since our photon detectors are not photon-number resolving, these events look like single-photon events with a higher heralding efficiency than normal, with the probability of a click on the signal-photon detector scaling as P_c(η, k) = 1 − (1−η)^k, where η is the detector efficiency (0.45) and k is the number of photons incident on the detector. As the pulse energy increases, the number of photons incident on the detector goes up as well, and so we see higher heralding efficiencies for higher pump powers. At our highest possible pump power, with 0.275 pairs produced from each pulse, we see a maximum heralding efficiency of

0.49. At a lower pump power with 0.0016 pairs in each pulse, we see a maximum heralding efficiency of 0.31.
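The click probability P_c(η, k) = 1 − (1−η)^k makes this pair-rate dependence quantitative. The sketch below assumes, purely for illustration, that the number of pairs per pulse is Poisson-distributed (an assumption not established in the text) and compares the two operating points quoted above:

```python
import math

def click_prob(eta: float, k: int) -> float:
    """Probability that a detector of efficiency eta clicks when k photons arrive."""
    return 1 - (1 - eta) ** k

def apparent_efficiency(mean_pairs: float, eta: float, kmax: int = 20) -> float:
    """Average click probability given >= 1 pair was produced (i.e., a herald
    occurred), assuming Poisson-distributed pair numbers (illustrative only)."""
    probs = [math.exp(-mean_pairs) * mean_pairs**k / math.factorial(k)
             for k in range(kmax + 1)]
    norm = sum(probs[1:])  # condition on at least one pair being present
    return sum(p * click_prob(eta, k) for k, p in enumerate(probs[1:], 1)) / norm

eta = 0.45  # signal-detector efficiency quoted in the text

# At 0.0016 pairs/pulse nearly every herald has exactly one pair, so the
# apparent efficiency is essentially eta; at 0.275 pairs/pulse, multi-pair
# events inflate it noticeably.
low, high = apparent_efficiency(0.0016, eta), apparent_efficiency(0.275, eta)
print(round(low, 3), round(high, 3))
```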

This latter pairs-per-pulse rate gives us a low enough g(2) to properly run a test of single-photon vision. The heralding efficiency of 0.55 was likely measured at a rate of 0.1 pairs in each pulse, estimated from a linear fit relating pump power to heralding efficiency.

g(2)

In order to measure the g(2)(0) [57] of our source, we used our final half-wave plate to equally split the signal photons between the outputs of the polarizing beam splitter (PBS3 in Figure 3.6), and temporarily

31 Figure 3.7: Measured g(2)(0) as a function of the average number of herald detections per pulse.

installed SPADs at the ends of our observation station fibers to detect coincidences between the two ports.

Specifically, we measured the conditional g(2)(0), which is the g(2)(0) of the signal photons when a herald

photon is detected. All counts were made in coincidence with our pump pulse photodiode:

g(2)(0) = [c_0123 / (c_012 c_013)] s_01, (3.6)

where c indicates coincidence counts, s indicates singles counts, and the labels 0, 1, 2, and 3 correspond to the photodiode, the herald detector, and the two signal detectors, respectively. These measurements were taken over various time intervals ranging from 30 seconds to 90 minutes. We can interpret the ratio of coincidence counts as the probability of simultaneously receiving two photons in the signal arm divided by the probability of getting two photons simultaneously in the signal arm as a result of accidentals (all conditioned on a herald detection), multiplied by the probability of getting a singles count in the herald arm. To clarify, accidentals refer to coincidence counts that result not from correlated photons of a single signal-idler pair, but from simultaneous photons, e.g., from independent pairs, noise, or detector dark counts.
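Eq. (3.6) applied to raw counts looks like the following; the count values below are hypothetical, chosen only to illustrate the arithmetic and the order of magnitude of a good source:

```python
def conditional_g2(c_0123: int, c_012: int, c_013: int, s_01: int) -> float:
    """Eq. (3.6): heralded g2(0) from the fourfold coincidence rate, the two
    herald-signal threefold rates, and the herald singles."""
    return (c_0123 * s_01) / (c_012 * c_013)

# Hypothetical counts for illustration (not measured data): fourfold events
# are rare for a good heralded single-photon source.
g2 = conditional_g2(c_0123=3, c_012=40_000, c_013=38_000, s_01=1_500_000)
print(round(g2, 4))  # 0.003
```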

Figure 3.8: Heralded photons pass through the final polarizing beam splitter (PBS) of the Pockels cell switch from the left. A motorized half-wave plate (HWP) is randomly set by a LabVIEW program, causing the photons to be either transmitted (HWP at 0◦) or reflected (HWP at 45◦) by a second PBS. The light from the fiber in the reflected (transmitted) port of the PBS is transmitted to the top (bottom) side of the observation station, respectively. As they exit the collimation packages, the photons pass through a second lens that weakly focuses them onto 30-µm spots on the retina; the spots from the two channels are separated by ∼10 mm, and are well away from the “blind spot.”

The previous graduate student did a wonderful job of measuring g(2) at several different pump powers and at two repetition rates, as shown in Fig. 3.7. Though the best g(2) was found at an 80-kHz pump repetition rate, with a herald detection rate of ∼52 counts per second (cps), the degradation of the pump laser forced us to choose a slightly lower repetition rate that was capable of generating enough power. For our single-photon tests, we operate our laser at 50 kHz, with a herald detection rate of 80 cps, yielding a g(2) of 0.0031 ± 0.0015. This means that the probability of a multiple-photon event was <0.5% for each heralded photon,

i.e., for every 200 trials, only ∼1 multi-photon event is likely to occur. Additionally, if we assume 10%

transmission of the eye, a multi-photon event is now only likely to occur every 2000 trials. For context, in

our experiment the total number of trials is expected to be on the order of 4000, the number required to yield

>250 high-confidence trials. An expected number of <2 multi-photon trials thus makes up less than 1% of

our high-confidence trials. A significant result would require a 60% success probability in our high-confidence

trials, so multi-photon trials would play a negligible role towards a false claim of single-photon detection in

this particular experiment.
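To see what counts as a significant result, a one-sided binomial test in the normal approximation shows that 60% accuracy over ~250 trials sits about 3σ above chance. A minimal sketch:

```python
import math

def one_sided_z(successes: int, trials: int, p0: float = 0.5) -> float:
    """Normal-approximation z-score for `successes` out of `trials` against a
    null success probability p0 (chance performance in a 2AFC test)."""
    p_hat = successes / trials
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / trials)

# 60% accuracy on 250 high-confidence trials vs. 50% chance:
z = one_sided_z(150, 250)
print(round(z, 2))  # 3.16, roughly a one-sided p-value of 0.001
```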

3.2 Observation station

The ideal experimental design for this experiment is a two-alternative forced-choice (2AFC) experiment: we

present a photon randomly on the top or the bottom half of the visual field and ask, “which half?” (Fig.

3.8). If the observer can choose top or bottom with accuracy significantly above 50%, we can conclude they

were able to detect the photon in at least some trials. To account for any other indicators, we perform an

equal number of “blank” trials and compare the two accuracies in our analysis.

We do realize that the visual threshold for photon recognition may dynamically change based on the

circumstances (after all, an observer would not be able to recognize an additional single photon on a bright,

sunny day). However, Sakitt (1972) argued convincingly that observers may be able to adjust their own thresholds to suit an experimental task [42] – to this end, we make an effort in our work to “train” our subjects. By exposing the subject to a dim source many times before actual data acquisition, we can simultaneously observe the potential of our subjects to perform well, and set them up best for success as they become accustomed to the protocol. In this section, we will describe the observer viewing station built to deliver photons to targeted locations on the retina. We will also discuss the structure of our experimental trials and our estimate of the number of trials needed for our work.

Figure 3.9: The observer’s visual field (left) and a top-down schematic (right) of the observer viewing station. The fixation cross consists of a dim 700-nm LED placed 15 cm behind a crosshairs mask; black fabric (not shown) encloses the region between the LED and the mask. Light from the top and bottom optical fibers is collimated with 11-mm focal-length fiber collimation packages, then focused with 150-mm focal-length lenses (see text for details). Degrees of freedom for motorized mounts are available on both sides. The top and bottom beams are both aligned to the observer’s right eye when the observer is positioned in a chin rest, at distances of 3 cm (bottom) and 4.5 cm (top) from the fixation cross. The chin rest forehead bar is within 0.5 cm of the front of the observer’s eye.

3.2.1 Spatial Configuration

A schematic of the viewing station is shown in Fig. 3.9. It is located in a small room adjacent to the single- photon source room, and photons from the single-photon source are delivered via the two optical fibers from

Fig. 3.8. The collimation lenses on the ends of the optical fibers are held in motorized mounts, approximately

30 cm above the breadboard, and the observer herself performs fine alignment (with a keyboard input) at the beginning of a session. In this experiment, we chose a visual angle of 20◦ to the right, with ±7◦ above and below the fovea (see Figure 2.4 for a typical distribution of photoreceptor cells along the retina). We chose these angles to target the area of maximum rod density while avoiding the blind spot; however, there

34 is some individual variation in the distribution of rods and cones on the retina and the location of the blind

spot.

In the original version of this experiment, fibers were positioned to the left and right of the fixation cross,

about 5◦ above the fixation cross. However, issues arose due to the proximity of the blind spot on the right

side (see Figure 2.4). To remedy this, both fibers were moved to the left side of the fixation cross. We then

experimented with different visual angles in the horizontal and vertical plane, observing the success rate

of one individual performing short sessions with photon numbers of 185 and 370 photons at the cornea at

the differing angles. We found best performance at the current angles. We were then able to compare the

performances of three individuals who ran sessions on both the left-right and top-down configurations. On

average, performances remained largely the same.

The observer sits with her head stabilized by an adjustable chin rest; a fixation pattern, consisting of

a 700-nm LED positioned behind a mask, is used to help maintain alignment during the test. When the

observer is in the correct position, she sees a symmetric cross – movement from this position leads to a loss

of the pattern (as the LED, crosshairs mask, and observer’s fovea must be collinear). Since the rods are not

sensitive to far-red light, the alignment LED does not ruin the dark adaptation.

Retinal spot size

Light from the top and bottom fibers is collimated with adjustable fiber collimation packages, each

with an 11-mm focal length, then weakly focused onto the retina using 150-mm focal length lenses. The

transverse spatial mode of the photons determines the probability distribution of their position on the retina,

and it was desirable to focus this to a relatively small area (30 µm). To model this, the previous graduate student [56] used several existing eye models in optical modeling software (ZEMAX) and found these models approximately agree with Gaussian ray transfer calculations, another method of optical simulation that we will discuss further in Section 7.2.

Fixation cross and alignment

The fixation cross is a dim red crosshair equidistant between the up and down stimulus locations. Because of the insensitivity of rods to long-wavelength light, the choice of 700 nm allows observers to dark adapt while still having a visual reference to focus on for alignment purposes. We instruct the observer to focus on the cross during experimental trials, which fixes the position of the visual field and ensures the desired locations on the retina remain targeted. The cross will only appear symmetrical if the observer holds her head so that the mask is directly centered on the LED. Any head movements will disrupt this alignment; maintaining

a symmetric pattern ensures the observer will keep movements very minimal over the duration of the test.

Figure 3.10: The effect of pre-exposure to light on dark adaption time [58]. Though the adapting intensity is listed in units of photons, these units are actually now called “microtrolands,” where 1 troland is approximately the intensity of the sunlit sky (so 400,000 photons is about half this intensity). (Copyright by Rockefeller University Press, used with permission).

Focusing on the cross also defines the focal length of the observer’s lens, which is critical to producing a reliable spot size. The viewing station alignment procedure is included in Rebecca Holmes’ thesis [56], and copied in Appendix A for reference.

Dark adaption

The observer remains in total darkness for the duration of this session, which typically lasts 90 minutes, including 30 minutes of dark adaption. Dark adaption is the gradual regeneration of rhodopsin in the rods in darkness, as well as a shift from the cone system to the rod system. The process is nearly complete after 30 minutes2, though some continuing dark adaption occurs for at least 40-50 minutes. As shown in

Fig. 3.10, this process completes more quickly if the pre-adapting light level is low. Our observers begin all experimental sessions with approximately 10 minutes of alignment in dim light, followed by 30 minutes in total darkness before beginning experimental trials.

Session structure

In each trial, the observer hears a chime prompting her to press a key, which activates the single-photon source. After a stimulus is presented on the top or bottom, the observer uses keys to indicate on which side she thinks it appeared, as well as a confidence level from R1 to R3 (R1 = no confidence/guessing, R2 = some confidence, R3 = high confidence). The observer is in control of when the stimulus is delivered, so the

2While we have reported a longer dark adaption time for older subjects, the average age of our observers is ∼ 22 years old.

whole experiment is effectively self-paced. Once she submits her answer, she then hears a sound telling her whether her response was “correct” (she chose the half where a potential photon was sent) or “incorrect” (she chose the wrong half); note that in most cases (>90%), NO photon was delivered to the retina. A typical trial takes 10 seconds to complete, so a block of 300 trials can be completed in about 60 minutes, allowing 10 minutes for intermittent breaks. Note that because of source inefficiency and loss, this means that photons reach the rods at most ∼30 times per session.

Figure 3.11: Steps of one experimental trial, total timeline ∼10 seconds.

After a trial completes, the source resets itself and emits another alert sound to let the observer know she can send the stimulus whenever she is ready. We set the reset time to three seconds, regardless of whether the waveplate rotates or not, so as not to hint to the observer what side the stimulus will be on. The source is controlled by a LabVIEW program for these trials, with a flowchart of the control software shown in

Appendix B of Rebecca Holmes’ thesis [56]. We intersperse our single-photon trials with much brighter

(nW) trials which use a 505-nm LED as the stimulus; specifically the LED is used every 20th trial for most of our subjects3. This LED was also used for our “training” sessions, so as to minimize use of a dying laser

(see Sect. 4.2.1).

3.2.2 Previous work with the source

The previous graduate student used this source to conduct a study of temporal summation in the human visual system. Since the author of this thesis assisted in that experiment, some of the details are discussed here. A much deeper analysis is included in Rebecca Holmes’ thesis [56]. Temporal summation refers to the eye’s ability to integrate signals that arrive at different times. This summation can be characterized by an integration window, which is the length of time over which incoming visual signals are summed. We conducted two experiments to study this integration window and the integration efficiency of the human eye. In Experiment 1, we delivered a stream of photons to nine observers at a constant rate (30 photons

3Some returning subjects requested an increased frequency of bright trials, but we ran initial recruits using the LED every 20 trials to allow us to finish each session in just under two hours.

per 100 ms delivered to the cornea) while varying the duration from 120 to 1400 ms; trials were presented in a random order. The results are shown in Figure 3.12. We conducted two separate sessions which were identical in structure, but differed in the choice of photon numbers. Since the rate was kept the same, the stimulus duration was also affected. Heralding efficiency in these experiments was 0.37.

Figure 3.12: Group results of Experiment 1a from [43]. In this experiment, participants were all shown stimuli delivered at a constant rate. Data was taken from 9 participants; spline regression was used to estimate the average integration time, t = 650 ms. Experiment 1b studied the integration time for two individual participants who were invited back for repeat sessions.

In order to determine the temporal integration window, we plotted the mean accuracy of the participants against stimulus duration. Up to the point that the duration is equal to the integration window, we should see a linear increase in performance. For times longer than the integration window, additional photons should make no difference in accuracy, as they are not included in the summation process; this means performance should no longer improve and we should see a plateau in our data. A spline regression was used to determine that this turning point occurs at ∼650 ms, much longer than the ∼100 ms found in previous experiments

[59, 60], none of which were performed at such low light levels.

We also conducted an experiment to investigate the integration efficiency of the eye. Using photon numbers similar to those used in Experiment 1, we changed the duration of time over which we sent these stimuli

– either a time window of ∼100 ms or <30 ms. Since these durations are equal to or shorter than the windows of complete summation measured in previous studies [59, 60], we can assume full integration, and use our results to measure a relationship between accuracy and photon number under complete summation.

This relationship is effectively the slope of accuracy as a function of photon number. Between our 100-ms and our 30-ms constant-duration conditions, we saw no change in this slope. When compared with Experiment

1, where photons were delivered at a constant rate, we saw indications of a shallower slope; however, the results were not significant. This gives some indication that summation may not be as efficient over long durations as it is over short durations.

It should be noted that while previous groups have studied summation by observing the tradeoff between timing and intensity needed to achieve complete integration, we have approached our studies through the observation of partial summation leading up to complete summation. This novel approach illustrates the robustness of our experimental setup for investigating the visual system at near-threshold levels of light. We have also illustrated a surprising consequence of dark adaption: while previous studies have demonstrated integration times on the order of ∼100 ms, we have inferred an integration time roughly six times longer. This makes sense evolutionarily, as longer integration times could (partially) compensate for lower numbers of stimuli.
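The turning-point estimate described above can be illustrated with a small piecewise-linear (“broken-stick”) fit: accuracy is regressed on min(t, T), and the candidate breakpoint T with the smallest residual error is selected. This is only a sketch on synthetic, noiseless data; the actual analysis used a spline regression on the measured accuracies, and all names and numbers below (other than the 650-ms turning point) are invented for illustration.

```python
# Sketch of a "broken-stick" breakpoint fit for the integration window:
# accuracy rises linearly with stimulus duration up to a turning point T,
# then plateaus.  The data below are synthetic; the real analysis used a
# spline regression on measured accuracies.

def fit_segment(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = ybar - b * xbar
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def find_breakpoint(durations, accuracies, candidates):
    """Grid-search the turning point T by regressing accuracy on min(t, T)."""
    best_T, best_sse = None, float("inf")
    for T in candidates:
        xs = [min(t, T) for t in durations]
        _, _, sse = fit_segment(xs, accuracies)
        if sse < best_sse:
            best_T, best_sse = T, sse
    return best_T

# Synthetic data: linear rise up to a 650-ms window, flat afterwards.
durations = [120, 200, 300, 400, 500, 600, 700, 800, 1000, 1200, 1400]
accuracies = [0.50 + 0.0002 * min(t, 650) for t in durations]

T_hat = find_breakpoint(durations, accuracies, range(200, 1401, 50))
print(T_hat)  # recovers 650 on this noiseless synthetic data
```

On noisy real data the minimum is shallower, which is why the uncertainty on the ∼650 ms estimate matters.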

Chapter 4

A Single-Photon Test

4.1 How many trials are needed for a single photon experiment?

4.1.1 Initial power analysis

Our temporal integration experiment relied on the assumption that the transmission of the eye is 10%.

Accordingly, if we send 30 photons into the observation station, the mean photon number reaching the

retina is 3. It was these modified numbers that populated the y-axis in Figure 3.12. However, in order

to confidently conclude single-photon vision, we cannot risk more than one photon reaching the retina in a

particular trial. To avoid this, we send only single photons to the observation station (rather than the ten

photons needed to have an average of one photon reaching the retina). In order to have high statistical power

with only one photon, the number of trials needed depends on the overall efficiency of our system.

The overall transmission of the living eye is thought to reach a maximum value between 5-10% at around

15-20◦ away from the fovea. If we optimistically take the upper bound of this range, and if the heralding

efficiency of the single-photon source is η_sps = 0.31, then the maximum fraction of trials in which a photon is actually detected by a rod is η_exp = 0.031. If we assume the observer can choose the correct answer in every trial with a rod signal (requiring high focus and signal-to-noise ratio for the duration of the trials), then the

maximum expected accuracy is

p_signal × 100% + (1 − p_signal) × 50% = 51.55% (4.1)

To analyze our data, we use a two-tailed binomial test to generate a p-value, which is an exact test of

the statistical significance of deviations from an expected distribution of observations into two categories.

For our case, the null hypothesis is that an observer is likely to get the stimulus location right as often as

wrong. We define the statistical power of an analysis as the probability of rejecting this null hypothesis. To

distinguish this 51.55% accuracy from the null hypothesis, approximately 10,909 trials would be needed for

a statistical power of 0.90. Assuming 300 trials in a two-hour session, we could reach the desired sample

size in about 35 sessions (preferably multiple sessions with a group of the same observers). This could be a relatively attainable goal, e.g., given a month or two of data-taking. However, the realistically expected accuracy is far lower than the optimistic estimate above.

Figure 4.1: Required sample size for a power of 0.90 as a function of observer accuracy, using a two-tailed binomial test of the null-hypothesis accuracy p0 = 0.50 at a desired significance level of 0.05 (we consider a significance value > 0.05 as insignificant). Line is a visual aid only; sample sizes were computed with the MATLAB function sampsizepwr.
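For concreteness, the chain of numbers in this power analysis can be reproduced with a short normal-approximation sketch. The exact values in the text came from MATLAB’s sampsizepwr (an exact binomial computation), so the approximate results below differ slightly; everything else (the efficiencies and the accuracy formula) comes from the text.

```python
# Back-of-the-envelope version of the power analysis, using the normal
# approximation to the two-tailed binomial test.  The exact figures in the
# text came from MATLAB's sampsizepwr, so small differences are expected.
from statistics import NormalDist
from math import sqrt

def expected_accuracy(eta_eye, eta_sps):
    """Max accuracy if every trial with a rod signal is answered correctly."""
    p_signal = eta_eye * eta_sps
    return p_signal * 1.0 + (1 - p_signal) * 0.5

def sample_size(p1, p0=0.5, alpha=0.05, power=0.90):
    """Trials needed to distinguish accuracy p1 from chance p0 (two-tailed)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return (num / abs(p1 - p0)) ** 2

acc = expected_accuracy(eta_eye=0.10, eta_sps=0.31)
print(round(acc, 4))             # 0.5155, matching Eq. 4.1
print(round(sample_size(acc)))   # ~10,930, close to the exact 10,909

# With a 5% eye efficiency the requirement balloons to roughly 4-5 x 10^4,
# in line with the ~50,000 quoted from the exact computation:
print(round(sample_size(expected_accuracy(0.05, 0.31))))
```

The quadratic dependence on 1/(p1 − p0) in the last line of sample_size is what drives the steep growth of the curve in Fig. 4.1.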

As shown in Fig. 4.1, the required number of trials rises steeply as the expected accuracy decreases.

If the efficiency of the eye is actually 5% instead of 10%, then the number of trials grows to ∼50,000. And if

observers cannot keep a perfect attention span, and are only accurate in 80% of trials with a rod signal1, then

the sample size surpasses 70,000. What’s more, every experimental group needs an equally sized control

group where viewers receive “blank” trials in which no photon is delivered. This way, we can test for a

significant difference between the measured accuracy and the control accuracy. Including these trials, the

total number of trials is then increased by a factor of four from the values shown in Fig. 4.1 in order to

achieve the same p-value.

The numbers we have just listed imply >6 months’ worth of data-taking, probably with many different

subjects who are paid hourly for all the time they are spending in a dark room performing a somewhat

menial task. However, a 2016 paper by Tinsley, Molodtsov et al. [38] demonstrated the use of “high-

confidence” trials to reduce the number of trials needed to conclude statistical significance, which we explore

in the following sections.

1It is very common for subjects to experience a temporary “clouding of vision” while dark adapting, which tends to result in a number of missed trials until visual acuity returns.

Figure 4.2: Schematic of the experimental apparatus used by Tinsley et al. [38]. The source is activated during either Interval 1 or Interval 2.

4.1.2 Tinsley experiment

A 2016 study by Tinsley et al. [38] used a single-photon source and a 2AFC design similar to our experiment, and claimed to show that observers performed above chance and were therefore able to see single photons.

This section discusses some experimental differences and similarities between our source and their source, as well as our initially proposed single-photon experiment versus their completed experiment.

The experimental setup used an SPDC pair source very similar to ours, with identical pump, signal, and idler wavelengths. However, instead of operating a switch triggered by detection of the herald photon, the single-photon trials are selected through post-selection, using an EMCCD camera to detect the herald photon. One benefit of the EMCCD approach is that the group was able to lower their probability of multiple photons to ∼0.02% (ours is closer to 0.5%), due to the fact that multiple photons are likely to fall on different pixels and be separately detected. Because the herald photons are detected on a camera rather than a trigger detector, though, the overall procedure is somewhat different. Instead of counting one herald photon and shutting down the pump as we do, they activate the pump for 1 ms and post-select on the trials in which exactly one pixel of the EMCCD detected a photon. Around 8% of the trials end up being post-selected for single-photon analysis in the study.

The study also used a 2AFC task, but this was set up in the temporal domain (rather than spatial).

Instead of asking observers to choose left/right, the observer chooses whether the stimulus was observed

in coincidence with the 1st or 2nd tone, spaced about 800 ms apart. The time interval is well outside the integration window we determined from our previous study. The structure of self-paced trials and audio feedback was very similar to our study as well. In this case, the stimulus was presented at 23◦ below the

fixation cross.

As mentioned before, this study homes in on high-confidence trials to reduce the overall number of trials needed. In total, the authors used three observers to conduct 30,767 trials, only 2,420 of which passed post-selection. Of these trials, 242 trials received the highest confidence rating (“R3”), with an overall accuracy of

0.60 ± 0.03, which is significantly different from 0.50. By counting only the high-confidence trials in the experiment, they could filter out the missed trials due to low heralding (or eye) efficiency and overcome the need for extra statistical power.

However, there are strong arguments questioning the validity of this approach. One red flag is that the single-photon trials and blank trials had roughly equal percentages of high-confidence trials (9.8%/10%). However, due to the method of trial preparation, the experiment was overwhelmingly dominated by blank trials, with

2,770 high-confidence blank trials being compared to 242 high-confidence single-photon trials. While the results are still statistically significant, it does not seem that the observers were more confident about single-photon trials.

They simply happened to be more accurate.

Another contention is their reliance on a one-tailed proportion test vs. a two-tailed proportion test

(which we used in our power analysis shown in Fig. 4.1). To use a one-tailed test, the authors must assume that observers would never systematically choose against the placement of a stimulus; i.e., choosing the bottom every time a stimulus occurs on the top side of the station, and vice-versa. Such behavior would result in an accuracy significantly below 50%, but would still imply conscious recognition of the photon, just with an incorrect response2. If Tinsley et al. had used a two-tailed proportion test, they would have seen a p-value of 0.11, and the near-significance vanishes. In view of these arguments against the analysis, it is worth performing our own investigation of this method, and so we are currently replicating the experiment on our own apparatus.
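The one- vs. two-tailed distinction can be made concrete with an exact binomial tail computation. The paper does not report the exact success count, so the 1,249 below (≈ 0.516 × 2,420) is an assumption for illustration; with it, the one-tailed tail probability lands near the quoted ∼0.05 while the two-tailed value is roughly twice that and no longer significant.

```python
# Exact binomial tails for n = 2,420 post-selected trials at ~51.6% accuracy.
# The success count k = 1249 is an assumed value (~0.516 * 2420); the pmf is
# computed with log-factorials to avoid huge integers.
from math import lgamma, log, exp

def binom_pmf(k, n, p=0.5):
    """Binomial probability mass function via log-gamma."""
    logc = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(logc + k * log(p) + (n - k) * log(1 - p))

def one_tailed(k, n):
    """P(X >= k) under the null hypothesis p = 0.5."""
    return sum(binom_pmf(j, n) for j in range(k, n + 1))

n, k = 2420, 1249                 # assumed success count
p_one = one_tailed(k, n)
p_two = 2 * p_one                 # valid because the p = 0.5 null is symmetric
print(round(p_one, 3), round(p_two, 3))
```

The doubling in the last step is exactly the difference at issue: the one-tailed value sits near the 0.05 threshold, while the two-tailed value does not.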

4.2 Replication

To replicate Tinsley’s experiment, we needed to decide between performing the same number of total single-photon trials as the number they post-selected on (2,420) or performing single-photon trials until we hit the same number of high-confidence trials (242). Since a more significant p-value (0.0010) was achieved with the high-confidence trials than with the overall trials (0.0545), we opted to condition the end of our experiment

2While this may sound unlikely, the author’s own experience is that she has often registered higher confidence with accuracy down in the 20% range – usually this comes from her recognition of the flashing off of a stimulus and perceiving it as the absence of a stimulus, which leads her to choose the opposite fiber.

on hitting the correct number of high-confidence trials. We predicted a total number of trials needed to

match the Tinsley experiment, and structured our data-taking so as to accommodate four different subjects.

Each subject performed five different sessions of single-photon trials. Sessions were similar in structure to

the sessions in our integration experiment (see Section 3.2.2), but we replaced varying streams of photons

with 150-160 single-photon trials and 150-160 blank trials3. The first session was used only as practice to become comfortable with the ultra-low stimuli. If four subjects did not produce 242 high-confidence trials (the number needed for replication of the Tinsley experiment), then we would simply continue the data-taking with more subjects.

4.2.1 Training

An important component in replicating the Tinsley paper was the act of training subjects on higher stimulus levels before data-taking, in order to increase performance at the single-photon level. Their training sessions were conducted with a Poissonian light source, with photon numbers between 1 and 15 photons at the cornea. The

Poissonian light source used was a multi-line continuous-wave argon-ion laser (LASOS). A filter (Semrock

TBP01-501/15) was used in combination with an acousto-optical modulator (AOM; AA Opto-Electronic) to select wavelengths between 495 and 505 nm. Subjects performed 8 sessions, one session a day, to reach their optimal performance level.

Because of the degradation of our laser, we performed our training sessions using an M505F3 LED (the same 505-nm LED that was used for bright trials), pulsed with a 30-ms pulse length. We varied the voltage to produce different photon numbers in each pulse – photon-number measurements were taken with a UQD-Logic-16 timetagger accumulating counts over a 60-ms correlation window (19 × 10^6 time bins of

0.78125 ns width). Voltage could only be modulated in units of 0.1 volts, which ultimately limited our

flexibility; the LED power was reduced with an attenuator (transmission of 2.9 × 10^−4). At each voltage, measurements were taken for ten different pulses. Mean and standard deviation were calculated from the data after background was subtracted. Table 4.1 lists the corresponding photon numbers, after dividing out the detector efficiency (0.70).

Our training sessions were run at 2.6 volts. The reader has perhaps noted that this voltage corresponds to a photon number slightly higher than the maximum 15 photons used in the Tinsley experiment. This choice was made as a matter of convenience; the same LED was used for alignment purposes and for bright trials. In order to match the pulse duration and brightness deemed necessary for the comfort of most participants, we needed to operate at a higher number of photons for our training trials. This should not

3Varied numbers were used since 2,420 did not split evenly into 16 sessions. While we are not conditioning on an identical number of trials per session, it was important to set a common overall trial number for each of our subjects.

Voltage    Average Photon Counts
3.5 V      310.6 ± 20.43
3.2 V      189.3 ± 9.5
2.8 V      122.1 ± 12.4
2.7 V      60.1 ± 7.4
2.6 V      18.4 ± 3.9

Table 4.1: Average photon counts at the cornea for a 30-ms pulse length for our LED at different voltages. Counts have been divided by the detector efficiency.

strongly affect our results; with a 10% eye efficiency, the difference between 18.4 and 15 photons is 0.34 mean photons at the retina, and with a 3% eye efficiency, the difference is ∼0.1. More importantly, the average mean photon number at the retina could be anywhere between 0.5 and 1.8, depending on the eye efficiency.

By biasing towards a higher photon number, we ensure a higher number of photon trials where a stimulus successfully transmits to the retina, and observers can have a less frustrating training experience.
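The count-to-photon conversion described above amounts to a background subtraction followed by dividing out the detector efficiency. A minimal sketch follows; the raw counts and background value are invented for illustration, while the 0.70 detector efficiency, the ten-pulse averaging, and the 3-10% eye-efficiency bracket come from the text.

```python
# Sketch of the conversion from raw timetagger counts to photons at the
# cornea.  Raw counts and background are invented; only the 0.70 detector
# efficiency and the averaging procedure come from the text.
from statistics import mean, stdev

def photons_at_cornea(raw_counts, background, det_eff=0.70):
    """Background-subtract each pulse, then divide out detector efficiency."""
    corrected = [(c - background) / det_eff for c in raw_counts]
    return mean(corrected), stdev(corrected)

# Hypothetical ten-pulse measurement at one voltage setting:
raw = [14, 12, 15, 13, 12, 14, 13, 15, 12, 14]
mu, sigma = photons_at_cornea(raw, background=0.5)
print(round(mu, 1))  # ~18.4 photons at the cornea for this invented data

# Bracketing the mean photon number at the retina for an 18.4-photon pulse,
# using the 3-10% range of eye efficiencies quoted in the text:
low, high = 18.4 * 0.03, 18.4 * 0.10   # ~0.55 to ~1.84 photons
```

The wide low-high bracket in the last line is exactly the eye-efficiency uncertainty discussed above.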

We recruited 30 volunteers who underwent pre-training screening sessions of evenly mixed trials with the

LED set at 2.7, 2.8, 3.0, and 3.2 volts (with bright trials of 3.5 volts). Depending on their accuracy level, we recruited them to the next stage of the experiment where they performed 6-8 training sessions. Specifically, volunteers who passed the selection criteria to move on to the training sessions all scored above 55% on their

2.7-volt (∼ 60-photon) trials and above 70% on their 3.2-volt (∼189-photon) trials. Seven volunteers were chosen, and one of the graduate students conducting the experiment also elected to undergo the training process.

While no selection criteria were used for the Tinsley experiment (three dedicated volunteers were chosen at the beginning of the experiment and trained over a succession of days), we chose to only take single-photon data with subjects who managed to attain a cumulative average of above 50% in high-confidence trials for >6 training sessions. Out of all volunteers who underwent training, only one subject passed this standard. As perhaps the reader has noted, there is an extremely low conversion efficiency from volunteer to single-photon test subject. While 23% of volunteers performed well enough on the pre-trial to complete training, only one volunteer (3%) made it all the way to the final round. If we only needed 2,420 trials to observe 242 high-confidence (R3) trials, then this would lead to an expected recruitment need of 120 volunteers to yield the four test subjects we needed. However, the percentage of high-confidence trials taken in the single-photon data sessions was only 6.9%, much lower than the expected 10%, meaning that many more volunteers must be recruited to finish the test. Unfortunately, due to the 2020 outbreak of COVID-19, additional subjects could not be recruited for the remainder of the year. In addition to our volunteer, one graduate student also underwent training and became the first subject for single-photon data acquisition4.

4It is, perhaps, easier for a dedicated graduate student to become a subject in this experiment. While she only underwent

Figure 4.3: The above graphs demonstrate the average probability of a correct response for high-confidence trials across subjects for our training sessions. The left graph represents the accuracy of Tinsley’s subjects (reprinted from supplementary materials), while the right graphs represent the accuracy of our subjects. Top right displays aggregate accuracy for subjects who were invited for single-photon data acquisition. Bottom right displays aggregate accuracy for subjects who performed below criteria. Error bars denote the standard error of the measurement.

We have been able to perform 1212 (1222) single-photon (blank) trials with 84 (82) high-confidence trials

(the discrepancy between single-photon and blank trials resulted from a software error in one of the training sessions). We hope to make this up in our next training sessions.

4.2.2 Experiment

We take the time here to compare our preliminary training data with that of Tinsley’s test subjects. First, we demonstrate improvement of accuracy in high-confidence trials, similar to that shown in Tinsley’s work

(see Fig. 4.3). The larger error bars on our data can be attributed to an overall smaller number of subjects.

The first two sessions from our volunteer are excluded from the data, as there were only 0 and 1 high-confidence trials. All official training sessions from our experienced graduate student were included, as she had already been calibrated to an appropriate level of high-confidence trials from previous 15-photon experimental sessions.

Next, we compare our single-photon data to that of Tinsley (see Fig. 4.4). We found that 1,212 single- photon events led to an average success probability of 0.514 ± 0.014 (p = 0.84). This accuracy closely

five official training sessions, she had hours of previous experience observing the stimulus and adapting her visual response “strategy”. Similarly to the graduate students in Tinsley’s work, she could take her data over a condensed timeframe, as opposed to our volunteer who could only make time to take data twice per week.

matches that of Tinsley’s experiment, where 2,420 single-photon events led to a success probability of 0.516 ± 0.010 (p = 0.11). The different p-values result from differing analyses. We compare our single-photon data to 1,222 blank trials with a success probability of 0.510 ± 0.014. Tinsley’s analysis instead compared their data to the hypothetical value of 0.5, effectively ignoring the possibility that experimental artifacts could lead to a false bias in an observer’s accuracy.

Figure 4.4: The probability of providing the correct response in 2AFC trials for all post-selected single-photon events and the high-confidence R3 events. Our work is represented in blue, with n1 = 1,212 and n2 = 84, with the red bar representing accuracy in our high-confidence blank trials. The green represents Tinsley’s work, with n1 = 2,420 combined trials and n2 = 242 for all high-confidence trials.

Neither group was able to achieve a significantly low p-value (<0.05) on their overall trial accuracy.

If we instead examine only the 84 high-confidence (R3) trials we saw, we find a success rate of 0.583 ± 0.05 (p = 0.544). In Tinsley’s experiment, 242 trials led to a success rate of 0.60 ± 0.03 (p = 0.0010). In this case, while our accuracies are within error bars of each other, our blank-trial distributions once again differ.

We compare our 84 single-photon high-confidence trials with 82 high-confidence blank trials, which have a success rate of 0.537 ± 0.055. Tinsley’s blank-trial distribution was 10 times the size, with 2,770 trials leading to a success rate of 0.507 ± 0.009.

With the preliminary data we have, we performed a power analysis on our high-confidence trials. To confidently detect a difference between 0.58 (accuracy in R3 single-photon trials) and 0.54 (accuracy in R3 blank trials), we would need to perform 3,600 high-confidence trials in total (1,800 blank trials and 1,800 single-photon trials5). We also characterized a grouping of our R2 and R3 confidence trials, where we found that 846 “mid”-confidence trials led to a success rate of 0.52 ± 0.02 (p = 0.391). All in all, our current data is inconclusive, and we will continue to run our experiment at an appropriate time.

5Of course, we cannot directly control the number of each. We can simply stop our experiment on the condition of achieving 1,800 high-confidence single-photon trials and hope the high-confidence blank trials are of a comparable number.
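The roughly 1,800-trials-per-arm requirement can be sanity-checked with the standard two-proportion sample-size formula. The significance level and power underlying the figure in the text are not stated, so the one-sided α = 0.05 and 80% power used below are assumptions that merely land in the same ballpark.

```python
# Hedged check on the ~1,800-trials-per-arm figure for separating 0.58 from
# 0.54.  Standard two-proportion sample-size formula; the alpha and power
# below are assumed, not taken from the text.
from statistics import NormalDist
from math import sqrt

def n_per_group(p1, p2, alpha=0.05, power=0.80, one_sided=True):
    """Sample size per arm to distinguish proportions p1 and p2."""
    z_a = NormalDist().inv_cdf(1 - (alpha if one_sided else alpha / 2))
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return (num / abs(p1 - p2)) ** 2

print(round(n_per_group(0.58, 0.54)))  # ~1,900 per arm under these choices
```

The small 0.04 separation between the two proportions, squared in the denominator, is what pushes the requirement into the thousands.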

4.3 Preliminary conclusions

While it is difficult to draw conclusions when we are in the middle of conducting our experiment, it is worth

commenting on some of the similarities and differences between our preliminary numbers and Tinsley’s

results. First of all, all success probabilities (single-photon and blank) are within one standard error of

measurement from each other. With higher statistical power, we are optimistic we could expect similar

results to Tinsley’s.

However, it is still doubtful whether we should expect similar results to Tinsley’s. One of the most important differences in our experiment is our source, which achieves a heralding efficiency of 0.31 (Tinsley’s group reports their heralding efficiency at ∼0.20). Assuming comparable eye efficiencies across the two subject pools, our experiment should have seen a ∼50% increase in the number of trials that registered with the observer, and therefore an increase in overall subject accuracy. Section 4.2.2 demonstrates this is not the case.

Interestingly, we can estimate the maximum accuracy observers should have been able to achieve in

Tinsley’s high-confidence trials, using data about observer performance in the blank trials, the known heralding efficiency of the source, and the claimed efficiency of the human eye from Tinsley’s paper6

(0.30). Even with a high estimated human efficiency, we still calculate the maximum possible accuracy in

R3 post-selected trials to be 0.526 ± 0.004, which is significantly below the measured value of 0.60 ± 0.03

[56]. Such a discrepancy suggests that statistical fluctuations played a large role in the outcome of the

experiment.

In conclusion, we hope to finish our replication of Tinsley’s work over the course of the next year. We

would hope to reinforce the claim of successful single-photon detection by observing an average success

probability > 60%. However, even with a null outcome, a fully finished replication would still provide

insight on how humans may best respond to a visual stimulus task, and how temporal differences between

stimuli may or may not be easier to detect than spatial differences. If a null result is achieved, we will

consider extending the experiment to a much higher overall trial number.

6A human eye efficiency of 30% actually does match the quantum efficiency of the rods (25-33% from various measurements), but assuming this efficiency for the eye as a whole requires a 90% transmission before the retina, which seems implausible (see Chapter 2).

Chapter 5

Introduction to Quantum Memory

There has recently been a great deal of effort in the quantum information science community toward the development of an optical quantum memory. With recent developments in quantum communication and quantum computation, quantum memories will be critical for applications both in industry and academia.

However, there are many competing methodologies, e.g., atomic ensembles, solid-state memories, optical delay lines, etc., each with its own advantages and disadvantages. This section will briefly discuss these different methodologies, concluding with a comparison of our optical delay line memory with the most popular competing memories.

In Chapter 1, we discussed some current work to develop quantum information processing (QIP). To develop a QIP system that is analogous to one for classical information, we need to produce quantum versions of all parts of a classical information system. If we have classical algorithms, we must make quantum algorithms. If we have classical communication, we must have quantum communication. And for storing qubits for all of the potential use cases, we must have some form of a quantum memory. As discussed in Section 1.2.1, photonic qubits have the advantage of minimal interaction with the environment and therefore smaller susceptibility to decoherence. However, the difficulty with photonic qubits is that photons are never stationary. While their constant speed of 3 × 10^8 m/s makes them excellent for transferring quantum information from one location to another, operations on two photons at once are made difficult if the photons’ arrival times are not correlated. This is where a quantum memory comes into play. An optical quantum memory can store the quantum state of one or more photons for a designated period of time. When an operation is ready to use the quantum state, the memory should be able to release the photon in a way that interfaces well with the operator (whether that “operator” be a measurement, quantum repeater, or quantum computer). As we discussed in Section 1.2.4, quantum repeaters, which heavily rely on quantum memories, are an important part of quantum networks.

In addition to use in quantum communication applications, photons can be used in quantum computers as well, with various protocols outlining what such a scheme might look like [61, 62]. In this case, an optical quantum memory can be used for synchronizing various processes within the quantum computer. Quantum

memories can also be useful building blocks for developing deterministic single-photon sources [63], and for synchronizing multiphoton quantum communication protocols [64].

It should be noted that the concept of storing and transmitting light is not new. What makes a quantum memory trickier than a classical memory is the issue of measurement. To truly store a quantum state, the photon must be stored without measurement until the final readout. This, combined with the effects of the no-cloning theorem, means there is little way to check and then correct for errors if a disturbance (whether it be a person or the atmosphere) changes the state. The only information about the qubit’s state is contained in the qubit’s wavefunction itself, which is most often destroyed upon measurement.

5.1 Popular forms of quantum memory

5.1.1 Atomic ensembles

A popular method of photon storage in atomic ensembles is the use of electromagnetically induced transparency (EIT) to map a quantum state of light onto a group of atoms, which hold it for the necessary storage time. When light interacts with the atoms, the incident photon is effectively slowed, then transferred into a collective state of all the atoms. Quantum "dark states" can be engineered to increase the storage time by "forbidding" a transition back down to a ground level until a final read-out is performed. To understand this, it is worth discussing the phenomenon of EIT itself, as the first atomic memories were EIT-based, and many of the popular memories that have been developed since then use a similar three-energy-level scheme.

Electromagnetically induced transparency was first introduced by Harris et al. at Stanford in 1990 [65].

Their work demonstrated that a strong laser beam of a tuned frequency can cause interference between excitation paths that forbids certain transitions between states. By mapping a photon to a higher excited atomic state, the photon can be stored; that is, the state will not decay to the ground state because the transition is not allowed. As shown in Figure 5.1, the EIT memory is based upon a Λ-level system with a ground state |g⟩, an excited state |e⟩, and a metastable storage state |s⟩. Dipole transitions are allowed from |g⟩ and |s⟩ to |e⟩, but the direct transition between |s⟩ and |g⟩ is typically forbidden, which means that a state stored in |s⟩ can be relatively long-lived. The light to be stored is typically referred to as the signal pulse, while the strong laser beam mediating transitions is referred to as the control pulse. By dynamically adjusting the control-pulse intensity, the signal can move an atom from state |g⟩ to the storage state |s⟩. Once the light is stored, the control can be turned off.

To read out from such a memory, the control pulse is re-applied, causing a photon to be re-emitted.

Figure 5.1: In a typical EIT setup, the atoms begin in state |g⟩. The signal (probe) beam excites atoms toward |e⟩, while the control beam opens the transition from |e⟩ to |s⟩, so the atoms come to rest in the storage state |s⟩. They remain in this state until a final pulse of the control beam drives an excitation back up to |e⟩ and a final decay to |g⟩. This decay releases a photon with the same wavelength and characteristics as the original signal, so the system behaves as a quantum memory.

As long as the originally excited atoms have not moved from their original locations when the photon signal was absorbed, they will act to coherently emit the photon in the original direction. This method makes the memory on-demand, subject to the limitations of the system. These limitations can include wavelength, bandwidth, and mode capacity. The biggest disadvantage of such a memory is the spurious noise photons that can arise from the intense control fields. Since these photons cannot always be filtered out of a measurement, they can ultimately degrade the fidelity of the output. In addition, the control field restricts the storage of a signal photon to a single mode, so that multi-mode inputs (containing information such as polarization or orbital angular momentum) must be stored through the use of spatial multiplexing [66], i.e., different ensembles (or parts of ensembles) for different states. Even so, EIT memories can have a long storage time and are relatively easy and inexpensive to implement; these characteristics make them a popular choice for quantum information systems [67]. EIT memories have been demonstrated in a wide variety of materials, including nitrogen-vacancy centers in diamond. In 2018, a highly efficient EIT-based quantum memory in cold rubidium atoms demonstrated 92% storage efficiency for 207 ns, the highest efficiency recorded for such a system [68].

Another method for storing photonic states in atomic ensembles is the off-resonant cascaded absorption (ORCA) protocol, which relies on the same idea of exciting an atom from a ground state to a storage state [69]. However, instead of a Λ-level system, ORCA uses a ladder-type three-level system, in which |s⟩ has a higher energy than |e⟩. By detuning the ORCA transition away from the intermediate state, this memory allows for a higher bandwidth than the tight atomic transitions of the EIT protocol permit. However, instability in the system can lead to a much shorter decay time of the stored qubit; many atomic (and solid-state) memories display an inverse tradeoff between memory bandwidth and memory lifetime.

5.1.2 Rare-earth ions

Crystals doped with rare-earth ions are excellent candidates for photon storage. The crystal memory system is somewhat analogous to the vapor memories of the previous section, with light being stored in the electronic and nuclear spin states of the crystal. The optical 4f-4f transitions in rare-earth ions are known for their long optical coherence times (100 µs - 1 ms), albeit at cryogenic temperatures [70]. Storage protocols are typically based on the absorption and re-emission of photons using photon-echo effects, which is very different from the energy-level schemes of the previous sections. Photon echoes rely on a material's absorption of resonant light, followed by a dephasing of the absorbing atoms. Re-emission of the light (known as a photon echo) is triggered by re-phasing of the atoms [71]. One of the most popular storage methods is the atomic frequency comb (AFC), which relies on creating a spectrally periodic structure of narrow absorption peaks; these naturally dephase and rephase with period T_int = √(2Nc)/(πσ), where Nc is the number of peaks and σ is the Gaussian width of each peak. The photon is absorbed and stored as a single collective excitation delocalized over all the resonant atoms in the ensemble. To allow for on-demand readout, the collective excitation in |e⟩ can be transferred to a ground-state spin level |s⟩, which does not have a comb structure, so the evolution of phases is locked during storage in the spin wave. When converted back into |e⟩, release of the photon occurs upon rephasing. This was utilized most recently by Holzäpfel et al. [72], who achieved a storage efficiency of 2.5% for 75 ns.

5.1.3 Optical delay line

Our memory makes use of an optical delay line for storage. Unlike the above memories, which rely on the controlled absorption and re-emission of a photonic qubit, we do not convert or slow down our photon at all. We simply leave it in its original form and contain it in an optical loop that releases the photon at programmable intervals. For a loop of path length p, we can store for a duration Np/c, where c is the speed of light and N is an integer representing the total number of elementary intervals. We determine N using a programmable Pockels cell and PBS switch, very similar to the one used in our vision work (discussed in Chapter 3). Using this method, our photon is in principle unchanged after storage, the largest source of error being loss from our optical elements¹.

¹A potential alternative to a free-space delay line would be a fiber delay line, where light is coupled into a fiber loop of a specific length and exit is controlled through an acousto-optic modulator. However, polarization-dependent dispersion in fiber would limit the fidelity of a long-lived polarization-encoded state contained in such a way; even time-bin encoding would require dispersion-compensation methods so that the output pulse shape would be independent of the number of cycles.

Figure 5.2: Simplified schematic of an optical memory. By using multiple loops, each with one Pockels cell "switch", we are able to achieve a long storage time (microseconds) without losing resolution (tens of nanoseconds).

A simplified setup of the optical memory is shown in Figure 5.2. Here, we demonstrate the use of multiple optical loops rather than just one. Using just one optical loop limits either our storage time or our resolution. For example, if our optical memory consists of a single loop with an optical loss of 1% per cycle (the loss of the best Pockels cell switches), we can store for about 70 cycles before our total transmission decays to < 50%. Say we set our loop size at 10 ns; this translates to a maximum storage of 700 ns. If we want to store for 10 µs, this small loop size is infeasible: (0.99)^(10,000 ns / 10 ns) = (0.99)^1000 ≈ 4 × 10⁻⁵. If we make our loop size 100× larger, we can store for 70 µs (assuming, rather incorrectly, that all the loss is in the optical switch), but we no longer have the capability of storing for any time that is not a multiple of 1 µs. That means we might have trouble synchronizing with other devices. By using multiple loops, we can overcome this problem by storing m, n, and o times in loops 1, 2, and 3, where m, n, and o are integers ranging from 0 to 9 and loops 1, 2, and 3 are, e.g., 10 ns, 100 ns, and 1 µs long, respectively. For example, if we want to store for 7.52 µs, we can store for 7 cycles in our longest loop, 5 cycles in our middle loop, and 2 cycles in our shortest loop. Figure 5.3 shows the "scalloped" transmission curve that results from this method.

Unfortunately, aligning a loop large enough to contain a photon for longer than 1 µs can be difficult, which means there is little hope of storing a photon for more than a millisecond. However, since many quantum processes can be performed in under a microsecond [73], we find this type of memory still has quite a range of applications and, at least by some metrics, may be the most competitive quantum memory of its time.
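The cycle-count decomposition described above can be sketched in a few lines of Python (a toy model; the 10-ns/100-ns/1-µs loop lengths and 1% per-cycle loss are the illustrative numbers from the example, not the memory's measured parameters):

```python
# Toy model of a three-loop "digital" delay-line memory. Loop lengths
# and the 1% per-cycle switch loss are illustrative numbers only.

LOOPS_NS = [10, 100, 1000]   # round-trip times of loops 1, 2, 3 (ns)
LOSS_PER_CYCLE = 0.01        # fractional loss per cycle in a loop

def cycles_for(delay_ns):
    """Decompose a requested delay into per-loop cycle counts (0-9 each)."""
    counts = []
    remaining = delay_ns
    for loop_ns in reversed(LOOPS_NS):       # largest loop first
        n, remaining = divmod(remaining, loop_ns)
        if n > 9:
            raise ValueError("delay exceeds the memory's range")
        counts.append(n)
    if remaining:
        raise ValueError("delay must be a multiple of the smallest loop")
    return list(reversed(counts))            # cycles in loops 1, 2, 3

def transmission(counts):
    """Total transmission after the given number of cycles per loop."""
    return (1 - LOSS_PER_CYCLE) ** sum(counts)

# 7.52 us = 2 x 10 ns + 5 x 100 ns + 7 x 1 us
print(cycles_for(7520))                      # [2, 5, 7]
print(round(transmission(cycles_for(7520)), 3))
```

Requesting 7.52 µs thus costs 14 total cycles, for a predicted transmission of 0.99¹⁴ ≈ 0.87 in this toy model.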

Figure 5.3: Simulated transmission of a high-performing digital quantum memory. The blue line represents a stacked memory of three loops of length 10 ns, 100 ns, and 1 µs, respectively, while the yellow line represents the use of a single loop. Loss is modeled as 1% from the optical switch and 0.00001% from lossy coating on all other optical elements (assuming 1 mirror every meter).

5.1.4 Hybrid memory

Finally, we conclude this overview by discussing efforts toward the development of hybrid memories. As we move closer to developing large-scale quantum networks for commercial use, we need to allow for the connection and interplay of network nodes that use different memory methods. Most recently, Walmsley et al. developed a hybrid quantum memory at room temperature [74]. One part of the memory was an atomic-ensemble memory using a far off-resonance Duan-Lukin-Cirac-Zoller (FORD) protocol. This protocol is very similar to the off-resonant (Raman-type) memory discussed in Section 5.1.1. However, instead of using a control pulse to open up a transition between states, the read-in pulse itself is the quantum state to be stored. By storing the read-in pulse itself, the group was able to overcome the noise that typically comes from scattering off the control pulse when trying to store a signal pulse. However, this means the memory has no capability to "map in" a quantum state it has received from a sender; it is purely designed to create a quantum state and store it (in this case, for as long as 1400 ns).

The second part of the memory is a classical optical delay line memory as discussed in Section 5.1.3.

This delay line consists of a single loop that is 52-ns long, with a storage efficiency of 90% (they use three

Pockels cells that each have a transmission of 97%). Together, these memories demonstrated well-preserved quantum correlations for lifetimes as long as ∼1600 ns. However, the maximum efficiency of the memory was measured at 0.07 for 30 ns, due to imperfect retrieval efficiency using the FORD protocol.

5.2 Applications

As discussed in Section 1.2.4, a quantum repeater is analogous to a classical repeater, where a transmission

distance is broken up into segments interspersed with amplifiers. Since loss accumulates with the amount

of distance traveled, breaking that distance up and inserting restorative processes can increase the overall

distance traveled. Of course, we already know we cannot simply use amplifiers on the transmission channels

– the no-cloning theorem forbids it. Instead of amplifiers, quantum repeaters take advantage of entanglement

by sending one photon from an entangled pair to a quantum memory and the other to the end of the local

link, where it potentially is interfered with a similar photon from the next link to realize entanglement

swapping, on condition of successful interference. Quantum repeaters require memories that can store

qubits for a sufficiently long time (the round trip time between nodes) without decoherence, e.g., for nodes

separated by 30 km, a storage time >100 µs is needed.
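For a rough sense of scale (a sketch assuming propagation at the vacuum speed of light; in fiber the effective speed is closer to c/1.5, lengthening the requirement accordingly):

```python
# Rough storage-time requirement for a repeater node: the stored photon
# must survive at least the signaling time across a link.
# Assumes vacuum-speed propagation (c); fiber would be ~1.5x slower.
C = 3e8  # speed of light, m/s

def link_time_us(link_km):
    """One-way travel time across a link, in microseconds."""
    return link_km * 1e3 / C * 1e6

print(round(link_time_us(30)))  # a 30-km link takes ~100 us to traverse
```

This matches the >100 µs storage requirement quoted above for 30-km node separation.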

Beyond synchronization in quantum repeaters, quantum memories are also necessary for synchronization between quantum gates in quantum computers. Although qubit conversion would be necessary for a non-optical quantum computer, there are multiple promising proposals for all-optical quantum computers [61, 62].

A quantum optical memory can also be used as a component of single-photon sources. In our vision

experiment, we essentially use a fixed-time quantum memory in the form of an optical fiber to synchronize

the detection of one herald photon and the emission of the signal photon (see Chap. 2). More ambitious

single-photon sources rely on less lossy (i.e., free-space) and more dynamic methods of storage to optimize

their performance [63].

5.3 Performance criteria

This section will discuss the important performance metrics for characterizing a quantum memory, while

comparing the performance of our memory to that of recently demonstrated memories using other methods.

These are (1) Kaczmarek et al. (2018) [69], who mapped photons into the electron orbitals of rubidium

vapor using two-photon off-resonant cascade absorption (ORCA); (2) Wang et al. (2019) [75] who used EIT

to store photons in a rubidium vapor as well; (3) Holzäpfel et al. (2012) [72], who used an atomic frequency

comb (AFC) in rare-earth ion-doped crystals; and (4) Walmsley et al. (2020) [74], who developed the hybrid

quantum memory discussed in Section 5.1.4.

Table 5.1: Fidelity, efficiency, storage time, bandwidth, and time-bandwidth product for different memories. For memories that listed varying storage times, we display two storage times and their respective efficiencies.

                  ORCA      EIT          AFC            Hybrid           Delay Line
Fidelity          N/A       > 0.99       N/A            N/A              > 0.99
Efficiency        0.25      0.85, 0.65   0.035, 0.001   0.06, 0.003      0.97, 0.82
Storage Time      86 ns     1 µs, 7 µs   2 ms, 50 ms    30 ns, 1.59 µs   12.5 ns, 1.25 µs
Bandwidth         250 MHz   3 MHz        160 kHz        500 MHz          1.52 THz
Time-Bandwidth    21.5      9            3200           20.2             6 × 10⁶

5.3.1 Fidelity

In general, a quantum memory stores a pure or mixed state represented by a density matrix ρ and outputs a state ρ′, which should be close to ρ. The fidelity with respect to an input state is given by [76]:

F(ρ) = Tr√(√ρ′ ρ √ρ′). (5.1)

If the input state is a pure state |ψ⟩, this simplifies to F = ⟨ψ|ρ′|ψ⟩. The fidelity of a quantum memory for

an arbitrary set of input states can be determined by characterizing it as a quantum process and performing

a quantum process tomography [77]. A deeper explanation of how this is performed is provided in Chap.

7. Ultimately, a quantum process tomography describes how closely an experimental system matches its

theoretical operator; for a quantum memory we would ideally want the memory to implement the Identity

transformation, i.e., the output is the same as the input, for all possible input states. The end fidelity we

state is the process fidelity, the overlap of the measured process with the identity (but usually not including

the effect of loss, which is parametrized by the efficiency (see below)). It should be noted, though, that

Kaczmarek listed their fidelity as an average of output states to input states (an average of state fidelities

rather than a complete process fidelity). Table 5.1 shows that our reference rubidium memory is the only recent method that listed a fidelity comparable with ours. For memories where no fidelity was listed, no measurement was performed verifying the fidelity of distinguishable quantum states.
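As a concrete illustration of the pure-state formula F = ⟨ψ|ρ′|ψ⟩, consider a hypothetical polarization qubit that exits a memory slightly depolarized (the numbers below are invented for illustration, not measured values):

```python
# Pure-state fidelity F = <psi|rho|psi> for a single polarization qubit,
# written out explicitly for 2x2 matrices. All numbers are illustrative.

def state_fidelity_pure(psi, rho):
    """Return <psi|rho|psi> for a 2-component pure state psi."""
    v = [rho[0][0] * psi[0] + rho[0][1] * psi[1],   # rho @ psi
         rho[1][0] * psi[0] + rho[1][1] * psi[1]]
    return psi[0].conjugate() * v[0] + psi[1].conjugate() * v[1]

# Input |H> = (1, 0); output 0.98 |H><H| + 0.02 (I/2), i.e. the state
# survives 98% of the time and is fully depolarized otherwise.
psi = (1.0, 0.0)
rho = [[0.99, 0.0],
       [0.0, 0.01]]
print(state_fidelity_pure(psi, rho))  # 0.99
```

A 2% admixture of the maximally mixed state thus costs only 1% in fidelity, since half of the depolarized population still overlaps |H⟩.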

5.3.2 Efficiency and storage time

Another useful criterion is the efficiency η, the ratio of the retrieved to the stored pulse intensity (or the probability that a single photon will be successfully stored and subsequently released). Efficiency is relatively easy to determine experimentally, but the number must be paired with a storage time.

Table 5.1 lists multiple storage times and efficiencies for our memory, as well as for the rubidium (EIT) and hybrid memories.

5.3.3 Multimode capacity

The multimode capacity of a quantum memory determines the number of optical modes that can be stored in

a memory cell with a performance above the requisite performance threshold. This capacity can refer to how

many different qubits with a single encoding type can be held at one time (e.g., how many time-bin encoded

photons can be stored at once), or how many different methods of information encoding (polarization, time-

bin, etc.) can be held. While none of the referenced memories mention multimode capacity, we predict our

memory should be able to hold up to 999 different qubits at a time, as the limitation is simply the ratio of

the longest storage time to the resolution of the system. We also can confidently say our system is capable

of holding polarization and time-bin qubits; we also believe our system has the potential for storing orbital

angular momentum modes, though the storage time may be more limited, as discussed in Section 7.4.1.

5.3.4 Bandwidth

One of the biggest advantages of a free-space optical delay line memory is its ability to store photons of a very

large bandwidth. For many competing memories, there is a necessary trade-off between storage time and

bandwidth of photons (the larger the linewidth of the atomic transition, the shorter the memory coherence

time). As shown in Table 5.3.1, our memory surpasses the current leading memories’ bandwidth by many

orders of magnitude. Additionally, if we look at the time-bandwidth product, a multiple of storage time by bandwidth, we see that other methods typically achieve a time-bandwidth product of order ten. We have preliminarily measured our memory at a time-bandwidth product on the order of 106, though we estimate our memory at a 109 time-bandwidth product (see details below). This time-bandwidth product is relevant as the ratio between the maximum storage time and the duration of the shortest pulse the memory could hold. Neglecting other degrees of freedom like polarization and spatial mode, it is the ultimate limit on the number of independent spectral-temporal modes that can be stored.

The large gap between our time-bandwidth product and that of the leading competing memories can be explained simply by the difference in methodology. In all three of the referenced methodologies, storage is achieved through conversion of a photon using some sort of electronic or atomic transition. Though some effort has been made to smear out the linewidth of this transition and therefore increase the bandwidth of photons that can be stored, the overall bandwidth is still limited to a tight atomic transition.

In our case, no such conversion happens and so all of our bandwidth restriction comes from our optics.

Figure 5.4: Green represents the electric field parallel to the optic axis. Blue represents the electric field perpendicular to the axis. Red is the combined field. This diagram shows that linearly polarized light entering a half-wave plate can be resolved into parallel and perpendicular components with respect to the optic axis of the waveplate. Since the parallel wave propagates slightly slower than the perpendicular one, it is ultimately delayed exactly half a wavelength relative to the perpendicular wave, and the resulting combination is a mirror image of the input state. This image was taken from https://en.wikipedia.org/wiki/Waveplate (Creative Commons license).

Our optical loops rely on a series of mirrors and lenses to contain our photon, so first and foremost we must examine the bandwidth of our mirror and lens coatings at a central wavelength of 705 nm. In general, a broadband coating on a mirror can have reflectivity greater than 99.5% over about an 80-nm bandwidth; custom coatings can retain this reflectivity over a similar bandwidth. In our case, we have used a custom coating on our longest Herriott cell, as we reflect off the surface of these mirrors over 100 times. We measured the transmission of our longest cell to be > 0.81 over a bandwidth of Δλ = 10 nm, or 7 THz. This transmission refers to the light that propagates through the cell after 340 reflections, which means the reflectivity of the mirrors is > 0.999 (see Figure 5.5b).
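The per-reflection reflectivity implied by this measurement follows directly from the quoted numbers (a back-of-the-envelope check that attributes all of the cell's loss to the mirror surfaces):

```python
# Per-reflection reflectivity implied by a measured Herriott-cell
# transmission of 0.81 over 340 reflections (values from the text);
# attributes all loss to the mirrors, so this is a lower bound.
T_CELL = 0.81
N_REFLECTIONS = 340

reflectivity = T_CELL ** (1 / N_REFLECTIONS)
print(round(reflectivity, 4))  # -> 0.9994
```

Any additional loss (e.g., from the entrance/exit holes) would only push the true mirror reflectivity higher.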

The second primary source of loss in our memory arises in the "optical switch" employed to switch the photon into and out of each loop. Specifically, we use a Pockels cell, effectively a voltage-controlled waveplate, to switch the polarization from horizontal to vertical. Optics that dynamically alter the polarization state of light typically do so through birefringence, where the index of refraction is different for light polarized along one or the other of two perpendicular axes. The behavior of a waveplate depends on the thickness of the crystal, the wavelength of light, and the variation of the index of refraction:

φ = 2π∆nL/λ0, (5.2)

where φ is the phase difference enacted by the waveplate between the "fast" and "slow" polarized components,


Figure 5.5: a) Thorlabs E02 reflectivity coating for S-polarized light, used on all 1" mirrors in the quantum memory. Light spends the majority of the loop lifetime in the S-polarized state. Data taken from Thorlabs specification sheets. b) A transmission measurement of our largest cavity (∆T = 0.02) over a bandwidth of Δλ = 10 nm. We can see that the transmission is greater than 80% for all measured wavelengths. (At the time of measurement, the memory did not have any form of stabilization, so the output beam vector was very unstable; this led to a greater uncertainty in the free-space power measurement.) c) A simulation of transmission through a Brewster's-angle PBS at various wavelengths. Sellmeier equations were used for both air and the PBS material (UV fused silica). Transmission remains above 97% for wavelengths greater than 400 nm.

∆n is the difference of refractive index for these polarizations, L is the thickness of the material, and λ0 is the (vacuum) wavelength of the input light. We can characterize the transmission of our system by T = cos²(φ/2) (the polarizing beam splitters at the end of our delay loops enforce the dependence of transmission on the polarization state of the photon). If we want our transmission to remain at T ≥ 97%, we calculate an overall bandwidth of 78 nm, or 90 THz.

The polarizing beamsplitters (PBSs) at the end of our delay loops have a characteristic bandwidth as well. We utilize Brewster's-angle PBSs, which perfectly transmit light with p polarization (horizontal, in our case) when the incident angle is:

θB = arctan(n2/n1), (5.3)

where n1 is the refractive index of the initial medium through which the light propagates (air), and n2 is the refractive index of the second medium (here, UV fused silica). This relation can be derived from the Fresnel equations, and can be understood from the manner in which electric dipoles in the medium respond to light that is polarized in the same plane as the incident ray and the surface normal at the point of incidence (p-polarized light). Destructive interference leads to a minimum in reflectance (and a maximum in transmission) of all p-polarized light, and this is what allows us to control the paths of horizontally (p-) and vertically (s-) polarized light in our system. However, the angle depends on the wavelength of the light, since the refractive index of a material is also wavelength-dependent. Figure 5.5c) displays simulated transmission versus wavelength for a Brewster's angle of 55.5° (the Brewster's angle calculated for λ = 705 nm, where fused silica has a refractive index n = 1.455). We find that the transmission for p-polarized light falls below 0.97 at 124 nm, leading to a bandwidth of ±583 nm for our Brewster's-angle PBSs, with the memory centered at 705 nm.

The focal lengths of our lenses also depend on refractive indices, and so they vary with wavelength as well. For our 1-m focal-length lenses, used in our shortest loop, we calculate a focal length ranging from 1 to 1.014 m for wavelengths of 400-1000 nm (reflective of the broadband coating on the lenses). This leads to a difference in optimal lens placement (±1 cm), and likely a potential decrease in fiber-coupling efficiency at the output. However, it is unlikely to have a large effect on the overall free-space transmission.

5.4 Summary

In this introduction, we discussed what quantum memories are and why they are important in the growing field of quantum computation and quantum information. In the following chapters, we will describe in detail the memory we have built and future directions.

Chapter 6

Building a Memory in Free-Space

6.1 Building blocks for the memory

As introduced in Chapter 5, we have developed an optical delay line quantum memory, capable of storing

a photonic qubit on the order of microseconds with a bandwidth on the order of terahertz. To overcome

transmission losses in our optical cavity, we have configured a system of three loops varying in length: 12.5

ns, 125 ns, and 1.25 µs. By using three loops, we can store single photons with high efficiency over variable

delays [N x 12.5 ns, 1 ≤ N ≤ 999, where N is an integer representing an interval of time in our storage system]. Our optical table has dimensions 4' x 3'. Fitting all loops on the table requires more than 400 additional meters of optical path length. For this purpose, we use Herriott cells with multiple reflections, as described in Section 6.1.3.
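As a quick consistency check on these loop lengths (assuming free-space propagation at c), the three round-trip times correspond to path lengths of roughly 3.75 m, 37.5 m, and 375 m, summing to just over 400 m:

```python
# Path length required for each loop's round-trip time (L = c * t),
# a consistency check on the 12.5-ns / 125-ns / 1.25-us loop design.
C = 299_792_458  # speed of light in vacuum, m/s

for t_ns in (12.5, 125, 1250):
    print(f"{t_ns} ns -> {C * t_ns * 1e-9:.2f} m")
```

The longest loop dominates the total, which is why the Herriott cells are needed to fold the path onto the table.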

6.1.1 Optical delay lines

12.5-ns loop

An optical delay line consists of a fixed optical path length set by a series of mirrors. With the optical table space we have (4' x 3'), we can easily create an optical loop that is 3.76 meters long with a total of 8 mirrors. This corresponds to a 12.54-ns storage time, which maps to a 79.75-MHz repetition-rate laser¹.

All mirrors in this loop are 1" BB1-E02 mirrors with a Thorlabs B-coating. Reflectivities for the mirrors

were measured by Thorlabs at > 99.94% reflection for P-polarization (corresponding to vertical polarization

in our setup) and 99.46% for S-polarization (corresponding to horizontal polarization) at a wavelength of

705 nm (our operating wavelength).
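The quoted storage time and repetition rate follow from the 3.76-m path length (small differences from the text reflect rounding and the use of c ≈ 3×10⁸ m/s there):

```python
# Round-trip storage time and matching laser repetition rate for the
# 3.76-m loop quoted in the text.
C = 299_792_458  # speed of light in vacuum, m/s

path_m = 3.76
t_ns = path_m / C * 1e9    # storage time per cycle, in ns
rep_rate_mhz = 1e3 / t_ns  # matching repetition rate, in MHz
print(round(t_ns, 2), round(rep_rate_mhz, 2))
```

Matching the loop time to the laser's pulse period ensures that each stored pulse stays synchronized with the pulse train.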

For operating at path lengths > 1 m, we needed to use lenses to keep the beam from diverging. For our

first loop, we used two plano-convex lenses (Thorlabs LA1464-B), with a calculated² focal length of 1.004

¹When this project was first conceptualized over ten years ago, the lab planned to make use of a 76-MHz laser already on hand. Since then, that laser has fallen out of use, and path lengths have migrated to what could easily be fit on our optical table. ²Focal lengths were calculated using the Sellmeier coefficients for a UV fused silica lens at λ = 705 nm.


Figure 6.1: a) One optical delay line is a path length set by a configuration of mirrors with an input and output point. To make the delay time dynamic, a Pockels cell and PBS control whether the pulse stays in the delay line for another cycle or exits. In this way, we can store for any amount of time that is a multiple of the original path length. b) Normalized transmission data plotted from 0 to 125 ns; a fitted line reports a transmission efficiency of 97.3%. Power measurements were taken using an amplified silicon detector photodiode (ET-2030A from Electro-Optics Technology); error bars correspond to the standard deviation of 26 measurements.

meter. For free-space transmission measurements, the first lens was placed 0.890 meters after our PBS. The second lens was placed 1.98 meters after our first lens. The decision of lens placement is further detailed in Chapter 7³. Lenses were AR coated by Coastline Optics for a reflectivity < 0.05%. Herriott cell mirrors were measured to have a reflectivity of 99.95 ± 0.011%.

To store for multiples of 12.5 ns, we incorporate a "switch" that keeps the photon in the loop until we are ready to switch it out. Much like in our vision project, this "switch" is created through the combined use of a Pockels cell and a PBS. When a photon is polarized horizontally, it transmits through the PBS and enters the loop. By firing the Pockels cell, we switch the photon to vertical polarization, causing the photon to reflect off the PBS instead of transmitting, thereby storing the photon in the loop. By cycling the photon through this fixed path length multiple times, we vary the storage time (in discrete intervals), and therefore create a "digital" quantum memory⁴. When we are ready to release the photon, we fire the Pockels cell a second time, converting the photon back to horizontal polarization, leading to transmission through the PBS and exit from the loop. Transmission of the Pockels cell was measured at 97.5%.

³Typically, beam divergence is constrained using a 4f-imaging arrangement, where two lenses of equal focal length are placed two focal lengths apart from each other. Since we were unable to find lenses for purchase with a 0.94-m focal length (3.76 m / 4), we settled on using two 1-m focal-length lenses with appropriate placement. ⁴Our system was designed to store in each loop for 0-9 cycles. One could use a different base: for example, with a binary (base-2) system, each loop would be twice as long as the previous one. Ten loops (requiring on the order of ten Pockels cells) would enable storage for up to 1024 cycles.


Figure 6.2: a) Simplified schematic of the 2nd loop in the optical delay line. Nominal focal lengths for L2.1, L2.2, and L2.3 = 1 m, 400 mm, and 400 mm, respectively. The distance from PBS2 to PBS3 is 188 mm. The distance from PBS3 to L2.1 is 223 mm. The distance from L2.1 to HC is 529 mm. The distance from HC to L2.2 is 792 mm. The distance from L2.2 to L2.3 is 926 mm. The distance from L2.3 to PBS1 is 965 mm. All measurements have a ±1 mm error bar. All distances were measured from center-to-center of the optics. b) Normalized free-space transmission data plotted from 0 to 1.25 µs; the curve corresponds to a transmission efficiency of 93.1%. Error bars derived from standard deviation of 26 measurements.

125-ns loop

In order to store a photon for 125 ns, we need to fit 37.5 meters of path length on one optical table. To achieve this, we constructed a multipass Herriott cell [78], a multi-reflection cavity further detailed in Section 6.1.3. To constrain the beam, we used one plano-convex lens (Thorlabs LA1464-B) with a calculated focal length of 1.004 m, as well as two plano-convex lenses (Thorlabs LA1172-B) with a calculated focal length of 401.7 mm, as shown in Figure 6.2. The Herriott cell mirrors have a diameter of 51 ± 1 mm and a radius of curvature (ROC) of 1 ± 0.05 m; a small hole is placed in each mirror to allow the light to enter and exit the cell. The mirrors are placed 913 mm apart from each other. Figure 6.2 shows a schematic of the 125-ns optical loop. All 1” mirrors were coated with a Thorlabs B-coating, and all lenses and both Herriott cell mirrors were coated by Coastline Optics. At the time of the free-space measurement, the Pockels cell crystal was a FastPulse Q1059 KD*P crystal with a 95.4% transmission5. The crystal has since been replaced with a Q1059PSG-670 KD*P crystal with a transmission ≥ 98%.

1.25-µs loop

In order to store a photon for 1.25 µs, we needed to build a modified Herriott cell with ∼10× the reflections of our 2nd-loop Herriott cell. The cell is made up of one spherical mirror and two flat mirrors placed 1.098 m apart6. The diameter of the spherical mirror is 76 ± 1 mm, with a ROC of 3.00 ± 0.05 m. The two flat mirrors each have a dimension of 51 × 51 ± 1 mm. Additionally, as shown in Figure 6.3, we utilized three lenses: one plano-convex lens (Thorlabs LA1978-B) with a calculated focal length of 753 mm, and two plano-convex lenses (Thorlabs LA1908-B) with a calculated focal length of 502 mm.

5It was later found that this crystal was 21 years old, sold to Professor Kwiat during his time at Los Alamos! The FastPulse tech support informed us that the transmission was relatively high given the crystal’s age.

Figure 6.3: a) Simplified schematic of 3rd loop in optical delay line. Nominal focal lengths for L3.1, L3.2, and L3.3 = 750 mm, 500 mm, and 500 mm, respectively. The distance from PBS3 to PBS2 is 188 mm. The distance from PBS2 to L3.1 is 307 mm. The distance from L3.1 to the modified Herriott cell (MHC) is 354 mm. The distance from MHC to L3.2 is 511 mm. The distance from L3.2 to L3.3 is 901 mm. The distance from L3.3 to PBS3 is 473 mm. All measurements have a ±1 mm error bar. All distances were measured from center-to-center of the optics. b) Normalized free-space transmission data plotted from 0 to 12.5 µs; the curve corresponds to a transmission efficiency of 80.0% per cycle. Error bars are derived from the standard deviation of 26 measurements.

“Stacking” optical loops

Figure 6.4 depicts all three optical loops connected from input to output7. As can be seen, the input pulse must always traverse the first loop for at least one cycle. However, the 2nd and 3rd loops can be bypassed by never firing the Pockels cell. The photon will transmit through PBS2, then reflect back to PBS3 at a point 5 mm adjacent. Once it passes through the Pockels cell on its return path, the Pockels cell will fire, forcing the photon to exit through the output (reflected) port of PBS2. This port is where we perform all of our power measurements. The minimum path length a pulse must travel from input to output is approximately 6 m, leading to roughly 20 ns of minimum storage time in the loop. All cycle times are added to this minimum storage time to make up the total time a pulse spends in the system.

The input beam is produced by an HL7001MG Thorlabs laser diode at 705 nm coupled into single-mode

fiber (Thorlabs P1-3224-FC-2), then re-collimated with a microscope objective lens (Newport M-20x). At

6The flat mirrors are placed adjacent to each other with one slightly tilted relative to the other, which is discussed further in Section 6.1.3.

7A more detailed schematic of the memory can be found in Appendix B.


Figure 6.4: a) Simplified schematic of “digital” quantum memory. Three optical loops varying in length from 12.5 ns to 1.25 µs allow for storage of light for variable time intervals. Lenses have been removed for simplicity. A half-wave plate (HWP) is necessary for keeping the pulse inside the loop. b) Normalized transmission data plotted from 20 ns to 12.52 µs.

the time of these measurements, the beam waist was calculated to be approximately 6.4 mm8. For alignment purposes, the diode was operated with a laser diode controller. For measurement purposes, the light was pulsed in one of two ways. For all measurements that did not require ultrashort pulses, the diode was pulsed with a 5.2-ns pulse of ∼80 mA of operating current. For measurements that required pulses < 2 ns (see Chapter 7), we used a gain-switching circuit designed by Fedor Bergmann to generate pulses on the order of 0.6-0.8 ns (details in Appendix G).

Data from our free-space transmission measurements are shown in Figure 6.4. A “scalloped” transmission pattern results from the digital design of our loop – in a system in which all the loss is in the optical switches, 9 cycles in the 1st loop and 9 cycles in the 2nd loop should have 18 times more loss than 1 cycle through the 3rd loop. However, we see discrepancies between losses in all of our loops. In our shortest loop (12.5 ns), our cycle-to-cycle transmission is 97.5%; our mid-size loop (125 ns) sees a 93.1% transmission, while our longest loop has an 80% transmission. The increased loss as our loop size increases corresponds with the increase in the number of reflections: our middle loop has ∼20 more reflections than our shortest loop – a 99.95% reflective coating translates to the observed drop in transmission. However, our longest loop has a much greater loss per cycle than expected, most likely attributable to slight clipping somewhere, due to the complicated nature of the cell. The transmissions were measured using a photodiode. Measurements were taken with a system repetition rate of 0.5 Hz. The efficiencies in Figure 6.4 are normalized to a single transmission through the shortest loop, and the times are normalized to no passes through the other delay buffers.
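As a rough consistency check, the overall transmission for any programmed storage time can be modeled as a product of the per-cycle transmissions quoted above. This is our own illustrative sketch, not the analysis used to produce Figure 6.4:

```python
# Per-cycle transmissions quoted in the text for the three loops
T1, T2, T3 = 0.975, 0.931, 0.80

def total_transmission(n1, n2, n3):
    """Predicted transmission after n1, n2, n3 cycles in loops 1, 2, 3."""
    return T1**n1 * T2**n2 * T3**n3

# Nine cycles in each of the first two loops attenuate the pulse more than a
# single cycle of the 3rd loop -- the source of the "scalloped" pattern.
print(total_transmission(9, 9, 0))  # ~0.42
print(total_transmission(0, 0, 1))  # 0.80
```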

6.1.2 Pockels cells

We use two Pockels cells in our setup in order to manage our space constraints. For the shortest loop, we use a 3-mm transverse RTP Pockels cell, placed at the waist of the beam in Loop 1. By placing the waist at the 3-mm aperture, we reduce the likelihood of clipping on the edge of the cell, since the beam size is reduced. One valid concern is that the waist will have a larger range of transverse momenta than points where the beam is more collimated. Fortunately, the use of a 1-m focal length lens ensures very minimal focusing. Experiments have shown high-efficiency switching in the Pockels cell, with an extinction ratio of

100:1. The driver is a BME Bergmann dpp2b1 driver; the rise time is 5 ns, which allows us to fire the

Pockels cell on and off in time for a 12.5-ns loop duration.

For our second loop we use a longitudinal KD*P Pockels cell with a 10-mm x 10-mm square aperture.

8Many difficulties in properly measuring beam waists arose from conflicting values given by a Thorlabs Shack-Hartmann Wavefront Sensor and a Thorlabs CCD Camera Beam Profiler. While the Wavefront Sensor measured a waist of 5 mm, the Beam Profiler measured a waist of 6.4 mm. Since the Beam Profiler gave a more realistic beam divergence reading, we placed more trust in its measurement of the beam diameter. Space on the table did not allow for accurate knife-edge measurements.

Figure 6.5: Front and top-down view of two passes through PC2.

The driver is a dpp7c/BBO and is capable of running up to a 60 kHz repetition rate with a rise time of 7 ns. The Pockels cell is placed between two polarizing beam splitters so that both loop 2 and loop 3 can be completely bypassed to minimize loss. The 1/e2 beam waist of each beam that passes through the Pockels cell is approximately 1.5 mm. Figure 6.5 shows the beam spots, along with a top-down view of the beam path.

6.1.3 Herriott cells

Setting up the memory for 37.5 and 375 meters would be impossible if not for the use of Herriott cells. The Herriott cell first appeared in 1965, invented by Herriott and Schulte while at Bell Laboratories [78]. The Herriott cell consists of two opposing spherical mirrors, where a hole is machined into one or both of the mirrors to allow the input beam to enter and exit the cavity. The number of traversals is controlled by adjusting the separation distance and the angle of the input beam. For our second loop, we use a Herriott cell with a hole in each mirror to achieve 37 reflections (19 on the entrance mirror and 18 on the exit mirror, with the final reflection exiting through the hole in the exit mirror) over a one-meter distance. A typical reflection pattern formed by this cell is shown in Figure 6.6.

In our third loop, we use a modified form of this cell, achieving 340 reflections over a one-meter distance.

This modified form replaces one spherical mirror with two flat mirrors slightly tilted with respect to each other, as shown in Figure 6.8. This corrupts the symmetry of the Herriott cell and allows us to achieve a much longer optical path length. We can essentially understand this as the tilted mirror generating a second

Herriott cell, which is governed by a new optical axis that is shifted relative to the first. The beam can be contained in “both” of these Herriott cells until one of the paths leaves through the exit point. In our case, these exit points are above and below the two flat mirrors. The typical reflection patterns formed by these cells are shown in Figure 6.7.

Figure 6.6: Reflection pattern on the input (left) and output (right) spherical mirror of the Herriott cell in the 2nd loop. Solid lines indicate mirror outlines and entry/exit holes. Numbers indicate reflection numbers. Both mirrors are the same size but have different holes. The input hole has a radius r = 4 mm, R = 1.25 cm from the center of the mirror. The output hole has a radius r = 2 mm, R = 2 cm.

Figure 6.7: Predicted reflection pattern on the two flat mirrors (left) and spherical mirror (right) of the modified Herriott cell in the 3rd loop. The actual spot size will depend on the reflection number as the beam propagates and gets re-focused by the curved mirrors. Solid lines indicate the mirror outlines and entry/exit locations.

Figure 6.8: Simplified side view of modified Herriott cell, demonstrating an exaggerated tilt between two flat mirrors.

The stability of the Herriott cells is particularly susceptible to small movements of the mirrors and to errors in the input beam. Translation and rotation of the input beam will be mapped to the output with the exact same magnitude – this mapping can unfortunately result in clipping, which means the overall efficiency might drop with beam drift. It should also be noted that the longer the distance between the mirrors, the more sensitive the system becomes to small length changes. For the large 340-reflection Herriott cell, 1 mm of length change between the mirrors would result in 34 cm of change in the overall optical delay (> 1 ns). We detail our efforts to ensure stability in Section 6.2.1.
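The sensitivity quoted above follows directly from the number of reflections; each reflection traverses the mirror gap once, so a length change is multiplied by the number of passes. A quick check of our own arithmetic, using the 340 reflections from the text:

```python
C = 299_792_458.0  # speed of light (m/s)

reflections = 340   # passes across the mirror gap in the modified cell
delta_d = 1e-3      # 1-mm change in mirror separation (m)

extra_path = reflections * delta_d  # 0.34 m of extra optical path
extra_delay = extra_path / C        # ~1.1 ns of extra delay
print(extra_path, extra_delay * 1e9)
```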

6.2 Operation

6.2.1 Stabilization and ray tracing

Ray tracing

In order to analyze the stability of a large optical system, and understand how small changes could affect the overall beam path and transmission, we need a strong computational approach to ray tracing. The most common program for such a purpose is ZEMAX, but this program is difficult to apply to complex structures such as the Herriott cell, with its multiple reflections between the same optical components.

We therefore performed our stability analysis using an advanced version of the ABCD-matrix formalism, a well-known mathematical framework for performing ray-tracing calculations with paraxial rays [79].

The ABCD technique maps the optical system to an input and output plane, both perpendicular to the optical axis of the system. The propagating central ray’s position and angle are defined relative to the optic axis, with the x- and y-directions considered as transverse to the optical axis. A light ray may enter at a distance x from the optical axis, traveling in a direction that makes an angle θ with the optical axis. After propagation through a number of optical elements to the output plane, the ray may then be found at a distance x′ from the optical axis and at an angle θ′ with respect to it (see Figure 6.9).

The ABCD matrix relates the output ray to the input according to

$$\begin{pmatrix} x' \\ \theta' \end{pmatrix} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} x \\ \theta \end{pmatrix}, \qquad (6.1)$$

where the ABCD matrix varies depending on the optical element it is representing. For example, for free-space propagation between two planes, the matrix is given by

$$S = \begin{pmatrix} 1 & d \\ 0 & 1 \end{pmatrix}, \qquad (6.2)$$

which relates the parameters of the input and output ray as:

$$x' = x + d\theta$$
$$\theta' = \theta$$

This relation is easy to understand as a simple propagation of a ray (under the paraxial assumption that angles are sufficiently small that sin(θ) ∼ θ). Another example is the thin lens, whose ABCD matrix is given by

$$L = \begin{pmatrix} 1 & 0 \\ -\frac{1}{f} & 1 \end{pmatrix}. \qquad (6.3)$$

In this case, we treat the lens as simply a modification of the angle, depending on where the beam hits the lens:

$$x' = x$$
$$\theta' = -\frac{x}{f} + \theta.$$

If the beam hits the lens square in the center, then the angle should not change at all. If we want to model a thick lens, then we simply sandwich a propagation matrix between two thin-lens matrices, where the propagation matrix incorporates the index of refraction of the lens. Ultimately, any optical element can be built up from the multiplication of propagation and lens matrices, which is how we are able to model our entire system. The above equations model one transverse dimension. By turning the vectors into

Figure 6.9: In ABCD matrix analysis, an optical element (here, a thick lens) gives a transformation from (x, θ) to (x′, θ′).

Figure 6.10: Schematic of stabilization setup. A stabilization laser is picked off using a half-silvered mirror, then split and redirected to two separate Quadcells. Feedback from Quadcell 1.1 (QC1.1) is used to control mirror 1.1, while feedback from Quadcell 1.2 (QC1.2) is used to control mirror 1.2. LS1.1 FL = 300.0 mm; LS1.2 FL = 750.0 mm. Both lenses are placed one focal length away from their respective Quadcells.

4-vectors with x- and y- components, we can create a 3D picture of alignment (and misalignment)9.
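The ABCD formalism described above takes very little code to implement. The sketch below is our own illustration (not the dissertation's actual analysis code); it composes free-space and thin-lens matrices and checks the textbook result that a 4f arrangement images a ray back onto itself, inverted:

```python
# A minimal ABCD ray tracer (one transverse dimension).

def matmul(a, b):
    """2x2 matrix product."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def free_space(d):
    """Propagation over a distance d (Eq. 6.2)."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """Thin lens of focal length f (Eq. 6.3)."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def compose(*elements):
    """Combine elements in the order light encounters them."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for e in elements:
        m = matmul(e, m)  # later elements multiply on the left
    return m

def trace(m, x, theta):
    """Apply Eq. 6.1 to a ray (x, theta)."""
    return (m[0][0]*x + m[0][1]*theta, m[1][0]*x + m[1][1]*theta)

# Textbook check: a 4f arrangement (f, lens, 2f, lens, f) has total matrix
# equal to minus the identity, so the ray emerges inverted but undistorted.
f = 1.0
system = compose(free_space(f), thin_lens(f), free_space(2*f), thin_lens(f), free_space(f))
x_out, theta_out = trace(system, 0.002, 0.001)  # 2-mm offset, 1-mrad tilt
print(x_out, theta_out)  # the ray emerges negated: (-x, -theta)
```

The full 3D treatment mentioned in the text amounts to carrying a second (x, θ) pair for the y-direction through the same matrices.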

Stabilization

Output beam variations over time have been one of the biggest obstacles to achieving high transmission within our setup. Without active stabilization, we found the variance of beam position to be as much as ±0.2 mm in the transverse direction. While this variance does not significantly affect our free-space storage, it does limit our ability to fiber-couple our beam upon exit (as would be necessary for interfacing with other systems). To remedy this, we need a form of active stabilization to counteract this drift. We achieved this stabilization using two Quadcells paired with stabilization lasers in our first and third loops. We use the

Quadcells to monitor angular drifts in the setup. Variance of the stabilization laser’s position is translated

9It should be noted that this method works for paraxial rays but of course is not perfect, and does not account for aberrations, which must be evaluated using full ray-tracing techniques [80].


Figure 6.11: a) Beam position without active stabilization and errors on all mirrors within different delay loops (maximum of µrad). b) Simulation of beam position with active stabilization.

to voltage offsets, which are fed back as corrections into two piezo-controlled mirrors (see Figure 6.10). In

this way, we are able to reduce the variance to ±50 µm, enough to achieve consistent fiber coupling to within 1-2%10. Experimentally, we used a CCD camera and observed minimal variation of the beam at the time of measurement. However, variation did occasionally increase in response to temperature changes, and fiber coupling with a variance below 10% was typically unachievable after 5 PM (likely in response to the air-conditioning schedule of the building). Overall, the temperature in the lab would fluctuate from 26 to 27 °C on average.

6.2.2 Triggering the memory

Triggering the memory is vital to operation. All programming of the Pockels cell pulses is done using a BME

SG08P2 delay generator [81] with outputs fed into a splitter box, which then feeds the Pockels cell drivers themselves. While the first Pockels cell and first loop are relatively simple to understand, firing for the 2nd and 3rd loops is a bit more complicated in order to allow for bypassing of those loops. Appendix C gives a detailed description of the triggering scheme.

6.2.3 Operating bandwidth

We have not yet performed our planned measurement of the full operating bandwidth, but the memory is currently working at a bandwidth of 2.5 nm (FWHM) centered at 705 nm (1.52 THz), measured using an Ocean Optics spectrum analyzer.

6.2.4 Summary

In this chapter, we discussed our free-space transmission measurements for our optical quantum memory.

We observed ≥ 80% transmission per cycle for all loops with an input bandwidth of 1.52 THz. We achieve 50%

10The ±50 µm lateral shift at the fiber corresponds to an angular shift of 0.0001◦ at the fiber collection lens. Given its 11-mm focal length, this in turn corresponds to 0.022 µm relative to the 3-5 µm mode field diameter of the fiber.

transmission for storage times as long as ∼4 µs, leading to a time-bandwidth product of 6.08 × 10⁶.
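The time-bandwidth product quoted above can be reproduced from the measured bandwidth and storage time; the back-of-the-envelope conversion below is our own check:

```python
C = 299_792_458.0  # speed of light (m/s)

center = 705e-9    # center wavelength (m)
d_lambda = 2.5e-9  # FWHM bandwidth (m)
storage = 4e-6     # storage time at 50% transmission (s)

d_nu = C * d_lambda / center**2  # wavelength -> frequency bandwidth, ~1.5 THz
tbp = d_nu * storage             # time-bandwidth product, ~6 x 10^6
print(d_nu / 1e12, tbp / 1e6)
```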

Chapter 7

Qubit State Storage

While the previous chapter provided a thorough demonstration of our memory’s capabilities for storing

photons with high transmission, the primary goal of our memory is to store photonic qubits with high

transmission, meaning we must have a method of storing and recording our photons’ quantum states, rather

than just their presence. However, because our switch relies on full control over the qubit’s polarization state,

we are unable to directly store polarization qubits (polarization being the easiest and most popular form of

storing state information). Instead, we must first convert the polarization qubit into a degree of freedom

more amenable to our switch mechanism.

7.1 Transducer

In order to enable storage of polarization qubits in our system, we devised an input transducer capable

of translating a superposition of horizontal and vertical polarization into a superposition of two time-bins,

separated by ∼3 ns. This transformation is achieved by using two polarizing beam-splitters to separate

horizontal and vertical components out in time, then using a Pockels cell to flip only the earlier time

slot from a horizontal to a vertical component (see Figure 7.1). In this way, we can convert polarization

information (H and V) to time-bin information (t0 and t1). By keeping the time-bin separation small enough

to fit both pulses simultaneously within the Pockels cell pulse width of ∼7 ns, we can ensure t0 and t1 undergo the same operations inside the quantum memory. After the memory, using the transducer in reverse, we can

Figure 7.1: A diagram of an optical arrangement that converts a polarization-encoded qubit into a time-bin qubit through the use of two PBS’s and one Pockels cell.

Figure 7.2: A detailed diagram of our polarization-to-time-bin transducer in lab. Focal lengths of lenses in long optical path are both 25.4 cm. Input is prepared using a PBS, HWP, and QWP. A beamsplitter (BS) allows for pick-off of the returning photon pulse from the optical memory, to allow for full quantum state tomography. A QWP, HWP, and PBS are placed before coupling into an SAP500 single-photon APD.

convert time-bin qubits back to polarization qubits (after applying a σx gate). To physically realize the transducer, we were constrained by: (1) limited space, (2) small time-bin separation, and (3) beam divergence. Due to limited space, we needed to use the same transducer for both input to and output from our delay line; this has the great added advantage that the relative phase shift between the two time bins is subtracted in the transduction back to the polarization encoding. With a small time-bin separation, we needed a Pockels cell and driver capable of an equivalently short pulse rise time. To achieve these small rise times, our Pockels cell would need a particularly long recovery time before re-firing. To overcome this limitation, we utilized two Pockels cells, one that we activated for the input to our quantum memory, and a second that we activated for the output from our quantum memory. Finally, since we are producing two photonic time bins that should be indistinguishable apart from arrival time, we must ensure the difference in path length does not result in a difference in spatial mode (further discussed in Section 7.2). To this end, we use two lenses in our longer path to approximate a 4f-imaging system back onto the second PBS.
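Written out as states, the transduction amounts to re-encoding the polarization amplitudes onto time bins. The mapping below is our own schematic summary of the sequence described above; the assignment of H to the early bin follows the text's description of the Pockels cell flipping the earlier slot:

```latex
% Schematic of the polarization -> time-bin transduction
|\psi_{\mathrm{in}}\rangle = \alpha|H\rangle + \beta|V\rangle
\;\xrightarrow{\text{PBS pair}}\;
\alpha|H\rangle|t_0\rangle + \beta|V\rangle|t_1\rangle
\;\xrightarrow{\text{Pockels cell flips early bin}}\;
|V\rangle \otimes \big(\alpha|t_0\rangle + \beta|t_1\rangle\big)
```

Running the same optics in reverse (with a σx correction) undoes the mapping, returning a polarization qubit.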

To achieve transmission to and from the optical memory, after our Pockels cells we placed a PBS with a fiber in each port. The transmissive (horizontal) port led to the quantum memory, while the reflective

(vertical) port received from the quantum memory. A non-polarizing beamsplitter (BS) was placed between the initial input to the transducer and the first PBS to extract some of the post-memory photons for quantum state tomography (see Section 7.3 on the output from the quantum memory).

A detailed schematic of this setup is shown in Figure 7.2. The input to the transducer uses the same

HL7001MG laser diode used in our optical delay line, redirected with an SM600 fiber to an F220FC-B fiber collimation package, FL = 10.99 mm, with an output beam waist of 2.1 mm. Both PBS’s in the polarization-to-time-bin transducer are polarizing beamsplitter cubes (Thorlabs PBS102, 10 mm) with a transmission Tp > 90% and reflection Rs > 99.5% over a wavelength range of 620-1000 nm; here, p-polarized light is horizontal and s-polarized light is vertical in our reference frame. The measured extinction ratio is Tp : Ts > 1000 : 1. The 4f-imaging in the longer path uses two plano-convex lenses (Thorlabs LA1950-B N-BK7, FL =

25.4 mm). Between the PBS’s and the output fiber, we place two Pockels cells with RTP crystals. After the

Pockels cells, we have another PBS102 with SM600 fibers aligned to the transmissive and reflective ports.

These fibers are each collimated with F240FC-B collimation packages, FL = 7.93 mm1.

7.1.1 Fast optical pulse

To operate our memory with the transducer, we needed to produce pulses in the hundreds-of-picoseconds range so they could be adequately distinguished from each other when separated by only a few nanoseconds. To produce these pulses, a visiting student developed a circuit capable of gain-switching our laser diode with a pulse width of 600-800 picoseconds. Because the circuit relies on a DC bias, we need to take into account the background light when we take our photon-counting measurements. At the optimal DC bias, we measured a ∼34:1 ratio of pulse to background in our counts over 1.5 ns. In all our measurements, background counts were measured by turning off the pulses and leaving the DC bias untouched.

7.2 Mode-matching in the quantum memory

In the previous chapter, we touched on beam divergence as a real issue when working with long beam paths, such as in our quantum memory. In order to interface our quantum memory with the transducer, we need to ensure mode-matching within loops. There are several reasons mode-matching is important:

1. Without proper constraint, the diverging beam may suffer transmission loss if it exceeds the clear aperture of any of the optics it must pass through (in our system, the Pockels cell is the limiting factor, with a square aperture of 3 mm in our 1st loop).

2. In order to interface with other apparatuses, the beam must be able to couple efficiently into single-mode fiber, which means it must end up close to the optimal mode of the collimation lens.

3. We discussed the potential of storing information in orbital angular momentum modes; in this case, spatial mode distortion would result in reduced fidelity.

1An attentive reader may note that we used different collimation packages for the input to our transducer than for the input and output of the delay line. This was simply due to the availability of optics. Replacing these lenses would potentially increase transmission to the delay line, which is currently ∼70%.

All these factors mean we need to ensure the spatial parameters of the beam remain largely consistent

as it travels around the loop.

First off, let us assume our beam is mostly Gaussian, meaning its irradiance profile follows a Gaussian distribution. However, this irradiance profile does not stay constant through propagation in space; due to diffraction, a Gaussian beam diverges from the beam “waist” (defined as the transverse width at which the irradiance falls to 1/e² (13.5%) of its maximum value), where the width reaches its minimum value, ω0. The beam converges and diverges equally on both sides of the beam waist by the divergence angle θ, given by:

$$\theta = \frac{\lambda}{\pi \omega_0 n}, \qquad (7.1)$$

where n is the refractive index of the medium and λ is the wavelength of the light in vacuum. Thus, a tighter waist will result in a larger divergence angle (this is a manifestation of the uncertainty principle relating the spot size to the transverse momentum spread). Another common metric for characterizing the divergence of a Gaussian beam is the Rayleigh range, z_r, which is the distance from the waist at which the cross-sectional area of the beam has doubled, i.e., where the width ω(z_r) = √2 ω0:

$$z_r = \frac{\pi \omega_0^2}{\lambda}.$$

We see that a small beam (desirable to reduce clipping) has a necessarily small Rayleigh range. Thus, while a smaller divergence angle is ideal for easier beam propagation, our design constrains our waist size – our smallest Pockels cell entry point is 3 mm, and even in our largest Pockels cell entry point of 10 mm, we must ensure ample room for two beams to travel through in parallel. A compromise of ω0 = 1.44 mm allows for a 2.8-mm-diameter beam, which comfortably travels through our Pockels cells while still maintaining a Rayleigh length of about one meter.

7.2.1 Modeling

A beam at any point can be characterized by its q parameter [82]:

$$q(z) = \frac{-\pi \omega(z)^2}{i\lambda}, \qquad (7.2)$$

where λ is the wavelength, and ω(z) is the beam width at distance z from the waist (ω(z = 0) = ω0):

$$\omega(z) = \omega_0 \sqrt{1 + \left(\frac{z}{z_r}\right)^2}. \qquad (7.3)$$

Figure 7.3: a) A Gaussian beam converges and diverges from the beam waist by an angle inversely proportional to the waist size. b) The beam follows a Gaussian intensity profile.

The goal in our optical system is to have the output beam identical to the input beam; however, as we see

in Eq. 7.2, the q parameter will undergo change simply through free-space propagation. We aim to make

the various storage cavities q-preserving by installing refractive optics at appropriate positions. To calculate

the output beam parameters, we use:

$$q_{\mathrm{out}} = \frac{A \cdot q_{\mathrm{in}} + B}{C \cdot q_{\mathrm{in}} + D}, \qquad (7.4)$$

$$\omega_{\mathrm{out}} = \sqrt{\frac{-\lambda}{\pi \cdot \mathrm{Im}(1/q_{\mathrm{out}})}}, \qquad (7.5)$$

$$\frac{1}{R_{\mathrm{out}}} = \mathrm{Re}\!\left(\frac{1}{q_{\mathrm{out}}}\right), \qquad (7.6)$$

where ωout is the waist of the beam at the output, Rout is the ROC of the beam at the output, and A, B, C, and D characterize the transfer matrix of the optical system, as introduced in Chapter 6. Appendix D details the transfer matrices we used to model our system for initial placement of lenses.
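These relations take only a few lines to implement. The sketch below is our own illustration (using the 705-nm wavelength and 1.44-mm waist from the text, not the actual modeling code from Appendix D); it propagates a q parameter through free space and recovers the expected √2 growth of the beam radius after one Rayleigh range:

```python
import math

LAMBDA = 705e-9  # operating wavelength from the text (m)

def q_at_waist(w0):
    """Complex beam parameter at a waist of radius w0 (purely imaginary there)."""
    return 1j * math.pi * w0**2 / LAMBDA

def propagate(q_in, abcd):
    """Transform q through a system with the given ABCD matrix (Eq. 7.4)."""
    (A, B), (C, D) = abcd
    return (A * q_in + B) / (C * q_in + D)

def beam_radius(q):
    """Beam radius recovered from q (Eq. 7.5)."""
    return math.sqrt(-LAMBDA / (math.pi * (1 / q).imag))

def free_space(d):
    return ((1.0, d), (0.0, 1.0))

# Check against Eq. 7.3: one Rayleigh range from the waist, the radius
# should have grown by a factor of sqrt(2).
w0 = 1.44e-3                    # the 1.44-mm waist chosen in the text
z_r = math.pi * w0**2 / LAMBDA  # Rayleigh range
q = propagate(q_at_waist(w0), free_space(z_r))
print(beam_radius(q) / w0)  # ~1.414
```

A cavity is "q-preserving" in this language when the composed ABCD matrix of one round trip maps the input q back onto itself.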

7.2.2 Experimental implementation

While we have the tools to extract power and photon-number measurements for a single pulse of light, it is much harder to directly measure waist and radius of curvature for each pulse. Tools we typically use to measure these parameters are a Shack-Hartmann Wavefront Sensor (Thorlabs WFS40-7A4) and a Scanning-

Slit Optical Beam Profiler (Thorlabs BP209-VIS). A wavefront sensor uses a lenslet array to measure the local tilts of the wavefront across the beam, while the beam profiler uses knife-edge measurements to determine the spatial intensity distribution at several longitudinal positions along the beam, and thus the size and location of the incoming beam. However, both devices were found unable to reliably make these measurements on the < 6-ns pulses, supplied at a 10 kHz rate (the fastest repetition rate and longest pulse with which we could still reliably operate our pulsing system). However, we found that a CCD camera (FLIR CM3-U3-13S2C-CS) could integrate our signal to produce a reproducible intensity profile we could save as an image for analysis.

We analyzed these images using a Gaussian fit in Matlab to determine how the x- and y-radii (waists) varied per loop. We then iteratively moved lenses until we minimized the overall waist modulation over ten cycles. Sample beam waist measurements are shown in Appendix E, along with a detailed description of the iteration procedure we used. We performed this procedure for both the first and second loops, with our best fiber-coupled transmission measurements shown in Figure 7.4. Beams were collected with a

CFC-11X-B adjustable collimation package with an SM600 fiber.
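A minimal version of the image analysis can be sketched as follows. This is our own illustration using a second-moment (D4σ-style) estimate on a synthetic profile, rather than the Matlab Gaussian fit described above:

```python
import numpy as np

def d4sigma_radius(intensity, pixel_size):
    """Beam radius from a 1-D intensity profile via the second-moment
    (D4-sigma) definition, w = 2*sigma; for a true Gaussian this equals
    the 1/e^2 radius."""
    x = np.arange(len(intensity)) * pixel_size
    total = intensity.sum()
    mean = (x * intensity).sum() / total
    var = ((x - mean) ** 2 * intensity).sum() / total
    return 2.0 * np.sqrt(var)

# Synthetic Gaussian profile with a known 120-pixel 1/e^2 radius:
pixels = np.arange(1000)
w0 = 120.0
profile = np.exp(-2.0 * ((pixels - 500.0) / w0) ** 2)
print(d4sigma_radius(profile, 1.0))  # ~120
```

In practice a 2D fit (or a moment calculation on each axis of the camera image) gives the x- and y-radii tracked per loop.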


Figure 7.4: Fiber-coupled transmission measurements for the 12.5-ns (a) and 125-ns (b) loops of the optical quantum memory. The best exponential fit for the 12.5-ns loop gives a transmission of 94.8% per cycle. The best exponential fit for the 125-ns loop gives a transmission of 96.0% per cycle, but the fit is very poor, and it is likely that the pulses stored for 250 and 375 ns matched the fiber’s input mode better than the pulse stored for 125 ns.

7.3 Memory Fidelity

7.3.1 Quantum process tomography

To verify that the performance of the system matches our intentions, we make a quantum process tomography measurement to determine the χ fidelity of the memory on the incident quantum states [83, 84]. This measurement uses known input quantum states to probe a system; for each input we perform a full quantum state tomography of the output [84]. Specifically, we engineered our entire memory (including transducer) to act as a σx matrix:

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$

effectively flipping the horizontal and vertical polarization states, as well as left and right circularly polarized

states. Diagonal and anti-diagonal will remain unchanged (apart from a global phase) by this process. This

operation can ultimately be corrected by placing a HWP at the output, which would force our system to

behave close to an identity matrix, the more desirable behavior for quantum memories.
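The χ fidelity against the target σx process can be illustrated in a few lines. The "measured" matrix below is a hypothetical depolarized example of our own construction, not data from the experiment:

```python
import numpy as np

# chi matrices expressed in the Pauli basis {I, X, Y, Z}; an ideal
# sigma_x process is the rank-1 projector onto X.
chi_ideal = np.zeros((4, 4), dtype=complex)
chi_ideal[1, 1] = 1.0

# Hypothetical "measured" process: mostly sigma_x plus a little white noise.
p = 0.97
chi_meas = p * chi_ideal + (1 - p) * np.eye(4) / 4

# For a unitary target, the chi (process) fidelity reduces to Tr(chi_ideal chi_meas).
fidelity = float(np.real(np.trace(chi_ideal @ chi_meas)))
print(fidelity)  # ~0.9775
```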

7.3.2 Results

To perform the process tomography, we use a polarizing beam-splitter, quarter-wave-plate, and half-wave-

plate (see Figure 7.2). One set of these three optics is placed at the beginning of our setup to prepare the

required input states. Another set is placed at the end of our system for quantum state tomography. We

send a complete set of input states (36 measurements) through our system and measure ρout for each. By comparing the output states to the input states, we can reconstruct the operator that acts

on our states using a maximum likelihood estimation (MLE) technique. As stated in the previous section,

we engineer our memory to act as a σx matrix. The χ fidelity represents how closely our system’s operator matches this matrix. Appendix F provides a more detailed description of our analysis, as well as additional measurement matrices. To get an error estimate, we take up to ten sets of 36 measurements (6 input states overlapped with 6 output states) using a UQD-Logic-16 timetagger with a correlation window of 1.5 ns.

Our first step was to measure the χ fidelity of the transducer itself. Ten measurements produced an average χ fidelity of 0.978 ± 0.002. We then measured our total system’s χ fidelity (transducer with optical delay line) with storage times up to 1.25 µs. For our shortest storage time (12.5 ns), we measured a total

χ fidelity of 0.976 ± 0.001. For ten cycles through the 2nd loop (corresponding to a storage time of 1.25

µs), we measured a total χ fidelity of 0.975 ± 0.007. The χ fidelities of the quantum memory are within error bars of the measured χ fidelity of the transducer itself; when measuring the fidelity of the optical memory against the transducer alone, we found a χ fidelity of 0.998 ± 0.001, suggesting that the limiting factor is the polarization optics of the transducer rather than the memory itself. Figure 7.5 shows a graphical depiction of the reconstructed χ matrices for the transducer and for 10 cycles through the 2nd loop, as well as the fidelity matrix of the quantum memory to the transducer itself. Appendix F gives a more detailed description of our process tomography analysis.

7.4 Additional work and future directions

7.4.1 Storage of orbital angular momentum modes

In previous chapters, we have emphasized the competitive time-bandwidth product of our memory, as well as its high process fidelity for storing polarization- and time-bin-encoded qubits. However, in Chapter 5, we mentioned the additional capability to store orbital angular momentum modes of light. The orbital angular momentum (OAM) of light is the component of angular momentum of a light beam that depends on the field's spatial distribution rather than on the polarization [85] (the "spin" angular momentum). As a qubit, OAM modes can encode information in the Hermite-Gaussian and Laguerre-Gaussian spatial modes, as shown in Figure 7.6.

It is worth noting that, in addition to forming qubits, OAM modes can act as qudits, each of which can be in a superposition of more than two states. The OAM qubits we measured in our work are characterized by superpositions of m = ±1, where m is an integer representing the helicity of the beam structure. However,


Figure 7.5: The real part of process tomography matrices for transducer (left) and 10 cycles through the 2nd loop (right). Measured χ fidelity of transducer is 0.976 ± 0.001, while measured χ fidelity of 2nd loop is 0.975 ± 0.007. c) Measured χ fidelity of quantum optical memory to transducer is 0.998 ± 0.001.

Figure 7.6: Poincaré sphere for qubit basis modes in the OAM parameter space. |R⟩ and |L⟩ are Laguerre-Gauss modes, while |H⟩, |V⟩, |D⟩, and |A⟩ are Hermite-Gauss modes (superpositions of Laguerre-Gauss states). Pictures of OAM modes taken from [85] (Creative Commons License).

|m| can take on values > 1, leading to a much larger Hilbert space to explore. A quantum memory's ability to store such modes could provide a real advantage for exchanging information in quantum networks, where encoding more information in each photon could result in improved key distribution rates [86].

In a separate experiment (carried out by a fellow graduate research student, Spencer Johnson), a quantum process tomography was performed on OAM mode qubits using a similar technique to that used in our measurements in Section 7.3. The qubits passed through a Pockels cell-PBS “switch” very similar to the one used in our optical quantum memory, with a measured process fidelity of 0.998 to that of an identity matrix.

Qubits were produced and measured with a spatial light modulator (Holoeye Pluto-2) to project into and out of particular orbital angular momentum states. An exciting next step would be to inject orbital angular momentum modes into our quantum memory. It is an open question as to how the Herriott cells in the 2nd and 3rd loop might potentially transform or degrade the spatial mode information.

7.4.2 Storage of hyperentanglement

Previous work in our lab has demonstrated creation, measurement, and (superdense) teleportation of photon pairs [87] that are hyperentangled in polarization and orbital angular momentum. As we detailed in the previous section, we are exploring the possibility of storing OAM modes in the 2nd and 3rd loops of our quantum memory. Even if we are unable to store in the higher loops, we can almost certainly demonstrate storage of polarization, time-bin, and OAM modes in our first loop and would hope to extend previous work

by storing hyperentangled qubits in our broadband quantum memory.

7.4.3 Summary and conclusions

In this chapter we discussed our efforts to control the spatial mode of our photonic qubit so as to interface with a polarization-to-time-bin transducer. We measured a process fidelity for our system > 97% and discussed potential future directions for high-fidelity storage of photonic qubits in OAM modes.

Chapter 8

Conclusions

In this thesis, we presented our work towards answering two separate research questions in quantum optics.

First, we asked whether or not humans could detect single photons. Towards answering this question, we developed a high-efficiency single-photon source and explored the effects of training on a person's ability to detect weak light stimuli. Second, we asked if we could build a quantum memory that surpasses the time-bandwidth product of current memories by multiple orders of magnitude. We answered this question with a confident affirmative, contributing critical next steps towards the development of quantum memories.

While these two questions may seem distinct in concept, the methods needed to answer them have substantial overlap. Both experiments required development of a high-performing light source specific to our measurement task (a single-photon source for our vision experiment and a fast pulse laser diode circuit for our quantum memory). Both experiments relied on either characterization of, or measurement through, single-photon detectors. In all of our work, we were able to increase efficiency in our devices through careful manipulation of our photon's spatial mode.

Beyond just method overlap, though, the two experiments also have the potential to advance the per- formance and practical applications of the other. Developing a high-efficiency single-photon source requires quantum memory to enable synchronization between signal and idler photons; while our particular source uses a fiber delay line memory, a Herriott cell (such as the one in our optical quantum memory) could be used to achieve an increased overall transmission efficiency, which would in turn lead to increased statistical power in our vision experiment. Furthermore, the multimode capacity of our optical quantum memory allows for potential multiplexing that could also increase the efficiency of our single-photon source [63]. As a separate example, an advanced stage of the vision experiment could employ single-photon quantum superposition states, precisely the sort one might want to store in a quantum memory.

In addition, exploring the human eye (“nature’s photodetector”) and its single-photon detection capabil- ity could provide inspiration for the further development of single-photon detectors. For example, the range of brightness a human eye can detect is much larger than that of a standard lab single-photon detector. For certain quantum applications, a detector’s ability to adjust its dynamic range could perhaps help overcome

environmental effects (or attacks) that attenuate or brighten a light source unexpectedly. Quantum networks could benefit from having access to such a detector, just as they could benefit from a robust quantum memory to contain the light that has been transmitted.

This dissertation opened with an introduction discussing the relationship of quantum information and quantum optics, and how they have developed and informed each other. Moreover, our work has demonstrated how interdisciplinary studies have begun to play a strong role in addressing fascinating questions in quantum information and quantum optics. Our vision work has uniquely brought together the fields of psychology, biology, and quantum physics to provide new insight on the human eye. Our memory work has united physics and electrical engineering to generate the stabilization and pulse-generating circuits necessary for fully realizing our quantum device. Advances in physics are no longer isolated to just the work of physicists

– we are now at a time where scientists and engineers in many areas are making vital contributions across the field. Fundamental research has contributed to commercial applications, and limitations in applications have in turn informed new directions for fundamental research. We hope that our work has helped both in advancing the field and in encouraging important dialogue across multiple disciplines.

Appendix A

Viewing Station Alignment Procedure

1. The observer adjusts the chin rest (x and y position, but not z position) so they can sit comfortably and see a perfectly symmetrical fixation cross. This effectively sets the correct x and y position of the observer's eye, equidistant between the left and right stimulus positions and at the height of the fixation cross.

2. An alignment beam from a 505-nm LED is split equally between the top and bottom fibers. The observer uses the motorized tip-tilt stages (controlled with a modified keyboard) to make very small adjustments to the alignment beam, until the top and bottom sides are clearly visible and appear to be point sources.

3. The observer maintains the alignment throughout an experimental session by keeping the fixation cross centered; if they wish to sit back for a break from the task, accurate alignment can be restored by returning to the chin rest and visually re-centering the cross.

Appendix B

Memory Setup

Figure B.1: Optics table with all components used for the memory.

A detailed schematic of the quantum memory setup is shown in Figure B.1. The different delay loops are connected by a two-lens 4f imaging system. All components used are listed in Table B.1.

Parts | Loop 1 | Loop 2 | Loop 3 | Interconnection 1 to 2 | Interconnection 2 to 3
Irises | I0.1, I0.2, I1.1–I1.6 | I2.1–I2.6 | I2.1–I2.4 | I3.1–I3.4 | I5.1–I5.4
Mirrors | M1.1–M1.8 | M2.1–M2.9 | M3.1–M3.6, M3.9–M3.11 | M4.1–M4.4 | M5.1–M5.5
Lenses | L1.1, L1.2 | L2.1–L2.3 | L3.1, L3.2 | L4.1, L4.2 | L5.1, L5.2
Polarizers | PBS1 | PBS2, PBS3 | PBS3, PBS2 | — | —
Pockels cell | PC1 | PC2 | PC2 | — | —

Table B.1: Components used in the quantum memory.

Appendix C

Triggering the Pockels Cells for the Optical Memory

Figure C.1 displays pulses of both Pockels cells as well as the different pulses of each channel (A, B, C, D). The voltage sequence of a Pockels cell is determined by the switching of two push-pull switches of the driver head, which apply a high voltage (+HV) to the Pockels cell. Channels A and B correspond to on and off switches for applying +HV; channels C and D correspond to applying −HV. Both +HV and −HV correspond to the same operator on the input light (in this case, a π/2 rotation around the H-V axis).

Switching through the first memory loop takes two pulses at PC1. Incoming pulses are triggered at 0.2 µs. The A On and A Off switches are held fixed at A On (A1) = 0.956 or 0.1 µs and A Off (B1) = A1 + n*0.0125407 µs for the cases with and without the transducer, respectively. B On (C1) and B Off (D1) must be moved synchronously, as they are responsible for coupling out of the loop: C1 = A1 + 0.099 + n*0.0125407 µs and D1 = C1 + 0.1 µs (leading to a total pulse width of 0.1 µs), where n = 0...9 for each cycle stored in the 1st loop. The delay generator produces two output pulses to input into the splitter box, A XOR B and C XOR D. The splitter box converts these pulses into safe A ON, A OFF, B ON, and B OFF pulses.

Switching through the second and third loops takes a total of three pulses at PC2. Because of this, we need to make use of the Inhibit function – an internal pulse which starts the delay counters of the individual channels. While the overall memory trigger runs anywhere from 1–10 kHz, the Inhibit pulse is modified according to the amount of time we want to store a pulse in our third loop. To skip traversing the 2nd and 3rd loops, A On (A2) = C1 + 0.0083 + 0.013 + 0.06 µs and A Off (B2) = A2 + 0.125407 µs. The extra 0.013 + 0.06 µs ensures that all Pockels cell pulses are delayed enough to avoid storing the optical pulse. We also ensure this by setting the inhibit trigger I = 1.0 − 0.125407 + 1.25407 µs (delaying the inhibit trigger by > 2 µs).

To store in just the 2nd loop, while skipping the 3rd loop, C1 = A1 + 0.099 + n*0.125407 µs, A2 = C1 + 0.0083 µs, and B2 = A2 + n*0.125407 µs, where n = 0...9 for each cycle stored in the 2nd loop (no 0.013 µs delay). C2 = A2 + 0.011 µs, D2 = B2 + 0.011 µs, and I = 1.0 + n*0.125407 + 1.25407 µs.

To store in just the 3rd loop, skipping the 2nd loop, A2 = C1 + 0.0083 + 0.013 µs. The extra 0.013 µs ensures the Pockels cell will not fire and switch the optical pulse into the 2nd loop. B2 = A2 + 0.125407 µs. C2 = A2 + 0.011 µs and D2 = B2 + 0.011 + 0.02 µs, switching the optical pulse into the 3rd loop. I = 1.0 − 0.125407 + n*1.25407 µs, where n = 0...9 for the number of cycles we are storing in Loop 3. The A2 and B2 pulses are triggered once again on the inhibit trigger, with A2 = C1 + 0.0083 + 0.360 + 0.013 µs and B2 = A2 + 0.125407 + 0.4 µs. C2 and D2 are pulsed at the same time, since no additional pulse is needed.

Figure C.1: Trigger scheme for memory operation.

To store in the 2nd and 3rd loops, A2 = C1 + 0.0083 µs, no longer needing the additional 0.013 µs since we will be entering the 2nd loop. The only other modification is to add an additional 0.0125407 µs to the Inhibit trigger, so I = 1.0 − 0.125407 + n*0.125407 + m*1.25407 µs, where n, m = 0...9; n is the number of cycles we store in Loop 2 and m is the number of cycles we store in Loop 3.
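The delay arithmetic above can be collected into a small helper for the combined 2nd-and-3rd-loop case. The sketch below is our own (hypothetical function and variable names; all times in µs), following the formulas quoted in this appendix:

```python
# PC2 timing for storing n cycles in Loop 2 and m cycles in Loop 3.
# Constants follow the formulas in this appendix; names are our own.

LOOP2_CYCLE = 0.125407   # Loop 2 round-trip time (us)
LOOP3_CYCLE = 1.25407    # Loop 3 round-trip time (us)

def pc2_times(c1, n, m):
    """Return (A2, B2, inhibit) delays in microseconds."""
    a2 = c1 + 0.0083                 # couple into Loop 2 (no 0.013 us skip delay)
    b2 = a2 + n * LOOP2_CYCLE        # release after n Loop-2 cycles
    inhibit = 1.0 - LOOP2_CYCLE + n * LOOP2_CYCLE + m * LOOP3_CYCLE
    return a2, b2, inhibit

# e.g. 3 cycles in Loop 2 and 2 cycles in Loop 3, for C1 = 1.0 us:
a2, b2, inhibit = pc2_times(1.0, 3, 2)
```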

Appendix D

Transfer Matrices for Modeling Quantum Memory

To initially place our lenses, we modeled our system using ray transfer matrices. To determine lens placement, we fixed our total optical path length while letting the distance between two of our lenses vary. Focal lengths of lenses were calculated using the index of refraction for N-BK7 at a wavelength of 705 nm (n0 = 1.455).

We optimized our system by minimizing the absolute difference between qout and qin (see Section 7.2.1 for a deeper discussion of q).

D.1 Loop 1

For our first loop, we modeled our system with:

$$M_1 = \begin{pmatrix} 1 & OL_1 - X_1 - X_2 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{100}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & X_2 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{100}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & X_1 \\ 0 & 1 \end{pmatrix}$$

with OL1 = 12.540757 ns · c0 = 3.75962 m, and f100 = 1.00409 m. Here c0 is the speed of light. Optimization produced X1 = 0.894084 m and X2 = 1.95146 m. Calculations can be found in the Mathematica file "CalculateLens1 MichelleVersion.nb".
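The product above is easy to evaluate numerically as a cross-check. The sketch below (helper names are our own) composes the Loop 1 ABCD matrices with the quoted values, verifies the loop supports a stable Gaussian mode, and solves for the self-consistent q parameter:

```python
import numpy as np

# ABCD-matrix sketch of Loop 1, using the values quoted above.

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

OL1, f100 = 3.75962, 1.00409          # loop length and focal length (m)
X1, X2 = 0.894084, 1.95146            # optimized lens positions (m)

# Matrices compose right-to-left in propagation order, matching Section D.1.
M = free_space(OL1 - X1 - X2) @ thin_lens(f100) @ free_space(X2) \
    @ thin_lens(f100) @ free_space(X1)

(A, B), (C, D) = M
assert abs(np.linalg.det(M) - 1) < 1e-4   # ABCD matrices are unimodular
assert abs(A + D) / 2 < 1                 # |Tr M|/2 < 1: loop supports a stable mode

# Self-consistent Gaussian mode: q solves C q^2 + (D - A) q - B = 0.
q = np.roots([C, D - A, -B])
q = q[q.imag > 0][0]                      # physical root: Im(q) = Rayleigh range > 0
lam = 705e-9                              # operating wavelength (m)
w0 = np.sqrt(lam * q.imag / np.pi)        # waist of the matched mode (m)
print(f"matched waist ~ {w0 * 1e6:.0f} um")
```

The same three helpers extend directly to the Loop 2 and Loop 3 products by raising the Herriott cell sub-block to the appropriate power with `np.linalg.matrix_power`.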

D.2 Loop 2

The transfer matrix for the second loop is given by:

$$M_2 = \begin{pmatrix} 1 & OL_2 - 37\, l_{HC2} - X_1 - X_2 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{40}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & X_2 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{40}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & X_1 \\ 0 & 1 \end{pmatrix} \cdot \left[ \begin{pmatrix} 1 & l_{HC2} \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{2}{r_{HC2}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & l_{HC2} \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{2}{r_{HC2}} & 1 \end{pmatrix} \right]^{18} \cdot \begin{pmatrix} 1 & l_{HC2} \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & d_{HC2} - d_{L21} \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{100}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & d_{L21} \\ 0 & 1 \end{pmatrix}$$

with OL2 = 125.40757 ns · c0 = 37.5962 m, dL21 = 0.438 m, dHC2 = 0.922 m, lHC2 = 0.913 m, rHC2 = 1 m, f100 = 1.00409 m, and f40 = 0.401664 m. Optimizing gives X1 = 0.75 m and X2 = 0.959 m. Calculations can be found in the Mathematica file "CalculateLens2 MichelleVersion.nb".

D.3 Loop 3

The transfer matrix for the third loop is given by:

$$M_3 = \begin{pmatrix} 1 & OL_3 - 340\, l_{HC3} - X_1 - X_2 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{50}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & X_2 \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{50}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & X_1 \\ 0 & 1 \end{pmatrix} \cdot \left[ \begin{pmatrix} 1 & l_{HC3} \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{2}{r_{HC3}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & l_{HC3} \\ 0 & 1 \end{pmatrix} \right]^{170} \cdot \begin{pmatrix} 1 & d_{HC3} - d_{L31} \\ 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 \\ -\frac{1}{f_{75}} & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & d_{L31} \\ 0 & 1 \end{pmatrix}$$

with OL3 = 1254.0757 ns · c0 = 375.962 m, dHC3 = 0.76851 m, lHC3 = 1.098 m, rHC3 = 3.0 m, f50 = 0.502 m, and f75 = 0.752995 m. Optimization produces X1 = 0.371 m and X2 = 1.19 m. Calculations can be found in "CalculateLens3 MichelleVersion.nb".

Appendix E

Sample Beam Waist Measurements for Optimizing Lens Placement

A fair amount of time was spent optimizing lens placement for the 1st and 2nd loops of the quantum memory. Simulations from Appendix D gave informed initial estimates for lens placement, but variations in optical tolerance and optical path length necessitated some iterative movement and measurement strategies to minimize the variance in spatial mode per loop. To do this, we performed an iterative procedure of adjustments between two lenses in the loop, since beam waist and radius of curvature (ROC) can be fully controlled by the inter-lens distance, as well as the lenses' central placement within the loop. We alternated between adjusting the inter-lens distance until the overall beam waist variation was minimized, then moving both lenses' central location incrementally while maintaining the inter-lens distance. At the new central location, we re-optimized the inter-lens distance and compared with the optimal beam waist measurements from the previous position. If the overall variance was worse, we moved our lenses in the opposite direction; if better, we continued to move our lenses in the same direction, re-optimizing at each new central location. In this way, we found a local minimum in beam variance, near the location from our simulated predictions.
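The alternating procedure described above is essentially a two-parameter coordinate search. The following sketch is purely schematic: `measure` stands in for the camera-based waist-variance measurement, and the step sizes are placeholder values:

```python
def optimize_lens_pair(measure, spacing, center, d_spacing=0.005,
                       d_center=0.01, n_scan=20):
    """Return (spacing, center, variance) at a local minimum of `measure`.

    `measure(spacing, center)` stands in for the beam-waist-variance
    measurement; step sizes here are illustrative placeholders.
    """
    def scan(s0, c):
        # 1D scan of the inter-lens distance about s0 at central position c
        cands = [s0 + k * d_spacing for k in range(-n_scan, n_scan + 1)]
        return min(cands, key=lambda s: measure(s, c))

    spacing = scan(spacing, center)
    var = measure(spacing, center)
    direction = +1
    while True:
        c_new = center + direction * d_center
        s_new = scan(spacing, c_new)        # re-optimize spacing at new center
        v_new = measure(s_new, c_new)
        if v_new < var:                     # better: keep moving the same way
            center, spacing, var = c_new, s_new, v_new
        elif direction == +1:
            direction = -1                  # worse: try the opposite direction once
        else:
            return spacing, center, var     # worse both ways: local minimum
```

On a smooth, single-minimum variance landscape this terminates at a local minimum, mirroring the reverse-direction rule used in the lab procedure.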

Here, we present a few sample beam waist measurements (taken with a FLIR Chameleon CCD camera) compared with the beam waist as simulated by our transfer matrices. All beam waist measurements were taken approximately 1 meter after the output of the PBS for the loop. All fits were done with the Matlab file "image reader beamwidthfit leasqr batch 1 1.m". All simulations were performed with a 760 µm beam waist placed 10 cm after the PBS.

Figure E.1: 1st loop beam waist output for X1 = 1.889 and X2 = 1.997. Data is shown on left, while simulation from transfer matrix is shown on right.

Figure E.1 shows the results from a lens configuration that results in wide divergence (as demonstrated by a steadily increasing beam waist at the output from our simulation). Our data shows a plateau in beam waist after the twelfth cycle – this is likely due to clipping, either at the Pockels cell driver aperture or at the half-mirror stationed at the output for pick-off of our stabilization laser.


Figure E.2: 1st loop beam waist output for X1 = 1.889 and X2 = 1.987. Data is shown on left, while simulation from transfer matrix is shown on right.

Figure E.2 shows our beam waist output for a lens placement where the beam perhaps still suffered from a bit of divergence, but less so than in Figure E.1. We can see there is considerable variance between the smooth curves of our Mathematica simulations and our measurements, which have a more peaked profile.

Beyond imperfect modeling, our measurements might suffer from interference effects on our CCD camera.

In addition, if any light leaks through from previous loops, we may find an increase in our measured beam waist, particularly if misalignments widen the overall profile in a particular direction.

Figure E.3 shows our beam waist output for our final lens placement, leading to an overall fiber-coupled transmission of 94.8% per loop. We chose this lens configuration due to its overall minimal change in beam waist size over ten cycles. Indeed, of the three lens configurations shown, this configuration is closest to the optimal lens arrangement, predicted by our simulations.

Figure E.4 compares data from the current lens configuration of Loop 2. Simulations are likely further from experiment, due to a ∼1–3% manufacturing tolerance in the radius of curvature of the mirrors. In addition, we did not simulate the drilled holes in the Herriott cells, which could cause clipping if the beam waist becomes too wide at that point. After minimizing the variance in beam waist, we found peak free-space transmission was achieved with a slightly off-axis alignment, suggesting the beam indeed suffered from clipping when aligned on-axis.


Figure E.3: 1st loop beam waist output for X1 = 1.839 and X2 = 1.960. Data is shown on left, while simulation from transfer matrix is shown on right.


Figure E.4: 2nd loop beam waist output for X1 = 0.792 and X2 = 0.9345. Data is shown on left, while simulation from transfer matrix is shown on right.

Figure E.5 shows sample fits for our optimized Loop 1 path, fit with a smoothing Gaussian function from the Matlab library.

Figure E.5: Gaussian fits and CCD camera images for four cycles through the first loop. Fits were performed with the Matlab function imgaussfilt, which applies a Gaussian smoothing filter with a standard deviation of 2.

Appendix F

Standard Process Tomography

For our quantum memory, we performed a standard quantum process tomography (SQPT) on our first two loops. SQPT allows one to determine the effect of an unknown operation on a quantum state. In this case, we have a particular operation we'd like to perform (a σx matrix), and we would like to measure the fidelity of our system to that particular process. To perform a process tomography, we perform a standard tomography on a complete set of input polarization states (H, V, D, A, L, R). As discussed in Ref. [84], one may use these tomographies to calculate the process matrix χ, which describes the behavior of the system under study. Essentially, we can represent any operation on a quantum system ρ as a map ρ → ε(ρ), where ε can be represented by:

$$\varepsilon(\rho) = \sum_{mn} \chi_{mn}\, A_m \rho A_n^{\dagger}, \qquad (F.1)$$

where the An are a set of measurement operators (e.g., relating to polarization) which span the space of operators for a given Hilbert space, and the χmn are the elements of the χ matrix. In this way, χ is built up as a representation of ε with respect to the measurements we have done. A perfect σx process is given by:

  0 0 0 0     0 1 0 0 χ =   . (F.2) σx   0 0 0 0     0 0 0 0

We can define the process fidelity

$$F = \frac{\left(\mathrm{Tr}\left[\sqrt{\sqrt{\chi_{\mathrm{meas}}}\,\chi_{\mathrm{target}}\sqrt{\chi_{\mathrm{meas}}}}\right]\right)^{2}}{\mathrm{Tr}[\chi_{\mathrm{meas}}]\,\mathrm{Tr}[\chi_{\mathrm{target}}]},$$

which measures the similarity between two processes [88]. Perfect agreement between matrices would yield a process fidelity of 1; a process fidelity of 0 reflects complete orthogonality. To reconstruct the χ matrix for n cycles in our memory, we take ∼10 sets of 36 measurements and use maximum likelihood estimation to fit a χ matrix to each set. By constraining the symmetries of our reconstructed χ matrix, we ensure physicality. Below, we show the real and imaginary χ matrices and process fidelities we produced for the first and second loops.
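The fidelity definition above is straightforward to evaluate numerically. The sketch below is our own helper (with a made-up, nearly ideal χ matrix purely for illustration); it uses an eigendecomposition-based matrix square root, since χ matrices are Hermitian and positive semidefinite:

```python
import numpy as np

def psd_sqrt(m):
    # matrix square root of a Hermitian positive-semidefinite matrix
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 0.0, None)          # clip tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def process_fidelity(chi_meas, chi_target):
    root = psd_sqrt(chi_meas)
    inner = psd_sqrt(root @ chi_target @ root)
    return np.real(np.trace(inner)) ** 2 / (
        np.trace(chi_meas).real * np.trace(chi_target).real)

chi_sx = np.zeros((4, 4))
chi_sx[1, 1] = 1.0                           # ideal sigma_x process, as in Eq. (F.2)

# illustrative "measurement": 98% sigma_x mixed with 2% identity process
chi_meas = 0.98 * chi_sx
chi_meas[0, 0] = 0.02

print(process_fidelity(chi_meas, chi_sx))    # -> 0.98 (up to floating-point error)
```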

Figure F.1: Reconstructed process matrix for 1 cycle through 1st loop. χ fidelity = 0.986 ± 0.001.

Figure F.2: Reconstructed process matrix for 5 cycles through 1st loop. χ fidelity = 0.981 ± 0.002.

Figure F.3: Reconstructed process matrix for 10 cycles through 1st loop. χ fidelity = 0.976 ± 0.004.

Figure F.4: Reconstructed process matrix for 1 cycle through 2nd loop. χ fidelity = 0.981 ± 0.004.

Figure F.5: Reconstructed process matrix for 5 cycles through 2nd loop. χ fidelity = 0.979 ± 0.005.

Figure F.6: Reconstructed process matrix for 10 cycles through 2nd loop. χ fidelity = 0.975 ± 0.007.

Appendix G

Fast Pulse Circuit

Figure G.1: Design of the Fast-Pulse board for short pulse generation.

To be able to generate very short optical laser pulses, Fedor Bergmann designed a board which can be used to gain-switch a standard laser diode. Two trigger inputs are used to switch the laser diode on and off, with the trigger level set by potentiometer RP2. The power input is 24 V. The DC bias can be regulated between 0 V and 5 V using potentiometer RP1. Resistors R15 and R44 can be used to limit the current. The input impedance is set to 100 Ω.

References

[1] A. Einstein. Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt. Annalen der Physik, 322:132–148, 1905. [2] J. Maxwell. A dynamical theory of the electromagnetic field. Philosophical Transactions of the Royal Society of London, 155:459–512, 1865. [3] G. I. Taylor. Interference fringes with feeble light. Proceedings of the Cambridge Philosophical Society, 15:114, 1909. [4] P. G. Kwiat, A. M. Steinberg, and R. Y. Chiao. Observation of a "quantum eraser": A revival of coherence in a two-photon interference experiment. Physical Review A, 45:7728–7739, 1992. [5] J. S. Bell. On the Einstein-Podolsky-Rosen paradox. Physics, 1(3):195–200, 1964. [6] G. Breitenbach, S. Schiller, and J. Mlynek. Measurement of the quantum states of squeezed light. Nature, 387:471–475, 1997. [7] David J. Griffiths. Introduction to Quantum Mechanics (2nd ed.). Prentice Hall, 2004. [8] J. Buchwald and A. Warwick. Histories of the Electron: The Birth of Microphysics. MIT Press, 2001. [9] J. Dostál, T. Mančal, R. Augulis, F. Vácha, J. Pšenčík, and D. Zigmantas. Two-dimensional electronic spectroscopy reveals ultrafast energy diffusion in chlorosomes. Journal of the American Chemical Society, 134:11611–11617, 2012. [10] G. S. Engel, T. R. Calhoun, E. L. Read, T.-K. Ahn, T. Mančal, Y.-C. Cheng, R. E. Blankenship, and G. R. Fleming. Evidence for wavelike energy transfer through quantum coherence in photosynthetic systems. Nature, 446:782–786, 2007. [11] G. Panitchayangkoon, D. Hayes, K. A. Fransted, J. R. Caram, E. Harel, J. Wen, R. E. Blankenship, and G. S. Engel. Long-lived quantum coherence in photosynthetic complexes at physiological temperature. In Proceedings of the National Academy of Sciences, volume 107, pages 12766–12770, 2010. [12] M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik. Environment-assisted quantum walks in photosynthetic energy transfer. Journal of Chemical Physics, 129:174106, 2008.
[13] H. Lee, Y.-C. Cheng, and G. Fleming. Quantum coherence accelerating photosynthetic energy transfer. Springer Series in Chemical Physics: Ultrafast Phenomena XVI, pages 607–609, 2009. [14] Mattia Walschaers, Jorge Fernandez-de-Cossio Diaz, Roberto Mulet, and Andreas Buchleitner. Optimally designed quantum transport across disordered networks. Phys. Rev. Lett., 111:180601, Oct 2013. [15] Jennifer C. Brookes. Quantum effects in biology: golden rule in enzymes, olfaction, photosynthesis and magnetodetection. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473(2201):20160822, 2017.

[16] R. D. Hoehn, D. E. Nichols, H. Neven, and S. Kais. Status of the vibrational theory of olfaction. Frontiers in Physics, 6:25, 2018. [17] L. J. W. Haffenden, V. A. Yaylayan, and J. Fortin. Investigation of vibrational theory of olfaction with variously labelled benzaldehydes. Food Chemistry, 73(1):67–72, 2001. [18] Molecular vibration-sensing component in human olfaction. PLoS One, 8, 2013. [19] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2010. [20] A. Einstein, B. Podolsky, and N. Rosen. Can quantum-mechanical description of physical reality be considered complete? Physical Review, 47:777, 1935. [21] P. W. Shor. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings 35th Annual Symposium on Foundations of Computer Science, pages 124–134, 1994. [22] R. Hanson, J. M. Elzerman, L. H. Willems van Beveren, L. M. K. Vandersypen, and L. P. Kouwenhoven. Electron spin qubits in quantum dots. In IEDM Technical Digest. IEEE International Electron Devices Meeting, 2004, pages 533–536, 2004. [23] C. H. Bennett and G. Brassard. Quantum cryptography: public key distribution and coin-tossing. In Proc. 1984 IEEE International Conference on Computers, Systems, and Signal Processing, pages 175–179, 1984. [24] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Physical Review Letters, 70:1895–1899, 1993. [25] C. H. Bennett and S. J. Wiesner. Communication via one- and two-particle operators on Einstein-Podolsky-Rosen states. Physical Review Letters, 69:2881–2884, 1992. [26] C. H. Bennett, D. P. DiVincenzo, and J. A. Smolin. Capacities of quantum erasure channels. Physical Review Letters, 78:3217, 1997. [27] Y. Zhang, Z. Li, Z. Chen, C. Weedbrook, Y. Zhao, X. Wang, Y. Huang, C. Xu, X. Zhang, Z. Wang, M. Li, X. Zhang, Z. Zheng, B. Chu, X. Gao, N. Meng, W. Cai, Z. Wang, G. Wang, S. Yu, and H. Guo. Continuous-variable QKD over 50 km commercial fiber. Quantum Science and Technology, 4:035006, 2019. [28] D. Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. In Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, volume 400, pages 97–117, 1985. [29] L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing – STOC '96, pages 212–219, 1996. [30] R. Babbush, P. Love, and A. Aspuru-Guzik. Adiabatic quantum simulation of quantum chemistry. Scientific Reports, 4, 2014. [31] N. Moll, P. Barkoutsos, L. S. Bishop, J. M. Chow, A. Cross, D. J. Egger, S. Filipp, A. Fuhrer, J. M. Gambetta, M. Ganzhorn, A. Kandala, A. Mezzacapo, P. Müller, W. Riess, G. Salis, J. Smolin, I. Tavernelli, and K. Temme. Quantum optimization using variational algorithms on near-term quantum devices. Quantum Science and Technology, 3(3):030503, 2018. [32] D. M. Schneeweis and J. L. Schnapf. Photovoltage of rods and cones in the macaque retina. Science, 268:1053–1056, 1995. [33] S. Hecht, S. Shlaer, and M. H. Pirenne. Energy, quanta, and vision. The Journal of General Physiology, 25:819–840, 1942.

[34] H. A. van der Velden. The two-quanta hypothesis as a general explanation for the behavior of threshold values and visual acuity for the several receptors of the human eye. Journal of the Optical Society of America, 38:570–581, 1948. [35] G. N. Lewis. The conservation of photons. Nature, 118:874–875, 1926. [36] R. J. Glauber. Coherent and incoherent states of the radiation field. Physical Review, 131(6):2766–2788, 1963. [37] N. M. Phan, M. F. Cheng, D. A. Bessarab, and L. A. Krivitzky. Interaction of fixed number of photons with retinal rod cells. Physical Review Letters, 112:213601, 2014. [38] J. N. Tinsley, M. I. Molodtsov, R. Prevedel, D. Wartman, J. Espigulé-Pons, M. Lauwers, and A. Vaziri. Direct detection of a single photon by humans. Nature Communications, 7:12172, 2016. [39] W. Hagins, R. Penn, and S. Yoshikami. Dark current and photocurrent in retinal rods. Biophysical Journal, 10:380–412, 1970. [40] T. M. Vuong, M. Chabre, and L. Stryer. Millisecond activation of transducin in the cyclic nucleotide cascade of vision. Nature, 311(5987):659–661, 1984. [41] F. Rieke and D. A. Baylor. Single-photon detection by rod cells of the retina. Reviews of Modern Physics, 70(3):1027–1036, 1998. [42] B. Sakitt. Counting every quantum. The Journal of Physiology, 223:131–150, 1972. [43] R. Holmes, M. Victora, R. Wang, and P. G. Kwiat. Measuring temporal summation in visual detection with a single-photon source. Vision Research, 140:33–43, 2017. [44] G. R. Jackson, C. Owsley, and G. McGwin Jr. Aging and dark adaptation. Vision Research, 39:3975–3982, 1999. [45] D. A. Baylor, T. D. Lamb, and K. W. Yau. Responses of retinal rods to single photons. The Journal of Physiology, pages 613–634, 1979. [46] D. A. Baylor, B. J. Nunn, and J. L. Schnapf. The photocurrent, noise and spectral sensitivity of rods of the monkey Macaca fascicularis. The Journal of Physiology, 357:575–607, 1984. [47] H. B. Barlow. Retinal noise and absolute threshold. Journal of the Optical Society of America, 46:634–639, 1956. [48] M. A. Bouman. History and present status of quantum theory in vision, 2012. [49] W. P. Tanner and J. A. Swets. A decision-making theory of visual detection. Psychological Review, 61(6):401–409, 1954. [50] M. D. Eisaman, J. Fan, A. Migdall, and S. V. Polyakov. Invited review article: Single-photon sources and detectors. Review of Scientific Instruments, 82:071101, 2011. [51] Single-photon detector and calibration. In Single-Photon Generation and Detection, volume 45. Academic Press, 2013. [52] D. C. Burnham and D. L. Weinberg. Observation of simultaneity in parametric production of optical photon pairs. Physical Review Letters, 25(2):84–87, 1970. [53] Robert W. Boyd. Nonlinear Optics. Academic Press, 2019. [54] C. K. Hong and L. Mandel. Experimental realization of a localized one-photon state. Physical Review Letters, 56(1):58–60, 1986. [55] Our research group.

[56] R. M. Holmes. Testing the limits of human vision with quantum states of light. PhD thesis, University of Illinois at Urbana-Champaign, April 2017.
[57] C. C. Gerry and P. L. Knight. Quantum coherence functions. Chapter 5, pages 115–134. Cambridge University Press, Cambridge, 2005.
[58] H. Davson, editor. The Eye, volume 2. Academic Press, London, 1962.
[59] L. T. Sharpe, A. Stockman, C. C. Fach, and U. Markstahler. Temporal and spatial summation in the human rod visual system. Journal of Physiology, 463:325–348, 1993.
[60] H. B. Barlow. Temporal and spatial summation in human vision at different background intensities. Journal of Physiology, 141(2):337–350, 1957.
[61] E. Knill, R. Laflamme, and G. J. Milburn. A scheme for efficient quantum computation with linear optics. Nature, 409:46–52, 2001.
[62] P. Walther, K. J. Resch, T. Rudolph, E. Schenck, H. Weinfurter, V. Vedral, M. Aspelmeyer, and A. Zeilinger. Experimental one-way quantum computing. Nature, 434:169–176, 2005.
[63] F. Kaneda and P. G. Kwiat. High-efficiency single-photon generation via large-scale active time multiplexing. Science Advances, 5(10), 2019.
[64] C. Jones, D. Kim, M. T. Rakher, P. G. Kwiat, and T. D. Ladd. Design and analysis of communication protocols for quantum repeater networks. New Journal of Physics, 18, 2016.
[65] S. E. Harris, J. E. Field, and A. Imamoğlu. Nonlinear optical processes using electromagnetically induced transparency. Physical Review Letters, 64(10):1107–1110, 1990.
[66] T.-S. Yang, Z.-Q. Zhou, Y.-L. Hua, X. Liu, Z.-F. Li, P.-Y. Li, Y. Ma, C. Liu, P.-J. Liang, X. Li, Y.-X. Xiao, J. Hu, C.-F. Li, and G.-C. Guo. Multiplexed storage and real-time manipulation based on a multiple degree-of-freedom quantum memory. Nature Communications, 9(3407), 2018.
[67] L. Ma, O. Slattery, and X. Tang. Optical quantum memory based on electromagnetically induced transparency. Journal of Optics, 19(4):043001, 2017.
[68] Y.-F. Hsiao, P.-J. Tsai, H.-S. Chen, S.-X. Lin, C.-C. Hung, C.-H. Lee, Y.-H. Chen, Y.-F. Chen, I. A. Yu, and I.-C. Chen. Highly efficient coherent optical memory based on electromagnetically induced transparency. Physical Review Letters, 120(18):183602, 2018.
[69] K. T. Kaczmarek, P. M. Ledingham, B. Brecht, S. E. Thomas, G. S. Thekkadath, O. Lazo-Arjona, J. H. D. Munns, E. Poem, A. Feizpour, D. J. Saunders, J. Nunn, and I. A. Walmsley. High-speed noise-free optical quantum memory. Physical Review A, 97:042316, 2018.
[70] R. M. Macfarlane and R. M. Shelby. Coherent transient and holeburning spectroscopy of rare earth ions in solids, volume 6, chapter 3, pages 51–178. Elsevier Science Publishers, 1987.
[71] W. Tittel, M. Afzelius, R. L. Cone, T. Chanelière, S. Kröll, S. A. Moiseev, and M. Sellars. Photon-echo quantum memory, 2008.
[72] A. Holzäpfel, J. Etesse, K. T. Kaczmarek, A. Tiranov, N. Gisin, and M. Afzelius. Optical storage for 0.53 seconds in a solid-state atomic frequency comb memory using dynamical decoupling. New Journal of Physics, 2020.
[73] Y. He, S. K. Gorman, D. Keith, L. Kranz, J. G. Keizer, and M. Y. Simmons. A two-qubit gate between phosphorus donor electrons in silicon. Nature, 571:371–375, 2019.
[74] X.-L. Pang, A.-L. Yang, J.-P. Dou, H. Li, C.-N. Zhang, E. Poem, D. J. Saunders, H. Tang, J. Nunn, I. A. Walmsley, and X.-M. Jin. A hybrid quantum memory-enabled network at room temperature. Science Advances, 6, 2020.

[75] Y. Wang, J. Li, S. Zhang, K. Su, Y. Zhou, K. Liao, S. Du, Y. Han, and S.-L. Zhu. Efficient quantum memory for single-photon polarization qubits. Nature Photonics, 13:346–351, 2019.
[76] R. Jozsa. Fidelity for mixed quantum states. Journal of Modern Optics, 41:2315–2323, 1994.
[77] J. B. Altepeter, E. R. Jeffrey, and P. G. Kwiat. Photonic state tomography, 2005.

[78] D. Herriott and H. Schulte. Folded optical delay lines. Applied Optics, 4:883–891, 1965.
[79] P. A. Bélanger. Beam propagation and the ABCD ray matrices. Optics Letters, 16(4):196–198, 1991.
[80] Modernizing the teaching of advanced geometric optics, 1992.

[81] T. Bergmann and F. Bergmann. BME-SG08 Delaygenerator User Manual. Bergmann Meßgeräte Entwicklung, Hagenerleite 24, 82481 Murnau, 1st edition, 2014.
[82] A. Yariv. Quantum Electronics. Wiley, 3rd edition, 1989.
[83] M. Mohseni, A. T. Rezakhani, and D. A. Lidar. Quantum-process tomography: Resource analysis of different strategies. Physical Review A, 77:032322, 2008.

[84] I. L. Chuang and M. A. Nielsen. Prescription for experimental demonstration of the dynamics of a quantum black box. Journal of Modern Optics, 44:2455–2467, 1997.
[85] A. Nicolas, L. Veissier, E. Giacobino, D. Maxein, and J. Laurat. Quantum state tomography of orbital angular momentum photonic qubits via a projection-based technique. New Journal of Physics, 17, 2015.

[86] L. Sheridan and V. Scarani. Security proof for quantum key distribution using qudit systems. Physical Review A, 82:030301, 2010.
[87] T. M. Graham, H. J. Bernstein, T.-C. Wei, M. Junge, and P. G. Kwiat. Superdense teleportation using hyperentangled photons. Nature Communications, 6, 2015.

[88] I. Bongioanni, L. Sansoni, F. Sciarrino, G. Vallone, and P. Mataloni. Experimental quantum process tomography of non-trace-preserving maps. Physical Review A, 82:042307, 2010.
