University of Calgary PRISM: University of Calgary's Digital Repository

Graduate Studies The Vault: Electronic Theses and Dissertations

2015-12-22 Efficient Framework for Quantum Walks and Beyond

Dohotaru, Catalin

Dohotaru, C. (2015). Efficient Framework for Quantum Walks and Beyond (Unpublished doctoral thesis). University of Calgary, Calgary, AB. doi:10.11575/PRISM/25844 http://hdl.handle.net/11023/2700 doctoral thesis

University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission. Downloaded from PRISM: https://prism.ucalgary.ca UNIVERSITY OF CALGARY

Efficient Framework for Quantum Walks and Beyond

by

Cat˘ alin˘ Dohotaru

A THESIS

SUBMITTED TO THE FACULTY OF GRADUATE STUDIES

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE

DEGREE OF DOCTOR OF PHILOSOPHY

GRADUATE PROGRAM IN COMPUTER SCIENCE

CALGARY, ALBERTA

December, 2015

c Cat˘ alin˘ Dohotaru 2015 Abstract

In the first part of the thesis we construct a new, simple framework which amplifies to a constant the success probability of any abstract search algorithm. The total query complexity is given by the quantum hitting time of the resulting operator, which we show that it is of the same order as the quantum hitting time of the original algorithm. As a major application of our framework, we show that for any reversible walk P and a single marked state, the quantum walk corresponding to P can find the solution using a number of queries that is quadratically smaller than the classical hitting time of P. Our algorithm is more general and simpler to implement than the solution known previously in the literature (Krovi, Magniez, Ozols, and Roland, 2015), which was developed specifically for quantum walks; we also prove that, for the particular case of quantum walks, we can embed their algorithm into our framework, thus simulating it exactly. Finally, we show that we can implement amplitude amplification using our tool.

In the second part of the thesis we give a new lower bound in the query model which proves that Grover’s algorithm for unordered searching is exactly optimal. Similar to existing methods for proving lower bounds, we bound the amount of information we can gain from a single oracle query, but we bound this information in terms of angles. This allows our proof to be simple, self-contained, based on only elementary mathematics, capturing our intuition, while obtaining at the same an exact bound. We then turn our attention to non-adaptive algorithms for the same problem of searching an unordered set. In this model, we obtain a lower bound and we give an algorithm which matches the lower bound, thus showing that the lower bound is exactly optimal.

ii Acknowledgements

I would like to express my gratitude for my supervisor, Peter Høyer, for his continu- ous help and encouragement, even in the moments when I lost any hope. Next I would like to thank my committee members Hari Krovi, Philipp Woelfel, Michael J. Jacobson Jr., Gilad Gour, and Peter Høyer for reading my thesis and providing insightful com- ments. I have benefited and learned a lot from the numerous discussions I had with Donny Cheung, Jibran Rashid, Nathan Wiebe, and Philipp Woelfel, and I would like to thank them all for their help. I also thank my friends and my family for their support. Contents

Abstract ii

Acknowledgements iii

1 Introduction 1 1.1 Quantum computation ...... 1 1.2 Quantum query complexity ...... 2 1.3 Reflections and rotations ...... 4 1.4 Mathematical preliminaries ...... 4 1.5 Notations ...... 8

2 Controlled quantum amplification 9 2.1 Problem description ...... 9 2.2 Introduction ...... 10 2.2.1 Related work ...... 10 2.2.2 The circuit of Tulsi ...... 16 2.2.3 Our contribution and organization ...... 16 2.3 The circuit ...... 18 2.4 The flip-flop theorem ...... 20 2.5 Spectral analysis of our circuit ...... 24 2.5.1 Eigenphases ...... 24 2.5.2 Principal eigenvector ...... 26 2.6 Cost of our circuit ...... 29 2.6.1 Definition of the quantum hitting time ...... 29

iv Contents

2.6.2 Quantum hitting time and phase estimation ...... 32 2.6.3 Relations between quantum hitting times ...... 34 2.7 Simulation of amplitude amplification ...... 42 2.7.1 Standard amplitude amplification ...... 42 2.7.2 Simulation of amplitude amplification ...... 43 2.8 Application to quantum walks ...... 45 2.8.1 Classical hitting time ...... 46 2.8.2 Definition of a quantum walk ...... 50 2.8.3 Relation between classical and quantum hitting times ...... 51 2.9 Simulation of the interpolated quantum walk ...... 56

3 Exact lower bounds for quantum unordered search 62 3.1 Introduction ...... 62 3.2 Exact lower bound for adaptive quantum algorithms ...... 63 3.2.1 The query model for adaptive quantum algorithms ...... 64 3.2.2 Exact lower bound for quantum searching ...... 65 3.3 Exact lower bound for non-adaptive quantum algorithms ...... 70 3.3.1 The query model for non-adaptive quantum algorithms ...... 70 3.3.2 Lower bound for non-adaptive quantum algorithms ...... 72 3.3.3 Exact non-adaptive algorithm for quantum searching ...... 73

4 Conclusions 74 4.1 Summary of original contributions ...... 74 4.2 Future work ...... 76

Bibliography 83

A Examples of source code 84 A.1 Efficient simulation of the quantum walk on the grid ...... 84 A.2 Comparison with interpolated quantum walks ...... 87

v List of Figures

2.1 Subconstant success probability of A ...... 13 2.2 Iterations of A on a grid graph ...... 13 2.3 Hitting times and the ϵδ upper bound ...... 14 2.4 The circuit of Tulsi ...... 16 2.5 U as A and reflection...... 18 2.6 U as W and reflection...... 19 2.7 U as W and reflection, simplified...... 19 2.8 U as A and rotation...... 19 2.9 U as W and rotation...... 20 2.10 Phase estimation with t bits of precision ...... 33 2.11 U(2) with reflection about span{|0, init⟩, |1,˜ g⟩} ...... 38 2.12 U(2) with unconditional G in the second register...... 38 2.13 Equivalent form for U(2) ...... 39 2.14 Simulation of amplitude amplification ...... 44 ′ 2.15 Original walk P (left) and the modified walk P (right)...... 48

vi CHAPTER 1

Introduction

1.1 Quantum computation

We first outline in this section the basic concepts of quantum computation. For more details we refer the reader to the textbook [NC00].

States: The state of a quantum computer is described by a unit vector in an finite dimen- m sional Hilbert space C2 . Consider a basis B = {|i⟩; i ∈ {0, 1}m} of this space. The basis states are labeled by classical bit-strings of m bits. Any state in the space is a superposition (linear combination) of the basis states

|Ψ⟩ α | ⟩ = ∑ i i , (1.1) |i⟩∈{0,1}m

∥Ψ∥2 ∑ |α |2 α ∈ C where = |i⟩∈{0,1}m i = 1. The coefficients i are called amplitudes. The state |Ψ⟩ can be represented as a column vector with 2m dimensions.

Combining two independent systems: If |u⟩ and |v⟩ are quantum states, then the com- bined state of the system is a tensor product |u⟩ ⊗ |v⟩. If the state |u⟩ was on n qubits, and the state |v⟩ was on m qubits, then |u⟩ ⊗ |v⟩ is a state on n + m qubits. We often denote a tensor product state without the ⊗ symbol, so instead of |u⟩ ⊗ |v⟩ we write |u⟩|v⟩ or |u, v⟩.

1 Chapter 1. Introduction

Measurement: Measuring the state |Ψ⟩ from Eq. 1.1 with respect to the basis {|i⟩; i ∈ { }m} | ⟩ |α |2 0, 1 produces outcome i with probability i . After the measurement, the state collapses to the observed basis state |i⟩. We only described here projective measurements; more general measurements are equivalent to projective measurements on a larger space. In order to distinguish between two quantum states, they must be almost orthogonal (see Lemma 64 for a more precise statement).

Unitary evolution: requires that the evolution of a quantum system is reversible. That is, the |Ψ⟩ is mapped to the state U|Ψ⟩, where U is a linear and norm-preserving transformation (such matrices are called unitary). Unitary matrices admit an orthogonal basis of eigenvectors, and the corresponding eigenvalues have absolute value 1.

Simulation of classical computation: Any quantum computer can perform any clas- sical computation. That follows from the fact that any classical computation can be made reversible by replacing any irreversible gate x → g(x) with the reversible gate (x, y) → (x, y ⊕ g(x)), and running it on input (x, 0). In other words, we make a classical computation reversible by storing all intermediate steps [Ben73]. Another approach is to simulate the gates AND and NOT (which are universal for classical computation) with the controlled-swap gate (which is reversible). Since two results obtained in different ways might not interfere during a quantum computation, we usually store the answer of a classical computation with reversible gates into an ancilla register, and then perform the classical computation in reverse.

1.2 Quantum query complexity

The computational model we use in this work is called black box query complexity in the quantum literature; in the classical literature, the established name is decision tree complex- ity. Suppose that we want to compute an n-bit boolean function f : {0, 1}N → {0, 1} on an input x ∈ {0, 1}N. In the black-box model, the input is not accessible directly, but through oracle queries. Classically, if we want to learn bit i of the input x, we need to query

the black box with the index i; the black box returns the value xi. An algorithm which computes f on input x may alternate between oracle queries and other computational steps which are independent of the input. The query complexity of a function f is the least number of queries made by an algorithm which computes f (x) over all inputs x.

2 Chapter 1. Introduction

∨N For instance, if f is the OR function defined as f (x) = i=1xi, then the deterministic query complexity of f is N. In the worst case, if a deterministic algorithm made N − 1 oracle calls and saw only zeros in the N − 1 bits it has queried, then it still needs to query the last bit in order to produce the correct answer. In the case of , we have the same setup, where we can access the ∈ { }N → input x 0, 1 through black-box queries. Since the map i xi is not reversible (thus, not unitary), quantum algorithms model an oracle query as a unitary transformation Ox given by   (− )xi | ⟩ ≤ ≤ | ⟩ 1 i; w if 1 i N Ox i; w =  |i; w⟩ if i = 0.

Here the register |w⟩ corresponds to an ancilla workspace which is not affected by a query, and the index i = 0 corresponds to a non-query. A T-query A which computes a function f for an input x has the form

A = UTOxUT−1 ... OxU1OxU0,

where Uk are unitary transformations independent of the input. In general, the unitaries Uk, with k = 1 ... N may be different; recently, Reichardt [Rei11] proved that an optimal quantum query algorithm for f has the form A = UOxU ... OxUOxU0. The algorithm starts in a state |0⟩ which is easy to prepare, and we obtain the output according so some pro- jective measurement of the final state A|0⟩. We measure the complexity of an algorithm by the number of queries to the input that it makes.√ For example, the OR function can be computed by a quantum algorithm with O( N) queries [Gro97], and this is exactly optimal. There are two main reasons for which the quantum query model is used extensively. First, many fundamental quantum algorithms have been developed in this model (e.g. Grover’s algorithm [Gro97] and its generalizations to quantum walks, such as the algo- rithm for element distinctness [Amb04]). Also, Shor’s factoring algorithm has at its core a quantum query algorithm for period finding. Secondly, in the query model, we can prove lower bounds and thus we can prove that quantum computers can solve some problems with fewer queries than the classical computers. We discuss in more detail the quantum query model in Section 3.2.1.

3 Chapter 1. Introduction

1.3 Reflections and rotations

We can view any quantum algorithm as starting with an initial state |init⟩ in a Hilbert space and constructing a state with a large overlap with a target state |g⟩ by using re- flections or rotations. In fact, iterating some two reflections gives optimal quantum algo- rithms [Rei11]. In this section we review some basic facts about reflections and rotations that we use in this work.

Definition 1. For any state |ψ⟩ we define reflection about |ψ⟩ as

Ref(|ψ⟩) = 1 − 2|ψ⟩⟨ψ|.

That is, Ref(|ψ⟩) maps |ψ⟩ to −|ψ⟩, and any vector orthogonal to |ψ⟩ remains un- changed. In some papers and textbooks, a reflection about |ψ⟩ leaves |ψ⟩ unchanged, and flips the sign of anything orthogonal to |ψ⟩.

Definition 2. For any angle δ, we define the counterclockwise rotation in a fixed coordi- nate system by angle δ as [ ] cos δ − sin δ Rot(δ) = . sin δ cos δ The unitary Rot(δ) has the eigenvectors | ⟩ = √1 (|0⟩ + i|1⟩), corresponding to the 2 − δ δ eigenvalue e i and |⟩ = √1 (|0⟩ − i|1⟩), corresponding to the eigenvalue ei . 2 We use the following elementary facts.

Proposition 3. For any state |ψ⟩, when restricted to the two-dimensional subspace spanned by ⊥ ⊥ |ψ⟩ and |ψ ⟩, we have Ref(|ψ⟩) = −Ref(|ψ ⟩). α Proposition 4. Let d1 and d2 be two lines in the plane which intersect at a point O, and let be the angle between them. Then reflection with respect to d1 followed by reflection about d2 is a rotation by angle 2α about the point O.

1.4 Mathematical preliminaries

In this section we give some brief mathematical preliminaries required for some of the results in this work. For more details on Markov chains see standard textbooks on ran- domized algorithms such as [MR95] and [MU05].

Definition 5. A stochastic process on some finite space X is a collection of random variables

{Xt; t ∈ T} which take values from X.

4 Chapter 1. Introduction

Definition 6. A discrete time stochastic process X0, X1, ... , Xt, ... on the state space X is a Markov chain if

| | Pr[Xt = x Xt−1 = y, Xt−2 = xt−2, ... , X0 = x0] = Pr[Xt = x Xt−1 = y] = Px,y.

The definition captures the property that, at any time step t, the current state1 of the

chain Xt depends only on the previous state Xt−1 and it is independent of the history of how the process arrived at state Xt−1. A Markov chain can be described by its transition matrix P, whose entry in row x and

column y is Px,y. One application of the transition matrix is to compute easily the future states of the process. Namely, if the distribution at time t is d(t), then the distribution at time t + 1 is d(t + 1) = P · d(t). We can say that a Markov chain maps a distribution on some state space X to another distribution on X, and we identify a Markov chain with

its transition matrix P. In the mathematical literature, the convention is to denote by Px,y the probability of making a transition from x to y, in which case P acts on probability distributions from the right.

Definition 7. An automorphism of a random walk P is a permutation matrix Q such that QPQT = P.

Definition 8. A probability distribution π over X is a stationary distribution for P if it is an eigenvector of P with eigenvalue 1, that is π = Pπ.

Definition 9. A Markov chain P is irreducible if, for all x, y ∈ X, there exists some t such t that (P )y,x > 0. Equivalently, a chain P is irreducible if any state is accessible from any other state in some number of steps.

t Definition 10. A Markov chain P is aperiodic if gcd{t; (P )x,y > 0} = 1 for all the states x, y ∈ X.

We need the following theorem, which combines the Perron-Frobenius theorem for nonnegative matrices [HJ90, Theorem 8.4.4] with the fundamental theorem for Markov chains.

Theorem 11. If a Markov chain P is irreducible and aperiodic, then it will converge to a unique stationary distribution π, with positive entries. All the other eigenvalues of P are strictly smaller than 1 in absolute value. 1We use the notions of state and vertex (of a Markov chain) interchangeably.

5 Chapter 1. Introduction

We note that the requirement of a Markov chain being irreducible and aperiodic is not restrictive – most random walks we use for practical problems have these properties. The following definition introduces a more restrictive class of random walks2. Definition 12. An aperiodic, irreducible random walk P is called reversible if it satisfies the condition

πxPy,x = πyPx,y, (1.2) for any states x, y. Here we denote by πx the coordinates of the stationay distribution π of P. Intuitively, the reversibility condition says that, in the stationary distribution, jumping from a state x to a state y is the same as jumping from y to x. A very useful class of Markov chains is given by random walks on undirected graphs. Let G = (V, E) be a finite, undirected, and connected graph. For any vertex x we denote by d(x) its degree. The random walk on G is described by the sequence of moves of a particle between the vertices of G. If the particle is at vertex y, and if y has d(y) neighbors, then the particle moves to a neighbor x of y with probability 1/d(y). Thus, the random walk on a graph can be described by repeated applications of the matrix P with entries { 1 ( ) d(y) if y, x is an edge, Px,y = 0 otherwise.

The random walk on G is reversible, and it converges to the stationary distribution π with components d(x) π = . x 2|E| When the graph G is regular (all the vertices have the same degree), then the stationary distribution is uniform. If we are given an irreducible chain P (but not necessarily aperiodic), then we can 1 1 construct the chain P1 = 2 (P + ), which is aperiodic and has the same stationary distri- bution as P. This operation slows down the original walk P by introducing self-loops at each vertex with probability 1/2, and does not influence the asymptotic behavior of the walk. Moreover, if P is reversible, then the eigenvalues of P1 are non-negative. From now on we assume that, if P is an irreducible chain, then P is aperiodic and has non-negative eigenvalues. As we saw, these assumptions do not restrict the generality.

2We use the notions of Markov chain and random walk interchangeably. We note that, in some sources, a random walk is a finite Markov chain that is reversible (see Definition 12).

6 Chapter 1. Introduction

Assume that P is an irreducible and aperiodic random walk on a state space X, and a subset M of the elements of X have been marked (they correspond to the solutions of the problem we are solving). We start in the unique stationary distribution of P and we walk according to P until we arrive at a marked item. We express the running time of this algorithm using the notion of classical hitting time.

Definition 13. We define the hitting time HT(P, M) as the expected number of steps of the random walk starting from an unmarked state until reaching a marked state for the first time. The initial state is picked according to the stationary distribution of P restricted to unmarked states.

When there is a single marked item m, we use the notation HT(P, {m}). It is obvious that, when there are more marked states, the hitting time decreases. For example, consider the random walk on the complete graph with N vertices. For any vertices x, y let HT(x, y) be the expected number of steps before vertex y is visited when starting from vertex x. Suppose that a single vertex m on the graph is marked. Since the stationary distribution of the random walk on the complete graph is uniform, we have { } 1 − HT(P, m ) = − ∑ HT(x, m) = N 1. N 1 x̸=m Indeed, at each step of the random walk on the complete graph with N vertices, we can go to the marked vertex with probability 1/(N − 1), so after an expected number of N −

1 steps we arrive at m. If there are two marked vertices m1, m2 then the hitting time { } − HT(P, m1, m2 ) is (N 1)/2. Given an irreducible, aperiodic random walk P with a marked state m, the following algorithm finds the marked item with constant success probability. If there is no marked item, the algorithm always detects that.

1. Sample a state x from the stationary distribution of P. 2. Repeat for 10 · HT(P, {m}) times 2a. If x is marked, output x and exit. 2b. Otherwise, update x according to P. 3. Output “there is no marked state”.

Many classical and quantum algorithms we discuss in this work have this form, and we omit to mention explicitly what happens if there is no marked item. Also, in most

7 Chapter 1. Introduction cases, we do not have an exact expression for the hitting time HT(P, {m}), and we use an upper bound for it. We discuss more about the classical hitting time in Section 2.8.1.

1.5 Notations

We denote by 1 the identity matrix. In most cases, the dimension is clear from the context and we omit it. We use AT for the transpose of a matrix A and A† for the conjugate trans- pose of A. Probability distributions are marked with an overline. Thus, if X is a finite set T with n elements, a probability distribution on X is the column vector p = (p1, p2, ... , pn) , ··· where p1 + p2 + + pn = 1.

8 CHAPTER 2

Controlled quantum amplification

2.1 Problem description

The problems we consider in this work can be formulated using the general framework of the abstract search algorithm, first introduced by Ambainis, Kempe, and Rivosh [AKR05]. An abstract search algorithm requires a target state |g⟩, an initial state |init⟩, and a unitary W such that

• the state |g⟩ has real coordinates in a canonical orthogonal basis for the space in which we are working,

• the matrix W has real entries,

• the state |init⟩ is the unique (+1)-eigenvector of W,

• ⟨g|init⟩ ̸= 0.

We are given a reflection operator G = 1 − 2|g⟩⟨g| that enables us to distinguish be- tween the solution |g⟩ and any other orthogonal state. The abstract search algorithm applies the unitary A = W · G repeatedly to the starting state |init⟩. Abstract search algorithms can be used to solve two different scenarios. In the first scenario, we start in the state |init⟩ and we want to determine whether G is a reflection about some state, or G is the identity matrix (in which case there exists no target). If G is the identity matrix, then the final state after iterating A will be |init⟩. If G is a proper reflec- tion, we need to construct a state that is almost orthogonal to |init⟩. In the second, more

9 Chapter 2. Controlled quantum amplification

difficult scenario, we want not only to determine the existence of a target |g⟩, but also to construct a final state that has constant overlap with |g⟩. For both scenarios the standard approach is to iterate A, even if, for some problems, this is not enough to produce the target state. Grover’s algorithm [Gro97], amplitude amplification [BHMT02] and many algorithms based on quantum walks are examples of the abstract search algorithm. In this work, we give a new, simple framework which takes as input any abstract search algorithm A, and outputs the target state with constant success probability. We show that, whenever A determines whether a target |g⟩ exists or not using T iterations, our new circuit both determines and finds the target using at most 2T applications of A. As a major application of our framework we show that, for a reversible Markov chain P, there exists a quantum algorithm which finds a unique√ marked item m with√ constant suc- cess probability using a number of queries of order S + HT(P, {m}) · U + HT(P, {m}) · C. Our solution is easier to implement than the algorithm from [KMOR15], and can be applied to a larger class of unitaries, not only to quantum walks. We also prove that our framework can simulate Grover’s algorithm, amplitude amplification and the algorithm for interpolated quantum walks given by [KMOR15].

2.2 Introduction

2.2.1 Related work

The framework of quantum walks has been a powerful method to obtain new quantum algorithms. Among its many applications, we mention verification of matrix products [BS06ˇ ], testing group commutativity [NM07], subgraph finding [CK11], formula eval- uation [ACR+10], triangle finding [MSS07, Gal14, GN15], simulating learning graphs [JKM13] or obtaining a time-efficient algorithm for 3-distinctness [BCJ+13]. The first algorithm based on quantum walks was Grover’s search algorithm [Gro97], although its connection with random walks was discovered much later. This algorithm provided a quadratic speed-up in the quantum case for any classical algorithm based on brute-force search. The model of discrete-time quantum random walks can be traced to the work of Wa- trous [Wat01], who introduced quantum walks on regular graphs. As a possible compu- tational tool, quantum walks were introduced by Ambainis, Bach, Nayak, Vishwanath, and Watrous for the case of the line [ABN+01], and by Aharonov, Ambainis, Kempe and Vazirani for general graphs [AAKV01]. Several other papers [AAKV01, ABN+01, SKW03, Ric07] studied notions related to quantum walks and pointed their advantages over clas-

10 Chapter 2. Controlled quantum amplification

sical random walks. Ambainis [Amb04] was the first who used a quantum walk to obtain a speedup beyond that given by Grover’s algorithm. Szegedy [Sze04] generalized many of the previous approaches to quantum walks, giving a generic method to obtain an ab- stract search algorithm from any symmetric Markov chain. His setup was extended to any reversible Markov chains in [MNRS07, MNRS12]. Before we discuss in more detail about quantum walks, we need to introduce some terminology; more details are given in Section 2.6.1 and in Section 2.8. A more thor- ough discussion of some earlier results can be found in the surveys of Ambainis [Amb03], Kempe [Kem03] or Santha [San08]. Let P be a reversible Markov chain on a state space X. Then P has a unique stationary distribution, which we denote by π. Let M ⊂ X be a set of marked states. We denote by W(P) the quantum walk obtained from the classical Markov chain P. We define the following problems.

• DETECT(P): We are promised that |M| = 0 or |M| ≥ k for some unknown positive integer k. Decide which is the case.

• FIND ONE(P): We are promised that |M| = 0 or |M| = 1. Decide whether M is empty; if M ̸= ∅, find the marked state.

• FIND MANY(P): We are promised that |M| = 0 or |M| ≥ k for some unknown positive integer k. Decide whether M is empty; if M ̸= ∅, find any marked state.

Following [MNRS07], we identify three costs associated with a search algorithm based on W(P).

• Setup cost S: the number of quantum queries needed to draw a sample from the stationary distribution π;

• Update cost U: the number of quantum queries needed to take a step of W(P);

• Checking cost C: the number of quantum queries needed to check if a state is marked.

Let ϵ be a lower bound on the probability of obtaining a marked state when sampling classically according to the distribution π. Thus, if π is uniform and |M| = 1, then ϵ = 1/|X|. For example, Grover’s algorithm [Gro97] (which consists of a reflection about the √1 · √1 · marked subspace followed by reflection about the initial state) has a cost of ϵ S + ϵ C queries. In applications such as [MSS07], the quantities S, U, and C also capture the costs of maintaining a classical data structure d(x) associated with any state x ∈ X.

11 Chapter 2. Controlled quantum amplification

Denote by δ the spectral gap of the classical walk P, and by HT(P, M) the classical hitting time of P with respect to the marked set M. When the marked set and the walk P are obvious from the context, we simply use the notation HT. The first result to go beyond the speedup of Grover’s brute-force search was given by Ambainis [Amb04], who obtained a quantum walk algorithm for Element Distinctness. His main result is summarized in the next theorem.

Theorem 14 ([Amb04]). Let P be the random walk on the Johnson graph J(N, r) of subsets of size r of a space X with N elements. Let M be either the empty set or the class of all r-subsets that contain a fixed subset of constant size k ≤ r. There exists an explicit, constructive quantum algorithm which detects if M is empty, and, if it is not, finds the marked k-subset using a number of queries of order 1 1 S + √ U + √ C. ϵδ ϵ

The abstract search algorithm framework first appeared√ in√ [AKR05], where√ it is used to locate a single marked item on the two-dimensional N × N torus using N log N queries. In a seminal paper [Sze04], Szegedy generalized many of the previous approaches to quantum walks, giving a generic method to obtain an abstract search algorithm from any reversible Markov chain. He proved√ that DETECT(P) can be solved quantumly using a number of queries which is order of HT(P, M).

Theorem 15 ([Sze04]). Let P be an irreducible, aperiodic and symmetric1 Markov chain. There exists an explicit, constructive quantum algorithm which solves DETECT(P) with constant suc- cess probability using a number of queries of order √ √ S + HT(P, M) · U + HT(P, M) · C.

Magniez, Nayak, Richter, and Santha [MNRS12] extended Theorem 15 to any re- versible chain P. The algorithm in Theorem 15 iterates A = W · G, where W is the quantum walk op- erator corresponding to P. A major obstacle in quantum walks comes from the fact that, although A can determine whether a target exists or not, it does not necessarily find the target. One example of graph for which this happens is the two-dimensional torus: any iteration of A on the initial state will always be almost orthogonal to the target, although the final state of the algorithm becomes almost orthogonal to the starting state. Graphi- cally this is illustrated in Figure 2.1 and Figure 2.2.

1 A Markov chain P is called symmetric if Px,y = Py,x for any states x, y.

12 Chapter 2. Controlled quantum amplification

AT|init⟩ A3|init⟩

A2|init⟩

A|init⟩

|init⟩

|g⟩

Figure 2.1: In some cases, every iteration of a quantum search algorithm A on the initial state |init⟩ may remain almost orthogonal to the target |g⟩.

1

0.8

0.6

0.4

0.2

0 0 50 100 150 200 Timesteps

Figure 2.2: Iterations of A on a grid graph of size 20 × 20 with a single marked location. Green: the probability of the marked location. Red: the inner product between current state and initial state. √ The checking cost of HT(P, M) · C in Theorem 15 is asymptotically larger than the checking cost given by Theorem 14 for the particular case of the Johnson graph. Mag- niez, Nayak, Roland, and Santha [MNRS07] improved Theorem 14 in the more difficult √1 · scenario of finding a solution, obtaining the checking cost of ϵ C for a large class of graphs. Their result is stated exactly in the following theorem.

13 Chapter 2. Controlled quantum amplification

Theorem 16 ([MNRS07]). Let P be a reversible Markov chain. There exists a quantum algorithm which solves FIND MANY(P) with constant success probability using a number of queries of order

1 1 S + √ U + √ C. ϵδ ϵ

It is known that

≤ 2 HT(P, M) ϵδ (2.1)

(see, for instance, [Sze04, Lemma 10]), so Theorem 16 improves Theorem 15 for all graphs for which the inequality in Eq. 2.1 is tight. However, there are graphs for which the upper bound for the hitting time given by Eq. 2.1 is not optimal, and we list in Figure 2.3 some examples (see [AF02] for proofs of the bounds on the hitting time). Therefore it is an open question if FIND MANY(P) can be solved with the same number of queries as DETECT(P).

Graph δ 1/(ϵδ) HT(P, {m}) cycle 1/N2 N3 N2; hypercube 1/ log NN log N N; 2D-torus 1/N N2 N log N.

Figure 2.3: Hitting times and the ϵδ upper bound. All the graphs have N vertices, there is a single marked item, and ϵ = 1/N.

The first to address the question of actually finding a solution was Tulsi [Tul08]. He proved that,√ for the two-dimensiona torus, a single marked item can be found quantumly with O( N log N) queries. Magniez et al. [MNRS12] extended Tulsi’s approach to all state transitive graphs, obtaining the following theorem.

Theorem 17 ([MNRS12]). Let P be a reversible, state transitive2 Markov chain. There exists a quantum algorithm which solves FIND ONE(P) with constant success probability using a number of queries of order √ √ S + HT(P, {m}) · U + HT(P, {m}) · C, where m is the unique marked item.

A key step required to prove that the algorithm of Tulsi [Tul08] finds the marked item is to lower bound the success probability of A = W(P) · G. For the case of the 2D-torus, this probability is Θ(1/ log N) [AKR05], but, for general graphs, it is unknown. Magniez

2A Markov chain is state transitive if, given any two states x, y, there exists an automorphism of P which takes x to y. Intuitively, from any state, the random walk P looks the same.

14 Chapter 2. Controlled quantum amplification

et al. [MNRS12] overcome this obstacle by assuming that the graph is state transitive. However, this is a strong condition which, in fact, implies that the classical hitting time { } α α HT(P, m ) = 1/ 1, where 1 is the smallest positive eigenphase of A. Magniez et al. [MNRS12] introduce a notion of quantum hitting time which, for quan- tum walks, is different from the square root of the classical hitting time. This definition appears to have no algorithmic interpretation and it is not consistent with the query com- plexity of the detection algorithm from [Sze04]. We discuss this definition in detail in Section 2.6.1. Krovi, Magniez, Ozols and Roland [KMOR15] introduced the interpolated walk P(s) = ′ (1 − s)P + sP , where 0 ≤ s < 1, and defined the corresponding quantum walk W(P(s)). They show that walking according√ to W(P(s)) solves√ FIND ONE(P) on any reversible Markov chain using order of S + HT(P, {m}) · U + HT(P, {m}) · C queries. The idea of interpolating between classical Markov chains before applying Szegedy’s quantization was introduced in [KOR10] and [KMOR10] (which came out roughly at the same time). Later, the authors of [KMOR10] noted that the arguments in [KMOR10] and [KOR10] are not sufficient to obtain a quadratic speed-up over the classical hitting time for the case of multiple marked items [KMOR15, page 41]. Krovi et al. [KMOR15] define an interpolated hitting time HT(s) for any s ∈ [0, 1) and + introduce a so-called extended hitting time HT = lims→1 HT(P(s), M). They obtain the following theorem. Theorem 18 ([KMOR15]). Let P be a reversible, Markov chain. There exists an explicit quantum algorithm which solves FIND MANY(P) with constant success probability using a number of queries of order √ √ S + HT+ · U + HT+ · C. When there is a single marked item m,[KMOR15] prove that HT+ = HT(P, {m}) [KMOR15, Proposition 16]. For multiple marked items the extended hitting time may be asymptotically larger than the classical hitting time [AK15]. Therefore, at this moment, it is not known whether the problem FIND MANY(P) can be solved with the same query complexity as DETECT(P). The smallest known query complexity for FIND MANY(P) is either given by Theorem 18 or by the following theorem. Theorem 19. Let P be a reversible, Markov chain. There exists an explicit quantum algorithm which solves FIND MANY(P) with constant success probability using a number of queries of order √ √ log(N)(S + HT · U + HT · C).

Here HT is the maximum of the hitting times HT(P, {m}) over all marked items m.

15 Chapter 2. Controlled quantum amplification

Theorem 19 is obtained using a classical reduction [AA05] from Theorem 18 for a single marked item m.

2.2.2 The circuit of Tulsi

In order to compare it with our circuit introduced in Section√2.3, we√ briefly describe the circuit used in [Tul08] to find a single marked item on the N × N torus. The exact same circuit√ was used in [MNRS12√ ] to solve FIND ONE(P) for a state transitive graph using O(S + HT(P, {m}) · U + HT(P, {m}) · C) queries. Here m is the unique marked item. |δ ⟩ − δ| ⟩ δ| ⟩ |δ ⟩ δ| ⟩ δ| ⟩ δ ∈ Let 1 = sin 0 + cos 1 and√ 0 = cos 0 + sin 1 , for some angle (0, π/2). The optimal choice is cos δ = 1/ log N, so δ is close to π/2. Let G = 1 − 2|g⟩⟨g| be reflection about the target |g⟩, and let W be the walk operator on the torus, consisting of reflection followed by SWAP. The circuit is depicted in Figure 2.4.

| ⟩ δ |δ ⟩ 1 1 −Z 1 1

|init⟩ G W |g⟩

Figure 2.4: The circuit from [Tul08]

When δ = 0, the ancilla qubit is redundant, and the circuit in Figure 2.4 simply runs the operator A = WG. The key idea for the circuit from Figure 2.4 is that the operators W and G do not meet too often, as each of these unitaries is controlled by almost orthogonal states. Thus, intu- itively, we can say that the circuit in Figure 2.4 runs mostly W. By contrast, our circuit in Figure 2.5 runs mostly A.

2.2.3 Our contribution and organization

We give a new circuit which can be used to amplify the success probability of any abstract search algorithm to a constant. Our circuit controls the abstract search algorithm using an extra qubit parametrized by an angle θ˜, and we refer to this process as controlled quantum amplification. In Section 2.3 we discuss our circuit and give equivalent forms of it. The optimal choice of the parameter θ˜ depends on the initial success probability. Namely, we choose θ˜ (see Eq. 2.3 and Eq. 2.4) such that the operator U has a unique (+1)-eigenvector

16 Chapter 2. Controlled quantum amplification √ √ which has overlap 1/ 2 with the starting state and overlap 1/ 2 with the target state (Theorem 26). The optimal choice of θ˜ is

sin θ sin θ˜ = , (2.2) cos θ where sin2 θ = |⟨g|init⟩|2. Next, we provide a toolbox for analyzing such controlled pro- cesses, without making any additional assumptions about the abstract search algorithm from which we started. In Section 2.4 we obtain a flip-flop theorem that gives the com- plete spectral decomposition of the operator consisting of a one-dimensional reflection followed by any real unitary. This theorem enables us to obtain the first major result of the chapter.

θ′ ≤ θ′ ≤ · · · ≤ θ′ π Theorem 20. Let the nonzero eigenphases of A be 0 < 1 2 m < . Then the α nonzero eigenphases k of U satisfy the relation

θ′ ≤ α ≤ · · · ≤ θ′ ≤ α π 0 < 1 1 m m < .

θ˜ The theorem above holds for any choice of . √ Since the√ unique (+1)-eigenvector of U has overlap 1/ 2 with the starting state, and overlap 1/ 2 with the target state, in order to obtain the target state, it is enough to construct this (+1)-eigenvector. We show how to do that in Section 2.6 using phase esti- mation, and we express the cost using the quantum hitting time of U. The main challenge is to connect the quantum hitting time of U to the quantum hitting time of A. We prove that QHT(A, |init⟩) = Θ(QHT(U, |1,˜ g⟩)) in Section 2.6.3. In Section 2.8 we apply our circuit to quantum walks with a single marked item, and show that one can find the marked item with constant success probability. Our findings are summarized in the following theorem.

Theorem 21. Let P be a reversible Markov chain over a space X, and let m ∈ X be a single marked state. There exists an explicit quantum algorithm which solves FIND ONE(P) with constant success probability using a number of queries of order √ √ S + HT(P, {m}) · U + HT(P, {m}) · C.

This provides a simpler alternative to the algorithm given by [KMOR15]. In Section 2.9 we show that our circuit U can simulate the algorithm from [KMOR15] with no overhead. Thus, in practice, one can use the circuit which is easier to implement, and that is U. In Section 2.7 we prove that our circuit U can emulate amplitude amplification.

17 Chapter 2. Controlled quantum amplification

2.3 The circuit

Let θ˜ be any angle in (0, π/2) (in applications θ˜ will be close to 0). We rotate the compu- tational basis {|0⟩, |1⟩} by the angle θ˜, obtaining the basis {|0˜⟩, |1˜⟩}, where

|0˜⟩ = cos θ˜|0⟩ + sin θ˜|1⟩ (2.3) and

|1˜⟩ = − sin θ˜|0⟩ + cos θ˜|1⟩. (2.4)

The parameter θ˜ ∈ (0, π/2) is critical to the behavior of our circuit, and it is chosen such that sin2 θ˜ is the initial success probability, i.e. sin2 θ˜ is the probability that we obtain |g⟩ when we measure |init⟩ according to {|g⟩⟨g|, 1 − |g⟩⟨g|}. We take as input an abstract search algorithm A = WG, where G = 1 − 2|g⟩⟨g| is reflection about some target state |g⟩. The goal is to obtain |g⟩ with constant success probability. In the following figures a circle indicates a control, and a rectangle indicates a quantum operator. We introduce the circuit from Figure 2.5.

|0⟩ 1˜ 0 Z |1˜⟩

|init⟩ G A |g⟩

Figure 2.5: U as A and reflection.

We define the state |init⟩ as |init⟩ from which we removed |g⟩, and then we renormalized. | ⟩ {Π 1 − Π } This corresponds to first measuring the initial state init according to M, M , Π where M denotes the projection operator onto the marked subspace. If we obtain a marked item, then we are done; otherwise, we have prepared |init⟩. The state |init⟩ was first used as a starting state in [Sze04], and then in the subsequent papers [MNRS12], [KMOR15]; in the case of quantum walks, |init⟩ is related to the classical hitting time. The starting state for our circuit is |0⟩|init⟩, and the state we want to obtain is |1˜⟩|g⟩. In the figures in this chapter we depict the target state as the output for our circuit only to illustrate our goal; we prepare a state which has constant overlap with |1˜⟩|g⟩ after some iterations of the circuit. The leftmost operator in Figure 2.5, which is the first operator to be applied to the starting state |0⟩|init⟩ is G controlled by |1˜⟩, that is |1˜⟩⟨1˜| ⊗ G + |0˜⟩⟨0˜| ⊗ 1. If the qubit on the top wire is |1˜⟩, then we apply G to the state in the second wire; if the

18 Chapter 2. Controlled quantum amplification

qubit on the top wire is |0˜⟩, then on the bottom wire we do nothing. This operator is in fact a reflection, since it can be written in the form 1 − 2|1,˜ g⟩⟨1,˜ g|. The second operator to be applied is A controlled by |0⟩. Throughout this work we denote by U the circuit in Figure 2.5, unless otherwise mentioned. When θ˜ = 0, the ancilla qubit is redundant and the circuit simply runs the operator A. We can express the same circuit in terms of W and a reflection (see Figure 2.6).

|0⟩ 0˜ 0 Z |1˜⟩

⊥ |init⟩ G W g |g⟩

Figure 2.6: U as W and reflection.

The last gate in the circuit from Figure 2.6, namely Z ⊗ (1 − |g⟩⟨g|) + 1 ⊗ |g⟩⟨g| only acts ⊥ ⊥ on states of the form |1⟩|g ⟩ for some |g ⟩ ⊥ |g⟩, which are (-1)-eigenvectors for the operator U. Since the initial state |0, init⟩ does not have overlap with any state of the form ⊥ |1⟩|g ⟩, we can ignore the last gate in Figure 2.6, and assume that U consists of the gates shown in Figure 2.7.

|0⟩ 0˜ 0 |1˜⟩

|init⟩ G W |g⟩

Figure 2.7: U as W and reflection, simplified.

We can rewrite our circuit using A and a rotation, as in Figure 2.8. As before, we omitted a gate Z ⊗ (1 − |g⟩⟨g|) + 1 ⊗ |g⟩⟨g| which plays no role in our setting.

| ⟩ | ⟩ 0 Rot−2θ˜ 0 1˜

|init⟩ g A |g⟩

Figure 2.8: U as A and rotation.

Finally, we can rewrite the same circuit U using W and a rotation (see Figure 2.9). Thus we can run our algorithm using either A or W, with either a reflection or a con- trolled rotation.

19 Chapter 2. Controlled quantum amplification

| ⟩ − | ⟩ 0 Rot−2θ˜ 0 Z 1˜

|init⟩ g W |g⟩

Figure 2.9: U as W and rotation.

Using our circuit, we can amplify the success probability of any abstract search algo- rithm A to a constant, with the following procedure.

1. Measure the state |init⟩ according to {|g⟩⟨g|, 1 − |g⟩⟨g|}. If we obtain |g⟩, then we are done. Otherwise, we normalize the resulting state and call it |init⟩. θ˜ sin θ 2 θ |⟨ | ⟩|2 2. Set sin = cos θ , where sin = g init . 3. Add a second register which contains the control qubit for U, and construct U. 4. Add a third register initialized to |0t⟩ and apply phase estimation to U for input state |init⟩ with precision t, where 2t = Θ(QHT(A, |init⟩)). 5. Measure the third register with respect to {|0t⟩⟨0t|, 1 − |0t⟩⟨0t|}. 6. Provided the outcome of the measurement is |0t⟩, measure the first register.

2.4 The flip-flop theorem

In this section we obtain the complete spectrum of the unitary which consists of a one- dimensional reflection followed by any real unitary. This enables us to prove that, in each two-dimensional eigenspace of the operator U from Figure 2.5, the rotational angles are larger than the corresponding rotational angles of the operator A. Let A be any real unitary acting on some space H and let |g⟩ ∈ H be a state with real amplitudes in the same space. Define the one-dimensional reflection G = 1 − 2|g⟩⟨g|. Our goal is to obtain the spectrum of AG. α Let |eα⟩ be an eigenvector of A with eigenvalue ei , where −π < α ≤ π. If |eα⟩ ⊥ |g⟩, then AG|eα⟩ = AGG|eα⟩ = A|eα⟩, so |eα⟩ is an eigenvector with the same eigenvalue for AG. Thus, in the following, we restrict our attention to eigenvectors |eα⟩ of AG which have non-zero overlap with |g⟩. Therefore, we can write

′ |eα⟩ = |g⟩ + i|eα⟩, (2.5)

20 Chapter 2. Controlled quantum amplification

′ with |eα⟩ ⊥ |g⟩. Since A is a real unitary matrix, its eigenvalues different from 1 come in conjugated  φ  i k | ⟩ pairs e , corresponding to the eigenvectors Ak , for k = 1, 2, ... m. We denote the space spanned by these eigenvectors as span2D(A). Also, we let span+1(A) be the (+1)- − | ⟩ eigenspace of A and span−1(A) be the ( 1)-eigenspace. We decompose g into the eigen- basis of A as m ( ) + + − − | ⟩ | ⟩ | ⟩ | ⟩ − | − ⟩ g = g0 A0 + ∑ gk Ak + gk Ak + g 1 A 1 k=1 + − − ∈ C for some complex values g0 , g 1 , gk , gk . Here we have grouped all the (+1) eigen- | ⟩ | ⟩ vectors of A which have overlap with g into a single (+1) eigenvector A0 ; we did the − | ⟩ | +⟩ | −⟩ same grouping for the ( 1) eigenvectors. Since g is real and Ak and Ak are con- ∈ R jugated, we can multiply the eigenvectors of A with appropriate phases so that g0 , + − − ∈ R ∈ R g 1 , and gk = gk = gk for all k. Therefore

m ( ) + − | ⟩ | ⟩ | ⟩ | ⟩ − | − ⟩ g = g0 A0 + ∑ gk Ak + Ak + g 1 A 1 (2.6) k=1

We can now obtain the relevant eigenvectors of AG.

′ ′ Lemma 22. Consider the (unnormalized) state |eα⟩ = |g⟩ + i|eα⟩, with |eα⟩ ⊥ |g⟩. Let ( ( ) ( ) ) (α) α − φ α φ ′ k + + k − |eα⟩ = g cot |A ⟩ + ∑ g cot |A ⟩ + cot |A ⟩ − (2.7) 0 2 0 k 2 k 2 k k ( ) x g− tan |A− ⟩. (2.8) 1 2 1

If α is solution of the equation ( ( ) ( )) ( ) − φ φ ( ) 2 x 2 x k x + k − 2 x g0 cot + ∑ gk cot + cot g−1 tan = 0, (2.9) 2 k 2 2 2

α then |eα⟩ is an eigenvector of AG, with eigenvalue ei .

′ Proof. We take |eα⟩ of the form ( ) ′ + + − − | ⟩ | ⟩ | ⟩ | ⟩ − − | − ⟩ eα = g0b0 A0 + ∑ gk bk Ak + bk Ak + g 1b 1 A 1 , (2.10) k

21 Chapter 2. Controlled quantum amplification

where the coefficients bj are real. We can write

AG|eα⟩ = −A|g⟩ + iA|eα′ ⟩ φ + − φ − − | ⟩ − i k | ⟩ i k | ⟩ − | − ⟩ = g0 A0 ∑ gk(e Ak + e Ak ) + g 1 A 1 ( k ) + φ + − − φ − | ⟩ i k | ⟩ i k | ⟩ − − − | − ⟩ + i g0b0 A0 + ∑ gk(bk e Ak + bk e Ak ) g 1b 1 A 1 . k

α Imposing the condition AG|vα⟩ = ei |vα⟩ we have

− iα + g0( 1 + ib0) = e g0(1 + ib0 )  φ  α  i k − i gke ( 1 + ibk ) = e gk(1 + ibk )

We obtain ( ) ( ) ( ) α α α α ( ) 1 + ei 2 cos2 + 2i sin cos α b = −i = −i ( 2) ( 2) ( 2) = cot . 0 − iα 2 α − α α 2 1 e 2 sin 2 2i sin 2 cos 2

A similar calculation gives the other unknown coefficients from Equation 2.8. Imposing ′ the condition |eα⟩ ⊥ |g⟩, we obtain Eq. 2.9.

We note that the result in Lemma 22 appears in a slightly different form in [AKR05] and [Tul08]. The following theorem is the main result of this section.

Theorem 23 (Flip-flop theorem). Consider any real unitary A and let |g⟩ be a state with real φ ≤ φ ≤ · · · ≤ φ amplitudes in the same space. Denote the positive eigenphases of A by 0 < 1 2 m.

| ⟩ ∈ − i) If g span2D(A), then AG has m 1 two-dimensional eigenspaces, a (+1)-eigenspace, and − | ⟩ α a ( 1) eigenspace which overlap g . The positive eigenphases j of AG satisfy the inequality φ ≤ α ≤ φ ≤ · · · ≤ φ ≤ α ≤ φ π 0 < 1 1 2 m−1 m−1 m < . | ⟩ ∈ ⊕ ⊕ ii) If g span2D(A) span+1(A) span−1(A), then AG has m + 1 two-dimensional eigenspaces, − | ⟩ α and no (+1) and ( 1) eigenspaces which overlap g . The positive eigenphases j of AG sat- α φ ≤ α ≤ · · · ≤ φ ≤ α π isfy the inequality 0 < 0 < 1 1 m m < . | ⟩ ∈ ⊕ iii) If g span2D(A) span+1(A), then AG has m two-dimensional eigenspaces, no (+1) − α -eigenspace and a ( 1)-eigenspace. The positive eigenphases j of AG satisfy the inequality α φ ≤ α ≤ · · · ≤ α ≤ φ π 0 < 1 < 1 1 m m < .

22 Chapter 2. Controlled quantum amplification

| ⟩ ∈ ⊕ iv) If g span2D(A) span−1(A), then AG has m two-dimensional eigenspaces, a (+1) - − α eigenspace and no ( 1) eigenspace. The positive eigenphases j of AG satisfy the inequality φ ≤ α ≤ φ ≤ · · · ≤ φ ≤ α π 0 < 1 1 2 m m < . | ⟩ We denote the decomposition of g into the eigenbasis of A as (m , 1 , 1)A, namely we have m two-dimensional subspaces, a (+1) eigenspace, and a (−1) eigenspace. Then the previous theorem gives the following correspondence between the eigenspaces of A and the eigenspaces of AG.

eigenspaces of A eigenspaces of AG

(m, 0, 0)A (m − 1, 1, 1)AG (m, 1, 1)A (m + 1, 0, 0)AG (m, 1, 0)A (m, 0, 1)AG (m, 0, 1)A (m, 1, 0)AG | ⟩ ∈ The first line of the previous table implies that, if g span2D(A), then AG has a (+1) and a (−1) eigenvector which overlap |g⟩, and the number of two-dimensional eigenspaces of AG decreases by 1 compared to A.

Proof. We prove the first item, as the other cases are similar. Decomposing |g⟩ into the eigenspaces of A we have

m ( ) | ⟩ | +⟩ | −⟩ g = ∑ gk Ak + Ak , k=1 for some real coeeficients gk. Let ( ( ) ( )) − φ φ 2 x k x + k f (x) = ∑ gk cot + cot k 2 2

denote the left hand side of Eq. 2.9. Then f (0) = 0, and f (π) = 0, so the unitary AG has a (+1)-eigenvector and a (−1)-eigenvector which overlap |g⟩. If f (x) = 0, then (− ) = ↗φ ( ) = −∞ f x 0, so we only need to look for positive roots of f . We have limx k f x ↘φ ( ) = ∞ → (−π π) and limx k f x . Since the function x cot x is strictly decreasing on , , the φ φ φ ̸ φ function f is strictly decreasing on any interval ( k, k+1) with k = k+1. We deduce α ∈ φ φ that f has a unique root k ( k, k+1). If the eigenvalues of A have multiplicity (say φ φ φ i 1 that 1 = 2), then we obtain two independent eigenvectors for AG with eigenvalue e | +⟩ | +⟩ | −⟩ | −⟩ by interchanging A1 with A2 and A1 with A2 in Eq. 2.9. Since any eigenvector of A which is orthogonal to |g⟩ is also an eigenvector with the same eigenvalue for AG, | ⟩ ∈ we obtained a complete description of the spectrum of AG: when g span2D(A), one

23 Chapter 2. Controlled quantum amplification

of the rotational eigenspace of A is destroyed, and we obtain a (+1)-eigenspace and a (−1)-eigenspace. The rest of the (m − 1) rotational eigenspaces of A are mapped to m − 1 rotational eigenspaces for AG, with larger rotational angles for each space.

Since our circuit U from Figure 2.5 consists of the reflection about |1,˜ g⟩ followed by another real unitary, we can apply the flip-flop theorem. Using this approaxh, in the next section we compare the eigenphases of U with the eigenphases of the initial operator A.

2.5 Spectral analysis of our circuit

2.5.1 Eigenphases

We prove in this section that the operator U from Section 2.3 has a unique (+1)-eigenvector which overlaps the target state |1,˜ g⟩. Then we show that, in each two-dimensional rota- tional subspace of the circuit U, the eigenphases of U are larger than the corresponding eigenphases of the operator A. First we recall the conditions that we require for W. We work in the general setting of an abstract search algorithm, which requires a target state |g⟩, an initial state |init⟩, and a unitary W such that

• the state |g⟩ has real coordinates in a canonical orthogonal basis for the space in which we are working,

• the matrix W has real entries,

• the state |init⟩ is the unique (+1)-eigenvector of W,

• ⟨g|init⟩ ̸= 0.

The condition W|init⟩ = |init⟩ together with the fact that W is real implies that |init⟩ has real coordinates. We start in the state |init⟩ and the goal is to obtain a state which has constant overlap with the target |g⟩. Let G = 1 − 2|g⟩⟨g| denote the reflection about |g⟩. The abstract search algorithm applies the unitary A = WG repeatedly to the starting state |init⟩. If there is no marked item (so G = 1), then we always stay in the initial state |init⟩. However, if there is a target |g⟩, the current state of the algorithm will move away from |init⟩. We can detect this movement using the SWAP-test. What happens in some cases (such as the quantum walk on the two-dimensional torus) is that the current state of the algorithm remains almost orthogonal to the target |g⟩, so a measurement will give the

24 Chapter 2. Controlled quantum amplification

target state with a subconstant probability. That is why we need to amplify the success probability of A. We show in this work how we can do this “in place”, without increasing the total query complexity of A. If the operator W has more (+1)-eigenvectors, then the best thing we can do is to take |init⟩ as the projection of |g⟩ onto the (+1)-eigenspace of W. If the initial state contains some other +1-eigenvectors, that probability is not touched by either A and U. Since the circuit U from Figure 2.5 consists of the reflection about |1,˜ g⟩ followed by another real unitary, Theorem 23 implies that U has a unique (+1)-eigenvector which overlaps the target |1,˜ g⟩. We now prove the first main result of our work, namely that in each two-dimensional eigenspace, the operator U rotates faster than the operator A.

θ′ ≤ θ′ ≤ · · · ≤ θ′ π Theorem 20 (Restated). Let the nonzero eigenphases of A be 0 < 1 2 m < . α Then the nonzero eigenphases k of U satisfy the relation

θ′ ≤ α ≤ · · · ≤ θ′ ≤ α π 0 < 1 1 m m < .

Proof. We use the version of our circuit consisting of A and a reflection (Figure 2.5). Since | ⟩ ∈ ⊕ | ⟩ ∈ ⊕ g span2D(W) span+1(W), case ii) of Theorem 23 implies that g span2D(A) | ⟩ − | ⟩ span−1(A). Denote by A−1 the eigenvector(s) of A with eigenvalue 1 and with A ′ k iθ the conjugated eigenvectors of A with eigenvalues e k . Let B = (Z ⊗ 1)(|0⟩⟨0| ⊗ A + | ⟩⟨ | ⊗ 1 | ⟩| ⟩ − 1 1 ). Then the eigenvectors of the operator B are 0 A−1 , with eigenvalue 1,   θ′ | ⟩| ⟩ i k | ⟩|ψ⟩ |ψ⟩ − 0 Ak , with eigenvalues e , and 1 for any state with eigenvalue 1. Since

|1,˜ g⟩ = − sin θ˜|0, g⟩ + cos θ˜|1, g⟩,

|˜ ⟩ ∈ ⊕ we deduce that 1, g span2D(B) span−1(B). Then case iv) of Theorem 23 shows that α θ′ ≤ α ≤ θ′ ≤ · · · ≤ θ′ ≤ α the nonzero eigenphases k of U satisfy the relation 0 < 1 1 2 m m < π, which is what we wanted to prove.

By contrast to Theorem 20, Magniez et al. prove in [MNRS12] that the smallest eigen- θ′ phase of their amplification circuit (Figure 2.4) is smaller than the smallest eigenphase 1 of A. Thus, in the principal eigenspace, their circuit rotates slower than the abstract search algorithm A. They do not compare the other eigenphases, and the total query complexity Θ θ′ they obtain is (1/ 1). We prove in Section 2.9 that, in the case when W is a quantum walk with a sin- gle marked item, our algorithm can simulate exactly the algorithm given by Krovi et al.[KMOR15]. To our knowledge, it is not known how to compare the phases of the in-

25 Chapter 2. Controlled quantum amplification

terpolated quantum walk W(P(s)) with the corresponding phases of the abstract search algorithm A = W · G.

2.5.2 Principal eigenvector

In this subsection we find an explicit expression for the unique (+1)-eigenvector of U. Namely we show that, with a suitable choice for θ˜, the unique (+1)-eigenvector of our circuit is ( ) | ⟩ √1 | ⟩ − | ⟩ U0 = 0, init 1,˜ g . 2 Then an algorithm which finds a marked item with constant probability will map the | ⟩ initial state to U0 , and then measure it. ⊥ Lemma 24. If an eigenvector of U has overlap with |1, g ⟩, then the corresponding eigenvalue ⊥ must be −1. Here |g ⟩ denotes any vector orthogonal to |g⟩.

Proof. We can see from the circuit in Figure 2.9 that U maps |0⟩|g⟩ (or |1⟩|g⟩) to a super- | ⟩|Φ ⟩ | ⟩| ⟩ |Φ ⟩ | ⟩| ⊥⟩ | ⟩ | ⊥⟩ position of 0 1 and 1 g , for some state 1 . Also, U maps 0 g to 0 W g , and ⊥ ⊥ ⊥ |1, g ⟩ to −|1, g ⟩ for any |g ⟩ ⊥ |g⟩.

Let ( ) + − | ⟩ | ⟩ | ⟩ | ⟩ − | − ⟩ g = a0 init + ∑ ak Wk + Wk + a 1 W 1 (2.11) k

| ⟩ ∈ R be the decomposition of g into the eigenbasis of W, where a0, ak, a−1 . We denoted   θ | ⟩ i k θ ∈ by Wk the conjugated eigenvectors of W corresponding to eigenvalues e , with k (0, π) for k = 1, 2, ... , m. Let θ a0 = sin , (2.12) where θ ∈ [0, π/2] − {0}. Recall that the state |init⟩ is obtained from |init⟩ by removing the target |g⟩, and then renormalizing. Let

|Φ⟩ | ⟩ − | ⟩ = init a0 g + − − 2 | ⟩ − | ⟩ | ⟩ − − | − ⟩ = (1 a0) init a0 ∑ ak( Wk + Wk ) a0a 1 W 1 . k

We have ( ) ∥Φ∥2 − 2 2 2 2 2 = (1 a0) + a0 ∑ ak + a−1 (2.13) k

26 Chapter 2. Controlled quantum amplification

Since |g⟩ is normalized, Eq. 2.11 implies that

2 2 − 2 2 ∑ ak + a−1 = 1 a0, (2.14) k

∥Φ∥2 − 2 2 θ and plugging this into Eq. 2.13, we obtain that = 1 a0 = cos . Therefore |Φ⟩ θ ( ) θ sin + − sin | ⟩ = = θ| ⟩ − ∑ | ⟩ + | ⟩ − − | − ⟩ init ∥Φ∥ cos init θ ak Wk Wk θ a 1 W 1 . (2.15) cos k cos

Lemma 25. The unique (+1)-eigenvector of U is the (unnormalized) state

θ | ⟩ √1 | ⟩ − √1 θ cos | ⟩ v0 = 1,˜ g sin ˜ 0, init . (2.16) 2 2 sin θ

Proof. Let ( ) 1 − θ˜ 1 | ⟩ = √ |0⟩ + i|1˜⟩ = e i √ (|0⟩ + |1⟩) , 2 2

and ( ) 1 θ˜ 1 |⟩ = √ |0⟩ − i|1˜⟩ = ei √ (|0⟩ + |1⟩) 2 2 Then the rotation operator −Rot(−2θ˜) we used in Figure 2.9 has the eigenvector | ⟩ θ − θ with eigenvalue −e2i ˜, and the eigenvector |⟩ with eigenvalue −e 2i ˜. According to Lemma 24, we search for a (+1)-eigenvector of U of the form

| ⟩ √1 | ⟩ | ⟩ √1 | ⊥⟩ v0 = ( , g + x , g ) + i 0, g , 2 2

⊥ ⊥ for some x ∈ C and |g ⟩ ⊥ |g⟩. Similar to Equation 2.11, we decompose |g ⟩ into the eigenbasis of W as ( ) ⊥ + + − − | ⟩ | ⟩ | ⟩ − − | − ⟩ g = a0b0init + ∑ ak bk Wk + bk Wk + a 1b 1 W 1 , (2.17) k

 − for some unknown coefficients b0, bk , b 1 we want to find. We have

| ⟩ 1 −iθ˜ | ⟩ | ⟩ | ⟩ 1 iθ˜ | ⟩ − | ⟩ | ⟩ √1 | ⊥⟩ v0 = e ( 0 + i 1 ) g + x e ( 0 i 1 ) g + i 0, g . 2 2 2

27 Chapter 2. Controlled quantum amplification

| ⟩ | ⟩ Applying the rotation controlled by g to v0 (see Figure 2.9), we obtain the state

| ⟩ −1 iθ˜ | ⟩ | ⟩ | ⟩ − 1 −iθ˜ | ⟩ − | ⟩ | ⟩ √1 | ⊥⟩ v1 = e ( 0 + i 1 ) g xe ( 0 i 1 ) g + i 0, g 2 2 2 1 θ˜ − θ˜ 1 θ˜ − θ˜ 1 ⊥ = − (ei + xe i )|0, g⟩ − i (ei − xe i )|1, g⟩ + √ i|0, g ⟩. 2 2 2

Identifying the amplitude of |1, init⟩ in both sides of the equation U|v⟩ = |v⟩, we obtain

iθ˜ − −iθ˜ −iθ˜ − iθ˜ a0i(e xe ) = a0i(e xe ).

̸ − Since a0 = 0, the last equation has the unique solution x = 1. For the amplitudes of  | − ⟩ | ⟩ 0, W 1 and 0, Wk we have, respectively

θ − − θ ia−1(sin ˜ b−1) = ia−1( sin ˜ + b−1)  θ˜   i k − θ˜ − θ˜ iake ( sin + bk ) = iak( sin + bk ),

+ − ⊥ − θ˜ | ⟩ ⊥ | ⟩ which imply that b 1 = bk = bk = sin for all k. To find b0, we ask that g g , so ( ) 2 θ˜ 2 2 a0b0 + sin 2 ∑ ak + a−1 = 0. k

2 θ Using Eq. 2.14, we obtain b = − sin θ˜ cos , so 0 sin2 θ

2 θ ( ) ⊥ cos + − | ⟩ = − θ˜ | ⟩ + θ˜ ∑ | ⟩ + | ⟩ + θ˜ − | − ⟩ g sin θ init sin ak Wk Wk sin a 1 W 1 . sin k

Using the expression for |init⟩ given by Eq. 2.15, from the last relation we obtain Eq. 2.16, which is what we wanted.

Lemma 25 leads us to the optimal choice for θ˜, given in the next theorem.

Theorem 26. Choosing θ˜ such that

sin θ sin θ˜ = , (2.18) cos θ

the unique (+1)-eigenvector of U is ( ) | ⟩ √1 | ⟩ − | ⟩ U0 = 0, init 1,˜ g (2.19) 2

28 Chapter 2. Controlled quantum amplification ( ) We can also prove directly that |U ⟩ = √1 |0, init⟩ − |1,˜ g⟩ is a (+1)-eigenvector of 0 2 U. On one hand we have

1 |init⟩ = |init⟩ − sin θ˜|g⟩, (2.20) cos θ

and ( ) | ⟩ | ⟩ 1 | ⟩ − θ˜| ⟩ U 0, init = 0 W θ init sin g ( cos ) 1 = |0⟩ |init⟩ − sin θ˜ W|g⟩ , cos θ and

U|1,˜ g⟩ = − sin θ˜ |0⟩W|g⟩ + cos θ˜|1⟩|g⟩.

Thus ( ) 1 U |0, init⟩ − |1,˜ g⟩ = |0, init⟩ − cos θ˜|1, g⟩. cos θ On the other hand, using Eq. 2.20 and expanding |θ˜⟩ into the basis {|0⟩, |1⟩}, we can write

1 |0, init⟩ − |1,˜ g⟩ = |0, init⟩ − sin θ˜|0, g⟩ + sin θ˜|0, g⟩ − cos θ˜|1, g⟩, cos θ | ⟩ and from here we can see that U0 is a (+1)-eigenvector for U. | ⟩ We have proven that U0 has constant overlap with both the starting state for U, and | ⟩ with the target state. In the next section we show that we can prepare U0 efficiently by doing phase estimation.

2.6 Cost of our circuit

2.6.1 Definition of the quantum hitting time

Let U be any real unitary and let |w⟩ be some real unit state in the same space A acts α − α on. Since U is real, its eigenvalues different from 1 come in conjugated pairs (ei j , e i j ), α ∈ π α ≤ α ≤ · · · ≤ α where j (0, ), for j = 1, ... , m. We can order these eigenphases as 1 2 m. | +⟩ | −⟩ Let Uj and Uj be the conjugated eigenvectors of U corresponding to the eigenvalues α − α ei j and e i j , where j = 1, ... , m.

29 Chapter 2. Controlled quantum amplification

Decomposing |w⟩ into the eigenbasis of U we have

m ( ) | ⟩ | ⟩ +| +⟩ −| −⟩ | ⟩ w = w0 U0 + ∑ wj Uj + wj Uj + w−1 U−1 . j=1

| ⟩ − Here we grouped all 1-eigenvectors of U into U0 , and all the ( 1)-eigenvectors into | ⟩ | ⟩ | ⟩ | ⟩ U−1 . Since U and w have real components, we can choose the eigenvectors U0 , U−1 ,  + − | ⟩ ∈ R − ∈ R ∈ R and Uj such that w0 , w 1 , and wj = wj = wj for all j. Thus

m ( ) | ⟩ | ⟩ | +⟩ | −⟩ | ⟩ w = w0 U0 + ∑wj Uj + Uj + w−1 U−1 . j=1

Definition 27. The quantum hitting time of U from |w⟩ is v u u m t 1 QHTα(U, |w⟩) = 2 ∑ |w |2 . (2.21) j α2 j=1 j

The subscript α indicates that we consider the eigenphases of the operator. Defini- tion 27 has an algorithmic motivation in quantum computing. Namely, QHTα(U, |w⟩) is the precision we need to use in phase estimation to prepare the (+1)-eigenvector of U starting from |w⟩ (Theorem 31). When U = WG, where W is a quantum walk, and G = 1 − 2|g⟩⟨g| is reflection about some target state, the quantum hitting time from Equation 2.21 is the square root of the hitting time of the classical random walk from which we constructed W. We can define another quantum hitting time which involves cotangents of the eigen- phases. This definition is a consequence of the expressions given by Theorem 23 and it also has an algorithmic interpretation.

Definition 28. The cotangent quantum hitting time of U from |w⟩ is v u ( ) u m α | ⟩ t | |2 2 j QHTcot(U, w ) = 2 ∑ wj cot . (2.22) j=1 2

We next prove that QHTα is of the same order as QHTcot.

Lemma 29. For any real unitary U and any real unit state |w⟩ we have

1 | ⟩ | ⟩ √1 1 | ⟩ QHTcot(U, w ) < QHTα(U, w ) < + QHTcot(U, w ) 2 2 2

30 Chapter 2. Controlled quantum amplification

Proof. Since sin x < x < tan x for any x ∈ (0, π/2), we have that

1 cot2 x < < 1 + cot2 x. (2.23) x2

Using the first inequality from 2.23, we have v v u ( ) u ( ) u m α u m t j t 4 QHT (U, |w⟩) = 2 ∑ |w |2 cot2 ≤ 2 ∑ |w |2 = 2QHTα(U, |w⟩). cot j j α2 j=1 2 j=1 j

α π α ∈ π Since the eigenphases j of U belong to (0, ), we have j/2 (0, /2). Then, using the second inequality from 2.23, we obtain v v u ( ) u ( ( )) u m u m α 1 t 4 1 t j QHTα(U, |w⟩) = √ ∑ |w |2 < √ ∑ |w |2 1 + cot2 j α2 j 2 j=1 j 2 j=1 2 √ √1 1 2 = 1 + QHTcot. 2 2 √ √ √ We obtain the conclusion using the inequality a + b ≤ a + b, where a, b are positive reals.

Magniez et al. [MNRS12] give a different definition for the quantum hitting, using the

L1 norm. Namely, they introduce the quantity

m | ⟩ | |2 1 QHT1(U, w ) = 2 ∑ wj α , (2.24) j=1 j which we call the L1 quantum hitting time. For consistency with our definition, we omit- ted in Eq. 2.24 the constant quantity corresponding to the (−1)-subspace of U. | ⟩ | ⟩ α α Both QHTα(U, w ) and QHT1(U, w ) are upper bounded by 1/ 1, where 1 is the α smallest positive eigenphase of U. Ambainis et al. [AKR05] use 1/ 1 to measure the query complexity of the abstract search algorithms. √ Using Jensen’s inequality for the concave function x → x, we have v u m u m 2 1 t 1 QHT (U, |w⟩) = 2 ∑ |w | ≤ 2 ∑ |w |2 = QHTα(U, |w⟩), 1 j α j α2 j=1 j j=1 j and the inequality can be strict.

31 Chapter 2. Controlled quantum amplification

The L1 quantum hitting time however does not have an algorithmic interpretation. To overcome this, Magniez et al. [MNRS12] define an effective L1 quantum hitting time. | ⟩ Namely, they note that QHT1(U, w ) is the expectation of the random variable QH which α 2 ϵ takes the value 1/ j with probability 2wj , and the value 0 otherwise. Then the -error quantum hitting time is defined as

QHTϵ(U, |w⟩) = min{a; Pr[QH > a] ≤ ϵ}. (2.25)

ϵ α There are cases when the -error quantum hitting time is 1/ 1, ([MNRS12], Theorem 3.14) and, in the case of quantum walks, it is not known how the ϵ-error quantum hitting time compares to the classical hitting time. One can similarly define an effective classical hitting time and prove that it is the square root of the effective quantum hitting time. The ϵ-error quantum hitting time is used in the context of the abstract search algo- rithm. We are given a starting state |w⟩. If there are no marked items, then |w⟩ is a (+1)- eigenvector of U, while, if there are marked items, then |w⟩ contains other eigenvectors of U as well. To distinguish between the two cases with probability 1 − O(ϵ), it is enough to do phase estimation with precision QHTϵ(U, |w⟩). Magniez et al. [MNRS12] also solve the problem of finding a single marked item with QHTϵ(U, |w⟩) queries, where ϵ is a constant. However, they use a very strong assumption, namely that the initial state |w⟩ has constant | ⟩ α overlap with the principal eigenspace of U, which implies that QHTϵ(U, w ) = 1/ 1. We believe that, in order to find marked elements, we need to use the L2 norm to

define the quantum hitting time; the L1 definition from Eq. 2.24 or Eq. 2.25 might only be used when the starting state of the algorithm has non-zero overlap with a constant number of two-dimensional eigenspaces of U, or the eigenphases of U are all about the same. From now on, we use the notation QHT(U, |w⟩) as a shorthand for QHTα(U, |w⟩).

2.6.2 Quantum hitting time and phase estimation

Now we study the relation between the quantum hitting time and the running time of phase estimation. Technically speaking, we should use the name phase detection, as we are interested to distinguish between phase 0 and a phase different from 0. The results in this section are standard (see for instance [Kit95], [CEMM98], [BHMT02], [MNRS12]), we present them here to emphasize the connection with the quantum hitting time.

Lemma 30. Let U be any real unitary, and |w⟩ be an eigenvector of U corresponding to a phase iφ t different from 0; thus U|w⟩ = e |w⟩. Let δw be the amplitude of |0 ⟩|w⟩ after applying phase

32 Chapter 2. Controlled quantum amplification

| t⟩ F −1 0 T 0 FT

|w⟩ Uk

Figure 2.10: Phase estimation with t bits of precision. The controlled gate applies Uk if the first register is |k⟩, where k = 0, 1, ... , T − 1.

| t⟩| ⟩ |δ |2 ≤ π2 = t estimation with t bits of precision starting from 0 w . Then w T2 φ2 , where T 2 . Proof. The circuit for phase estimation is shown in figure 2.10. Tracing the action of the circuit, we have

T−1 T−1 1 1 φ |0t⟩|w⟩ → √ ∑ |k⟩|w⟩ → √ ∑ eik |k⟩|w⟩ T k=0 T k=0 T−1 T−1 1 φ − → ∑ eik ∑ e ikx|x⟩|w⟩. T k=0 k=0

Then the amplitude of |0t⟩|w⟩ is

T−1 iTφ 1 φ 1 1 − e δ = ∑ eik = · . (2.26) w − iφ T k=0 T 1 e

We use now the elementary fact that the ratio between the arc length and the correspond- x ≤ π ing chord is maximum when the chord is a diameter. Thus |1−eix| 2 , which implies |1 − eix| ≥ 2x/π. Applying this inequality to Eq. 2.26 we obtain the desired upper bound 2 on |δw| .

Theorem 31. Let U be any real unitary, and let |g⟩, and |init⟩ be real states. Assume that U has a | ⟩ | ⟩ | ⟩ (+1)-eigenvector U0 which has constant overlap with both init and g . There is an algorithm which outputs |g⟩ with constant success probability and calls U at most QHT(U, |init⟩) times.

Proof. The algorithm is the following. 1. Apply phase estimation with precision t, where 2t = T = Θ(QHT(U, |init⟩). 2. Measure the first register with respect to {|0t⟩⟨0t|, 1 − |0t⟩⟨0t|}. 3. Provided the outcome is |0t⟩, measure the second register. To prove its correctness, we first decompose |init⟩ into the eigenbasis of U as ( ) | ⟩ | ⟩ | +⟩ | −⟩ | ⟩ init = a0 U0 + ∑aj Uj + Uj + a−1 U−1 . j

33 Chapter 2. Controlled quantum amplification

φ Let j be the eigenphases of U corresponding to the two- dimensional eigenspace spanned | +⟩ | −⟩ δ | t⟩| ⟩ by Uj and Uj . Denote by j the amplitude of the state 0 Uj at the end of the phase estimation algorithm. Then the probability that we measure |0t⟩ at step 2 when the | ⟩ second register does not contain U0 is

2 2 π aj e = ∑ a2δ2 ≤ ∑ . j j 2 φ2 j T j j (√ ) Θ ∑ 2 φ2 By choosing T = j aj / j we make this error probability at most a constant. | ⟩ 2|⟨ | ⟩|2 − Then the probability of measuring g at step 3, is at least a0 g U0 e, which is at least a constant.

2.6.3 Relations between quantum hitting times

In this key section we obtain a series of technical lemmas which relate the quantum hitting time of the circuit U from Figure 2.7 to the quantum hitting time of W and the quantum hitting time of A. This also implies a relation between the quantum hitting time of W and the quantum hitting time of A; to our knowledge, such relations were unknown until now, even for the particular case when W is a quantum walk operator. For simplicity, we formulate some of the equalities in this section between quantum hitting times using the notation Θ. It is possible to track the constants hidden behind Θ and show that they are below 2; this bound is also supported by our experiments, in addition to the rigorous proofs we give here. First we prove a lemma which relates the quantum hitting time of any unitary U to the (+1)-eigenvector of an operator consisting of U and a one-dimensional reflection. The lemma will play a crucial role in the remaining proofs in this section.

Lemma 32. Let U be any real unitary, and let |w⟩ be a real state which does not overlap the (+1)- (1) 1 − | ⟩⟨ | | (1)⟩ (1) eigenspace of U. Let U = U( 2 w w (), and let )U0 be the (+1)-eigenvector of U which overlaps |w⟩. Then QHT(U, |w⟩) = Θ 1 . |⟨ | (1)⟩| w U0 Proof. We first decompose |w⟩ into the eigenbasis of U as ( ) | ⟩ | +⟩ | −⟩ | ⟩ w = ∑wj Uj + Uj + w−1 U−1 , j

34 Chapter 2. Controlled quantum amplification

∈ R (1) where wj for all j. Then, by the flip-flop theorem 23, the (+1)-eigenvector of U is

⊥ | + w⟩ = |w⟩ + i|w ⟩ ( ) φ ( ) | ⟩ j | −⟩ − | +⟩ = w + i ∑ wj cot Uj Uj . j 2

The norm of | + w⟩ is v √ u ( ) u φ ∥ ∥ ⊥ 2 t 2 2 j Θ | ⟩ +w = 1 + w = 1 + 2 ∑ wj cot = (QHT(U, w )). j 2

Thus, normalizing | + w⟩ we obtain the state

( ) 1 ⊥ |U 1 ⟩ = (|w⟩ + i|w ⟩), 0 Θ(QHT(U, |w⟩))

⟨ | (1)⟩ and from here we obtain the conclusion. We need to take the absolute value of w U0 because the eigenvectors of a matrix are not uniquely determined: if we multiply an eigenvector by a complex phase, we obtain another eigenvector.

For the rest of this section, U denotes the amplification circuit from Figure 2.5. First we show that for U the quantum hitting time from the starting state is the same as the quantum hitting time from the target. This is a consequence of the fact that the (+1)- eigenvector of U is an equal superposition of only the starting state and the target. Lemma 33. The following relation holds

QHT(U, |1,˜ g⟩) = QHT(U, |0, init⟩).

Proof. We know from Eq. 2.19 that ( ) | ⟩ √1 | ⟩ − | ⟩ U0 = 0, init 1,˜ g . 2 {| +⟩ | −⟩} is the unique (+1)-eigenvector for U. Let Sj = span Uj , Uj be the two-dimensional eigenspaces of U, for j = 1, ... , m. Projecting onto the subspace Sj we have

Π | ⟩ √1 Π | ⟩ − √1 Π | ⟩ 0 = S v0 = S 0, init S 1,˜ g . j 2 j 2 j

Thus, the projections of |0, init⟩ and |1,˜ g⟩ onto the two-dimensional eigenspaces of U

35 Chapter 2. Controlled quantum amplification

have the same lengths. Therefore, according to the definition of the quantum hitting time 2.21, we obtain QHT(U, |1,˜ g⟩) = QHT(U, |0, init⟩).

The next lemma shows that the target state belongs to the two-dimensional conjugated eigenspaces and the (+1) eigenspace of U.

Lemma 34. The unitary U has no (−1)-eigenvectors which overlap the target |1,˜ g⟩.

U {| +⟩ | −⟩} Proof. We use the rewriting of the circuit given in Figure 2.5. Let Ak , Ak be the two-dimensional eigenspaces of the real unitary A, corresponding to the conjugated φ − φ eigenvalues ei k and e i k . Let ( ) | ⟩ | +⟩ | −⟩ g = ∑gk Ak + Ak , k be the decomposition of |g⟩ into the eigenbasis of A. Therefore ( ) |˜ ⟩ − θ˜ | ⟩ | +⟩ | −⟩ θ˜| ⟩ 1, g = sin ∑gk 0 Ak + Ak + cos 1, g . (2.27) k

Then, applying the flip-flop theorem 23, we deduce that the eigenphases of U are solutions of the equation ( ( ) ( )) ( ) − φ + φ 2 x k x k − 2 θ˜ x ∑gk cot + cot cot ( ) tan (2.28) k 2 2 2

However, we can see directly that x = π is not a solution for Eq. 2.28, so U has no (−1)- eigenvectors which overlap |1,˜ g⟩.

In the following theorem we relate the quantum hitting time of the amplifier U to the quantum hitting time of W.

Theorem 35. Let ϵ = sin2 θ be the probability we measure the target |g⟩ from the initial state init, and let |g⟩ = − sin θ|init⟩ + cos θ|g⟩. Then ( ) 1 (U |˜ ⟩) = Θ √ (W | ⟩) QHT , 1, g ϵQHT , g . (2.29)

Proof. The first part of the proof uses Lemma 32 to relate QHT(U, |1,˜ g⟩) to a dot product. Then we use some circuit rewritings to deduce another expression for the eigenvector that is used to express QHT(U, |1,˜ g⟩) as a dot product. At the end we do some calculations and we apply Lemma 32 for W.

36 Chapter 2. Controlled quantum amplification

As we saw earlier (Eq. 2.19), the unique (+1) eigenvector of U is ( ) | ⟩ √1 | ⟩ − | ⟩ U0 = 0, init 1,˜ g . 2

Thus, we can decompose |1,˜ g⟩ into the eigenbasis of U as ( ) ( ) |˜ ⟩ √1 −| ⟩ | +⟩ | −⟩ 1, g = U0 + ∑bj Uj + Uj , (2.30) 2 j

| ⟩ where we denoted by Uj the conjugated eigenvectors of U corresponding to the eigen-  α values e i j . Then v u ( ) u α |˜ ⟩ t 2 2 j QHT(U, 1, g ) = ∑ bj cot (2.31) j 2

Let 1 ( ) |U+⟩ = √ |0, init⟩ + |1,˜ g⟩ . 0 2 ∥ + ∥ | +⟩ ⊥ | ⟩ Then U0 = 1 and U0 U0 . Also, we can write √ ( ) | +⟩ |˜ ⟩ | ⟩ | +⟩ | −⟩ U0 = 2 1, g + U0 = ∑bj Uj + Uj . (2.32) j

From Eq. 2.32 we obtain ( ) |˜ ⟩ Θ | +⟩ QHT(U, 1, g ) = QHT(U, U0 ) . (2.33)

| +⟩ ∈ Also, Eq. 2.32 implies that U0 span2D(U), so we can apply Lemma 32. We conclude (1) 1 − | +⟩⟨ +| | (1)⟩ that the operator U = U( 2 U0 U0 ) has a (+1)-eigenvector U0 , which overlaps | +⟩ U0 such that ( ) 1 QHT(U, |U+⟩) = Θ . (2.34) 0 |⟨ +| (1)⟩| U0 U0

In the following we make a series of transformations which enable us to evaluate the inner product from Eq. 2.34.

37 Chapter 2. Controlled quantum amplification

From Eq. 2.30 and the flip-flop theorem 23 we have ( ) α ( ) | (1)⟩ | +⟩ j | −⟩ − | +⟩ U0 = U0 + i∑bj cot Uj Uj , j 2

| ⟩ ⊥ | +⟩ | ⟩ ⊥ | ⟩ | ⟩ ⊥ | (1)⟩ and since U0 U0 and U0 Uj , we deduce that U0 U0 . Consider the operator

(2) (1) | ⟩ · | +⟩ · | ⟩ U = U Ref( U0 ) = U Ref( U0 ) Ref( U0 ).

| (1)⟩ (2) | ⟩ ⊥ | (1)⟩ Then U0 is a (+1)-eigenvector for U because U0 U0 . We have

| +⟩ | ⟩ {| +⟩ | ⟩} Ref( U0 )Ref( U0 ) = Ref(span U0 , U0 ) = Ref(span{|0, init⟩, |1,˜ g⟩}).

Thus, U(2) is equivalent to the circuit in Figure 2.11

0 1˜ 0˜ 0

Ref(init) G G W

Figure 2.11: U(2) with reflection about span{|0, init⟩, |1,˜ g⟩}

Since |g⟩ ⊥ |init⟩, the operators |0⟩⟨0| ⊗ Ref(init) + |1⟩⟨1| ⊗ 1 and 1 ⊗ G commute, so U(2) can be written in the form given in Figure 2.12. Finally, grouping the gates controlled

0 0

G Ref(init) W

Figure 2.12: U(2) with unconditional G in the second register. by |0⟩ and the gates controlled by |1⟩, we obtain that U(2) is equivalent to the circuit in Figure 2.13. Using Figure 2.13, we see that the eigenvectors of U(2) which have |1⟩ in the first reg- ⊥ ⊥ ister are |1, g⟩ (with eigenvalue −1), and |1, g ⟩ (with eigenvalue +1, for any |g ⟩ ⊥ | ⟩ | +⟩ | ⊥⟩ | (1)⟩ g ). Since U0 is orthogonal to any vector 1, g , we conclude that U0 , the (+1)- (2) | +⟩ eigenvector of the circuit U which has nonzero overlap with with U0 , must have the

38 Chapter 2. Controlled quantum amplification

1 0

G Ref(g) · Ref(init) · W

Figure 2.13: Equivalent form for U(2)

| (1)⟩ | (1)⟩ form 0, W0 . Here W0 is a (+1)-eigenvector for

W(1) = W · Ref(|init⟩) · Ref(|g⟩).

To summarize,

| (1)⟩ | (1)⟩ U0 = 0, W0 , (2.35)

θ˜ sin θ (up to a global phase). Since sin = cos θ , we can write

1 ( ) |U+⟩ = √ |0, init⟩ + |1,˜ g⟩ 0 2 1 ( ) 1 = √ − sin θ˜|0, g⟩ + |0, init⟩ + √ cos θ˜|1, g⟩ 2 ( )2 1 1 1 = √ |0, init⟩ − 2 sin θ˜|0, g⟩ + √ cos θ˜|1, g⟩ 2 cos θ 2 1 1 1 = √ (|0, init⟩ − 2 sin θ|0, g⟩) + √ cos θ˜|1, g⟩, 2 cos θ 2 so

1 1 1 |U+⟩ = √ |0⟩G|init⟩ + √ cos θ˜|1, g⟩ (2.36) 0 2 cos θ 2

Let |g⟩ = − sin θ|init⟩ + cos θ|g⟩. In other words, |g⟩ is obtained from |g⟩ by removing |init⟩ and then normalizing, so |g⟩ ⊥ |init⟩. Then

Ref(|init⟩)Ref(|g⟩) = Ref(span{init, |g⟩}) = Ref(span{init, |g⟩}), which implies that

W(1) = W · Ref(|init⟩) · Ref(|g⟩) = W · Ref(|init⟩) · Ref(|g⟩)

39 Chapter 2. Controlled quantum amplification

| (1)⟩ · | ⟩ · | ⟩ Thus W0 is a (+1)-eigenvector of W Ref( init ) Ref( g ). Since W|init⟩ = |init⟩, the unitary W · Ref(|init⟩) = W(1 − 2|init⟩⟨init|) has the same spectrum as W, except for |init⟩ becoming a (−1)-eigenvector. As |g⟩ ⊥ |init⟩, we de- duce that the state |g⟩ has the same decomposition in the eigenbasis of W and W · (1 − | ⟩⟨ | | (1)⟩ · | ⟩ 2 init init ). Therefore W0 is a (+1)-eigenvector of the unitary W Ref( g ). Accord- ing to Lemma 32 we have ( ) 1 QHT(W, |g⟩) = Θ . (2.37) |⟨ | (1)⟩| g W0

As W and |init⟩ are real, we deduce that the unitaries W and 1 − 2|init⟩⟨init| commute. | (1)⟩ (1) · | ⟩ · | ⟩ | ⟩ Therefore W0 is a (+1)-eigenvector for W = W Ref( init ) Ref( g ), and init is a − (1) | (1)⟩ ⊥ | ⟩ ( 1)-eigenvector for W , so W0 init . Finally we can put all the pieces of the proof together. From 2.34 and 2.35 we have ( ) ( ) 1 1 QHT(U, |U+⟩) = Θ = Θ . (2.38) 0 |⟨ +| (1)⟩| |⟨ +| (1)⟩| U0 U0 U0 0, W0

Using Eq. 2.36 we have

|⟨ +| (1)⟩| |⟨ +| | ⟩⟨ | ⊗ 1 | (1)⟩| U0 0, W0 = U0 ( 0 0 ) 0, W0 1 1 ( ) = √ |⟨init|G|W 1 ⟩| (2.39) 2 cos θ 0

Since

G|init⟩ = |init⟩ − 2 sin θ|g⟩ = |init⟩ − 2 sin θ(cos θ|g⟩ + sin θ|init⟩),

| (1)⟩ ⊥ | ⟩ and W0 init , we have

|⟨ | | (1)⟩| θ θ|⟨ | (1)⟩| init G W0 = 2 sin cos g W0 . (2.40)

From Eq. 2.37, Eq. 2.38, Eq. 2.39, and Eq. 2.40 we obtain ( ) 1 (U |˜ ⟩) = Θ √ (W | ⟩) QHT , 1, g ϵQHT , g . (2.41) and taking into account Eq. 2.33, we obtain the conclusion.

40 Chapter 2. Controlled quantum amplification

We can also relate the quantum hitting time of A to the quantum hitting time of the controlled amplifier U.

Theorem 36. ( ) 1 (A | ⟩) = Θ √ (W | ⟩) QHT , init ϵQHT , g . (2.42)

Proof. Since ⟨g|init⟩ ̸= 0, the state |g⟩ has non-zero overlap with the unique (+1)-eigenvector of W. Then, according to Theorem 23, the operator A = W · G has no (+1)-eigenvectors, so |init⟩ has no overlap with the (+1)-eigenspace of A. Using Lemma 32 we have ( ) 1 QHT(A, |init⟩) = Θ , (2.43) |⟨init|w⟩|

where |w⟩ is the unique (+1)-eigenvector of the operator B = A(1 − 2|init⟩⟨init|). We can write

B = A(1 − 2|init⟩⟨init|) = W · Ref(|g⟩) · Ref(|init⟩) = W · Ref(|init⟩) · Ref(|g⟩).

Since |init⟩ is a (−1)-eigenvector of B, and |w⟩ is the (+1)-eigenvector of B, we deduce that |init⟩ ⊥ |w⟩. We have

|init⟩ = cos θ|init⟩ − sin θ|g⟩,

and, since |w⟩ ⊥ |init⟩ we obtain √ |⟨init|w⟩| = sin θ|⟨g|w⟩| = ϵ|⟨g|w⟩|.

Then, using Eq. 2.43, we conclude that ( ) 1 1 QHT(A, |init⟩) = Θ √ . (2.44) ϵ |⟨g|w⟩|

The operator W · Ref(|init⟩) has the same eigenvalues and eigenvectors as W, except for |init⟩ becoming a (−1)-eigenvector. Because |g⟩ ⊥ |init⟩, we deduce that the state |g⟩ has

41 Chapter 2. Controlled quantum amplification

the same decomposition in the eigenbasis of W and in the eigenbasis of W · Ref(|init⟩). Also, |w⟩ cannot be orthogonal to |g⟩. To prove this, suppose that |w⟩ ⊥ |g⟩. Since |w⟩ ⊥ |init⟩, the fact that |w⟩ is a (+1)-eigenvector of B = W · Ref(|init⟩) · Ref(|g⟩) implies that |w⟩ is a (+1)-eigenvector of W. This is, however, a contradiction. Therefore, using Lemma 32, we conclude that ( ) 1 QHT(W, g) = Θ . (2.45) |⟨g|w⟩|

Using Eq 2.44 and Eq 2.45 we obtain the conclusion.

As the (+1)-eigenvector of the amplifier U has constant overlap with both |0, init⟩, which is the starting state of U, and with the target state |1,˜ g⟩, Theorem 31 shows that U outputs |1,˜ g⟩ with constant success probability using Θ(QHT(U, |0, init⟩) queries. Com- bining then the results of Lemma 33, Theorem 35, and Theorem 36, we obtain the main result of this section.

Theorem 37. The amplifier U starting in the initial state |0, init⟩ outputs the target state |1,˜ g⟩ with constant success probability using Θ(QHT(A, init)) queries.

This powerful theorem implies that our amplifier U does not increase the asymptotic query complexity of A. Therefore, if we can use an abstract search algorithm A to solve a practical problem (so its quantum hitting time is not too large), then we can amplify A to achieve constant success probability.

2.7 Simulation of amplitude amplification

2.7.1 Standard amplitude amplification

In this section we review briefly the amplitude amplification algorithm [BHMT02]. We are given a target state |g⟩, and we have an algorithm W which implements a reflection about some initial state |init⟩. Thus

W = 2|init⟩⟨init| − 1.

The goal is to rotate the state |init⟩ such that to obtain a constant overlap with the target. We construct the operators and A = −W · G, where G = 1 − 2|g⟩⟨g|. Let |bad⟩ be the state

42 Chapter 2. Controlled quantum amplification

obtained by normalizing the vector |init⟩ − ⟨g|init⟩|g⟩. We can write

|init⟩ = sin θ|g⟩ + cos θ|bad⟩, for some angle θ ∈ [0, π/2]. The simplest analysis of the operator A uses the property about the product of two reflections stated in Proposition 4. In our situation, in the two dimensional subspace S = span{|g⟩, |init⟩}, the operator A = (1 − 2|init⟩⟨init|)(1 − 2|bad⟩⟨bad|) is a product of two reflections, so it is a rotation by an angle 2θ. Thus, A has the eigenvector |A+⟩ = √1 (|g⟩ + i|bad⟩) with eigenvalue 1 2 − θ θ e 2i , and the eigenvector |A+⟩ = √1 (|g⟩ − i|bad⟩) with eigenvalue e2i . Outside the 1 2 subspace S, the operator A acts trivially (as 1). We have

1 1 − |g⟩ = √ |A+⟩ + √ |A ⟩, 2 1 2 1

and −i i − |bad⟩ = √ |A+⟩ + √ |A ⟩. 2 1 2 1 After applying A for t steps to |bad⟩, we obtain the state

−i θ i − θ − |ψ⟩ = √ e2ti |A+⟩ + √ e 2ti |A ⟩. 2 1 2 1

− θ θ We achieve maximum overlap with |g⟩ when e 2ti = −e2ti , so 4tθ = π. Therefore, after ⌈ π ⌉ | ⟩ T = 4θ applications of A, we end up with a state close to g .

2.7.2 Simulation of amplitude amplification

In this section we show that we can use the circuit from Figure 2.5 to simulate amplitude amplification with a constant slowdown compared to the original algorithm. We use the same notations as in previous section, except that we denote the state |bad⟩ by |init⟩. Defining |g⟩ = cos θ|g⟩ − sin θ|init⟩

we can write W = 2|init⟩⟨init| − 1 = 1 − 2|g⟩⟨g|,

and the circuit is depicted in Figure 2.14.

Theorem 38. The circuit U in Figure 2.14 simulates amplitude amplification.

43 Chapter 2. Controlled quantum amplification

|0⟩ 0˜ 0 |1˜⟩

|init⟩ Ref(g) Ref(g) |g⟩

Figure 2.14: Simulation of amplitude amplification

Proof. As we already showed (Eq. 2.19), the unique (+1) eigenvector of U is ( ) | ⟩ √1 | ⟩ − | ⟩ U0 = 0, init 1,˜ g . 2

We also note that |1, init⟩ is a (−1) eigenvector for U, and we can ignore for the purpose of the analysis the gate Z ⊗ |init⟩⟨init| + 1 ⊗ |g⟩⟨g|, as this gate generates the trivial (−1)- eigenspace of U (we did not depict this gate in Figure 2.14). Therefore, the action of the unitary U happens in a three-dimensional subspace S. In the subspace spanned by |0,˜ g⟩ and |0, g⟩, the operator U is a product of two reflec- tions, so, according to Proposition 4, the effective action of U is a rotation. The rotational angle is the double of the angle between the reflection axes, which is given by √ cos φ = ⟨0, g|0,˜ g⟩ = cos θ˜ cos θ = cos(2θ),

so that √ √ sin φ = 1 − cos(2θ) = 2 sin θ.

Since the probability to obtain the target when measuring the√ initial state is subconstant, θ ≈ θ φ ≈ φ ≈ θ we can approximate sin √, which implies sin 2 . Thus, U is a rotation by θ {| ⟩ | ⟩} an angle of approximately 2 2 in the subspace Rot2D = span 0,˜ g , 0, g . We have ⟨0, g|0, init⟩ = − sin θ, and ⟨0, g|1,˜ g⟩ = − sin θ˜ cos θ = − sin θ, | ⟩ ⊥ | ⟩ | ⟩ ⊥ | ⟩ | ⟩ so U0 0, g . Also, U0 0,˜ g , which shows that U0 is orthogonal to the subspace

Rot2D. Thus the three-dimensional subspace S on which U acts nontrivially is a direct sum ⊕ {| ⟩} S = Rot2D span U0 . Let 1 ( ) |U+⟩ = √ |0, init⟩ + |1,˜ g⟩ . 0 2

44 Chapter 2. Controlled quantum amplification

| +⟩ ⊥ | ⟩ | +⟩ ⊥ | ⟩ | +⟩ Since U0 U0 and U0 1, init , we conclude that U0 belongs to the two- dimensional subspace where U acts as a rotation. The initial state is ( ) | ⟩ √1 | +⟩ | ⟩ 0, init = U + U0 . 2 0

We need to rotate this state until we obtain the state ( ) −| ⟩ √1 −| +⟩ | ⟩ 1,˜ g = U + U0 . 2 0

π Thus, we need to rotate |U+⟩ by an accumulated angle of π, which requires T = ⌈ √ ⌉ 0 2 2θ applications of U. θ In short, amplitude amplification rotates with an angle 2 √until reaching 90 degrees, at which point it stops; our amplifier, U rotates with an angle 2 2θ, but it needs to rotate the initial state with 180 degrees.

2.8 Application to quantum walks

In this section we use our framework to obtain a large class of practical applications. Namely we show that we can embed a quantum walk with a single marked vertex into our circuit. As a consequence, we conclude that quantum walks are able not only to detect the presence of the marked location, but also to find it. Specifically, we prove the following theorem.

Theorem 21 (Restated). Let P be a reversible Markov chain over a space X, and let m ∈ X be a single marked state. There exists an explicit quantum algorithm which solves FIND ONE(P) with constant success probability using a number of queries of order √ √ S + HT(P, {m}) · U + HT(P, {m}) · C.

Our algorithm is new, more general, and it can be implemented much simpler than the solution given in [KMOR15], which builds a quantum walk from an interpolated classical walk .

45 Chapter 2. Controlled quantum amplification

2.8.1 Classical hitting time

We discuss now some facts related to classical random walks, with the purpose of obtain- ing a formula for the classical hitting time. We point out where we need the reversibility condition - we believe that a major obstacle to generalize our results to non-reversible Markov chains is the lack of a compact expression for the classical hitting time involving eigenphases and probabilities. The key finding in this subsection is the expression for the classical hitting time using the eigenvalues and the eigenvectors of the discriminant ′ matrix D(P ). In this section, we denote by P a Markov chain, namely a matrix with nonnegative entries such that the sum of the elements on each column is 1. For any indices x and

y, the entry Px,y is the probability of jumping from state y to state x, thus P acts on any probability distribution from the left. Throughout this section we assume that the random walks we are working with have a unique stationary distribution, namely a unique right eigenvector with eigenvalue 1. The most general requirement for P is to be irreducible and aperiodic; that is, the graph corresponding to P is connected and acyclic. We denote the stationary distribution with π. As a consequence of the Perron-Frobenius theorem, the column vector π has real, positive entries.

Definition 12 (Restated). An aperiodic, irreducible random walk P is called reversible if it satisfies the condition

πxPy,x = πyPx,y, (2.46)

for any states x, y. Here we denote by πx the coordinates of the stationay distribution π of P.

For our discussion, an important consequence of reversibility is the fact that the matrix of the random walk is diagonalizable - that is, there exists a complete basis formed by its eigenvectors. We include the proof here as it introduces the discriminant matrix, which is the key for obtaining the spectrum of a quantum walk.

Lemma 39. If a random walk P is reversible, then the matrix P is diagonalizable.

Proof. Denote by π the stationary distribution for P, so Pπ = π. The column vector π

has positive entries πx which sum to 1. Define the diagonal matrix D by D = diag(π), so −1 Dx,x = πx, and let S be the matrix with entries Sx,y = πyPx,y. Then P = SD . Consider

46 Chapter 2. Controlled quantum amplification

the discriminant matrix

− − − D(P) = D 1/2SD 1/2 = D 1/2PD1/2. (2.47)

We claim that P and D(P) have the same eigenvalues and eigenvectors. Indeed, if Pλ = − − − − − − λλ, then SD 1/2(D 1/2λ = λλ, so D 1/2SD 1/2(D 1/2λ) = λ(D 1/2λ). In other words, − if λ is an eigenvector of P, then D 1/2λ is an eigenvector with the same eigenvalue for D(P). The reversibility condition from Eq. 2.46 implies that the matrix S is symmetric, so D(P) is also symmetric. Thus D(P) has real eigenvalues and is diagonalizable. It follows that P has real eigenvalues and is diagonalizable.

−1/2 1/2 Lemma 40. Assume√ that P is reversible, and consider the discriminant D(P) = D PD . Then D(P) = P ◦ PT, where ◦ denotes the entrywise product of matrices, PT is the transpose of the matrix P, and the square root is taken entrywise.

Proof. We have √ √ √ √1 π √1 π · D(P)x,y = π Px,y y = π yPx,y Px,y x √ √ x √ 1 = √ πxPx,y · Px,y = Px,yPy,x, πx

which is what we wanted to prove.

In the following, we assume that P is an irreducible and aperiodic random walk in a space X with N elements. A subset M of the elements of X have been marked.

Definition 13 (Restated). We define the hitting time HT(P, M) as the expected number of steps of the random walk starting from an unmarked state until reaching a marked state for the first time. The initial state is picked according to the stationary distribution of P restricted to unmarked states. ′ Definition 41. For a Markov chain P, we define the totally absorbing walk P as the walk we obtain from P by replacing all outgoing transitions from marked states with self-loops. ′ Permuting rows and columns, we can assume that the matrices P and P have the following block form

[ ] P P P = U,U U,M PM,U PM,M

47 Chapter 2. Controlled quantum amplification

b b U b U b

bM b b b M

′ Figure 2.15: Original walk P (left) and the modified walk P (right). and [ ] ′ P 0 P = U,U . 1 PM,U M,M

′ Graphically, we illustrate P and P in Figure 2.15.

Lemma 42. Let P be an irreducible and aperiodic chain with stationary distribution π. Let ϵ = π π π ∑x∈M x, and let U be the column vector obtained from by removing the entries corresponding to marked states. Then √ ( ) √ ∞ (√ ) π 1 π ( ) = U √ ∑ t π U HT P, M − ϵdiag π PU,Udiag U − ϵ (2.48) 1 U t=0 1

Proof. We have

∞ HT(P, M) = ∑ k · Pr[we need exactly k steps] k=1 ∞ = ∑ k · Pr[we need more than t steps]. t=0

The distribution after t steps of the walk is [ ] ′ π /(1 − ϵ) (P )t U 0

We have [ ] t ′ P 0 (P )t = U,U . 1 ··· t−1 1 PM,U( + PU,U + + PU,U)

48 Chapter 2. Controlled quantum amplification

Let 1U be the column vector which has 1 on the unmarked positions. Then π Pr[we need more than t steps] = 1T Pt U U U,U 1 − ϵ √ √ √ √ πT π T π π π Since Udiag(1/ U) = 1U and diag( U) U = U, we obtain Eq. 2.48. Assume now that the walk P is also reversible. Then [ ] ′ P 0 D(P ) = U,U . 1 0 M,M

( ′) ≤ λ′ ≤ · · · ≤ λ′ < The symmetric matrix D P has eigenvalues 0 N−|M| 1 1 with eigenvec- | ⟩ | ⟩ λ′ = ··· = λ′ = tors hN−|M| , ... , h1 , and eigenvalues N N−|M|+1 1 with the correspond- ing eigenvectors having support only on marked states. Let √ 1 √ 1 |bad⟩ = √ ∑ π |x⟩ = √ π . (2.49) − ϵ x − ϵ U 1 x∈/M 1

The reversibility condition 2.46 allows us to simplify Eq. 2.48, namely √ ( ) √ (√ ) π 1 π ′ U √ t π U = ⟨ | ( )t| ⟩ − ϵdiag π PUUdiag U − ϵ bad D P bad . 1 U 1

Thus

∞ N λ′ t|⟨ | ⟩|2 HT(P, M) = ∑ ∑ ( k) hk bad . (2.50) t=0 k=1

The terms in Eq. 2.50 with k > N − |M| disappear since the corresponding eigenvectors of ′ D(P ) have support only on marked states. Interchanging then the two sums in Eq. 2.50 we obtain the following classic theorem (see, for instance, [KMOR15], [Sze04]).

Theorem 43. If P is an irreducible, aperiodic, and reversible chain, then

N−|M| |⟨ | ⟩|2 hk bad HT(P, M) = ∑ − λ′ . (2.51) k=1 1 k

λ′ θ′ θ′ ≤ π − | | − 2 ≤ ≤ Let k = cos k, with 0 < k /2 for k = 1, ... , N M . Since 1 x /2 cos x

49 Chapter 2. Controlled quantum amplification

1 − x2/4 when 0 < x ≤ π/2, we obtain from Eq. 2.51 that ( ) −| | N M |⟨h |bad⟩|2 HT(P, M) = Θ ∑ k . (2.52) (θ′ )2 k=1 k

Eq. 2.52 provided the justification for the definition of the notion of quantum hitting time in Eq. 2.21. We emphasize that we obtained the expression from Eq. 2.52 by assuming that the walk P is reversible; without the reversibility condition, the simplest expression we could obtain for the hitting time is given by Eq. 2.48.

2.8.2 Definition of a quantum walk

Let X be a space on which we defined a reversible classical random walk P. Thus, the

entry Px,y of the matrix P stores the probability of jumping from y to x, for any x, y ∈ X. Consider the vector space H = span{|x⟩; x ∈ X}, and let √ |px⟩ = ∑ Py,x|y⟩ y∈X be the superposition containing all the neighbors of an element x. We consider the sub- spaces A = span{|x⟩|px⟩; x ∈ X} and B = span{|px⟩|x⟩; x ∈ X}. On the subspace A + B we define the quantum walk corresponding to P as the unitary operator ( )

W = −SWAP · Ref(A) = SWAP · 2 ∑ |x, px⟩⟨x, px| − 1 . (2.53) x∈X

Here SWAP is the unitary which swaps the two registers. Szegedy [Sze04] introduced the operator

B · A WSze = Ref( ) Ref( ).

2 We can see easily that WSze = W . In particular, WSze and W have the same eigenvectors corresponding to the eigenvalues different from 1; also, by halving the eigenphases of

WSze, we obtain the eigenphases of W. Let M ⊂ X denote the subset of the states of X which are marked (they correspond to the solutions of the problem we are solving). In order to use W for detecting or finding 1 − | ⟩⟨ | ⊗ | ⟩⟨ | marked items, we need to attach to it the marking oracle G = 2 ∑x∈M x x px px . Szegedy [Sze04] observed that one can model the reflection G about the marked subspace

50 Chapter 2. Controlled quantum amplification

′ with a different quantum walk. Namely, consider the totally absorbing walk P , obtained from P by replacing any transitions from marked states with self loops. In matrix form [ ] ′ P 0 P = U,U . 1 PM,U M,M

A′ {| ⟩| ′ ⟩ ∈ } B′ {| ′ ⟩| ⟩ ∈ } Define the subspaces = span x px ; x X and = span px x ; x X . The ′ A′ B′ corresponding walk WSze(P ) is defined on the space + by

′ B′ · A′ WSze(P ) = Ref( ) Ref( ).

∈ | ′ ⟩ | ⟩ ′ | ⟩| ′ ⟩ ′ | ⟩| ⟩ | ⟩| ⟩ For m M we have pm = m , so W(P ) m pm = W(P ) m m = m m . Therefore, A {| ⟩| ⟩ ∈ } B {| ⟩| ⟩ ∈ } denoting by −M = span x px ; x / M and −M = span px x ; x / M , we have

′ B · A WSze(P ) = Ref( −M) Ref( −M).

The following theorem, which is stated without proof in [MNRS12], connects the two walks. Theorem 44. · 2 ′ (W G) = WSze(P ). For our applications, we use the operator A = W · G.

Denote by T the isometry which maps |x⟩ to |x⟩|px⟩ for any x ∈ X. The initial state for the quantum walk ( ) √ |init⟩ = T ∑ πx|x⟩ . x∈X and the target state is 1 √ |g⟩ = √ ∑ π |m⟩|p ⟩. − ϵ m m 1 m∈M The reversibility condition ensures that SWAP|init⟩ = |init⟩.

2.8.3 Relation between classical and quantum hitting times

In this subsection we prove that, in the case of reversible walks, the quantum hitting time of A from |init⟩ is the square root of the classical hitting time. This enables us to obtain the main application of our circuit, namely that we can solve the problem of finding a unique marked√ item m with constant√ success probability using a number of queries of order S + HT(P, {m}) · U + HT(P, {m}) · C.

51 Chapter 2. Controlled quantum amplification

Consider an irreducible, aperiodic, and reversible random walk P on a state space X. Denote by T the isometry

T : |x⟩ → |x⟩|px⟩.

The isometry T maps from the space of the classical walk P to the space of the quantum − · A · | ⟩⟨ | − 1 walk. We have the operators W = SWAP Ref( ) = SWAP (2 ∑x∈X x, px x, px ), 1 − | ⟩⟨ | ⊗ | ⟩⟨ | · G = 2 ∑x∈M x x px px , and A = W G. The subspace on which W and A have non-trivial action is span{|x⟩|px⟩, |px⟩|x⟩}, where x ∈ X. Szegedy [Sze04] gave a general method to find the eigenvalues and the eigenvec- tors of any operator consisting of two reflections. We adapt his theorem for the particular case of quatum walks, defined as a reflection followed by a swap.

λ λ ∈ Theorem 45 (Spectral decomposition of W). Let the eigenvalues of D(P) be N, ... , 2 λ | ⟩ | ⟩ | ⟩ |π⟩ (0, 1), and 1 = 1. Denote by PN , ... , P2 , P1 = the complete set of eigenvectors of D(P) λ { | ⟩ · | ⟩} corresponding to the eigenvalues k. Let Sk = span T Pk , SWAP T Pk , for k = 1 ... , N. Then

′ ̸ ′ i) Each subspace Sk is invariant under W, and, for k = k , the subspaces Sk and Sk are ⊕ orthogonal. Denote by S = kSk. ⊥ ii) W acts as −SWAP on S (thus it may only have the eigenvalues 1).

{ |π⟩} iii) The subspace S1 = span T is one-dimensional, and it is the only (+1)-eigenspace of W. We also have |init⟩ = T|π⟩ = SWAP · T|π⟩.

λ θ θ ∈ π iv) For k = 2, ... , N, let k = cos k, with k (0, /2). In each subspace Sk with θ θ i k k = 2, ... , N, the operator W is a rotation of an angle k, thus it has eigenvalue e with eigenvector θ ⊥ | ⟩ − i k · | ⟩ | ⟩ | ⟩ T Pk e SWAP T Pk = T Pk + i(T Pk ) , − θ and eigenvalue e i k with eigenvector

θ ⊥ | ⟩ i k · | ⟩ | ⟩ − | ⟩ T Pk + e SWAP T Pk = T Pk i(T Pk ) .

| ⟩ ⊥ We denoted by (T Pk ) the state in the two-dimensional subspace Sk which is orthogonal | ⟩ to T Pk .

The space on which W acts non-trivially is HW = span{|x⟩|px⟩, |px⟩|x⟩; x ∈ X}, which has dimension at most 2N. We can prove that at least one of the 2N vectors which span H − W is a linear combination of the remaining 2N 1 vectors (Lemma 51). Since items iii)

52 Chapter 2. Controlled quantum amplification

and iv) from Theorem 45 give 2N − 1 independent eigenvectors for W in HW, we conclude that W has no (−1)-eigenvectors in the subspace we are interested in.

A ( ′) < λ′ ≤ Theorem 46 (Spectral decomposition of ). Let the eigenvalues of D P be 0 N−|M| · · · ≤ λ′ | ⟩ | ⟩ ′ 1 < 1 with eigenvectors hN−|M| , ... , h1 . The rest of the eigenvalues of D(P ) are 1 and { | ⟩ · the corresponding eigenvectors have support only on marked states. Let Sk = span T hk , SWAP | ⟩} − | | T hk , for k = 1 ... , N M . Then ′ ̸ ′ i) Each subspace Sk is invariant under A, and, for k = k , the subspaces Sk and Sk are orthog- ⊕ onal. Denote by S = kSk. ⊥ ii) A acts as −SWAP on S (thus it may only have the eigenvalues 1). The states |m⟩|m⟩ with m ∈ M are (−1)-eigenvectors of A.

− | | λ′ θ′ θ ∈ π iii) For k = N M , ... , N, let k = cos k, with k (0, /2). In each subspace Sk with ′ θ′ − | | θ i k k = N M , ... , N, the operator A is a rotation of an angle k, thus it has eigenvalue e with eigenvector

′ | ⟩ − iθ · | ⟩ | ⟩ | ⟩ ⊥ T hk e k SWAP T hk = T hk + i(T hk ) ,

′ −iθ and eigenvalue e k with eigenvector

′ | ⟩ iθ · | ⟩ | ⟩ − | ⟩ ⊥ T hk + e k SWAP T hk = T hk i(T hk ) .

| ⟩ ⊥ We denoted by (T hk ) the state in the two-dimensional subspace Sk which is orthogonal | ⟩ to T hk .

Theorem 45 shows that, in the case of a quantum walk, any relation involving the eigenvectors of D(P) can be translated immediately into a relation involving the eigen- vectors of the operator W by applying the isometry T. The same is true for Theorem 46, ′ where any relation involving the eigenvectors of D(P ) gives a relation involving the eigenvectors of the operator A.

Theorem 47. Let W = −SWAP · Ref(A) be the quantum walk corresponding to an irreducible, aperiodic, and reversible Markov chain P over the state space X. Let M ⊂ X be the subset of · 1 − | ⟩⟨ | ⊗ | ⟩⟨ | marked states, and let A = W G, where G = 2 ∑x∈M x x px px . Then (√ ) QHT(A, |init⟩) = Θ HT(P, M) . (2.54)

53 Chapter 2. Controlled quantum amplification

Proof. We have [ ] ′ P 0 D(P ) = U,U , 1 0 M,M

′ so the (+1)-eigenvectors of D(P ) have support only on the marked states. Denote the ′ λ′ θ′ θ′ ≤ π eigenvalues of D(P ) different from 1 by k = cos k, with 0 < k /2 for k = − | | | ⟩ 1, ... , N M , and let hk be the corresponding eigenvectors. The two-dimensional ro- A | +⟩ | −⟩ tational subspaces of are spanned by conjugated eigenvectors ( Ak , Ak ), such that

√1 | +⟩ | −⟩ | ⟩ ( A + A ) = T hk . (2.55) 2 k k

Since |bad⟩ does not overlap the marked states, we have

| ⟩ | ⟩ bad = ∑ bk hk , k ∈ R for some bk (see also Eq. 2.49). Using Eq. 2.55, we have

| ⟩ | ⟩ √1 | +⟩ | −⟩ init = T bad = ∑ bk( Ak + Ak ) 2 k

v Therefore u u −| | (√ ) tN M |b |2 QHT(A, |init⟩) = ∑ k = Θ HT(P, M) , (θ′ )2 k=1 k which is what we wanted to obtain.

Using Eq. 2.54 we note that there are examples of graphs for which there exists a √1 strict separation between the quantum hitting time and ϵδ . Such examples are listed in Figure 2.3. For such random walks, the MNRS algorithm from Theorem 16 for the problems FIND ONE(P) or FIND MANY(P) has an update cost greater than the quantum hitting time, while the detection algorithm from Theorem 15 has an update cost equal to the quantum hitting time. Combining Theorem 47 with Theorem 37, we obtain the main application of our frame- work.

Theorem 21 (Restated). Let P be a reversible Markov chain over a space X, and let m ∈ X be a single marked state. There exists an explicit quantum algorithm which solves FIND ONE(P) with

54 Chapter 2. Controlled quantum amplification

constant success probability using a number of queries of order √ √ S + HT(P, {m}) · U + HT(P, {m}) · C.

θ ⟨ | ⟩ Assume now that, instead of a0 = sin = g init we know only an approximation a∗ of it, such that 1 |a∗ − a | ≤ a . 0 3 0 1 Here the constant 3 is chosen arbitrarily. Then Theorem 21 can be extended to this case, without increasing the asymptotic query complexity.

Theorem 48. Let P be a reversible Markov chain over a space X, and let m ∈ X be a single | − | ≤ marked state. Assume that we only know an approximation a∗ of a0 such that a∗ a0 a∗/3. There exists an explicit quantum algorithm which solves FIND ONE(P) with constant success probability using a number of queries of order √ √ S + HT(P, {m}) · U + HT(P, {m}) · C.

Proof. We have 2 4 a ≤ a∗ ≤ a . 3 0 3 0 2 We assume that the initial success probability a0 is at most 1/2. This is without loss of 2 | ⟩ | ⟩ generality, since, if a0 > 1/2, we can simply measure init and obtain g with constant success probability. We know from Eq. 2.16 that the unique (+1)-eigenvector of U is the (unnormalized) state θ | ⟩ √1 | ⟩ − √1 θ cos | ⟩ v0 = 1,˜ g sin ˜ 0, init . 2 2 sin θ

We choose θ˜ such that a∗ sin θ˜ = √ . 1 − a2∗ θ˜ cos θ Then sin sin θ is upper bounded by a constant, so ( ) 2 θ ∥ ∥2 1 2 θ cos v0 = 1 + sin ˜ 2 sin2 θ | ⟩ is a constant. Thus, normalizing v0 , we deduce that the unique (+1)-eigenvector of U has constant overlap with the starting state of U, and with the target state of U. The conclusion then follows from Theorem 31, Theorem 37, and Theorem 47.

55 Chapter 2. Controlled quantum amplification

Using Eq. 2.18, Theorem 21 can be extended to the case of unknown initial success probability Our algorithm also works for the case of multiple marked items, however we do not know how to relate its query complexity with the classical hitting time.

2.9 Simulation of the interpolated quantum walk

As we discussed earlier, quantum walks detect the presence of a marked item using a number of queries which is the square root of the classical hitting time. However, it may not be possible for quantum walks to actually find a marked vertex. To overcome this drawback, Krovi et al. [KMOR15] proposed a new approach, which we describe here briefly. Let X be a space on which we defined a classical random walk P which is reversible.

The entry Px,y stores the probability of jumping from y to x, for any x, y ∈ X. Let M denote the subset of marked items from X. From the matrix P we construct the totally absorbing ′ walk P by replacing all outgoing transitions from M with self-loops. Thus [ ] ′ P 0 P = U,U M,M , 1 PM,U M,M

where, for instance, PM,U contains the transitions from unmarked states to marked states. We denote by D = D(P) the discriminant of P. We have √ Dx,y = Px,yPy,x = ⟨py|x⟩⟨y|px⟩. (2.56)

Krovi et al. introduce a notion of interpolation between any reversible random walk P ′ and its corresponding absorbing walk P . The interpolated walk P(s) is defined as

′ P(s) = (1 − s)P + sP , 0 ≤ s < 1.

In matrix form [ ] P (1 − s)P ( ) = U,U U,M P s − , (2.57) PM,U (1 s)PM,M + s

By definition, the amplitudes of the state |p(s)x⟩ are obtained by taking square roots of

56 Chapter 2. Controlled quantum amplification

the columns of P(s), so |p(s)x⟩ = |px⟩ for x ∈/ M. The interpolated walk P(s) is reversible, so we can use Szegedy’s [Sze04] construction to define the corresponding quantum inter- − Π − 1 polated walk W(P(s)) = SWAP(2 A(s) ). In the following, since the classical walk P is fixed, we use W(s) as a shorthand for W(P(s)). Krovi et al.√ [KMOR15] prove that, for a chosen s, the walk W(s) finds the marked vertex using HT+ queries, where HT+ is a quantity called extended hitting time. They also prove that, for a single marked item m, the extended hitting time is equal to the { } classical√ hitting time HT(P,√m ), concluding that W(s) finds a unique marked item using O(S + HT(P, {m}) · U + HT(P, {m}) · C) queries. We show that our controlled quantum amplifier U can simulate interpolated quantum

walks. We do so by giving an explicit and constructive embedding Es of W(s) into our framework, for every fixed s. For simplicity, we present our embedding and the proof for the optimal value of s and for a unique marked element. H H Let W(s) denote the space of the walk W(s), and let U denote the space in which U acts. We can then state our simulation theorem. ≤ H Theorem 49. Fix any 0 s < 1. There is an inner-product preserving map Es from W(s) to a subspace of HU such that EsW(s) = UEs.

The theorem permits for two simulation approaches. The first approach is to simulate t t the action of W (s) by first applying Es, then U (s), and finally an inverse of Es. The second approach is to entirely forego W(s), and work directly in the space acted upon by U. − ϵ For simplicity, we present our proof for s = 1 1−ϵ , which is the optimal value for s chosen in Eq. 40 in [KMOR15]. Here ϵ = sin2(θ) is the initial success probability. This θ θ˜ (θ˜) = sin( ) value of s maps to our optimal choice of given in Eq. 2.58, which satisfies sin cos(θ) . We have √ 1 − s = sin θ˜. (2.58)

In our proof, we require the following lemma about the spanning set of vectors for the subspace of a quantum walk.

Lemma 50. Let X be the state space of a reversible random walk P with the stationary distribution

π = (πx). Fix any state m ∈ X. Then √ √ π π | ⟩ y | ⟩ − x | ⟩ | ⟩ pm, m = ∑ π y, py ∑ π px, x + m, pm . (2.59) y̸=m m x̸=m m

57 Chapter 2. Controlled quantum amplification

Proof. By the Perron–Frobenius theorem, P has a unique stationary distribution π with

positive entries. Therefore πm ̸= 0. Since P is reversible, we have SWAP|init⟩ = |init⟩, so √ √ √ √ ∑ πx|px, x⟩ + πm|pm, m⟩ = ∑ πy|y, py⟩ + πm|m, pm⟩, x̸=m y̸=m

and the conclusion follows. H {| ⟩ | ⟩ | ⟩} ∈ − By Lemma 50, we have W(s) = span x, px , py, y , m, p(s)m , where x, y X { } B H m . We choose a basis 1 for W(s) from the spanning set we described; such a basis will necessarily contain the state |m, p(s)m⟩. The marked state for W(s) is |m, p(s)m⟩, which then gives that the marked state for U is |1˜⟩|m, pm⟩ = |1˜⟩|g⟩. B H We define the embedding Es from the basis 1 to a subset of U, and we extend it to H all the states in W(s) by linearity. We define Es through the following mappings

| ⊥ ⟩ | ⟩| ⊥ ⟩ Es m , pm⊥ = 0 m , pm⊥ | ⊥⟩ | ⟩| ⊥⟩ Es pm⊥ , m = 0 pm⊥ , m (2.60)

Es|m, p(s)m⟩ = −|1˜⟩|m, pm⟩,

⊥ where |m ⟩ ranges over all basis states orthogonal to |m⟩. The state |init⟩ is a superpo- ⊥ 1 sition of the states |m , p ⊥ ⟩, and thus the embedding E maps √ (|m, p(s) ⟩ + |init⟩), m s 2 m the unique (+1)-eigenvector of W(s), to √1 (−|1,˜ g⟩ + |0, init⟩), which is the unique (+1)- 2 eigenvector of U.

Lemma 51. The action of Es on the state |p(s)m, m⟩ is given by

Es|p(s)m, m⟩ = sin θ˜|0⟩|pm, m⟩ − cos θ˜|1⟩|m, pm⟩. (2.61)

Proof. Using Eq. 2.59 and the definition of Es given by Eq. 2.60 we have  √ √  πy(s) π (s) E |p(s) , m⟩ = |0⟩  ∑ |y, p ⟩ − ∑ x |p , x⟩ − |1˜⟩|m, p ⟩ s m π ( ) y π ( ) x m y̸=m m s x̸=m m s

= |0⟩(|p(s)m, m⟩ − |m, p(s)m⟩) + sin θ˜|0⟩|m, pm⟩ − cos θ˜|1⟩|m, pm⟩. (2.62)

By Eq. 2.57, √ |p(s)m⟩ = 1 − s|pm⟩ + x|m⟩, (2.63)

58 Chapter 2. Controlled quantum amplification √ √ − − − where x = √(1 s)Pm,m + s (1 s)Pm,m. Plugging Eq. 2.63 into Eq. 2.62 and using the relation 1 − s = sin θ˜ (given by Eq. 2.58), we obtain √ √ E |p(s) , m⟩ = 1 − s|0⟩|p , m⟩ + x|0⟩|m, m⟩ − 1 − s|0⟩|m, p ⟩ s m m √ m −x|0⟩|m, m⟩ + 1 − s|0⟩|m, pm⟩ − cos θ˜|1⟩|m, pm⟩,

which implies the stated relation.

Lemma 52. The embedding Es preserves inner products. |ϕ ⟩ |ϕ ⟩ Proof. We want to show that, for any states 1 , 2 in the domain of Es, the inner product |ϕ ⟩ |ϕ ⟩ |ϕ ⟩ |ϕ ⟩ between 1 and 2 is the same as the inner product between Es 1 and Es 2 . |ϕ ⟩ | ⟩ |ϕ ⟩ | ⊥⟩ The only non-trivial case is when 1 is m, p(s)m and 2 is a state pm⊥ , m for ⊥ some |m ⟩ orthogonal to |m⟩. On one hand we have √ √ ⟨ | ⊥⟩ ⟨ | ⟩⟨ | ⊥⟩ − ⟨ | ⟩ m, p(s)m pm⊥ , m = m pm⊥ p(s)m m = 1 s m pm⊥ Pm⊥,m .

On the other hand, √ −⟨ |˜⟩⟨ | ⊥⟩ θ˜⟨ | ⟩⟨ | ⊥⟩ θ˜⟨ | ⟩ 0 1 m, pm pm⊥ , m = sin m pm⊥ pm m = sin m pm⊥ Pm⊥,m . √ Since 1 − s = sin θ˜, the two expressions are identical and hence the inner products are identical.

We finally prove that applying W(s) and then the embedding Es, is equivalent to first applying Es, and then U. Since Es preserves inner products, this completes our proof that U can simulate W(s).

Theorem 53. EsW(s) = UEs.

Proof. We prove that the equality EsW(s) = UEs holds for a particular√ spanning set of H | ⟩ | ⟩ | ⊥ ⟩ states for W(s). Namely, we make the choice ( m, p(s)m + init )/ 2, m , pm⊥ , and | ⊥⟩ | ⊥⟩ | ⟩ pm⊥ , m , where m ranges over all computational basis states orthogonal to m . This | ⟩ | ⊥ ⟩ is a spanning set because the state init is a superposition of the states m , pm⊥ . Since the unique (+1)-eigenvector of W(s) is √1 (|m, p(s) ⟩ + |init⟩), we have 2 m

√1 √1 EsW(s) (|m, p(s)m⟩ + |init⟩) = (−|1,˜ g⟩ + |0, init⟩). 2 2

59 Chapter 2. Controlled quantum amplification

Further, as √1 (−|1,˜ g⟩ + |0, init⟩) is the (+1)-eigenvector of U, we can write 2

√1 √1 √1 UEs (|m, p(s)m⟩ + |init⟩) = U (−|1,˜ g⟩ + |0, init⟩) = (−|1,˜ g⟩ + |0, init⟩). 2 2 2 | ⟩ Thus we√ have established that EsW(s) and UEs have the same actions on ( m, p(s)m + ⊥ |init⟩)/ 2. Next, for any |m ⟩ orthogonal to |m⟩, we have

| ⊥ ⟩ | ⊥ ⟩ | ⟩| ⊥⟩ EsW(s) m , pm⊥ = Es SWAP m , pm⊥ = 0 pm⊥ , m ,

and | ⊥ ⟩ | ⟩| ⊥ ⟩ | ⟩ | ⊥ ⟩ | ⟩| ⊥⟩ UEs m , pm⊥ = U 0 m , pm⊥ = 0 W m , pm⊥ = 0 pm⊥ , m . We use here the form of the circuit for U given in Figure 2.7. | ⊥⟩ Finally, for the vectors pm⊥ , m we have ( ) | ⊥⟩ · | ⟩⟨ | − 1 | ⊥⟩ EsW(s) pm⊥ , m = Es SWAP 2 ∑ x, p(s)x x, p(s)x pm⊥ , m ( x∈X ) | ⟩⟨ | ⟩⟨ | ⊥⟩ − | ⊥ ⟩ = Es 2 ∑ p(s)x, x x pm⊥ p(s)x m m , pm⊥ ( x∈X ) √ | ⟩ | ⟩ − − | ⊥ ⟩ = Es 2 ∑ px, x Dx,m⊥ + 2 p(s)m, m 1 s Dm,m⊥ m , pm⊥ , x̸=m

where we used Eq. 2.56 and Eq. 2.63. Therefore, using the definition of Es given by Eq. 2.60, we have on one hand that

| ⊥⟩ | ⟩| ⟩ θ˜ | ⟩ EsW(s) pm⊥ , m = 2 ∑ Dx,m⊥ 0 px, x + 2 sin Dm,m⊥ Es p(s)m, m x̸=m − | ⟩| ⊥ ⟩ 0 m , pm⊥ . (2.64)

On the other hand, | ⊥⟩ | ⟩| ⊥⟩ UEs pm⊥ , m = U( 0 pm⊥ , m ). Consider the circuit for U depicted in Figure 2.9. Applying the controlled rotation to the | ⟩| ⊥⟩ state 0 pm⊥ , m , we obtain the state

− θ˜ | ⟩ θ˜ | ⟩ | ⟩ − | ⟩| ⟩ | ⟩| ⊥⟩ [( cos(2 ) 0 + sin(2 ) 1 ) m, pm 0 m, pm ]Dm,m⊥ + 0 pm⊥ , m .

60 Chapter 2. Controlled quantum amplification

After applying the controlled-Z gate, the state is

− θ˜ | ⟩| ⟩ − θ˜ | ⟩| ⟩ | ⟩| ⊥⟩ Dm,m⊥ [ (cos(2 ) + 1) 0 m, pm sin(2 ) 1 m, pm ] + 0 pm⊥ , m .

which after applying the controlled-W gate, produces the state

− (cos(2θ˜) + 1)D ⊥ |0⟩|p , m⟩ − sin(2θ˜)D ⊥ |1⟩|m, p ⟩+ m,m m ( m,m m ) | ⟩ | ⟩ | ⟩ − | ⊥ ⟩ 0 2 ∑ px, x Dx,m⊥ + 2 pm, m Dm,m⊥ m , pm⊥ . x̸=m

Therefore

| ⊥⟩ 2 θ˜ | ⟩| ⟩ − θ˜ | ⟩| ⟩ UEs pm⊥ , m = 2 sin ( )Dm,m⊥ 0 pm, m sin(2 )Dm,m⊥ 1 m, pm + | ⟩| ⟩ − | ⟩| ⊥ ⟩ 2 ∑ Dx,m⊥ 0 px, x 0 m , pm⊥ . (2.65) x̸=m

| ⊥⟩ From Eq. 2.64 and Eq. 2.65, we conclude that, in order to show that EsW(s) pm⊥ , m = | ⊥⟩ UEs pm⊥ , m , we need to prove that

Es|p(s)m, m⟩ = sin θ˜|0⟩|pm, m⟩ − cos θ˜|1⟩|m, pm⟩.

This relation is proven in Lemma 51.

61 CHAPTER 3

Exact lower bounds for quantum unordered search

3.1 Introduction

Grover's algorithm [Gro97] is one of the most celebrated quantum algorithms ever devised. The algorithm and its many extensions demonstrate that quantum computers can speed up many search-related problems by a quadratic factor over classical computers. The algorithm is based on some of the most fundamental properties of quantum mechanics and has consequently found uses in a very wide range of situations, including unordered searching [Gro97, BBHT98], communication complexity [BCW98], counting [BHMT02], cryptography [ACI+06], learning theory [SG04], network flows [AŠ06], zero-knowledge [Wat09], and random walks [MNRS07], just to name a few. The underlying principles of the algorithm are very versatile and are readily amenable to a number of variations in applications. By clever insight into its basic principles, it has been adapted for instance to local searching of spatial structures [AA05], searching erroneous data [HMW03], and searching structures with variable search costs [Amb08]. Grover's algorithm and its generalizations are, in conclusion, one of the most successful frameworks ever discovered for quantum information processing.

In this chapter we give a new, intuitive proof of the fact that Grover's algorithm is exactly optimal among all adaptive algorithms. We then turn our attention to non-adaptive algorithms (for which no query is allowed to depend on the outcome of a previous query). In this setting we give a new proof of the lower bound, and we find a matching algorithm, thus concluding that the lower bound is exact.

3.2 Exact lower bound for adaptive quantum algorithms

Given the many positive applications of Grover's search algorithm, it is natural to ask if Grover's algorithm is optimal, or if an even faster routine could take its place in the above applications. The unanimous answer is that no better adaptive algorithm exists for searching unordered structures on a quantum computer. Grover's algorithm is in other words optimal. The optimality of the algorithm has been established through many different approaches, including adversarial arguments [Amb02], degree of polynomials [BBC+01], hybrid arguments [BBBV97], Kolmogorov complexity [LM08], and spectral decompositions [BSS03]. Common to all the above lower bounds is, however, that they do not show that Grover's algorithm is exactly optimal, but only asymptotically optimal. The above lower bound results on quantum searching do not exclude the possibility that there might be another quantum algorithm that solves Grover's search problem say 10% faster than Grover's own algorithm.

Fortunately we do know that Grover's algorithm is exactly optimal through the singular work of Zalka [Zal99]. By carefully inspecting each step in Grover's algorithm, Zalka is able to argue that it is exactly optimal. His proof is rather involved, using Lagrange multipliers to solve constrained optimization problems, and eventually concluding that four different inequalities are saturated by Grover's algorithm. Zalka's construction seems to require an intimate understanding of Grover's algorithm and fluency in finding extrema of various multi-variate functions. Grover and Radhakrishnan [GR05] make several simplifications to Zalka's proof and construct a more explicit and rigorous proof that allows them to give a near-tight lower bound for Grover's problem. Their near-optimal theorem applies without modifications to success probabilities of at least 0.9, whereas Zalka's original proof applies to any choice of success probability and any size of search space.

The aim of this section is to give a new tight lower bound for the unordered search problem that is as simple and transparent as possible, using only elementary mathematics, and that does not presume any knowledge of Grover's algorithm or any other upper bound. Our proof applies, as Zalka's does, to any choice of success probability and any size of search space.


3.2.1 The query model for adaptive quantum algorithms

In the unordered search problem, we are given a bitstring x as input. The input is given to us as an oracle, so that the only knowledge we can gain about the input is by asking queries to the oracle. We model the oracle O_x by

O_x|i; w⟩ = (−1)^{x_i} |i; w⟩   if 1 ≤ i ≤ N,
O_x|i; w⟩ = |i; w⟩             if i = 0.

We are interested in solving the unordered search problem with the least number of queries to the oracle.

Definition 54 (Unordered search problem). We are given a bitstring x ∈ {0, 1}^N as an oracle, and we are promised that there exists a unique index 1 ≤ i ≤ N for which x_i = 1. We want to output an index 1 ≤ j ≤ N such that i = j with probability at least p.

In the following, we identify the N possible inputs with the set {1, 2, ..., N}, so that for instance input y = 7 denotes the input bitstring x in which bit 7 is 1 (x_7 = 1) and the N − 1 other bits are 0 (x_i = 0 for i ≠ 7).

The most straightforward classical deterministic algorithm for solving the unordered search problem would be to simply query the oracle for the N − 1 bits x_1, x_2, ..., x_{N−1}. If any of these N − 1 bits equals 1, we output the corresponding index, and otherwise, we output the unqueried index N. This algorithm always outputs the correct answer, uses N − 1 queries, and is optimal. An optimal probabilistic algorithm is to pick a set of T = ⌈pN − 1⌉ distinct indices uniformly at random, and query the oracle on those T indices. If any of the T bits equals 1, we output the corresponding index, and otherwise, we output one of the remaining N − T indices, picked again uniformly at random. The probability that we output the correct index is (T + 1)/N ≥ p. Grover's algorithm is the best known quantum algorithm for the unordered search problem. It outputs the correct index after T queries with probability p = sin²((2T + 1) arcsin(1/√N)).

Any quantum algorithm in the oracle model starts in a state that is independent of the oracle x. For convenience, we take the start state to be |0⟩, in which all qubits are initialized to 0. It then evolves by applying arbitrary unitary operators U to the system, alternated

by queries O_x to the oracle x, followed by a conclusive measurement of the final state, the outcome of which is the result of the computation. We assume (without loss of generality) that the final measurement is a von Neumann measurement represented by a finite set of orthogonal projectors {Π_y} that sum to the identity. In symbols, a quantum algorithm A


that uses T queries to the oracle computes the final state

|Ψ_x^T⟩ = U_T O_x U_{T−1} ··· U_1 O_x U_0 |0⟩,

which is then measured, yielding the answer y with probability ∥Π_y|Ψ_x^T⟩∥². A more detailed and excellent introduction to the query model is given in [BW02], and a discussion of lower bounds for the model in [HŠ05].
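As a concrete illustration of this query model, the following Octave/MATLAB sketch (ours, written in the style of the code in Appendix A; the choices N = 64, marked index 5, and T = 6 are arbitrary) simulates Grover's algorithm with the phase oracle above and compares the measured success probability with the closed-form expression sin²((2T + 1) arcsin(1/√N)).

% Sketch: Grover's algorithm in the query model of this section.
N  = 64;                              % size of the search space (our choice)
i0 = 5;                               % the unique marked index (our choice)
T  = 6;                               % number of oracle queries (our choice)
psi = ones(N, 1) / sqrt(N);           % uniform start state
Ox = eye(N); Ox(i0, i0) = -1;         % phase oracle: O_x|i> = (-1)^{x_i}|i>
D  = 2 * ones(N, N) / N - eye(N);     % diffusion operator 2|s><s| - I
for t = 1:T
    psi = D * (Ox * psi);             % one Grover iteration: query, then diffusion
end
pSim     = abs(psi(i0))^2;                       % probability of measuring i0
pFormula = sin((2*T + 1) * asin(1/sqrt(N)))^2;   % closed-form success probability
disp([pSim, pFormula])                % the two values agree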

3.2.2 Exact lower bound for quantum searching

In the unordered search problem, we are given one of N possible inputs x, and we produce one of N possible outputs y ∈ {1, 2, ..., N}. For the algorithm to succeed with probability at least p, we require that ∥Π_x|Ψ_x^T⟩∥² ≥ p for all x ∈ {1, 2, ..., N}. Let |Ψ^t⟩ = U_t O_u U_{t−1} ··· U_1 O_u U_0 |0⟩ denote the state after t queries when the oracle u = 00···0 is the all-zero bitstring, in which case the oracle O_u acts as the identity. Let |Ψ_y^{i,T}⟩ = U_T O_y ··· O_y U_{i+1} O_y U_i O_u ··· U_1 O_u U_0 |0⟩ denote the final state after T queries, where we use the identity oracle for the first i oracle queries and the oracle y for the latter T − i oracle queries.

We now give our new exactly optimal lower bound for the unordered search problem. We present our proof in parallel with the standard (asymptotically optimal) hybrid argument lower bound derived from [BBBV97], which seems to be the simplest of the existing lower bounds. Both proofs require three steps, and we present each of these steps in a form so that the two proofs resemble each other as closely as possible and are as simple as possible. The key to our new lower bound is the use of angles as opposed to distances as in the standard proof in [BBBV97]. We define the quantum angle between two non-zero vectors as

∢(ψ, ψ′) = arccos (⟨ψ|ψ′⟩ / (∥ψ∥ ∥ψ′∥)).

This seems to be the most appropriate definition of angles for quantum computing; it can be readily generalized to mixed states through fidelity and satisfies, in particular, the triangle inequality.
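As a small numerical aside (our own illustration, not part of the argument), the quantum angle between real vectors is easy to compute, and the triangle inequality can be spot-checked directly:

% Sketch: the quantum angle and a spot-check of the triangle inequality.
qangle = @(u, v) acos(dot(u, v) / (norm(u) * norm(v)));
d = 16;                                  % dimension (our choice)
a = randn(d, 1); b = randn(d, 1); c = randn(d, 1);
disp(qangle(a, c) <= qangle(a, b) + qangle(b, c) + 1e-12)   % prints 1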

First step The first step in the proof is to establish a Cauchy–Schwarz-like inequality (for each of our two measures, distances and angles) which will allow us to bound the amount of information we can learn by each individual query.


Lemma 55 (Cauchy–Schwarz — distance version).

max { ∑_{i=1}^N a_i | 0 ≤ a_i ≤ 1 and ∑_{i=1}^N a_i² ≤ 1 } = √N.

Proof. First note that when all a_i's are equal, the maximum value of the sum is √N. Now, assume that √N is not the maximum value of the sum. Then there exist N numbers b_1, ..., b_N for which the maximum is attained. At least two of the b_i's are not equal, denote them by x and y. Replacing both x and y with their average, the sum we want to maximize remains unchanged, while the sum of squares strictly decreases since

x² + y² − 2((x + y)/2)² = (1/2)(x − y)² > 0.

We can thus increase all b_i's by a tiny amount while keeping the sum of squares at most 1, contradicting the assumption that the b_i's attain the maximum. It follows that the maximum is attained when all a_i's are equal.

Lemma 56 (Cauchy–Schwarz — angle version).

max { ∑_{i=1}^N θ_i | 0 ≤ θ_i ≤ π/2 and ∑_{i=1}^N sin² θ_i ≤ 1 } = N arcsin(1/√N).

Proof. A general strategy would be to consider this as an optimization problem and solve it with Lagrange multipliers. Another approach would make use of convexity and Jensen's inequality. We present here a direct derivation which has the advantage of being much simpler and completely elementary.

First note that when all θ_i's are equal, the maximum value of the sum is N arcsin(1/√N). Now, assume that this is not the maximum value of the sum. Then there exist N angles θ_1, ..., θ_N for which the maximum is attained. At least two of the θ_i's are not equal, denote them by u and v. Replacing both u and v with their average, the sum we want to maximize remains unchanged, while the sum of squares decreases since¹

sin² u + sin² v − 2 sin²((u + v)/2) = 2 sin²((u − v)/2) cos(u + v) ≥ 0,

where we used the fact that u, v ∈ [0, π/2] and sin² u + sin² v ≤ 1 imply u + v ≤ π/2. We can thus increase all θ_i's by a tiny amount while keeping the sum of sine squares at most 1, contradicting the assumption that the θ_i's attain the maximum. It follows that the maximum is attained when all θ_i's are equal.

¹ The equality can be proved by showing that both sides are equal to cos(u + v) − (1/2) cos(2u) − (1/2) cos(2v), or by applying Euler's formula.
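Both maxima can also be spot-checked numerically; the sketch below (ours, with N = 10 and random feasible points) confirms that the two sums never exceed √N and N arcsin(1/√N), respectively.

% Sketch: numerical spot-check of Lemmas 55 and 56.
N = 10;
bestA = 0; bestTheta = 0;
for trial = 1:10000
    a = rand(N, 1);
    a = a / max(1, norm(a));               % enforce 0 <= a_i <= 1 and sum a_i^2 <= 1
    bestA = max(bestA, sum(a));
    th = (pi/2) * rand(N, 1);
    scale = sqrt(sum(sin(th).^2));
    if scale > 1
        th = asin(sin(th) / scale);        % enforce sum sin^2(theta_i) <= 1
    end
    bestTheta = max(bestTheta, sum(th));
end
disp([bestA, sqrt(N)])                     % never exceeds sqrt(N)
disp([bestTheta, N * asin(1/sqrt(N))])     % never exceeds N*arcsin(1/sqrt(N))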

Second step The second step is then to show that the amount of information we learn by each of the T queries can only add up linearly (with respect to our two measures, distances and angles).

Lemma 57 (Increase in distance by T queries). The average distance after T queries is at most 2T/√N.

Proof. We have, using the triangle inequality,

(1/N) ∑_{y=1}^N ∥Ψ^T − Ψ_y^T∥ = (1/N) ∑_{y=1}^N ∥Ψ_y^{T,T} − Ψ_y^{0,T}∥ ≤ (1/N) ∑_{y=1}^N ∑_{i=0}^{T−1} ∥Ψ_y^{i+1,T} − Ψ_y^{i,T}∥
  = (1/N) ∑_{i=0}^{T−1} ∑_{y=1}^N ∥Ψ^i − O_y Ψ^i∥ = (1/N) ∑_{i=0}^{T−1} ∑_{y=1}^N 2∥Π_y Ψ^i∥
  ≤ 2 ∑_{i=0}^{T−1} 1/√N = 2T/√N,

where the last inequality follows from the inequality proved in Lemma 55.

Lemma 58 (Increase in angle by T queries). The average angle after T queries is at most 2TΘ, where Θ = arcsin(1/√N).

Proof. We have, using the triangle inequality for angles,

(1/N) ∑_{y=1}^N ∢(Ψ^T, Ψ_y^T) = (1/N) ∑_{y=1}^N ∢(Ψ_y^{T,T}, Ψ_y^{0,T}) ≤ (1/N) ∑_{y=1}^N ∑_{i=0}^{T−1} ∢(Ψ_y^{i+1,T}, Ψ_y^{i,T})
  = (1/N) ∑_{i=0}^{T−1} ∑_{y=1}^N ∢(Ψ^i, O_y Ψ^i).

Denoting θ_y^i = arcsin ∥Π_y Ψ^i∥, we have

∢(Ψ^i, O_y Ψ^i) = arccos (1 − 2 sin²(θ_y^i)) = arccos cos(2θ_y^i) ≤ 2θ_y^i

so

(1/N) ∑_{y=1}^N ∢(Ψ^T, Ψ_y^T) ≤ (1/N) ∑_{i=0}^{T−1} ∑_{y=1}^N 2θ_y^i ≤ 2 ∑_{i=0}^{T−1} Θ = 2TΘ,


where the last inequality follows from the inequality for angles proved in Lemma 56.
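The per-query bound used in this proof, ∢(Ψ^i, O_yΨ^i) ≤ 2θ_y^i with θ_y^i = arcsin ∥Π_yΨ^i∥, can also be checked numerically; the sketch below (ours, using a workspace-free oracle and a rank-one projector Π_y) does so for a random real state.

% Sketch: the angle moved by a single query (cf. the proof of Lemma 58).
N = 32;
psi = randn(N, 1); psi = psi / norm(psi);   % random real unit state
y = 7;                                      % an arbitrary index (our choice)
Oy = eye(N); Oy(y, y) = -1;                 % phase oracle O_y
theta = asin(abs(psi(y)));                  % theta = arcsin ||Pi_y psi||
lhs = acos(dot(psi, Oy * psi));             % angle between psi and O_y psi
disp([lhs, 2 * theta])                      % lhs = arccos(1 - 2 sin^2 theta) = 2*theta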

Third step The third and final step is then to show that by the end of the algorithm, after all the T queries, our measure (distance or angle, respectively) is large.

Lemma 59 (Distinguishability of final states – distance version). Suppose that the algorithm

correctly outputs y with probability at least p after T queries, given oracle Oy. Then the average distance is at least

(1/N) ∑_{y=1}^N ∥Ψ^T − Ψ_y^T∥ ≥ (1/√2)(1 + √p − √(1 − p)) − √2/√N.

Proof. The distance after T queries is at least

∥Ψ^T − Ψ_y^T∥ ≥ (1/√2)(∥Π_y Ψ^T − Π_y Ψ_y^T∥ + ∥Π_y^⊥ Ψ^T − Π_y^⊥ Ψ_y^T∥)
  ≥ (1/√2)(∥Π_y Ψ_y^T∥ − ∥Π_y Ψ^T∥ + ∥Π_y^⊥ Ψ^T∥ − ∥Π_y^⊥ Ψ_y^T∥)
  ≥ (1/√2)(√p − ∥Π_y Ψ^T∥ + ∥Π_y^⊥ Ψ^T∥ − √(1 − p))
  ≥ (1/√2)(√p − √(1 − p) + 1 − 2∥Π_y Ψ^T∥),

where the first inequality follows from the inequality (a − b)² ≥ 0, the second-last inequality from the success probability being at least p, and the other two from the triangle inequality. The average distance after T queries is thus at least

(1/N) ∑_{y=1}^N ∥Ψ^T − Ψ_y^T∥ ≥ (1/N) ∑_{y=1}^N (1/√2)(1 + √p − √(1 − p) − 2∥Π_y Ψ^T∥)
  = (1/√2)(1 + √p − √(1 − p) − (2/N) ∑_{y=1}^N ∥Π_y Ψ^T∥)
  ≥ (1/√2)(1 + √p − √(1 − p) − 2/√N),

where the last inequality follows from the inequality proved in Lemma 55.

Lemma 60 (Distinguishability of final states – angle version). Suppose that the algorithm

correctly outputs y with probability at least p after T queries, given oracle Oy. Then the average


angle is at least

(1/N) ∑_{y=1}^N ∢(Ψ^T, Ψ_y^T) ≥ Θ^T − Θ,

where sin²(Θ^T) = p and sin²(Θ) = 1/N.

Proof. The angle difference after T queries is at least

∢(Ψ^T, Ψ_y^T) = arccos ⟨Ψ^T|Ψ_y^T⟩
  = arccos ⟨Ψ^T|(Π_y + Π_y^⊥)|Ψ_y^T⟩
  ≥ arccos (∥Π_y Ψ^T∥ · ∥Π_y Ψ_y^T∥ + ∥Π_y^⊥ Ψ^T∥ · ∥Π_y^⊥ Ψ_y^T∥)
  = arccos (sin θ_y^T sin ϕ_y^T + cos θ_y^T cos ϕ_y^T)
  = arccos cos(ϕ_y^T − θ_y^T)
  = ϕ_y^T − θ_y^T,

where sin(ϕ_y^T) = ∥Π_y Ψ_y^T∥ and sin(θ_y^T) = ∥Π_y Ψ^T∥. The average angle difference after T queries is thus at least

(1/N) ∑_{y=1}^N ∢(Ψ^T, Ψ_y^T) ≥ (1/N) ∑_{y=1}^N (ϕ_y^T − θ_y^T) ≥ Θ^T − (1/N) ∑_{y=1}^N θ_y^T ≥ Θ^T − Θ,

where the second-last inequality follows from the success probability being at least p, and the last inequality from the inequality for angles proved in Lemma 56.

Concluding the proof Since each of our two measures is initially zero, is large by the end of the algorithm, and can only increase modestly by each query, we conclude that a large number of queries is required.

Theorem 61 (Asymptotic lower bound for searching — distance version). The unordered search problem with success probability p requires T ≥ (√N/(2√2)) (1 + √p − √(1 − p) − 2/√N) queries.

Theorem 62 (Asymptotic lower bound for searching — angle version). The unordered search problem with success probability p requires T ≥ (Θ^T − Θ)/(2Θ) queries, where the angles 0 < Θ, Θ^T ≤ π/2 are such that sin²(Θ) = 1/N and sin²(Θ^T) = p.

We note that the number of queries needed by Grover's algorithm to achieve success probability p is exactly ⌈(Θ^T − Θ)/(2Θ)⌉. Therefore, in the case of distances we conclude that Grover's algorithm is asymptotically optimal, and in the case of angles we conclude that


Grover’s algorithm is exactly optimal. No other algorithm can achieve even a constant additive improvement with respect to the number of queries required for a given success probability. Compared to other lower bounds, and even to the hybrid argument, our proof seems surprisingly simple. It would be interesting to extend our method to obtain both simpler and better lower bounds for other problems, and also to find other uses of quantum angles.
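As a sanity check of the exact bound (our own illustration, with the arbitrary parameters N = 1000 and p = 0.99), the sketch below computes ⌈(Θ^T − Θ)/(2Θ)⌉ and confirms that this many Grover iterations achieve success probability at least p.

% Sketch: exact number of queries for success probability p (angle version).
N = 1000;
p = 0.99;
Theta  = asin(1 / sqrt(N));                  % sin^2(Theta)   = 1/N
ThetaT = asin(sqrt(p));                      % sin^2(Theta^T) = p
T = ceil((ThetaT - Theta) / (2 * Theta));    % the exact lower bound of Theorem 62
pGrover = sin((2*T + 1) * Theta)^2;          % Grover's success probability after T queries
disp([T, pGrover])                           % pGrover >= p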

3.3 Exact lower bound for non-adaptive quantum algorithms

In this section we consider non-adaptive algorithms for the unordered search problem. Such algorithms must make all the queries at the beginning of the computation, in parallel, and no other queries are allowed after that; thus, the queries do not depend on the outcome of the previous queries. By modifying the weighted adversary method [Amb02] to the case of non-adaptive algorithms, Koiran et al. [KLPY10] showed that at least NC_p queries are needed to solve the unordered search problem with probability p, where C_p = (1 − 2√(p(1 − p)))/2.

The aim of this section is to obtain a new, tight lower bound for the unordered search problem for non-adaptive algorithms. We first give a new, direct proof of the lower bound and then we construct a non-adaptive algorithm which matches the lower bound. Thus, as in the case of adaptive algorithms, no other non-adaptive algorithm can achieve even a constant additive improvement with respect to the number of queries for a given success probability. After the completion of this work, Montanaro [Mon10] discovered independently a direct proof of Theorem 65.

3.3.1 The query model for non-adaptive quantum algorithms

In this section we are concerned with quantum algorithms which query the input non-adaptively. Even though this model might look restrictive, for some problems it is possible to obtain significant speedups over any classical algorithm. For example, we can compute the parity of an input bitstring of length N with N/2 non-adaptive queries, by using the Deutsch–Jozsa algorithm to compute the parity of two bits with a single query and iterating this routine N/2 times. This algorithm is optimal [FGGS98]. In general, we can learn all the N bits of an input bitstring with constant probability while paying

“half the price”, namely using N/2 + O(√N) non-adaptive queries [Dam98]. Another example is Simon's algorithm for the hidden subgroup problem over Z_2^n [Sim97], which achieves an exponential speed-up over the best classical algorithm. Here we are interested in solving the decision version of the unordered search problem with the least number of non-adaptive queries to the oracle.
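To make the parity example concrete, the sketch below (ours; it uses the phase oracle of Section 3.2.1 restricted to two indices) recovers the parity x_1 ⊕ x_2 of two hidden bits from a single query; running N/2 such instances on disjoint pairs, in parallel, learns the parity of all N bits with N/2 non-adaptive queries.

% Sketch: parity of two bits from one phase-oracle query (Deutsch-Jozsa style).
x = [1 0];                                  % the two hidden bits (our example)
plusState  = [1; 1] / sqrt(2);              % (|1> + |2>)/sqrt(2)
minusState = [1; -1] / sqrt(2);             % (|1> - |2>)/sqrt(2)
Ox = diag((-1).^x);                         % phase oracle on indices 1 and 2
psi = Ox * plusState;                       % a single query
parity = abs(dot(minusState, psi))^2 > 0.5; % outcome "minus" iff x1 xor x2 = 1
disp([parity, mod(x(1) + x(2), 2)])         % the two values agree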

Definition 63 (Unordered search problem – decision version). We are given a bitstring x ∈ {0, 1}^N as an oracle, and promised that either there exists a unique index 1 ≤ i ≤ N for which x_i = 1 or x = 0^N. We want to guess which is the case with probability at least p.

Any non-adaptive algorithm is given access to the input x ∈ {0, 1}^N via an oracle O_x modelled by

O_x|i; w⟩ = (−1)^{x_i} |i; w⟩   if 1 ≤ i ≤ N,
O_x|i; w⟩ = |i; w⟩             if i = 0.

If an algorithm A makes T non-adaptive queries, we can query any bitstring with Hamming weight at most T, so, on input x, we are in fact using the oracle O_x given by

O_x|u⟩ = (−1)^{x·u} |u⟩,

for any u ∈ {0, 1}^N such that |u| ≤ T.

Any non-adaptive quantum algorithm starts in a state that is independent of the input string x. For convenience, we take the starting state to be |0⟩, in which all qubits are initialized to 0. It then evolves by applying an arbitrary unitary operator U to the system,

then an oracle query O_x, followed by a conclusive measurement of the final state, the outcome of which is the result of the computation. In symbols, any non-adaptive quantum algorithm that makes T queries to the oracle produces the final state

|Ψx⟩ = OxU|0⟩.

We say that an algorithm whose final state is |Ψ_x⟩ computes some boolean function f with probability p ∈ [1/2, 1] if there is a set of orthogonal projections {Π_0, Π_1} that sum to the identity such that

∥Π_0|Ψ_x⟩∥² ≥ p

for all inputs x with f(x) = 0, and

∥Π_1|Ψ_x⟩∥² ≥ p


for all inputs x with f(x) = 1. It is possible to allow more general POVM measurements for determining the outcome of the computation, but this does not make the model more powerful. Indeed, any POVM measurement can be performed by adding an ancilla qubit, applying a unitary transformation to the whole system, and then doing a projective measurement of the ancilla qubit. Thus, we can restrict our attention to projective measurements. We follow these preliminary definitions with a detailed proof for the lower bound, and then we give an optimal algorithm for the unordered search problem.

3.3.2 Lower bound for non-adaptive quantum algorithms

We use the following well-known result for distinguishing quantum states.

Lemma 64. Given one of two states |ψ_1⟩ or |ψ_2⟩ with |⟨ψ_1|ψ_2⟩| = δ, there exists a measurement which distinguishes between them with error probability ϵ = 1/2 − (1/2)√(1 − δ²), and this is optimal. In other words, to distinguish between |ψ_1⟩ and |ψ_2⟩ with error probability at most ϵ, we must have |⟨ψ_1|ψ_2⟩| ≤ 2√(ϵ(1 − ϵ)).
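Lemma 64 is the standard optimality statement for discriminating two pure states given with equal priors; a minimal numerical check (ours, for the arbitrary overlap δ = 0.6) builds the optimal measurement from the positive eigenspace of |ψ_1⟩⟨ψ_1| − |ψ_2⟩⟨ψ_2| and recovers the error probability 1/2 − (1/2)√(1 − δ²).

% Sketch: optimal discrimination of two pure states (cf. Lemma 64).
delta = 0.6;                                  % overlap <psi1|psi2> (our choice)
psi1 = [1; 0];
psi2 = [delta; sqrt(1 - delta^2)];            % real states with overlap delta
M = psi1 * psi1' - psi2 * psi2';
[V, D] = eig(M);
P1 = V(:, diag(D) > 0) * V(:, diag(D) > 0)';  % guess "psi1" on the positive eigenspace
err = 0.5 * (1 - psi1' * P1 * psi1) + 0.5 * (psi2' * P1 * psi2);
disp([err, 0.5 - 0.5 * sqrt(1 - delta^2)])    % the two error probabilities agree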

Theorem 65 (Asymptotic lower bound for searching – non-adaptive algorithms). The unordered search problem with success probability p requires at least NC_p queries, where C_p = (1 − 2√(p(1 − p)))/2.

Proof. Consider any quantum algorithm that uses T non-adaptive queries. If the input is x = 0^N, the final state of the algorithm is |ψ_0⟩ = U|0⟩. On the other hand, if the input is a bitstring x ∈ {0, 1}^N with |x| = 1, the final state of the algorithm is

|ψ_x⟩ = ∑_{u ∈ {0,1}^N, |u| ≤ T} a_u (−1)^{u·x} |u⟩.

Distinguishability implies that |⟨ψ_0|ψ_x⟩| ≤ 2√(p(1 − p)). Thus

∑_{u ∈ {0,1}^N, |u| ≤ T} |a_u|² (−1)^{u·x} ≤ 2√(p(1 − p)).

The amplitudes a_u do not depend on the input x, and the previous relation holds for any x. Taking x = 100...00, x = 010...00, ..., x = 000...01 and summing all the relations we obtain

∑_{i=1}^N ∑_{|u| ≤ T, u_i = 1} |a_u|² ≥ NC_p.


Since each amplitude a_u appears on the left-hand side at most T times, we deduce that T ≥ NC_p.

3.3.3 Exact non-adaptive algorithm for quantum searching

We prove that the lower bound given by Theorem 65 is exact, namely there exists an algorithm which solves the unordered search problem with success probability p using exactly N(1 − 2√(p(1 − p)))/2 non-adaptive queries.

Theorem 66. For any p ∈ (0, 1), there exists an algorithm which takes as input a bitstring x ∈ {0, 1}^N such that either x = 0^N or |x| = 1, and identifies which is the case, with two-sided error 1 − p. The algorithm uses N(1 − 2√(p(1 − p)))/2 non-adaptive queries.

Proof. Let T = N(1 − 2√(p(1 − p)))/2. We prepare the state

|ψ_0⟩ = (1/√(N choose T)) ∑_{|u| = T} |u⟩.

Then we make T non-adaptive queries. If the input was x = 0^N, the state we obtain after this step is |ψ_0⟩. On the other hand, if the input was a string x with |x| = 1, the state we obtain at this point is

|ψ_x⟩ = (1/√(N choose T)) ∑_{|u| = T} (−1)^{x·u} |u⟩.

Since

⟨ψ_0|ψ_x⟩ = [(N choose T) − 2 (N−1 choose T−1)] / (N choose T) = 1 − 2T/N,

Lemma 64 implies that there exists a measurement which distinguishes between |ψ_0⟩ and |ψ_x⟩ with error probability

1/2 − (1/2)√(1 − (1 − 2T/N)²) = 1/2 − √((1 − 4p(1 − p))/4) = 1 − p.

Moreover, this measurement is the best possible.

We conclude that the algorithm given by Theorem 66 is exactly optimal. No other algorithm can achieve even a constant additive improvement with respect to the number of queries required for a given success probability.
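The arithmetic behind Theorem 66 can be spot-checked directly; the sketch below (ours, with the arbitrary choices N = 200 and p = 0.9) computes the number of queries T, the resulting overlap 1 − 2T/N, and the error probability from Lemma 64, which indeed equals 1 − p.

% Sketch: the quantities in Theorem 66.
N = 200;
p = 0.9;
T = N * (1 - 2 * sqrt(p * (1 - p))) / 2;    % number of non-adaptive queries
delta = 1 - 2 * T / N;                      % overlap <psi_0|psi_x>
err = 0.5 - 0.5 * sqrt(1 - delta^2);        % optimal error probability (Lemma 64)
disp([T, delta, err, 1 - p])                % err equals 1 - p (here p >= 1/2)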

CHAPTER 4

Conclusions

4.1 Summary of original contributions

Many quantum algorithms have been developed using the framework of the abstract search algorithm. Examples include Grover's algorithm, amplitude amplification, and many algorithms based on quantum walks. We are given a real unitary W, some initial state |init⟩ which is a (+1)-eigenvector of W, and a reflection G = 1 − 2|g⟩⟨g| about some target |g⟩. In the case of the detection problem, the goal is to determine whether G is a reflection about some state, or G is the identity operator (meaning that there exists no target). The more difficult case is the finding problem: we want not only to detect if there is a target |g⟩, but also to construct a final state that has constant overlap with the state |g⟩. The standard method used to solve both the detection and the finding problem is to iterate the operator A = W · G. For reversible quantum walks, this method allows us to distinguish between the case when there are some targets and the case when there are no targets. The algorithm achieves a quadratic speedup compared to its classical counterpart. For some quantum walks, the standard approach does not solve the harder problem of producing a marked state when such a marked state exists. Some solutions have been proposed for the particular case of the grid or for state-transitive graphs. For quantum walks on reversible graphs with a single target, the operator W can be altered into another operator W(s) which produces the marked state, quadratically faster than the classical algorithm.

In Chapter 2 we give a simple, general method to ensure that an abstract search algorithm produces the target state when there exists such a target. We embed any abstract search algorithm into a simple circuit which we call a controlled quantum amplifier. For this embedding, we can use either the operator A or the operator W. Next, we develop a toolbox to analyze the cost of our method. We introduce a new notion of quantum hitting time and express the quantum hitting time of our embedding in terms of the quantum hitting time of A and the quantum hitting time of W. Then we prove that, whenever an abstract search algorithm A determines whether a target |g⟩ exists or not using T iterations, our new circuit both determines and finds the target using at most 2T applications of A. To our knowledge, this is the first general solution to the open question of turning an abstract search algorithm into an algorithm for finding the solution. As an application, we obtain a simple algorithm which enables quantum walks to find a unique solution with quadratic speed-up. We prove that our circuit can simulate Grover's algorithm, amplitude amplification, and the algorithm based on interpolated quantum walks.

In Chapter 3 we obtain exact lower bounds for unordered searching, both in the model of adaptive oracles and in the model of non-adaptive (parallel) oracles. For the adaptive case, the general methods of proving lower bounds show that Grover's algorithm is asymptotically optimal; thus, such results do not exclude the possibility that there might be another algorithm for the unordered search problem that is, say, 10% faster than Grover's algorithm. A complicated, specialized proof shows that Grover's algorithm is exactly optimal. By quantifying with angles the progress of any possible quantum algorithm for the same problem, we obtain a simple, intuitive proof that Grover's algorithm is exactly optimal in the adaptive oracle model. In the second part of Chapter 3 we give an exact algorithm and an exact lower bound for unordered searching when the oracle queries are not adaptive.

For all the statements in this thesis we have mathematical proofs and extensive numerical simulations.

The work in Chapter 3 concerning the exact lower bound for unordered searching in the adaptive model appeared in Cătălin Dohotaru and Peter Høyer, Exact quantum lower bound for Grover's problem, Quantum Information and Computation, 9(5–6):533–540, 2009. Like the work of Chapter 3, the work in Chapter 2 is also joint work with my supervisor, Peter Høyer, and is currently being submitted for publication.


4.2 Future work

We identify several directions to continue the research in this thesis.

The first direction would be to analyze the new circuit in Figure 2.5 for multiple marked items. For the particular case of quantum walks, this would allow us to compare the cost of running our controlled amplifier U with the square root of the classical hitting time. Since U can amplify any unitary, a major application would be to use U to amplify the success probability of a quantum algorithm which does not come from a random walk.

The work in Chapter 3 was based on choosing a new, suitable measure to express the progress of any quantum algorithm for the unordered search problem. It is an interesting open question to find such a measure for another problem and to find another application of quantum angles.

Bibliography

[AA05] S. Aaronson and A. Ambainis. Quantum search of spatial regions. Theory of Computing, 1:47–79, 2005. arXiv:quant-ph/0303041, doi:10.4086/toc.2005.v001a004.

[AAKV01] D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani. Quantum walks on graphs. Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, pages 50–59, 2001. arXiv:quant-ph/0012090, doi:10.1145/380752.380758.

[ABN+01] A. Ambainis, E. Bach, A. Nayak, A. Vishwanath, and J. Watrous. One- dimensional quantum walks. Proceedings of the Thirty-third Annual ACM Sym- posium on Theory of Computing, pages 37–49, 2001. doi:10.1145/380752. 380757.

[ACI+06] M. Adcock, R. Cleve, K. Iwama, R. Putra, and S. Yamashita. Quantum lower bounds for the Goldreich–Levin problem. Information Processing Letters, pages 208–211, 2006. doi:10.1016/j.ipl.2005.01.016.

[ACR+10] A. Ambainis, A. M. Childs, B. Reichardt, R. Špalek, and S. Zhang. Any AND-OR formula of size N can be evaluated in time N^{1/2+o(1)} on a quantum computer. SIAM Journal on Computing, 39:2513–2530, 2010. arXiv:quant-ph/0703015, doi:10.1109/FOCS.2007.57.

[AF02] D. Aldous and J. Fill. Reversible Markov chains and random walks on graphs. 2002. Unfinished monograph, recompiled 2014, available at http://www. stat.berkeley.edu/~aldous/RWG/book.


[AK15] A. Ambainis and M. Kokainis. Analysis of the extended hitting time and its properties. Poster presented at QIP 2015, 2015.

[AKR05] A. Ambainis, J. Kempe, and A. Rivosh. Coins make quantum walks faster. Proceedings of the 16th ACM Symposium on Discrete Algorithms, pages 1099– 1108, 2005. arXiv:quant-ph/0402107.

[Amb02] A. Ambainis. Quantum lower bounds by quantum arguments. Journal of Computer and System Sciences, 64:750–767, 2002. arXiv:quant-ph/0002066, doi:10.1006/jcss.2002.1826.

[Amb03] A. Ambainis. Quantum walks and their algorithmic applications. Inter- national Journal of Quantum Information, 1:507–518, 2003. arXiv:quant-ph/ 0403120.

[Amb04] A. Ambainis. Quantum walk algorithm for element distinctness. Proceedings of the 45th IEEE Symposium on Foundations of Computer Science, pages 22–31, 2004. arXiv:quant-ph/0311001, doi:10.1109/FOCS.2004.54.

[Amb08] A. Ambainis. Quantum search with variable times. Proceedings of the 25th Annual Symposium on Theoretical Aspects of Computer Science, Dagstuhl Seminar Proceedings, IBFI, Schloss Dagstuhl, Germany, pages 49–61, 2008. arXiv:quant-ph/0609168, doi:10.1007/s00224-009-9219-1.

[AŠ06] A. Ambainis and R. Špalek. Quantum algorithms for matching and network flows. Proceedings of the 23rd Annual Symposium on Theoretical Aspects of Computer Science, pages 172–183, 2006. arXiv:quant-ph/0508205, doi:10.1007/11672142_13.

[BBBV97] H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani. Strengths and weak- nesses of quantum computing. SIAM Journal on Computing, 26:1510–1523, 1997. arXiv:quant-ph/9701001, doi:10.1137/S0097539796300933.

[BBC+01] R. Beals, R. Buhrman, R. Cleve, M. Mosca, and R. de Wolf. Quantum lower bounds by polynomials. Journal of the ACM, 48:778–797, 2001. arXiv: quant-ph/9802049, doi:10.1145/502090.502097.

[BBHT98] M. Boyer, G. Brassard, P. Høyer, and A. Tapp. Tight bounds on quantum searching. Fortschritte Der Physik, 46:493–505, 1998. arXiv: quant-ph/9605034, doi:10.1002/(SICI)1521-3978(199806)46:4/5<493:: AID-PROP493>3.0.CO;2-P.


[BCJ+13] A. Belovs, A.M. Childs, S. Jeffery, R. Kothari, and F. Magniez. Time-efficient quantum walks for 3-distinctness. 7965:105–122, 2013. arXiv:arXiv:1302. 3143, doi:10.1007/978-3-642-39206-1_10.

[BCW98] H. Buhrman, R. Cleve, and A. Wigderson. Quantum vs. classical com- munication and computation. Proceedings of the 30th Annual ACM Sympo- sium on Theory of Computing, pages 63–68, 1998. arXiv:quant-ph/9802040, doi:10.1145/276698.276713.

[Ben73] C. H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17(6):525–532, 1973. doi:10.1147/rd.176.0525.

[BHMT02] G. Brassard, P. Høyer, M. Mosca, and A. Tapp. Quantum amplitude ampli- fication and estimation. Contemporary Mathematics, 305:53–74, 2002. arXiv: quant-ph/0005055.

[BŠ06] H. Buhrman and R. Špalek. Quantum verification of matrix products. Proceedings of the 17th ACM-SIAM Symposium on Discrete Algorithms, pages 880–889, 2006. arXiv:quant-ph/0409035.

[BSS03] H. Barnum, M. Saks, and M. Szegedy. Quantum decision trees and semidef- inite programming. Proceedings of the 18th IEEE Conference on Computational Complexity, 38:179–193, 2003.

[BW02] H. Buhrman and R. de Wolf. Complexity measures and decision tree com- plexity: A survey. Theoretical Computer Science, 288:21–43, 2002. doi:10.1016/ S0304-3975(01)00144-X.

[CEMM98] R. Cleve, A. Ekert, C. Macchiavello, and M. Mosca. Quantum algorithms revisited. Proceedings of the Royal Society of London, Series A, 454:339–354, 1998. arXiv:quant-ph/9708016.

[CK11] A. M. Childs and R. Kothari. Quantum query complexity of minor-closed graph properties. 28th International Symposium on Theoretical Aspects of Com- puter Science (STACS 2011), 9:661–672, 2011. arXiv:quant-ph/1011.1443, doi:10.4230/LIPIcs.STACS.2011.661.

[Dam98] W. van Dam. Quantum oracle interrogation: Getting all information for al- most half the price. Proceedings of the 39th Annual IEEE Symposium on Foun- dations of Computer Science, pages 362–367, 1998. arXiv:quant-ph/9805006, doi:10.1109/SFCS.1998.743486.


[FGGS98] E. Farhi, J. Goldstone, S. Gutmann, and M. Sipser. Limit on the speed of quantum computation in determining parity. Physical Review Letters, 81:5442–5444, 1998. arXiv:quant-ph/9802045, doi:10.1103/PhysRevLett.81.5442.

[Gal14] F. Le Gall. Improved quantum algorithm for triangle finding via combina- torial arguments. Proceedings of the 55th Annual IEEE Symposium on Foun- dations of Computer Science, pages 216–225, 2014. arXiv:1407.0085, doi: 10.1109/FOCS.2014.31.

[GN15] F. Le Gall and S. Nakajima. Quantum algorithm for triangle finding in sparse graphs. Accepted to the 15th Asian Quantum Information Science Conference (AQIS 2015), 2015. arXiv:arXiv:1507.06878.

[GR05] L. K. Grover and J. Radhakrishnan. Is partial quantum search of a database any easier? Proceedings of the 17th Annual ACM Symposium on Parallelism in Algorithms and Architectures, pages 186–194, 2005. arXiv:quant-ph/0407122, doi:10.1145/1073970.1073997.

[Gro97] L. K. Grover. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters, 79:325–328, 1997. arXiv:quant-ph/9706033, doi:10.1103/PhysRevLett.79.325.

[HJ90] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1990.

[HMW03] P. Høyer, M. Mosca, and R. de Wolf. Quantum search on bounded-error inputs. Proceedings of the 30th International Colloquium on Automata, Lan- guages, and Programming, Lecture Notes on Computer Science, 2719:291–299, 2003. arXiv:quant-ph/0304052.

[HŠ05] P. Høyer and R. Špalek. Lower bounds on quantum query complexity. Bulletin of the European Association for Theoretical Computer Science, 87:78–103, 2005. arXiv:quant-ph/0509153.

[JKM13] S. Jeffery, R. Kothari, and F. Magniez. Nested quantum walks with quantum data structures. Proceedings of the 24th ACM-SIAM Symposium on Discrete Al- gorithms, pages 1474–1485, 2013. arXiv:arXiv:1210.1199, doi:10.1137/1. 9781611973105.106.


[Kem03] J. Kempe. Quantum random walks: An introductory overview. Contem- porary , 44:307–327, 2003. arXiv:quant-ph/0303081, doi:10.1080/ 00107151031000110776.

[Kit95] A. Kitaev. Quantum measurements and the abelian stabilizer problem. 1995. arXiv:quant-ph/9511026.

[KLPY10] P. Koiran, J Landes, N. Portier, and P. Yao. Adversary lower bounds for nonadaptive quantum algorithms. Journal of Computer and System Sciences, 76(5):347–355, 2010. arXiv:0804.1440, doi:10.1016/j.jcss.2009.10.007.

[KMOR10] H. Krovi, F. Magniez, M. Ozols, and J. Roland. Finding is as easy as detecting for quantum walks. Proceedings of the 37th International Colloquium on Automata, Languages and Programming (ICALP), 6198:540–551, 2010. arXiv:1002.2419v1.

[KMOR15] H. Krovi, F. Magniez, M. Ozols, and J. Roland. Quantum walks can find a marked element on any graph. Algorithmica, pages 1–57, 2015. arXiv: 1002.2419, doi:10.1007/s00453-015-9979-8.

[KOR10] H. Krovi, M. Ozols, and J. Roland. Adiabatic condition and the quantum hitting time of Markov chains. Physical Review A, 82:022333, 2010. arXiv: 1004.2721v1, doi:10.1103/PhysRevA.82.022333.

[LM08] S. Laplante and F. Magniez. Lower bounds for randomized and quantum query complexity using Kolmogorov arguments. SIAM Journal on Computing, 38:46–62, 2008. arXiv:quant-ph/0311189, doi:10.1137/050639090.

[MNRS07] F. Magniez, A. Nayak, J. Roland, and M. Santha. Search via quantum walk. Proceedings of the 39th ACM Symposium on Theory of Computing, pages 575–584, 2007. arXiv:quant-ph/0608026, doi:10.1137/090745854.

[MNRS12] F. Magniez, A. Nayak, P. Richter, and M. Santha. On the hitting times of quantum versus random walks. Algorithmica, 63(1):91–116, 2012. arXiv:0808.0084, doi:10.1007/s00453-011-9521-6.

[Mon10] A. Montanaro. Nonadaptive quantum query complexity. Information Pro- cessing Letters, 110:1110–1113, 2010. arXiv:1001.0018, doi:10.1016/j.ipl. 2010.09.009.

[MR95] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995.


[MSS07] F. Magniez, M. Santha, and M. Szegedy. Quantum algorithms for the triangle problem. SIAM Journal on Computing, 27:413–424, 2007. arXiv:quant-ph/ 0310134, doi:10.1137/050643684.

[MU05] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algo- rithms and Probabilistic Analysis. Cambridge University Press, 2005.

[NC00] M. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.

[NM07] A. Nayak and F. Magniez. Quantum complexity of testing group com- mutativity. Algorithmica, 48:221–232, 2007. arXiv:quant-ph/0506265, doi: 10.1007/s00453-007-0057-8.

[Rei11] B. Reichardt. Reflections for quantum query algorithms. Proceedings of the Twenty-second Annual ACM-SIAM Symposium on Discrete Algorithms, pages 560–569, 2011. arXiv:arxiv:1005.1601.

[Ric07] P. C. Richter. Quantum speedup of classical mixing processes. Physical Review A, 76:042306, 2007. arXiv:quant-ph/0609204, doi:10.1103/PhysRevA.76. 042306.

[San08] M. Santha. Quantum walk based search algorithms. International Conference on Theory and Applications of Models of Computation, pages 31–46, 2008. arXiv: 0808.0059, doi:10.1007/978-3-540-79228-4_3.

[SG04] R. Servedio and S. Gortler. Equivalences and separations between quantum and classical learnability. SIAM Journal on Computing, 33:1067–1092, 2004. doi:10.1137/S0097539704412910.

[Sim97] D. Simon. On the power of quantum computation. SIAM Journal on Comput- ing, 26(5):1474–1483, 1997. doi:10.1137/S0097539796298637.

[SKW03] N. Shenvi, J. Kempe, and K. B. Whaley. Quantum random-walk search algorithm. Physical Review A, 67:052307, 2003. arXiv:quant-ph/0210064.

[Sze04] M. Szegedy. Quantum speed-up of Markov chain based algorithms. Pro- ceedings of the 45th IEEE Symposium on Foundations of Computer Science, pages 32–41, 2004. arXiv:quant-ph/0401053, doi:10.1109/FOCS.2004.53.


[Tul08] A. Tulsi. Faster quantum walk algorithm for the two dimensional spatial search. Physical Review A, 78:012310, 2008. arXiv:0801.0497, doi:10.1103/PhysRevA.78.012310.

[Wat01] J. Watrous. Quantum simulations of classical random walks and undirected graph connectivity. Journal of Computer and System Sciences, 62:376–391, 2001. arXiv:cs/9812012.

[Wat09] J. Watrous. Zero-knowledge against quantum attacks. SIAM Journal on Computing, 39:25–58, 2009. arXiv:quant-ph/0511020, doi:10.1137/060670997.

[Zal99] C. Zalka. Grover’s quantum searching is optimal. Physical Review A, 60:2746– 2751, 1999. arXiv:quant-ph/9711070, doi:10.1103/PhysRevA.60.2746.

APPENDIX A

Examples of source code

A.1 Efficient simulation of the quantum walk on the grid

clear;
n = 20;                       % size of the grid is n^2
nSteps = 200;                 % number of steps
markedP = zeros(1, nSteps);   % probability of measuring the marked item
mx = 10; my = 10;             % marked location
%--------------------------------------------------------------
% set up the initial state
%--------------------------------------------------------------
% the four tables old contain the amplitudes on the edges
oldU = ones(n, n)/(2*n);
oldD = ones(n, n)/(2*n);
oldR = ones(n, n)/(2*n);
oldL = ones(n, n)/(2*n);
initU = oldU; initD = oldD; initR = oldR; initL = oldL;

% the initial success probability
r(1) = abs(oldU(mx, my))^2 + abs(oldD(mx, my))^2 + ...
       abs(oldR(mx, my))^2 + abs(oldL(mx, my))^2;
% d stores the dot product between init and the current state
% of the walk
d(1) = sum(sum(oldU.*initU + oldD.*initD + oldR.*initR + oldL.*initL));
%--------------------------------------------------------------
% apply the walk
%--------------------------------------------------------------
for k = 2:nSteps

    % simulate one step of the Grover coin
    newU = (-1/2)*oldU + (1/2)*oldD + (1/2)*oldR + (1/2)*oldL;
    newD = (1/2)*oldU + (-1/2)*oldD + (1/2)*oldR + (1/2)*oldL;
    newR = (1/2)*oldU + (1/2)*oldD + (-1/2)*oldR + (1/2)*oldL;
    newL = (1/2)*oldU + (1/2)*oldD + (1/2)*oldR + (-1/2)*oldL;

    % apply the coin for marked vertex
    newU(mx, my) = - oldU(mx, my);
    newD(mx, my) = - oldD(mx, my);
    newR(mx, my) = - oldR(mx, my);
    newL(mx, my) = - oldL(mx, my);

    % update the amplitudes
    oldU = newU; oldD = newD; oldR = newR; oldL = newL;

    % simulate the shift operator
    newU = zeros(n, n); newD = zeros(n, n);
    newR = zeros(n, n); newL = zeros(n, n);
    % old up -> new down
    for x = 1:n
        for y = 1:n
            newD(x, (~(y==1)*(y-1) + (y==1)*n)) = oldU(x, y);
        end
    end
    % old down -> new up
    for x = 1:n
        for y = 1:n
            newU(x, (~(y==n)*(y+1) + (y==n)*1)) = oldD(x, y);
        end
    end
    % old right -> new left
    for x = 1:n
        for y = 1:n
            newL((~(x==n)*(x+1) + (x==n)*1), y) = oldR(x, y);
        end
    end
    % old left -> new right
    for x = 1:n
        for y = 1:n
            newR((~(x==1)*(x-1) + (x==1)*n), y) = oldL(x, y);
        end
    end
    % copy the new amplitudes into old
    oldU = newU; oldD = newD; oldR = newR; oldL = newL;

    % compute the probability at the marked location
    % and the dot product with the initial state
    r(k) = abs(oldU(mx, my))^2 + abs(oldD(mx, my))^2 + ...
           abs(oldR(mx, my))^2 + abs(oldL(mx, my))^2;
    d(k) = sum(sum(oldU.*initU + oldD.*initD + oldR.*initR + oldL.*initL));

    % animate the walk
    %prob = abs(oldU).^2 + abs(oldD).^2 + abs(oldR).^2 + abs(oldL).^2;
    % check that the probabilities sum to 1
    % sum(sum(prob))
    %hSurface = surf(prob);
    %zlim([0 0.21]);
    %set(hSurface,'FaceColor', [mod(k+1,2)*1 mod(k,2)*1 0],'FaceAlpha',0.5);
    %pause(0.3);
    % end of the animation
end
maxp = max(r)
green = [23 113 19]./255;
plot(1:nSteps, abs(r), 'LineWidth', 2, 'Color', green)
hold on;
plot(1:nSteps, abs(d), 'r', 'LineWidth', 2)

The code in this section was used to produce Figure 2.2.

A.2 Comparison with interpolated quantum walks

%--------------------------------------------------------------
% Ws_equiv.m
%--------------------------------------------------------------
clear;
n = 25;    % the dimension of P

%--------------------------------------------------------------
% Construct P and P'. Column sums are 1
%--------------------------------------------------------------
[P, Pi] = symWalk(n);
P1 = P;    % construct P1 = P'
% say that (1,1) is marked
P1(:, 1) = zeros(n, 1);
P1(1, 1) = 1;

%--------------------------------------------------------------
% construct W = Swap * Ref
%--------------------------------------------------------------
T = isom(P);
RefA = 2*T*(T') - eye(n^2);

% construct a swap matrix
Swap = zeros(n^2, n^2);
for i = 1:n
    for j = 1:n
        x = zeros(n, 1); x(i) = 1;
        y = zeros(n, 1); y(j) = 1;
        Swap = Swap + kron(y, x)*(kron(x, y))';
    end
end

W = Swap * RefA;

%--------------------------------------------------------------
% construct s and W(s)
%--------------------------------------------------------------
init = T * sqrt(Pi);
% the first item is marked
m = zeros(n, 1); m(1,1) = 1;
g = T * m;
G = eye(n^2) - 2*g*(g');
a_0 = dot(init, g);
nSteps = 20*floor(1/abs(a_0));
s = 1 - a_0^2/(1 - a_0^2);
Ps = (1-s)*P + s*P1;

Ts = isom(Ps);
RefA1 = 2*Ts*(Ts') - eye(n^2);
Ws = Swap * RefA1;

%--------------------------------------------------------------
% construct init_bar
%--------------------------------------------------------------
Pi_U = zeros(n, 1); Pi_U(2:n, 1) = Pi(2:n, 1);
Pi_M = zeros(n, 1); Pi_M(1, 1) = Pi(1, 1);
bad = sqrt(Pi_U)/sqrt(1-a_0^2);    % normalized in L2
init_bar = T * bad;

%--------------------------------------------------------------
% run Ws
%--------------------------------------------------------------
rWs = zeros(1, nSteps);
targetWs = Ts * m;
% first test
% test_state = T*x;
% second test
% test_state = Swap*T*x;
% third test
test_state = targetWs;
state = test_state;
for k = 1:nSteps
    rWs(k) = dot(test_state, state);
    state = Ws * state;
end

%--------------------------------------------------------------
% construct U
%--------------------------------------------------------------
thetaU = asin(a_0/sqrt(1-a_0^2));
one_bar = [-sin(thetaU); cos(thetaU)];
zero_bar = [cos(thetaU); sin(thetaU)];

U = ( kron([1 0; 0 0], W*G) - kron([0 0; 0 1], eye(n^2))) * ...
    ( kron(one_bar*one_bar', G) + kron(zero_bar*zero_bar', eye(n^2)));

%--------------------------------------------------------------
% run U
%--------------------------------------------------------------
rU = zeros(1, nSteps);
startU = kron([1; 0], init_bar);
targetU = kron(one_bar, g);
% first test
% test_state = kron([1;0], T*x);
% second test
% test_state = kron([1;0], Swap*T*x);
% third test
test_state = -targetU;
state = test_state;
for k = 1:nSteps
    rU(k) = dot(test_state, state);
    state = U * state;
end
norm_prob_U_prob_Ws = norm(rU-rWs)
angles_Ws = abs(sort(angle(eig(Ws))))(1:15)'
angles_U = abs(sort(angle(eig(U))))(1:15)'

%--------------------------------------------------------------
% symWalk.m
%--------------------------------------------------------------
function [P, Pi] = symWalk(dim)
% input: a positive dimension dim
% output: a "random" symmetric walk matrix of dimension dim

% generate a unitary matrix
U1 = orth(rand(dim, dim));
U1 = abs(U1);
U2 = U1';
P = (U1.*U1 + U2.*U2)/2;
Pi = ones(dim, 1)/dim;
end

%--------------------------------------------------------------
% torusWalk.m
%--------------------------------------------------------------
function [P] = torusWalk(n)
% output: the walk matrix of a grid of size n x n

% adjacency matrix of a cycle
a = zeros(n, n);
for i = 1:n-1
    a(i, i+1) = 1; a(i+1, i) = 1;
end
a(1, n) = 1; a(n, 1) = 1;
% the torus is the cartesian graph product of two cycles
P = kron(a, eye(n)) + kron(eye(n), a);
P = P/4;
end

%--------------------------------------------------------------
% isom.m
%--------------------------------------------------------------
function [T] = isom(P)
% input: a random walk P (columns sum to 1)
% output: the isometry T: |x> -> |x>|p_x>
dim = size(P, 2);
T = zeros(dim^2, dim);
for x = 1:dim
    Px = sqrt(P(:, x));
    ketX = zeros(dim, 1); ketX(x) = 1;
    T = T + kron(ketX, Px)*(ketX)';
end
end

%--------------------------------------------------------------
% Sample execution
%--------------------------------------------------------------
>> Ws_equiv
norm_prob_U_prob_Ws = 8.9421e-014
angles_Ws =
Columns 1 through 8:
  3.1416  3.1416  3.1416  3.1416  3.1416  3.1416  3.1416  3.1416
Columns 9 through 15:
  3.1416  1.9337  1.8194  1.7863  1.7538  1.7422  1.6974
angles_U =
Columns 1 through 8:
  3.1416  3.1416  3.1416  3.1416  3.1416  1.9337  1.8194  1.7863
Columns 9 through 15:
  1.7538  1.7422  1.6974  1.6935  1.6779  1.6560  1.6460
>>
