
The implications of breaking the strong exponential time hypothesis on a quantum computer

Jorg Van Renterghem Student number: 01410124

Supervisor: Prof. Andris Ambainis

Master's dissertation submitted in order to obtain the academic degree of Master of Science in de informatica

Academic year 2018-2019


Summary

In recent research the strong exponential time hypothesis (SETH) in combination with fine-grained reductions has been used to prove lower bounds on the complexity of problems [51]. This provides an interesting research opportunity, because SETH is valid in a classical context, but can be broken using Grover search in the quantum query model [7]. We thus have a set of problems for which a classical lower bound is known, but no such lower bound exists in the quantum query model. This creates the potential for finding quantum algorithms which are faster than any classical algorithm.

In this thesis we provide such algorithms, making use of Grover search to get our advantage over classical algorithms. Grover search solves the following problem in O(√N) queries: given an input x_1, ..., x_N ∈ {0, 1} specified by a black box that answers queries, find an i such that x_i = 1 [32].

We provide new quantum algorithms for k-Orthogonal vectors, Graph diameter, Closest pair in d-Hamming space, All pairs max flow, Single source reachability count, 2 Strong components, Connected subgraph and S,T-reachability. We also provide new quantum lower bounds for these problems using reductions and the sensitivity method. For 2-Orthogonal vectors, Closest pair in d-Hamming space, Single source reachability count, 2 Strong components and Connected subgraph we give a new lower bound on their quantum query complexity.

Dynamic problems are much harder to improve upon using quantum algorithms, because quantum algorithms in general get their advantage from the fact that not all intermediate results are necessary. Yet these intermediate results are often useful to find faster results for a slightly modified problem.

Finally, we look at a completely different model for describing quantum algorithms: span programs [37]. This program paradigm provides a new way to look at graph problems such as st-connectivity [12]. We try to modify the algorithm for st-connectivity to provide an algorithm for st-distance. This approach is unsuccessful. After a close examination, the cause is found to be the fact that span programs allow linear combinations using negative factors.

This was a reason to look into cone programs, which are similar to span programs, to describe quantum algorithms. But cone programs, while closely related to span programs, are much harder to translate into a quantum algorithm. This is caused by the lack of orthogonality in a cone program: a vector |w⟩ can be inside a cone while −|w⟩ is outside it, yet it is impossible to differentiate between |w⟩ and −|w⟩ using a quantum measurement.

The implications of breaking the strong exponential time hypothesis on a quantum computer.

Jorg Van Renterghem Supervisor: Andris Ambainis

Abstract

In this study we provide quantum algorithms for problems which have a proven classical lower bound under the strong exponential time hypothesis. We provide new quantum algorithms for k-Orthogonal vectors, Graph diameter, Closest pair in d-Hamming space, All pairs max flow, Single source reachability count, 2 Strong components, Connected subgraph and S,T-reachability. All of these algorithms are faster than the best known classical algorithm, and most of them break the classical lower bound. For Closest pair in d-Hamming space, Single source reachability count, 2 Strong components and Connected subgraph we also give a new lower bound on their quantum query complexity. In the second part of this study we look at cone programs as an alternative to span programs for defining quantum algorithms.

1 Introduction

In recent research the strong exponential time hypothesis (SETH) in combination with fine-grained reductions has been used to prove lower bounds on the complexity of problems [24]. This provides an interesting research opportunity, because SETH is valid in a classical context, but can be broken using Grover search in the quantum query model [5]. We thus have a set of problems for which a classical lower bound is known, but no such lower bound exists in the quantum query model. This creates the potential for finding quantum algorithms which are faster than any classical algorithm.

1.1 SETH

Impagliazzo, Paturi and Zane [14] introduced SETH to address the complexity of the conjunctive normal form satisfiability problem (CNF-SAT). At the time they only considered deterministic algorithms, but nowadays it is common to extend SETH to allow randomization.

Hypothesis 1 (SETH). For every ε > 0 there exists an integer k ≥ 3 such that CNF-SAT on formulas with clause size at most k (the so-called k-SAT problem) and n variables cannot be solved in O(2^((1−ε)n)) time, even by a randomized algorithm.

As the clause size k grows, the lower bound given by SETH converges to 2^n. SETH also implies that general CNF-SAT on formulas with n variables and m clauses requires 2^(n−o(n)) poly(m) time.

SETH is motivated by the lack of fast algorithms for k-SAT as k grows. It is a much stronger assumption than P ≠ NP, which assumes only that SAT requires superpolynomial time. A weaker version, the Exponential Time Hypothesis (ETH), asserts that there is some constant δ > 0 such that CNF-SAT requires Ω(2^(δn)) time.

In recent years there have been a considerable number of problems whose hardness has been proven under SETH. Arguably the first such problem was the Orthogonal vectors problem (OV), which was shown by Williams [23] to require quadratic time under SETH. Many conditional hardness results based on OV and SETH have been discovered, for example for Graph diameter [20], All pairs max flow [16], and dynamic graph problems [17]. A more extensive listing can be found in [24].
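To make the 2^n · poly(m) baseline concrete, the following is a minimal brute-force CNF-SAT checker. This is an illustrative sketch of ours, not an algorithm from the thesis; the DIMACS-style clause encoding and the name `brute_force_sat` are assumptions made for the example.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Exhaustively search all 2^n assignments of a CNF formula.

    Clauses use DIMACS-style literals: the integer v means variable v,
    -v means its negation (variables are 1-indexed).  Checking one
    assignment costs poly(m), so the total time is O(2^n * poly(m)) --
    the baseline that SETH conjectures cannot be improved to
    O(2^((1-eps)n)) for general CNF-SAT.
    """
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None                # formula is unsatisfiable

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): only x1 = x2 = True works
print(brute_force_sat(2, [[1, 2], [-1, 2], [1, -2]]))  # (True, True)
print(brute_force_sat(1, [[1], [-1]]))                 # None
```

For k-SAT, restricting every clause to at most k literals does not help this exhaustive search; SETH asserts that no algorithm does substantially better as k grows.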

1.2 Grover

In this thesis we provide such algorithms, making use of Grover search to get our advantage over classical algorithms. The problem Grover search solves can be described as follows:

Definition 1 (Search). Given an input x_1, ..., x_N ∈ {0, 1} specified by a black box that answers queries. In a query, we input i to the black box and it outputs x_i. Output an i such that x_i = 1.

Grover provided an algorithm which solves Search in O(√N) quantum queries [13]. Later, variations and consequences of this algorithm showed that variations on the search problem can also be solved in O(√N) quantum queries. An important example of this is the minimum finding problem [12].

1.3 Sensitivity method

Even the quantum query model does not allow for unlimited improvement of algorithm complexity. It is thus useful to know a lower bound for the quantum complexity of a problem; here we will always look at quantum query complexity. We provide a lower bound on the number of queries to the input oracle that any algorithm has to make in order to give a correct result with high probability for a certain problem. In this thesis we make use of the sensitivity method to achieve such lower bounds.

The sensitivity of a function is a rather intuitive measure expressing how much the result of a function is impacted by small changes in the input. Let [s] be an alphabet of size s ∈ N; sensitivity can then be more formally defined as follows:

Definition 2 (Sensitivity). Given a function f : [q]^n → [l] with q, n, l ∈ N: f is sensitive to variable i on input x ∈ dom(f) if there exists y ∈ dom(f) such that f(x) ≠ f(y) and x_j = y_j for all j ≠ i. The sensitivity of f on x is the number of sensitive variables. The sensitivity of f, written s(f), is the maximal sensitivity of f on x over all x ∈ dom(f).

This measure is useful because it provides a lower bound on the complexity of f.

Theorem 1. Given a function f : [q]^n → [l] with q, n, l ∈ N, Ω(√s(f)) is a lower bound on the quantum query complexity of f.

2 Quantum algorithms

In this section we look at a set of problems which have a lower bound on their complexity proven under SETH. All these problems are listed in the overview paper by Williams [24]. For each of these problems we either give a new algorithm in the quantum query model that allows us to break the classical complexity lower bound, or give a reason why finding a good quantum algorithm is hard. An overview of the results of this section can be found in Table 1.

2.1 k-Orthogonal vectors

The first problem we discuss is the k-orthogonal vectors problem (k-OV). This problem has a lower bound proven under SETH [23], and it is also used to provide lower bounds for many different problems. This lower bound is the now widely used k-OV hypothesis.

Definition 3 (k-OV). Let d = ω(log n); given k sets A_1, ..., A_k ⊂ {0, 1}^d with |A_1| = ... = |A_k| = n, determine whether there exist a_1 ∈ A_1, ..., a_k ∈ A_k such that a_1 · ... · a_k = 0, where a_1 · ... · a_k := Σ_(i=1..d) Π_(j=1..k) a_j[i].

Hypothesis 2 (k-OV Hypothesis). No randomized algorithm can solve k-OV on instances of size n in n^(k−ε) poly(d) time for constant ε > 0.

If we look at classical algorithms for this problem, we find that simple exhaustive search already gives an O(n^k d) time algorithm. The best known algorithm only slightly improves on this, with a run time of n^(k−1/Θ(log(d/log n))) [8][1].

As exhaustive search is already one of the best solutions for this problem, it is a perfect candidate for a quantum speedup using Grover's algorithm. To make this more clear, we will reduce the k-OV problem to the search problem.
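The reduction can be sketched in code. The helper below (the names `make_kov_oracle` and `F` are ours, introduced for illustration) builds the black-box function F over the N = n^k candidate tuples; a classical scan over it takes N queries, while Grover search would need only O(√N) = O(n^(k/2)).

```python
from itertools import product

def make_kov_oracle(sets):
    """Build the search oracle F for the k-OV -> Search reduction.

    `sets` is [A_1, ..., A_k], where each A_j is a list of n
    0/1-vectors of dimension d.  Index i ranges over the N = n^k
    tuples of A_1 x ... x A_k, and F(i) = 1 iff the i-th tuple is
    k-orthogonal: sum over coordinates c of prod_j a_j[c] equals 0.
    Each query to F costs O(dk), as in the text.
    """
    tuples = list(product(*sets))
    d = len(sets[0][0])

    def F(i):
        a = tuples[i]
        # number of coordinates where all k vectors have a 1
        dot = sum(all(vec[c] == 1 for vec in a) for c in range(d))
        return 1 if dot == 0 else 0

    return F, len(tuples)

# 2-OV instance with n = 2, d = 2
A1 = [(1, 0), (1, 1)]
A2 = [(0, 1), (1, 1)]
F, N = make_kov_oracle([A1, A2])
hits = [i for i in range(N) if F(i) == 1]
print(hits)  # [0]: the tuple ((1, 0), (0, 1)) is orthogonal
```

The classical loop over `range(N)` is exactly the exhaustive search; the quantum advantage comes purely from replacing that loop by Grover search over the same oracle.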

Problem | Classical upper bound | Classical lower bound | Quantum upper bound | Quantum lower bound
k-OV | n^(k−1/Θ(log(d/log n))) [8][1] | n^(k−ε) [23] | O(n^(k/2)) | /
Graph diameter | Õ(m√n + n^2) [20] | m^(2−ε) [20] | O(n√m log^(3/2)(n)) | /
Closest pair in d-Hamming space | O(n^(2−1/(d·log^2(d/log n)))) [3] | n^(2−ε)·2^(O(d)) [4] | O(n) | Ω(n^(2/3))
All pairs max flow | Õ(mn^(5/2) log^2(U)) [18] | Ω(n^3) [16] | Õ(mn^(3/2) log^2(U)) | /
Single source reachability count | O(n^1.575) [21] | Ω(n) [2] | O(l√n log n) | Ω(√(l(n−l))) *
2 Strong components | O(n^1.575) [21] | Ω(n) [2] | O(√(nm) log n) [11] | Ω(√(nm)) [11] *
Connected subgraph | O(n^1.575) [21] | Ω(n) [2] | O(n) [11] | Ω(n) *
S,T-reachability | O(n^2) [9][10] | Ω(n^2) [2] | O(n√m log n) | /

Table 1: Bounds on the complexity of the problems discussed in section 2. Algorithms providing the upper bounds and the proofs for the lower bounds which don't have references can be found in that section and in more detail in the full thesis. * These lower bounds are only valid for the non-dynamic version of the problem.
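As a concrete illustration of where the √N factors in Table 1 come from, Grover's algorithm can be simulated classically for small N. The sketch below is ours (the function name is an assumption): each iteration flips the sign of the marked amplitude and then reflects all amplitudes about their mean, and after about (π/4)√N iterations the marked index is measured with high probability.

```python
import math

def grover_success_probability(N, marked, iterations=None):
    """Classically simulate Grover's algorithm on N basis states.

    One Grover iteration is an oracle phase flip on the marked index
    followed by 'inversion about the mean' (the diffusion step).
    After roughly (pi/4) * sqrt(N) iterations the marked amplitude
    dominates, which is where the O(sqrt(N)) query counts come from.
    """
    if iterations is None:
        iterations = math.floor(math.pi / 4 * math.sqrt(N))
    amp = [1 / math.sqrt(N)] * N           # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]         # oracle: flip the marked sign
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]  # inversion about the mean
    return amp[marked] ** 2                # measurement probability

print(grover_success_probability(16, marked=3))  # ~0.96 after 3 iterations
```

For N = 16 only 3 iterations (3 oracle queries) are needed, whereas a classical search expects around 8 queries; the gap widens as √N versus N for larger inputs.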

Given a set {x_1, ..., x_N} = A_1 × ... × A_k with N = n^k and a function F that, given i, calculates a_1 · ... · a_k for x_i = {a_1, ..., a_k} and returns 1 if a_1 · ... · a_k = 0 and 0 otherwise, find i such that F(i) = 1.

Grover tells us that this problem can be solved with O(√N) = O(n^(k/2)) calls to F. It is clear that F has a complexity of O(dk). This brings the total complexity to O(n^(k/2) dk), which breaks the k-OV Hypothesis.

2.2 Closest pair in d-Hamming space

With a similar reduction as in section 2.1 we can reduce Closest pair in d-Hamming space and other point pair distance problems to the minimum finding problem. This results in an algorithm that only uses O(n) queries to the respective distance function. For these problems we can also prove a lower bound of Ω(n^(2/3)) using a reduction from the Collision problem. As an example we will reduce the Collision problem to Closest pair in d-Hamming space.

Definition 4 (Closest pair in d-Hamming space). Given Q, D ⊂ {0, 1}^d with |Q| = |D| = n, find u ∈ Q and v ∈ D such that h(u, v) is minimal. Here h(u, v) is the Hamming distance between u and v.

Definition 5 (Collision problem). Given a function f : [n] → [q] as an oracle, the Collision problem is to find two distinct inputs i and j such that f(i) = f(j), under the promise that such inputs exist. The two-to-one Collision problem has the promise that f is a two-to-one function.

The Collision problem has a lower bound of Ω(n^(1/3)) [22].

The reduction is fairly simple. We only need a mapping m : [q] → {0, 1}^d, which can simply be the binary representation of [q]. Then h(m(f(i)), m(f(j))) = 0 if and only if f(i) = f(j); thus if such i, j exist, their corresponding pair of points will have a minimal Hamming distance. This means that if the Hamming distance of the pair returned by Closest pair in d-Hamming space is not 0, no collision was found. We solve Closest pair in d-Hamming space on the image under m of the restriction of the oracle function to two random sets of O(√n) inputs. If the oracle is two-to-one, a collision will be found with high probability by the Birthday Paradox. This results in an Ω(n^(2/3)) lower bound for Closest pair in d-Hamming space.
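The mapping m and the distance check above can be sketched as follows. This is an illustrative toy instance of ours (the helper names and the hard-coded index sets are assumptions; in the actual reduction the two index sets are random and of size O(√n)):

```python
def to_bits(value, d):
    """The mapping m : [q] -> {0,1}^d, here the binary representation."""
    return tuple((value >> b) & 1 for b in range(d))

def hamming(u, v):
    """Hamming distance h(u, v) between two 0/1-vectors."""
    return sum(a != b for a, b in zip(u, v))

def min_hamming_distance(Q, D):
    """Stand-in for a Closest pair in d-Hamming space solver; the
    quantum algorithm would use minimum finding over the pairs."""
    return min(hamming(u, v) for u in Q for v in D)

# Collision -> Closest pair: map oracle values through m for two
# index sets; distance 0 between the images certifies f(i) = f(j).
f = {0: 5, 1: 2, 2: 5, 3: 2}   # a two-to-one oracle on [4]
d = 3                          # enough bits for values in [8]
I, J = [0, 1], [2, 3]
Q = [to_bits(f[i], d) for i in I]
D = [to_bits(f[j], d) for j in J]
print(min_hamming_distance(Q, D))  # 0 -> a collision exists
```

Because a distance of 0 between the two sets is equivalent to a collision between distinct indices, any closest-pair solver that is too fast would contradict the Ω(n^(1/3)) bound for Collision.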

2.3 Graph problems

In this section we consider two graph problems: Graph diameter and All pairs max flow. The similarity between these problems is that both calculate a maximal value of a function over all point pairs. For Graph diameter this is the maximal path length between every point pair, and for All pairs max flow this is the maximal flow between each point pair.

The approach we use for Graph diameter is minimum finding over all points, querying the single source version of the problem. The best algorithm for single source shortest paths is a quantum algorithm with complexity O(√(nm) log^(3/2)(n)) [11]. Using this we find a quantum algorithm for Graph diameter with complexity O(n√m log^(3/2)(n)).

For All pairs max flow we use minimum finding over all point pairs. The current best classical algorithm to solve the max flow problem on directed graphs [18] does this in Õ(m√n log^2(U)) time, where U is the capacity ratio. This results in a quantum algorithm with complexity Õ(mn^(3/2) log^2(U)).

2.4 Dynamic graph problems

Abboud and Williams list a series of dynamic graph problems and provide lower bounds for them under SETH [2]. A dynamic graph problem is a graph problem for which we want to know the result under small variations of the graph; these could be edge or node insertions/deletions. The idea is that there is no need to recalculate the whole problem for each small adjustment.

When trying to define quantum algorithms for these problems, we discovered that Grover search is ill suited for dynamic problems. Algorithms using Grover search in general get their advantage from the fact that not all intermediate results are necessary. Yet these intermediate results are often useful to find faster results for a slightly modified problem.

Bearing this in mind, we are still able to give some algorithms using Grover search which are better than the best known classical algorithm. We do this by ignoring the dynamic part of the problem and recalculating the solution from scratch upon every function call. In order to achieve fast algorithms we use some quantum graph algorithms defined in [2] combined with minimum search.

We also provide some lower bounds for this non-dynamic approach using the sensitivity method. We describe a graph G = (V, E) using boolean variables x_ab for a, b ∈ V: x_ab = 0 if (a, b) ∈ E, otherwise x_ab = 1. The booleans thus encode whether an edge is in G. This allows us to define a highly sensitive input for all connectivity based problems. Consider two sets A, B ⊂ V of respectively n_A and n_B vertices, where A and B are internally connected but there is no edge between A and B. Then there are n_A·n_B edges that, when added, could change the connectivity of the graph.

Using these techniques we can define an O(l√n log n) algorithm for Single source reachability count and prove a lower bound of Ω(√(l(n−l))). We can give an Ω(n) lower bound for the 2 Strong components problem and the Connected subgraph problem. For S,T-reachability we give an algorithm with quantum query complexity O(n√m log n).

3 Span programs

In this section we look at a different method for providing quantum algorithms: span programs. Span programs were introduced by Karchmer and Wigderson [15] as a linear algebraic model for computing Boolean functions. They have been used to give quantum algorithms, for example for the Majority function [15] and the Clique problem [6].

Definition 6 (span program). A span program is defined on a linear space W over a field K. The input of the span program is a set of boolean variables x_1, ..., x_n and their negations. Each of these 2n literals has an associated set of vectors which span a subspace in W. Let w ≠ 0 be a specified target vector. This span program defines a Boolean function f(x_1, ..., x_n) such that f(x_1, ..., x_n) = 1 iff w ∈ U(x_1, ..., x_n). Here U(x_1, ..., x_n) is the subspace spanned by the subspaces associated to all TRUE literals x_i or ¬x_i.

Span programs became particularly interesting for quantum algorithms after the paper by Reichardt and Špalek, who showed that any span program can be efficiently evaluated by a quantum algorithm [19]. More specifically, they showed that any span program has quantum complexity √(ws₊ · ws₋), where ws₊ and ws₋ are respectively the positive and negative witness size of the span program.

Using span programs, Belovs and Reichardt were able to solve st-connectivity with complexity O(n√d), with d the maximal path length [7]. They defined a span program over the vector space R^n with the vertex set of G as orthonormal basis. The target is w = |t⟩ − |s⟩. For each vertex pair {u, v} in G, |u⟩ − |v⟩ is added as an input vector corresponding to the variable indicating whether or not the edge (u, v) is in G.

We try to modify this span program to solve st-path-length by adding an extra dimension which encodes the path length: this coordinate is 1 for each edge and k for the target vector, where k is the expected path length. This new span program is unsuccessful in solving st-path-length, because span programs allow linear combinations of vectors with negative factors. This problem was an incentive to look at cone programs.

3.1 Cone programs

In classical computing and mathematics cone programs are often defined as an optimization problem, but here we will use a definition that is similar to the one we used for span programs.

Definition 7 (cone program). A cone program is defined on a Hilbert space W over an ordered field K. The input of the cone program is a set of boolean variables x_1, ..., x_n and their negations. Each of these 2n literals has an associated set of vectors in W. Let w ≠ 0 be a specified target vector. This cone program defines a Boolean function f(x_1, ..., x_n) such that f(x_1, ..., x_n) = 1 iff w ∈ C(x_1, ..., x_n). Here C(x_1, ..., x_n) is the convex cone defined by the vectors associated to all TRUE literals x_i or ¬x_i.

Our goal was to achieve a similar quantum complexity for cone programs as the one we have for span programs. To do this we tried to adapt the algorithm for span programs. This algorithm applies two matrices U_2 U_1 t times to an extension of the target vector |w⟩. The resulting state is then measured in the setup |w⟩ and its orthogonal complement. U_1, U_2 and t are chosen in such a way that this measurement returns |w⟩ if and only if w is in the span.

The problem for cone programs is that a vector |w⟩ can be inside the cone while −|w⟩ is not. We know that it is impossible to define a quantum measurement that differentiates between |w⟩ and −|w⟩. We conclude that we cannot use a similar algorithm for cone programs as the one we used for span programs.

References

[1] Amir Abboud, Ryan Williams, and Huacheng Yu. More applications of the polynomial method to algorithm design. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 218–230, 2015.

[2] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 434–443, 2014.

[3] Josh Alman, Timothy M. Chan, and Ryan Williams. Polynomial representations of threshold functions and algorithmic applications. arXiv preprint arXiv:1608.04355, 2016.

[4] Josh Alman and Ryan Williams. Probabilistic polynomials and Hamming nearest neighbors. In 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 136–150, 2015.

[5] Andris Ambainis. Quantum search algorithms. ACM SIGACT News, 35(2):22–35, 2004.

[6] László Babai, Anna Gál, and Avi Wigderson. Superpolynomial lower bounds for monotone span programs. Combinatorica, 19(3):301–319, 1999.

[7] Aleksandrs Belovs and Ben W. Reichardt. Span programs and quantum algorithms for st-connectivity and claw detection. In European Symposium on Algorithms (ESA), pages 193–204. Springer, 2012.

[8] Timothy M. Chan and Ryan Williams. Deterministic APSP, orthogonal vectors, and more: Quickly derandomizing Razborov–Smolensky. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1246–1255, 2016.

[9] Camil Demetrescu and Giuseppe F. Italiano. Fully dynamic transitive closure: breaking through the O(n^2) barrier. In 41st Annual Symposium on Foundations of Computer Science (FOCS), pages 381–389, 2000.

[10] Camil Demetrescu and Giuseppe F. Italiano. A new approach to dynamic all pairs shortest paths. Journal of the ACM, 51(6):968–992, 2004.

[11] Christoph Dürr, Mark Heiligman, Peter Høyer, and Mehdi Mhalla. Quantum query complexity of some graph problems. SIAM Journal on Computing, 35(6):1310–1328, 2006.

[12] Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. arXiv preprint quant-ph/9607014, 1996.

[13] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing (STOC), pages 212–219, 1996.

[14] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. Journal of Computer and System Sciences, 62(2):367–375, 2001.

[15] Mauricio Karchmer and Avi Wigderson. On span programs. In Proceedings of the Eighth Annual Structure in Complexity Theory Conference, pages 102–111, 1993.

[16] Robert Krauthgamer and Ohad Trabelsi. Conditional lower bounds for all-pairs max-flow. ACM Transactions on Algorithms, 14(4):42, 2018.

[17] Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the fine-grained complexity of one-dimensional dynamic programming. arXiv preprint arXiv:1703.00941, 2017.

[18] Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In 55th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 424–433, 2014.

[19] Ben W. Reichardt and Robert Špalek. Span-program-based quantum algorithm for evaluating formulas. arXiv preprint arXiv:0710.2630, 2007.

[20] Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing (STOC), pages 515–524, 2013.

[21] Piotr Sankowski. Dynamic transitive closure via dynamic matrix inverse. In 45th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 509–517, 2004.

[22] Yaoyun Shi. Quantum lower bounds for the collision and the element distinctness problems. In 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 513–519, 2002.

[23] Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theoretical Computer Science, 348(2-3):357–365, 2005.

[24] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the International Congress of Mathematicians (ICM), 2018.

Jorg Van Renterghem Supervisor: Andris Ambainis

Samenvatting

In dit onderzoek beschrijven we kwantumalgoritmes voor problemen die een klassieke complexiteitson- dergrens hebben onder de sterke exponenti¨ele tijd hypothese. Meer specifiek geven we algoritmes voor k-Orthogonale vectoren, Graaf diameter, Dichtste paar in een d-Hamming ruimte, Alle paren maximale stroom, Enkele bron bereikbaarheid telling, 2 Sterke componenten, Geconnecteerde deelgraaf en S, T - bereikbaarheid. Al deze algoritmes hebben een lagere complexiteit dan het best gekende klassieke algoritme en de meeste doorbreken ook de klassieke ondergrens. Voor Dichtste paar in een d-Hamming ruimte, Enkele bron bereikbaarheid telling, 2 Sterke componenten en Geconnecteerde deelgraaf geven we ook een nieuwe ondergrens voor hun kwantum query complexiteit. In het tweede deel van ons onderzoek beschouwen we kegelprogramma’s als een alternatief voor spanprogramma’s om kwantumalgoritmen te defini¨eren.

1 Introductie tegenwoordig wordt de hypothese uitgebreid voor ge- randomiseerde algoritmes. In recent onderzoek worden reducties van de sterke Hypothesis 1 (SETH) Voor elke  > 0 bestaat er exponenti¨ele tijd hypothese(SETH) gebruikt om on- een getal k 3 zodat CNF-SAT op formules met dergrenzen op de complexiteit van problemen te be- clausules van≥ grote k en n variabelen niet kan op- (1 )n wijzen [24]. Dit zorgt voor een interessante onder- gelost worden in O(2 − ) tijd, zelfs niet door een zoeksopportuniteit omdat SETH kan worden weer- gerandomiseerd algoritme. legd in het kwantum computationeel model door ge- n bruik te maken van Grover’s zoek algoritme [5]. In Voor grote k convergeert de ondergrens naar 2 . de klassieke context is SETH wel nog geldig. SETH heeft als gevolg dat CNF-SAT op formules met m clausules en n variabelen 2n o(n)poly(m) tijd nodig We hebben dus een groep van problemen waarvoor − heeft. er een klassieke ondergrens bekend is, maar waarvoor De motivatie voor SETH is het gebrek aan snelle er geen ondergrens bestaat in het kwantum computa- algoritmes voor k-SAR voor grote k. De assumptie tioneel model. Dit cre¨eert het potentieel om kwan- is veel sterker dan P = NP wat enkel aanneemt dat tumalgoritmes te vinden die sneller zijn dan het best SAT superpolynomiale6 tijd nodig heeft. Een zwak- mogelijke klassieke algoritme. kere versie, de exponenti¨ele tijd hypothese (ETH) stelt dat er een constante δ > 0 bestaat zodat CNF- 1.1 SETH SAT Ω(2δn) tijd nodig heeft. Tegenwoordig is er al een grote groep problemen Impagliazzo, Paturi en Zane [14] introduceerden waarvan de sterkte bewezen is onder SETH. Het eer- SETH als hypothese over de complexiteit van het ste probleem waarvoor dit gebeurde was het Orthogo- vervulbaarheidsprobleem van een formule in conjunc- nale vector probleem (OV) waarvoor Williams heeft tieve normaalvorm (CNF-SAT). 
Destijds hielden ze aangetoond dat het kwadratische tijd nodig heeft enkel rekening met deterministische algoritmes, maar [23]. x

Ondertussen zijn veel conditionele sterkte resulta- Definition 2 (Sensitiviteit) Gegeven een functie ten gebaseerd op OP en SETH ontdekt. Bijvoor- f :[q]n [l] met q, n, l N: beeld voor Graaf diameter [20], Alle paren maximale f is sensitief→ ten opzichte∈ van variabele i op in- stroom [16], Dynamische grafen problemen [17],... put x dom(f) als er een y dom(f) bestaat zodat ∈ ∈ Een meer uitgebreide lijst kan worden teruggevonden f(x) = f(y) en voor alle j = i xj = yj. in de overzichtspaper van Williams [24]. De6 sensitiviteit van f op6 x is het aantal sensitieve variabelen. 1.2 Grover De sensitiviteit van f: s(f) id e maximale sensiti- viteit van f op x over alle x dom(f). In deze thesis geven we zulke algoritmes, gebruikma- ∈ kend van Grover’s zoek algoritme om een voordeel te Deze waarde is nuttig omdat ze een ondergrens behalen ten opzichte van klassieke algoritmes. Het bied aan de complexiteit van f. probleem dat Grover’s zoek algoritme oplost kan als Theorem 1 Gegeven een functie f :[q]n [l] met volgt beschreven worden: → q, n, l N. Ω( s(f)) is een ondergrens voor de kwantum∈ query complexiteit van f Definition 1 Gegeven een input x1, ..., xN 0, 1 p voorgesteld door een zwarte doos die queries∈ beant- { } woordt. In een query geven wij i als input aan de 2 Kwantumalgoritmes zwarte doos en krijgen we xi als output. Geef een i zodat xi = 1. In dit onderdeel bekijken we een groep van problemen Grover geeft een algoritme dat dit probleem oplost die een bewezen complexiteitsondergrens hebben on- in O(√N) kwantum queries [13]. Later werd voor een der SETH. Al deze problemen worden opgelijst in aantal variaties van dit probleem aangetoond dat ze de overzichtspaper van Williams [24]. Voor elke van via een gelijkaardig algoritme ook in O(√N) kwan- deze problemen geven we een algoritme dat deze on- tum queries kunnen worden opgelost. Een belangrijk dergrens doorbreekt of een redenering waarom het voorbeeld hiervan is het minimum zoekprobleem [12]. 
moeilijk is om zo een algoritme te vinden. Een over- zicht van de resultaten uit dit onderdeel is te vinden in tabel 1. 1.3 Sensitiviteit methode Zelfs het kwantum computationeel model laat geen 2.1 k-orthogonale vectoren oneindige verbetering toe op vlak van complexiteit. Het is dus nuttig om een ondergrens te weten voor Het eerste probleem dat we beschouwen is het k- de kwantumcomplexiteit van een probleem. In dit orthogonale vectoren probleem. (k-OV) Dit pro- onderzoek zullen we steeds kijken naar de kwantum bleem heeft een ondergrens die bewezen is onder query complexiteit. We geven een ondergrens op het SETH [23], maar die daarbovenop ook gebruikt wordt aantal queries naar het input orakel die nodig zijn om om ondergrenzen voor andere problemen te bewijzen. met hoge probabiliteit een correct resultaat terug te Deze ondergrens is de ondertussen wijdverspreide k- geven. In deze thesis zullen we gebruik maken van OV hypothese. de sensitiviteit methode om dergelijke ondergrenzen Definition 3 (k-OV) Stel d = ω(log n); gegeven k te bepalen. d sets A1, ..., Ak 0, 1 met A1 = ... = Ak = De gevoeligheid van een functie is een eerder ⊂ { } | | | | n, bepaal of er a1 A, ..., ak Ak bestaan zodat intu¨ıtieve methode om uit te drukken in welke mate ∈ ∈ d k a1 ... ak = 0 met a1 ... ak := Π aj[i]. het resultaat van een functie be¨ınvloed wordt door · · · · i=1 j=1 kleine wijzigingen in de invoer van de functie. Stel Hypothesis 2 (k-OV Hypothese)P Geen gerando- [s] een alfabet met grote s N, dan kunnen we sen- miseerd algoritme kan k-OV met sets van grote n op- ∈ k  sitiviteit als volgt defini¨eren: lossen in tijd n − poly(d) voor een constante  > 0. xi

Problem | Classical upper bound | Classical lower bound | Quantum upper bound | Quantum lower bound
k-OV | n^{k − 1/Θ(log(d/log n))} [8][1] | n^{k−ε} [23] | O(n^{k/2}) | /
Graph diameter | Õ(m√n + n^2) [20] | m^{2−ε} [20] | O(n√m log^{3/2}(n)) | /
Closest pair in d-Hamming space | O(n^{2 − 1/(d·log^2(d/log n))}) [3] | n^{2−ε} 2^{O(d)} [4] | O(n) | Ω(n^{2/3})
All pairs max flow | Õ(mn^{5/2} log^2(U)) [18] | Ω(n^3) [16] | Õ(mn^{3/2} log^2(U)) | /
Single source reachability count | O(n^{1.575}) [21] | Ω(n) [2] | O(l√n log n) | Ω(√(l(n−l))) *
2 Strong components | O(n^{1.575}) [21] | Ω(n) [2] | O(√(nm) log n) [11] | Ω(√(nm)) [11] *
Connected subgraph | O(n^{1.575}) [21] | Ω(n) [2] | O(n) [11] | Ω(n) *
S, T-reachability | O(n^2) [9][10] | Ω(n^2) [2] | O(n√m log n) | /

Table 1: Complexity bounds for the problems discussed in section 2. Algorithms for the upper bounds and proofs for the lower bounds without a reference can be found in this section, and in more detail in the full thesis. * These lower bounds only hold for the non-dynamic version of the problem.

If we consider classical algorithms for this problem, simple exhaustive search gives an algorithm with complexity O(n^k d). The best currently known algorithm does only slightly better, with complexity n^{k − 1/Θ(log(d/log n))} [8][1].

Since exhaustive search is already one of the best possible solutions for this problem, the problem is ideally suited to be solved with Grover's search algorithm. To make this concrete, we reduce k-OV to the search problem.

Given the set {x_1, ..., x_N} = A_1 × ... × A_k with N = n^k, define a function F that, given i with x_i = {a_1, ..., a_k}, computes a_1 · ... · a_k and returns 1 if a_1 · ... · a_k = 0 and 0 otherwise. Find an i such that F(i) = 1.

Thanks to Grover we know that this problem can be solved with O(√N) = O(n^{k/2}) queries to F. Clearly F has complexity of order O(dk). This results in a total complexity of O(n^{k/2} dk), which breaks the k-OV Hypothesis.

2.2 Closest pair in d-Hamming space

With a reduction similar to the one in section 2.1, we can reduce Closest pair in d-Hamming space and other point-pair distance problems to the minimum finding problem. This results in an algorithm that uses only O(n) queries to the respective distance function. For these problems we can also prove a lower bound stating that the minimal complexity is Ω(n^{2/3}). For this we use a reduction from the Collision problem. As an illustration, we reduce the Collision problem to Closest pair in d-Hamming space.

Definition 4 (Closest pair in d-Hamming space). Given Q, D ⊂ {0, 1}^d with |Q| = |D| = n, find u ∈ Q and v ∈ D such that h(u, v) is minimal. Here h(u, v) is defined as the Hamming distance between u and v.

Definition 5 (Collision problem). Given a function f : [n] → [q] as an oracle, the goal of the Collision problem is to find two distinct inputs i and j such that f(i) = f(j), under the promise that such inputs exist. The two-to-one Collision problem has the promise that f is a two-to-one function.
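The reduction of k-OV to the search problem described above can be sketched classically (our own illustrative code, not from the thesis): the index i is decoded into a k-tuple, and the oracle F marks the orthogonal tuples. Grover would find a marked i in O(√N) queries to F, while here we simply scan all N indices.

```python
from itertools import product

def make_oracle(sets):
    """Build the oracle F of the reduction: F(i) = 1 iff the i-th tuple
    of A_1 x ... x A_k is orthogonal in the k-OV sense."""
    combos = list(product(*sets))  # N = n^k tuples, indexed by i
    d = len(sets[0][0])

    def F(i):
        combo = combos[i]
        return 1 if sum(all(a[j] for a in combo) for j in range(d)) == 0 else 0

    return F, len(combos)

A = [[1, 0], [1, 1]]
B = [[0, 1], [1, 1]]
F, N = make_oracle([A, B])
# Grover would need only O(sqrt(N)) queries to F; classically we scan all N.
hits = [i for i in range(N) if F(i) == 1]
print(hits)  # -> [0]
```

Each query to F costs O(dk) work, which is where the O(n^{k/2} dk) total complexity comes from.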

The Collision problem has an Ω(n^{1/3}) lower bound [22].

The reduction is fairly simple. We only need a mapping m : [q] → {0, 1}^d; the binary representation of [q] will do. h(m(f(i)), m(f(j))) = 0 if and only if f(i) = f(j), so if such a pair i, j exists, its Hamming distance is minimal. In other words, if the Hamming distance of the pair found by Closest pair in d-Hamming space is nonzero, there is no collision.

We apply Closest pair in d-Hamming space to the mapping m over the restriction of the oracle function to two random subsets of O(√n) inputs. If the oracle is two-to-one, a collision will be found with high probability; this follows from the Birthday paradox. The result is an Ω(n^{2/3}) lower bound for Closest pair in d-Hamming space.

2.3 Graph problems

In this section we discuss two graph problems: Graph diameter and All pairs max flow. The similarity between these problems is that both compute the maximum of a certain function over all pairs of vertices. For Graph diameter this function is the shortest path, for All pairs max flow it is the maximum flow.

The approach we use for Graph diameter is to apply minimum finding over all vertices, where we query the fixed-source version of this problem: Single source shortest path. The best algorithm for it is a quantum algorithm with complexity O(√(nm) log^{3/2}(n)) [11]. Using this algorithm we obtain a total complexity of O(n√m log^{3/2}(n)) for Graph diameter.

For All pairs max flow we apply minimum finding over all pairs of vertices. The currently best algorithm for maximum flow on directed graphs has time complexity Õ(m√n log^2(U)), with U the capacity ratio [18]. This results in a quantum algorithm for All pairs max flow with complexity Õ(mn^{3/2} log^2(U)).

2.4 Dynamic graph problems

Abboud and Williams give a list of dynamic graph problems for which they show a lower bound under SETH [2]. A dynamic graph problem is a graph problem for which we want to know the solution after small variations of the input. Such variations can mean, for example, that edges are removed from or added to the graph. The idea is that it is not necessary to recompute the complete problem every time.

We have found, however, that Grover's search algorithm is a poor method for attacking this kind of problem. The reason is that quantum algorithms based on Grover's search algorithm generally derive their advantage from the fact that intermediate results do not need to be known. These intermediate results, however, are often useful for finding the solution of a slightly modified problem faster.

Taking this into account, we are still able to give a number of algorithms that are faster than the best known classical algorithm. This is possible because we ignore the dynamic part of the problem and always recompute the full solution of the problem. We also use some graph algorithms defined by Abboud [2].

In addition, we are able to give lower bounds on the quantum query complexity when the problems are not solved dynamically. For this we use the sensitivity method. We describe a graph G = (V, E) by booleans x_ab with a, b ∈ V: x_ab = 0 if (a, b) ∈ E and x_ab = 1 otherwise. These booleans thus describe whether or not an edge occurs in G. This description allows us to define very sensitive inputs for problems related to connectivity. Consider two sets A, B ⊂ V with |A| = n_A and |B| = n_B, such that A and B are internally connected but there are no edges between A and B. Then there are n_A n_B edges such that adding any one of them changes the connectivity of the graph.

Using these techniques we can give an O(l√n log n) algorithm for Single source reachability count and prove a lower bound of Ω(√(l(n−l))). We can also give an Ω(n) lower bound for the 2 Strong components problem and the Connected subgraph problem. For S, T-reachability we have an algorithm with complexity O(n√m log n).

3 Span programs

In this section we look at a new method for describing quantum algorithms: span programs. Span programs were introduced by Karchmer and Wigderson [15] as a linear algebraic model for computing boolean functions. They have already been used to give quantum algorithms for the Majority problem [15] and the Clique problem [6].

Definition 6 (span program). A span program is defined over a linear space W over a field K. The input of the span program is the set of boolean variables x_1, ..., x_n and their negations. Each of these 2n literals has an associated set of vectors that span a subspace of W. Let w ≠ 0 be a specified vector.
This span program defines a boolean function f(x_1, ..., x_n) such that f(x_1, ..., x_n) = 1 if and only if w ∈ U(x_1, ..., x_n). Here U(x_1, ..., x_n) denotes the subspace spanned by the subspaces associated with all true literals x_i or ¬x_i.

Span programs became especially interesting after the paper by Reichardt and Špalek, which showed that every span program can be efficiently converted into a quantum algorithm [19]. More precisely, they showed that every span program has complexity √(ws_+ ws_−), where ws_+ and ws_− are the sizes of the positive and the negative witness of the span program, respectively.

Using span programs, Belovs and Reichardt were able to solve st-connectivity with complexity O(n√d), where d is the length of the path between s and t, if it exists [7]. They defined a span program over the vector space R^n with the set of vertices of G as an orthonormal basis. The target vector is w = |t⟩ − |s⟩. For every pair of vertices {u, v} of G, |u⟩ − |v⟩ is added as an input vector to the variable that encodes whether the edge (u, v) is in G.

We try to adapt this algorithm by adding an extra dimension that encodes the length of the path, with the aim of solving st-path length. This attempt fails, however, because span programs allow linear combinations with negative coefficients, so the reverse of a path suddenly has a negative length. This problem led us to consider cone programs.

3.1 Cone programs

In classical computer science and in mathematics, cone programs are usually defined as an optimization problem. Here, however, we use a definition that is very similar to the one we used for span programs.

Definition 7 (cone program). A cone program is defined over a Hilbert space W over an ordered field K. The input of the cone program is the set of boolean variables x_1, ..., x_n and their negations. Each of these 2n literals has an associated set of vectors. Let w ≠ 0 be a specified vector.
This cone program defines a boolean function f(x_1, ..., x_n) such that f(x_1, ..., x_n) = 1 if and only if w ∈ C(x_1, ..., x_n). Here C(x_1, ..., x_n) denotes the cone given by the vectors associated with all true literals x_i or ¬x_i.

Our goal is to obtain a complexity for cone programs similar to that for span programs. To achieve this we tried to adapt the algorithm for span programs. That algorithm applies two matrices U_2 U_1 t times to an extension of the target vector |w⟩. The resulting state is then measured with respect to |w⟩ and its orthogonal complement. U_1, U_2 and t are chosen in such a way that this measurement returns |w⟩ if and only if |w⟩ lies in the span.

The problem for cone programs, however, is that a vector w can lie inside the cone while −w lies outside it. We know that it is impossible to distinguish |w⟩ from −|w⟩ by means of a quantum measurement. We can therefore conclude that no similar algorithm can work for cone programs.

References

[1] Amir Abboud, Ryan Williams, and Huacheng Yu. More applications of the polynomial method to algorithm design. In Proceedings of the twenty-sixth annual ACM-SIAM symposium on Discrete algorithms, pages 218–230. Society for Industrial and Applied Mathematics, 2015.
[2] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 434–443. IEEE, 2014.
[3] Josh Alman, Timothy M. Chan, and Ryan Williams. Polynomial representations of threshold functions and algorithmic applications. arXiv preprint arXiv:1608.04355, 2016.
[4] Josh Alman and Ryan Williams. Probabilistic polynomials and hamming nearest neighbors. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 136–150. IEEE, 2015.
[5] Andris Ambainis. Quantum search algorithms. ACM SIGACT News, 35(2):22–35, 2004.
[6] László Babai, Anna Gál, and Avi Wigderson. Superpolynomial lower bounds for monotone span programs. Combinatorica, 19(3):301–319, 1999.
[7] Aleksandrs Belovs and Ben W. Reichardt. Span programs and quantum algorithms for st-connectivity and claw detection. In European Symposium on Algorithms, pages 193–204. Springer, 2012.
[8] Timothy M. Chan and Ryan Williams. Deterministic APSP, orthogonal vectors, and more: Quickly derandomizing Razborov–Smolensky. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 1246–1255. Society for Industrial and Applied Mathematics, 2016.
[9] Camil Demetrescu and Giuseppe F. Italiano. Fully dynamic transitive closure: breaking through the O(n^2) barrier. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 381–389. IEEE, 2000.
[10] Camil Demetrescu and Giuseppe F. Italiano. A new approach to dynamic all pairs shortest paths. Journal of the ACM (JACM), 51(6):968–992, 2004.
[11] Christoph Dürr, Mark Heiligman, Peter Høyer, and Mehdi Mhalla. Quantum query complexity of some graph problems. SIAM Journal on Computing, 35(6):1310–1328, 2006.
[12] Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. arXiv preprint quant-ph/9607014, 1996.
[13] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pages 212–219. ACM, 1996.
[14] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. Journal of Computer and System Sciences, 62(2):367–375, 2001.
[15] Mauricio Karchmer and Avi Wigderson. On span programs. In [1993] Proceedings of the Eighth Annual Structure in Complexity Theory Conference, pages 102–111. IEEE, 1993.
[16] Robert Krauthgamer and Ohad Trabelsi. Conditional lower bounds for all-pairs max-flow. ACM Transactions on Algorithms (TALG), 14(4):42, 2018.
[17] Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the fine-grained complexity of one-dimensional dynamic programming. arXiv preprint arXiv:1703.00941, 2017.
[18] Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 424–433. IEEE, 2014.
[19] Ben W. Reichardt and Robert Špalek. Span-program-based quantum algorithm for evaluating formulas. arXiv preprint arXiv:0710.2630, 2007.
[20] Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 515–524. ACM, 2013.
[21] Piotr Sankowski. Dynamic transitive closure via dynamic matrix inverse. In 45th Annual IEEE Symposium on Foundations of Computer Science, pages 509–517. IEEE, 2004.
[22] Yaoyun Shi. Quantum lower bounds for the collision and the element distinctness problems. In The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings., pages 513–519. IEEE, 2002.
[23] Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theoretical Computer Science, 348(2-3):357–365, 2005.
[24] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the ICM, 2018.

Vulgarising summary

This thesis is situated in the field of quantum algorithms. An algorithm is a series of steps to solve a problem. For example, a cooking recipe is an algorithm that gives the steps to make a certain dish, given that you have the required ingredients. In the study of algorithms we are interested in finding the algorithm that needs the smallest number of basic steps. But more important than the exact number of basic steps is how well the algorithm scales to larger problems. This is what we call the complexity of the algorithm.

For example, consider the following problem: you are given a shuffled deck of cards and you are asked to find a specific card. Because the cards are in a random order, you will have to look at all the cards to be certain that you have found your card. Even if you are happy with a fifty percent chance of success, you will still have to look at half of the cards. In general we can say that if we have N cards, our algorithm uses cN steps to find the specific card, where c is a constant. The algorithm thus has a complexity of order N.

This complexity gives us an idea of how the algorithm performs on large problems. Consider an algorithm with complexity of order 2^N. This means that if the input of the problem grows by 1, for example one extra card in the deck, the algorithm will need twice as many steps to solve the problem. We thus want to find algorithms with a complexity that is as low as possible.

This raises the question whether an algorithm is optimal, meaning that no other algorithm exists that solves the same problem with a lower complexity. This question forms the basis of the study of lower bounds on complexity: what is the lowest complexity any algorithm that solves a certain problem can have?

We still need to explain the difference between quantum algorithms and classical algorithms. This difference lies in the previously mentioned steps. In a classical computational model those basic steps are bit manipulations.
A bit is either zero or one, and manipulations could be NOT, AND and OR, which are functions that, given one or two bits, return one new bit. Although most algorithms are written in a more understandable language, they could all be translated into a series of simple bit manipulations. These bits can then be implemented in a computer, for example by representing a one by a current and a zero by no current.

For quantum algorithms we use a different set of basic steps, based on quantum mechanics: qubit manipulations. A qubit is a quantum mechanical element such as a photon or an electron. Mathematically it can be represented as a vector. The manipulations that are allowed are defined by the laws of quantum mechanics and can mathematically be represented by unitary matrices. With these basic steps we can do everything we could do with the bits of classical computing, but we can do even more.

This quantum model is interesting because researchers are effectively implementing qubits, using photons and other quantum mechanical elements, in real quantum computers. If we can find an algorithm using these basic steps now (a quantum algorithm), it will be able to solve a problem once the quantum computers are ready.

Now let us go back to our deck of cards. Grover has given us a quantum algorithm that can find the specific card using only on the order of √N steps. We will not explain how this works, as that would take us too deep into the laws of quantum mechanics. What interests us most in this thesis are the consequences of this algorithm. Using this algorithm we can show that many of the lower bounds on the complexity of problems in the classical model are not valid in the quantum model. This means that we have a set of problems for which we know that we cannot solve them any faster on a normal computer, but which could potentially be solved faster on a quantum computer.
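For readers who like to experiment, the √N behaviour of Grover's algorithm can be checked with a small classical simulation (a sketch we add for illustration; it simulates the algorithm's amplitude bookkeeping on an ordinary computer, it is of course not a quantum computer):

```python
import math

def grover_probabilities(n_items, marked, iterations):
    """Classically simulate Grover's algorithm for one marked item and
    return the final measurement probability of every index."""
    amp = [1 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]             # oracle: flip the marked sign
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]      # diffusion: invert about mean
    return [a * a for a in amp]

N, marked = 64, 37
t = math.floor(math.pi / 4 * math.sqrt(N))     # about sqrt(N) iterations
probs = grover_probabilities(N, marked, t)
print(t, probs[marked] > 0.99)  # -> 6 True
```

After roughly (π/4)√N iterations almost all measurement probability sits on the marked card, whereas any classical strategy must inspect on the order of N cards.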
In the first part of this thesis we look at this set of problems. For some problems we provide a quantum algorithm that is faster than the best classical algorithm. For other problems we explain why it is hard to give a better algorithm.

The problems that were hardest to solve were dynamic problems. A dynamic problem is a problem for which we want to know the result under slight variations of the input. Let us look back at our deck of cards. Suppose we have already looked at all cards to find our wanted card, and now one card is randomly added to the deck. If we are then asked to find a certain card, we do not need to look at all cards again: we know where all cards were before this new card was added, and adding one card does not change much about the deck, so the card we are looking for will still be close to its old position.

In quantum algorithms we do not have this advantage. One side effect of being able to find our card in √N steps is that we do not know the positions of all the other cards. Our best solution is simply to do √N steps again to find the new card, ignoring our previous information.

On average, both algorithms take the same amount of time. To see this, consider the situation after √N cards have been added to or removed from the deck. The quantum algorithm ignores what happened previously and just needs √N steps to find a card. For the classical algorithm, so much has changed in the deck by now that we have to check on the order of √N cards around the original position to find the card we are looking for. The best strategy then is to look at all cards again, so that we have an updated idea of where all cards are located. Thus after every √N modifications we have to look at N cards, which means that on average we have to look at √N cards each time.

In the second part of this thesis we look at span programs. This is a mathematical way of describing a decision problem. For an example of a decision problem we can look back at our deck of cards.
A decision problem could be: does the deck of cards contain the card we are searching for? This differs from our original problem: there we had to return the card, while here we just need to answer a yes-or-no question.

A span program works by defining a set of basis vectors and a target vector. If the target vector lies within the space spanned by the basis vectors, the decision problem returns true; otherwise it is false. To define a span program you need to be able to translate your decision problem into a set of vectors such that the span program returns true if and only if the decision problem is true. Span programs are particularly useful in the field of quantum algorithms because for every good span program there exists an efficient quantum algorithm. Generally this translation from a decision problem to a span program is hard to find, but there are some interesting examples.

st-reachability is such an example. This problem is defined on graphs, but for simplicity we will explain it using a road map. Suppose we want to plan a road trip from city s to city t. The first thing we want to know is whether s and t are connected by roads. It would, for example, be possible to drive from Riga to Paris, but impossible to drive from Riga to New York. st-reachability solves the following problem: given any road map and any two cities s and t on this map, are s and t connected by the roads on this map?

We can translate this into a span program by saying that every road on the map is a basis vector. The target vector is a road from s to t. If such a route exists, then there also exists a linear combination of all roads that gives the target vector.

In this thesis we unsuccessfully try to modify this translation to solve another problem: st-distance. This problem answers the following question: given any road map and any two cities s and t on this map, is there a route from s to t of length at most k on this map?
You are thus not only interested in whether the road trip is possible, but also in how long it would take. We were unable to make the translation to span programs because span programs allow negative linear combinations. This means that taking a road in one direction would add x kilometers to the route, while taking the same road in the opposite direction would subtract x kilometers. Negative kilometers are of course impossible, so this was a bad translation.

We tried to solve this by defining a new mathematical model to describe decision problems: cone programs. Cone programs are similar to span programs in every way, except that they do not allow negative linear combinations. Regretfully, it turned out that cone programs are less useful, as we were unable to prove a relation with quantum algorithms.
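The road-map translation of st-reachability sketched above can be written out with exact linear algebra (our own illustrative code, not from the thesis): each road between cities u and v contributes a vector |u⟩ − |v⟩, and the two cities are connected exactly when the target vector is a linear combination of the road vectors.

```python
from fractions import Fraction

def in_span(vectors, target):
    """Check whether target is a linear combination of the given
    vectors, via Gaussian elimination over the rationals."""
    rows, cols = len(target), len(vectors)
    # augmented matrix [vectors as columns | target]
    M = [[Fraction(v[r]) for v in vectors] + [Fraction(target[r])]
         for r in range(rows)]
    pivot_row = 0
    for c in range(cols):
        for r in range(pivot_row, rows):      # find a pivot in column c
            if M[r][c] != 0:
                M[pivot_row], M[r] = M[r], M[pivot_row]
                break
        else:
            continue
        p = M[pivot_row][c]
        for r in range(rows):                 # clear column c elsewhere
            if r != pivot_row and M[r][c] != 0:
                f = M[r][c] / p
                M[r] = [a - f * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    # consistent iff no all-zero row has a nonzero right-hand side
    return all(any(M[r][c] != 0 for c in range(cols)) or M[r][-1] == 0
               for r in range(rows))

def edge_vector(u, v, n):
    """The vector |u> - |v> for a road between cities u and v."""
    e = [0] * n
    e[u], e[v] = 1, -1
    return e

# road map on 4 cities: roads 0-1 and 1-2, city 3 unreachable
edges = [edge_vector(0, 1, 4), edge_vector(1, 2, 4)]
print(in_span(edges, edge_vector(0, 2, 4)))  # True: 0 and 2 are connected
print(in_span(edges, edge_vector(0, 3, 4)))  # False: city 3 is isolated
```

Note that the solver is free to use negative coefficients for the road vectors; this is precisely the freedom that makes the st-distance modification fail, and what cone programs forbid.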

Acknowledgments

First of all, I want to thank my thesis advisor, Andris Ambainis, for being a source of many interesting problems and papers, and for giving me advice whenever I got stuck. I am thankful to Alexander Belovs for providing me with some useful insights. I would also like to thank Gunar Brinckman and Frank Verstraete for their help with the practical organization and their general research advice. Finally, I am grateful to Louise Deconinck, my parents and my friends for many things not directly related to the thesis.

Permission for usage

The author gives permission to make this master dissertation available for consultation and to copy parts of this master dissertation for personal use. In the case of any other use, the limitations of the copyright have to be respected, in particular with regard to the obligation to state expressly the source when quoting results from this master dissertation.

23/05/2019
Jorg Van Renterghem

Contents

Introduction
    Outline

1 Preliminaries
    1.1 Fine-grained complexity
        1.1.1 The strong exponential time hypothesis
        1.1.2 Other hypotheses
        1.1.3 Fine-grained reductions
        1.1.4 Results from hardness under SETH
    1.2 The quantum query model
    1.3 Grover search
        1.3.1 Breaking SETH
        1.3.2 Results from Grover's algorithm
        1.3.3 Amplitude amplification
    1.4 Finding Minima
    1.5 The sensitivity method

2 Quantum algorithms
    2.1 Orthogonal vectors
    2.2 Graph diameter
    2.3 Closest pair in d-Hamming space
    2.4 All pairs max flow
    2.5 Dynamic graph problems
        2.5.1 Single source reachability count
        2.5.2 2 Strong components
        2.5.3 Connected subgraph
        2.5.4 S, T-reachability
    2.6 Other problems

3 Span programs
    3.1 st-connectivity
    3.2 st-distance
    3.3 Cone programs


4 Conclusion
    4.1 Future work

Introduction

The study of algorithms and their complexity predates the first computers, and since then the field has only grown in importance. The technological demand for ever faster solutions to problems is not fully met by the growing speed of computers under Moore's law. Many problems in cutting-edge research fields such as computational biology and quantum mechanics are still intractable. And due to the rise of big data, many algorithms that were relatively good have become unsuitable for handling these large amounts of data.

Thus there is still a huge incentive to find better algorithms for many problems. But after years of researching the same problems without any measurable improvements, the question arises whether such a better algorithm even exists. This gave rise to the study of lower bounds, which bound the minimal complexity of every algorithm for a certain problem. A tool that has long been used to separate problems with polynomial complexity from those with exponential complexity is the reduction: given a problem A that is supposedly hard, and a problem B such that if you can solve B efficiently you can also solve A as efficiently, then problem B is equally hard. These reductions always depend on a hypothesis that one of the root problems has a certain complexity, for example exponential. The most famous example of this is the exponential time hypothesis (ETH), which says that conjunctive normal form satisfiability has exponential time complexity.

In recent years ETH has been sharpened in the form of the strong exponential time hypothesis (SETH). This hypothesis has made it possible to prove more lower bounds using fine-grained reductions [51]. Some of these lower bounds are close to the complexities of the best known algorithms, thus proving that no big improvements should be expected for these problems.

But while these lower bounds show the limitations of the classical computation model, they also highlight the potential of other models.
In this thesis we will specifically look at the quantum query model. Algorithms such as Shor's factoring algorithm [46] and Grover search [32] are great examples of quantum algorithms outperforming their classical counterparts. To highlight the potential of the quantum model even more, it would be useful to have a set of problems for which the quantum complexity is lower than the classical lower bound.

In this thesis we create such a set of problems. To do this we start from problems whose lower bounds are proven under SETH, a hypothesis that is already known to be broken in the quantum query model [7]. We now have a list of problems that have a known lower bound classically, but no such lower bound in the quantum query model. In table 4.2 you can find the complexities of the quantum algorithms we provide in this thesis. Most of these improve on the classical lower bounds, and all improve on the best known classical algorithms, whose complexities can be found in table 4.1.

Outline

This thesis consists of three main parts.

In chapter 1 we describe the previous results our thesis is built on. We start by explaining fine-grained complexity and the use of the strong exponential time hypothesis (SETH) to prove classical lower bounds. Then we move to quantum algorithms, explaining Grover search and its impact on SETH. Using Grover search we prove that SETH is no longer valid on a quantum computer, and thus the results from fine-grained reductions using SETH are also no longer valid. We also look at some algorithms derived from Grover search, which we will later use in chapter 2 to describe new quantum algorithms. Finally, this chapter also describes the sensitivity method for proving lower bounds on the quantum query complexity of a problem.

In chapter 2 we try to find quantum algorithms for problems described by Williams [51]. We also provide new lower bounds on the quantum query complexity of those problems where possible. Specifically, we provide an O(n^{k/2} dk) algorithm for the k-Orthogonal vectors problem. We also provide an O(n√m log^{3/2} n) algorithm for Graph diameter and an O(n) algorithm for finding a closest pair in a d-Hamming space. For All pairs max flow we give an algorithm with complexity Õ(mn^{3/2} log^2 U). In section 2.5 we discuss the disadvantages of using Grover search based algorithms for dynamic problems. We then look at some examples and try to find a non-dynamic solution that improves upon the dynamic lower bound. For Single source reachability count and S, T-reachability we provide an algorithm that breaks this lower bound. For 2 Strong components and Connected subgraph we prove a lower bound using sensitivity, matching the lower bound under SETH.

Finally, in chapter 3 we look at a completely different model for describing quantum algorithms: span programs. First we go over some existing research explaining span programs and their connection to quantum algorithms. We then
We then look at st-connectivity as an example algorithm using span programs. This algorithm is then slightly modified to give an unsuccessful algorithm for st-distance. We explain what problems occur and propose a new model for describing algorithms which solves those problems: cone programs. We then give a reasoning why cone programs, while closely related to span programs, are much harder to translate into a quantum algorithm.

Chapter 1

Preliminaries

1.1 Fine-grained complexity

The study of fine-grained complexity tries to provide lower bounds for polynomial time problems. The idea, as described by Williams [51], is to mimic NP-completeness using the following approach:

1. Identify some believable fine-grained hardness hypotheses.

2. Use fine-grained reductions to show that obtaining an O(t(n)^{1−ε}) time algorithm for a problem for ε > 0 violates one or more of these hypotheses. These reductions also have to be tailored to the O(t(n)) runtime.

1.1.1 The strong exponential time hypothesis

One of the key hypotheses used in this field is the strong exponential time hypothesis (SETH). Impagliazzo, Paturi and Zane [34] introduced SETH to address the complexity of the conjunctive normal form satisfiability problem (CNF-SAT). At the time they only considered deterministic algorithms, but nowadays it is common to extend SETH to allow randomization.

Hypothesis 1 (SETH). For every ε > 0 there exists an integer k ≥ 3 such that CNF-SAT on formulas with clause size at most k (the so called k-SAT problem) and n variables cannot be solved in O(2^{(1−ε)n}) time even by a randomized algorithm.

As the clause size k grows, the lower bound given by SETH converges to 2^n. SETH also implies that general CNF-SAT on formulas with n variables and m clauses requires 2^{n−o(n)} poly(m) time. SETH is motivated by the lack of fast algorithms for k-SAT as k grows. It is a much stronger assumption than P ≠ NP, which assumes that SAT requires superpolynomial time. A weaker version, the Exponential Time Hypothesis (ETH), asserts that there is some constant δ > 0 such that CNF-SAT requires Ω(2^{δn}) time.


Both ETH and SETH are used within Fixed Parameter and Exponential Time algorithms as hardness hypotheses, and they imply meaningful hardness results for a variety of problems (see e.g. [20]).

1.1.2 Other hypotheses

SETH is not the only hypothesis used to provide lower bounds on classical complexity using fine-grained reductions. Some other famous hypotheses are the 3-SUM Hypothesis and the All-Pairs Shortest Paths (APSP) Hypothesis.

Definition 1 (3-SUM). Given a set S of n integers from {−n^c, ..., n^c} for some constant c, determine whether there are x, y, z ∈ S such that x + y + z = 0.

Hypothesis 2 (3-SUM Hypothesis). 3-SUM on n integers in {−n^4, ..., n^4} cannot be solved in O(n^{2−ε}) time for any ε > 0 by a randomized algorithm.

The 3-SUM hypothesis was introduced by Gajentaan and Overmars [30] [31] who used it to show that many problems in computational geometry require quadratic time, assuming that 3-SUM does. Quadratic lower bounds for 3-SUM are known in restricted models of computation such as the linear decision tree model in which each decision is based on the sign of an affine combination of at most 3 inputs (see e.g. [27] [26]). However, in the more general linear decision tree model, Kane et al. [36] show that O(n log^2 n) queries suffice to solve 3-SUM, so that such lower bounds should be taken with a grain of salt.
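To make the quadratic baseline concrete, here is a sketch of the standard O(n^2) sort-and-two-pointers algorithm for 3-SUM. This assumes the three summands occupy three distinct positions of S; the function name `has_3sum` is ours:

```python
def has_3sum(S):
    # Standard O(n^2) algorithm: sort, then for each first element scan the
    # remainder with two pointers looking for a pair completing the sum to 0.
    a = sorted(S)
    n = len(a)
    for i in range(n):
        lo, hi = i + 1, n - 1
        while lo < hi:
            t = a[i] + a[lo] + a[hi]
            if t == 0:
                return True
            if t < 0:
                lo += 1          # total too small: move the low pointer up
            else:
                hi -= 1          # total too large: move the high pointer down
    return False
```

The 3-SUM Hypothesis asserts that no algorithm beats this quadratic behavior by a polynomial factor.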

Definition 2 (APSP). Given an n node graph G = (V, E) and integer edge weights w : E → {−M, ..., M} for some M = poly(n), compute for every u, v ∈ V the (shortest path) distance d(u, v) in G from u to v.

Hypothesis 3 (APSP Hypothesis). No randomized algorithm can solve APSP in O(n^{3−ε}) time for ε > 0 on n node graphs with edge weights in {−n^c, ..., n^c} and no negative cycles for large enough c.

The textbook algorithm for APSP is the O(n^3) time Floyd-Warshall algorithm from the 1960s based on dynamic programming. Many other algorithms run in the same time. For instance, one can run Dijkstra's algorithm from every vertex, after computing new non-negative edge weights using Johnson's trick [35]. Following many polylogarithmic improvements (e.g. [29] [18]), the current best APSP running time is a breakthrough n^3/exp(√log n) runtime by R. Williams [48]. Despite the long history, the cubic runtime of the textbook algorithm has remained unchallenged. This motivates the APSP Hypothesis, implicitly used in many papers (e.g. [43]). Its first explicit use as a hardness hypothesis is in [52].
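The textbook O(n^3) dynamic program can be sketched as follows, assuming a dense adjacency-matrix representation with float('inf') for missing edges (a minimal version, not tuned for performance):

```python
def floyd_warshall(w):
    # The O(n^3) textbook APSP algorithm: w is an n x n matrix of edge
    # weights, with float('inf') where there is no edge and 0 on the diagonal.
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Relax: is going through intermediate vertex k shorter?
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```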

1.1.3 Fine-grained reductions

The goal, as described by Williams in [51], is as follows. Consider problem A with textbook runtime a(n) and problem B with textbook runtime b(n). Given a supposed O(b(n)^{1−ε}) time algorithm for B for ε > 0, we want to compose it with another algorithm (the reduction) that transforms instances of A into instances of B, to obtain an algorithm for A running in time O(a(n)^{1−ε′}) for ε′ > 0 (a function of ε). The most common reductions used in complexity are polynomial time (or sometimes logspace) reductions. Williams states that for our purposes such reductions are not sufficient since we truly care about the runtimes a(n) and b(n) that we are trying to relate, and our reductions need to run faster than a(n) time for sure; merely polynomial time does not suffice. Beyond the time restriction, reductions differ in whether they are Karp or Turing reductions, according to Williams. Karp (also called many-one) reductions transform an instance of A into a single instance of B. Turing reductions are allowed to produce multiple instances, i.e. oracle calls to B. If we restrict ourselves to Karp-style reductions, then we would not be able to reduce a search problem to any decision problem: decision problems return a single bit and if we only make one oracle call to a decision problem, in general we would not get enough information to solve the original search problem. We use Turing-style reductions. The most general definition, as defined by Williams, is:

Definition 3 ([51] fine-grained reductions). Assume that A and B are computational problems and a(n) and b(n) are their conjectured running time lower bounds, respectively. Then we say A (a, b)-reduces to B, A ≤_{a,b} B, if for every ε > 0, there exists δ > 0, and an algorithm R for A that runs in time a(n)^{1−δ} on inputs of length n, making q calls to an oracle for B with query lengths n_1, ..., n_q, where

Σ_{i=1}^{q} b(n_i)^{1−ε} ≤ a(n)^{1−δ}.   (1.1)

If A ≤_{a,b} B and B ≤_{b,a} A, we say that A and B are fine-grained equivalent, A ≡_{a,b} B.

The definition implies that if A ≤_{a,b} B and B has an algorithm with running time O(b(n)^{1−ε}), then A can be solved by replacing the oracle calls by the corresponding runs of the algorithm, obtaining a runtime of O(a(n)^{1−δ}) for A for some δ > 0. If A ≡_{a,b} B, then arguably the reason why we have not been able to improve upon the runtimes a(n) and b(n) for A and B, respectively, is the same. Notice that the oracle calls in the definition need not be independent: the ith oracle call might be adaptively chosen, according to the outcomes of the first i − 1 oracle calls.

1.1.4 Results from hardness under SETH

In recent years there have been a considerable number of problems whose hardness has been proven under SETH. Arguably the first such problem was the

Orthogonal vectors problem (OV), which was shown by Williams [47] to require quadratic time under SETH. Many conditional hardness results based on OV and SETH have been discovered, for example for Graph diameter [42], All pairs max flow [38], dynamic graph problems [39], and more. A more extensive listing can be found in [51].

1.2 The quantum query model

All the previous results are given in the classical computational model. In this thesis we will improve upon these results by using the quantum query model. In this section we give the description of this model by Høyer and Špalek [33]. The quantum query model is a so-called oracle model in which the input is given as an oracle so that the only knowledge we can gain about the input is in asking queries to the oracle. The input is a finite bitstring x ∈ {0, 1}^N of some length N, where x = x_1 x_2 ... x_N. The goal is to compute some function F : {0, 1}^N → {0, 1}^m of the input x. Some of the functions we consider are boolean, some not. We use the shorthand notation [N] = {1, 2, ..., N}. The oracle model is called decision trees in the classical setting. A classical query consists of an index i ∈ [N], and the answer of the bit x_i. There is a natural way of modelling a query so that it is reversible. The input is a pair (i, b), where i ∈ [N] is an index and b ∈ {0, 1} a bit. The output is the pair (i, b ⊕ x_i), where the bit b is flipped if x_i = 1. As our measure of complexity, we use the query complexity. The query complexity of an algorithm A computing a function F is the number of queries used by A. The query complexity of F is the minimum query complexity of any algorithm computing F. An alternative measure of complexity would be the time complexity, which counts the number of basic operations used by an algorithm. The time complexity is always at least as large as the query complexity since each query takes one unit step, and thus a lower bound on the query complexity is also a lower bound on the time complexity. For most existing quantum algorithms, including Grover's algorithm, the time complexity is within poly-logarithmic factors of the query complexity. A notorious exception is the so-called Hidden Subgroup Problem which has polynomial query complexity [28], yet polynomial time algorithms are known only for some instances of the problem.
All the algorithms given in this thesis are based upon Grover's search. Their time complexity is within poly-logarithmic factors of their query complexity.
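The reversible query described above is easy to state in code; a minimal sketch (the function name is ours):

```python
def reversible_query(x, i, b):
    # The reversible form of a classical query: input (i, b), output
    # (i, b XOR x_i).  Applying it twice returns the original (i, b),
    # which is what makes the map reversible.
    return i, b ^ x[i]
```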

1.3 Grover search

The lower bounds on complexity given in 1.1.4 all depend on SETH. Although this hypothesis still stands classically, this is not the case for the quantum query model. Ambainis shows in [7] that Grover's search algorithm [32] can be used to solve SAT with time complexity O(2^{n/2}) for any k. To make this clear we will first describe the essential results of Grover's search algorithm. The problem it solves can be described as follows:

Definition 4 (Search). Given an input x_1, ..., x_N ∈ {0, 1} specified by a black box that answers queries. In a query, we input i to the black box and it outputs x_i. Output an i : x_i = 1.

Classically, N queries are needed to solve this problem deterministically. Even a probabilistic algorithm requires Ω(N) queries. Grover obtained the following result using his quantum algorithm [32]:

Theorem 1. Search can be solved in O(√N) quantum queries.

Grover's algorithm is often described as "database search". Although this is partially true, as this algorithm can solve database search in O(√N) queries, it underestimates the real power of the algorithm. One of the strengths ignored by this description is that the algorithm is able to search functions. This means that it can find an input, in a range of N inputs, matching a given output using only O(√N) queries to the function.
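As an illustration only, Grover's algorithm can be simulated classically by tracking the N real amplitudes. This is a sketch under our own naming, assuming a single phase-flip oracle and the standard inversion-about-the-mean diffusion; the simulation itself costs O(N) work per iteration, and the O(√N) count refers to the number of oracle iterations:

```python
import math

def grover_search(oracle, N):
    # Classical amplitude simulation of Grover's algorithm on an N-element
    # search space.  oracle(i) returns True iff x_i = 1.
    amp = [1.0 / math.sqrt(N)] * N                # uniform superposition
    k = sum(1 for i in range(N) if oracle(i))     # known only to the simulator
    if k == 0:
        return None
    iterations = round((math.pi / 4) * math.sqrt(N / k))
    for _ in range(iterations):
        # Oracle step: flip the sign of the marked amplitudes.
        amp = [-a if oracle(i) else a for i, a in enumerate(amp)]
        # Diffusion step: inversion about the mean amplitude.
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]
    # "Measure": return the index with the largest probability.
    return max(range(N), key=lambda i: amp[i] ** 2)
```

For N = 64 with one marked item the simulation performs 6 iterations and the marked index carries almost all of the probability mass.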

1.3.1 Breaking SETH

As an example of this strength we will describe how Grover search can be used to solve SAT. To do this we reduce SAT to Search. We know that there are N = 2^n possible assignments x_1, ..., x_N. Each of these assignments can easily be tested by a function F which returns 1 if it satisfies the formula and 0 otherwise. This function has a complexity of O(l) with l the length of the formula. Using Grover search we can search F for a valid assignment with O(2^{n/2}) calls to F. As said before, this breaks SETH. Thus all complexity lower bounds proven using this hypothesis are also broken on a quantum computer. Or, more specifically, their proof is broken. To actually break the lower bounds a better quantum algorithm has to be found. That is the main goal of this thesis. SETH is not the only hardness hypothesis which is broken on a quantum computer. The 3-SUM and APSP hypotheses are also no longer valid in the quantum query model. Belovs has proven a tight Ω(n^{k/(k+1)}) lower bound on the more general k-SUM problem using the adversary method [13]. This results in an Ω(n^{3/4}) lower bound for 3-SUM. Because the lower bound is tight we expect there to be an algorithm for 3-SUM with only a polylogarithmically higher query complexity. Using a slight modification of our Graph diameter algorithm (section 2.2), APSP can be solved in O(n√m log^{3/2} n) time in the quantum query model. At this point we should also state that ETH isn't broken, meaning that P ≠ NP could still hold on a quantum computer.
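The reduction from SAT to Search can be sketched as follows. The names `satisfies` and `sat_oracle` are ours, and clauses are written as DIMACS-style integer lists; Grover search over the resulting oracle F then needs O(2^{n/2}) queries, where classical exhaustive search may need up to 2^n:

```python
def satisfies(assignment, clauses):
    # assignment: tuple of bools; clauses: list of lists of nonzero ints,
    # where literal v means variable v-1 is true and -v means it is false.
    return all(any(assignment[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses)

def sat_oracle(clauses, n):
    # The function F from the reduction: it maps an index i in [0, 2^n)
    # to 1 iff the i-th assignment (read off from the bits of i) satisfies
    # the formula.  Each call costs time linear in the formula length.
    def F(i):
        bits = tuple(bool((i >> j) & 1) for j in range(n))
        return 1 if satisfies(bits, clauses) else 0
    return F
```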

1.3.2 Results from Grover's algorithm

Since its discovery, Grover's algorithm has been widely studied. Some important results have already been summarized by Ambainis [7]. We paraphrase his summary below as some of these results will also be used in our quantum algorithms (see chapter 2).

1. In general, Grover's algorithm is bounded-error. Given a black-box x_1, ..., x_N ∈ {0, 1} where some x_i are equal to 1, the algorithm might not find any of them with a small probability. However, if we know that the number of i : x_i = 1 is exactly k, then the algorithm can be tuned so that it finds one of them with certainty (probability 1) in O(√(N/k)) steps [15].

2. Moreover, if we know that the number of i : x_i = 1 is exactly k, the algorithm is exactly optimal [53]. The number of queries cannot be improved even by 1. For finding i : x_i = 1 with certainty, the minimum number of queries is known to be exactly

⌈ π / (4 arcsin(1/√(N/k))) − 1/2 ⌉ < (π/4)√(N/k).   (1.2)

If the number of queries t is less than that, the best probability with which any quantum algorithm can find an i : x_i = 1 is exactly the one achieved by running Grover's algorithm with t queries.

3. If k is unknown, O(√N) queries are still sufficient. If k is unknown but it is known that k ≥ k_0, O(√(N/k_0)) queries suffice [14].

4. In this case, the algorithm is inherently bounded-error. There is no quantum algorithm with less than N queries that solves Grover's problem with certainty for arbitrary x_1, ..., x_N [11].

5. If we have an instance x_1, ..., x_N with k elements equal to 1 and would like to find all k of them, Θ(√(Nk)) queries are sufficient and necessary.
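A small sanity check of the exact query count in equation (1.2); the function name is ours:

```python
import math

def exact_grover_queries(N, k):
    # Number of queries for finding a marked item with certainty when
    # exactly k of N items are marked, per equation (1.2).  Note that
    # arcsin(1/sqrt(N/k)) = arcsin(sqrt(k/N)).
    return math.ceil(math.pi / (4 * math.asin(math.sqrt(k / N))) - 0.5)
```

For N = 64 and k = 1 this gives 6 queries, below the (π/4)√(N/k) ≈ 6.28 bound stated in (1.2).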

1.3.3 Amplitude amplification

Amplitude amplification is a generalization of Grover's algorithm which solves the following problem. Let A be a (classical or quantum) algorithm with one-sided error. If the correct answer is "no", A always outputs "no". If the correct answer is "yes", A outputs "yes" with at least some (small) probability ε > 0. How many times do we need to repeat the algorithm to increase its success probability from a small ε to a constant (for example, 2/3)?

Theorem 2. [15] Let A be a quantum algorithm with one-sided error and success probability at least ε > 0. Then there is a quantum algorithm B that solves the same problem with success probability 2/3 by invoking A O(1/√ε) times.

For more details see [15].

1.4 Finding Minima

This section describes an implementation of Grover’s algorithm that finds a global minimum of a function.

Definition 5 (Global Minimum). Given an integer-valued function f(i) of one variable i ∈ {1, 2, ..., N}, specified by a black box that answers queries. The input of a query is i ∈ {1, 2, ..., N}, the output is f(i). Find i such that f(i) ≤ f(j) for any j ≠ i.

Theorem 3. [25] Global Minimum can be solved with O(√N) quantum queries.

Classically, Ω(N) queries are required. We will describe a slightly simplified form of the algorithm, as given by Ambainis in [7]. A more detailed version that includes all technicalities can be found in [25].

1. Choose x uniformly at random from {1, ..., N};

(a) Use Grover's search to search for y with f(y) < f(x).
(b) If the search succeeds, set x = y. Otherwise, stop and output x as the minimum.
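The loop above can be mimicked classically by replacing Grover's search with uniform sampling among the improving indices, which matches the output distribution used in the analysis below (a toy model with our own names, not the quantum algorithm itself):

```python
import random

def find_minimum(f, N, rng=random.Random(0)):
    # Classical simulation of the minimum-finding loop: repeatedly jump to
    # a uniformly random index with a strictly smaller value, until none
    # remains.  The quantum version finds such an index with Grover search.
    x = rng.randrange(N)
    while True:
        better = [y for y in range(N) if f(y) < f(x)]
        if not better:
            return x          # no smaller value exists: global minimum
        x = rng.choice(better)
```

Whatever the random choices, the loop only terminates at the global minimum; the randomness only affects how many rounds it takes.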

We provide Ambainis' sketch of why O(√N) queries are sufficient for this algorithm, on an intuitive but "hand-waving" level [7]. (For a more detailed and rigorous argument, see [25].) For simplicity, assume that, for all x, the values of f(x) are distinct. Let x_0 be the value of x at the beginning of the algorithm and x_i be the value of x after the ith Grover's search. Since x_0 is a random element of {1, ..., N}, f(x_0) will, on average, be the (N/2)th smallest element of {f(1), ..., f(N)}. After the first iteration, x_1 is some element with f(x_1) < f(x_0). By inspecting Grover's algorithm, we can find out that the probabilities of the algorithm outputting x_1 are equal for all x_1 with f(x_1) < f(x_0). Thus x_1 is uniformly random among numbers with f(x_1) < f(x_0). Since f(x_0) was, on average, the (N/2)th smallest element of {f(1), ..., f(N)}, this means that f(x_1) is, on average, the (N/4)th smallest element. By a similar argument, f(x_i) is, on average, the (N/2^i)th smallest element in {f(1), ..., f(N)}. We now remember that Grover's search uses O(√(N/k)) queries where k is the number of solutions. Consider the repetitions of the minimum finding algorithm in order from the last to the first. By the argument above, we would expect that, in the last iteration before finding the minimum, k ≈ 1, then, in the iteration before that, k ≈ 2, then k ≈ 4 and so on. Then, the total number of queries in all the repetitions of Grover's search is of order

√(N/1) + √(N/2) + √(N/4) + ... = √N (1 + 1/√2 + 1/√4 + ...)   (1.3)

The term in brackets is a decreasing geometric progression and, therefore, sums up to a constant. This means that the sum of equation 1.3 is of order O(√N).

1.5 The sensitivity method

Even the quantum query model does not allow for unlimited improvement of algorithm complexity. It is thus useful to know a lower bound for the quantum complexity of a problem. Here we will always look at quantum query complexity. We provide a lower bound on the number of queries to the input oracle that any algorithm has to make in order to give a correct result with high probability for a certain problem. In this thesis we will make use of the sensitivity method to achieve such lower bounds. The sensitivity of a function is a rather intuitive measure expressing how much the result of a function is impacted by small changes in the input. It can be more formally defined as follows:

Definition 6 (Sensitivity). Given a function f : [q]^n → [l] with q, n, l ∈ N. f is sensitive to variable i on input x ∈ dom(f) if there exists y ∈ dom(f) such that f(x) ≠ f(y) and x_j = y_j for all j ≠ i. The sensitivity of f on x is the number of sensitive variables. The sensitivity s(f) of f is the maximal sensitivity of f on x over all x ∈ dom(f).

This measure is useful because it provides a lower bound on the complexity of f.
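For small functions, s(f) can be computed by brute force directly from definition 6; a sketch for total boolean functions (q = l = 2, function name ours):

```python
from itertools import product

def sensitivity(f, n):
    # Exhaustively computes s(f) for a total boolean function f on n bits:
    # the maximum, over all inputs x, of the number of positions i whose
    # flip changes f(x).
    best = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        s_x = sum(1 for i in range(n)
                  if f(x[:i] + (1 - x[i],) + x[i + 1:]) != fx)
        best = max(best, s_x)
    return best
```

For OR on n bits the sensitivity is n, witnessed at the all-zeros input, while majority on 3 bits has sensitivity 2.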

Theorem 4. Given a function f : [q]^n → [l] with q, n, l ∈ N, Ω(√(s(f))) is a lower bound on the quantum query complexity of f.

Proof: The idea is to reduce ÕR_{s(f)} to f.

Definition 7. ÕR_n is a function from {0, 1}^n to {0, 1}:
ÕR_n(x) = 0 if the Hamming weight of x is 0;
ÕR_n(x) = 1 if the Hamming weight of x is 1;
otherwise ÕR_n is undefined.

Let x be an input of f with sensitivity s(f). Let y^(1), ..., y^(s(f)) be the inputs that differ in exactly one variable from x and for which f(x) ≠ f(y^(i)). Suppose there is an algorithm A solving f. We will describe an oracle O for f that takes as input an oracle z for ÕR_{s(f)} such that A(O(z)) = f(x) if and only if ÕR_{s(f)}(z) = 0. The oracle works as follows:

• The ith bit is requested.
• If the ith bit is not a sensitive variable of x: return x_i.
• Let y^(j) be the input for which the ith bit is different from x.
• Check the jth bit of z.
• If z_j = 0 return x_i, otherwise return y^(j)_i.

For each call to O there is at most one call to z. Thus if A can solve f in q queries, then ÕR_{s(f)} can be solved in q queries. We know that the query complexity lower bound for ÕR_n is Ω(√n) [11]. Thus q = Ω(√(s(f))).

Chapter 2

Quantum algorithms

As discussed in section 1.3.1, the use of a quantum computer allows SETH to be broken. Yet this hypothesis is one of the key hypotheses in the study of fine-grained complexity [51]. The implication is that all proofs based on SETH are no longer valid. This creates a set of problems which have a classically proven lower bound on complexity (under SETH), but have no such lower bound in the quantum query model. This makes this set of problems particularly interesting for quantum algorithm research. Finding a quantum algorithm which breaks these classical lower bounds means finding an algorithm which is better than any possible classical algorithm for the same problem. This is of course under the assumption that SETH is not broken classically, which in itself would be a major breakthrough. Williams provides us with a list of such problems in [51]. In this chapter we will look at some problems in this list and try to find a quantum algorithm for each of them. We also provide new lower bounds on the quantum query complexity for some of the problems. An overview of these results can be found in table 4.2.

2.1 Orthogonal vectors

The Orthogonal vectors problem (OV) is defined as follows:

Definition 8 (OV). Let d = ω(log n); given two sets A, B ⊂ {0, 1}^d with |A| = |B| = n, determine whether there exist a ∈ A, b ∈ B so that a · b = 0, where a · b := Σ_{i=1}^{d} a[i] · b[i].

There is also a more general version called the k-OV problem which generalizes to k sets for k ≥ 2.

Definition 9 (k-OV). Let d = ω(log n); given k sets A_1, ..., A_k ⊂ {0, 1}^d with |A_1| = ... = |A_k| = n, determine whether there exist a_1 ∈ A_1, ..., a_k ∈ A_k so that a_1 · ... · a_k = 0, where a_1 · ... · a_k := Σ_{i=1}^{d} Π_{j=1}^{k} a_j[i].

The OV problem and its generalization k-OV not only have a lower bound proven under SETH [47], they are also the basis for many other fine-grained reductions. This lower bound is the now widely used k-OV hypothesis.

Hypothesis 4 (k-OV Hypothesis). No randomized algorithm can solve k-OV on instances of size n in n^{k−ε} poly(d) time for constant ε > 0.

At this point it has to be noted that in some special cases this hypothesis has already been broken classically. For example, Williams and Yu [50] showed that the 2-OV Hypothesis is false when operations are over the ring Z_m, or over the field F_m for any prime power m = p^k. In the first case, OV can be solved in O(nd^{m−1}) time, and in the second case, in O(nd^{p(k−1)}) time. But at this moment no classical algorithm exists that breaks the k-OV Hypothesis in its most general form. Simple exhaustive search already gives an O(n^k d) time algorithm. The best known algorithm only slightly improves on this with a run time of n^{k−1/Θ(log(d/log n))} [19][2]. As exhaustive search is already one of the best solutions for this problem, it is a perfect candidate for a quantum speedup using Grover's algorithm (see section 1.3). To make this more clear we will reduce the k-OV problem to the search problem. Given a set {x_1, ..., x_N} = A_1 × ... × A_k with N = n^k and a function F that, given i, calculates a_1 · ... · a_k for x_i = {a_1, ..., a_k} and returns 1 if a_1 · ... · a_k = 0 and 0 otherwise, find i such that F(i) = 1. Grover tells us that this problem can be solved with O(√N) = O(n^{k/2}) calls to F. It is clear that F has a complexity of O(dk). This brings the total complexity to O(n^{k/2} dk), which breaks the k-OV Hypothesis. Now we will provide a lower bound for the 2-OV problem of Ω(n^{2/3}). To achieve this we reduce the two-to-one Collision problem to 2-OV.
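The reduction from k-OV to Search can be sketched as follows; the function name `ov_oracle` is ours, and vectors are tuples of 0/1 entries:

```python
def ov_oracle(sets):
    # The function F from the reduction: an index i in [0, n^k) selects one
    # vector from each of the k sets; F(i) = 1 iff the chosen vectors are
    # orthogonal, i.e. no coordinate is 1 in all of them.
    n = len(sets[0])
    d = len(sets[0][0])
    def F(i):
        chosen = []
        for s in sets:
            i, r = divmod(i, n)   # peel off one base-n digit of the index
            chosen.append(s[r])
        return 1 if all(any(v[j] == 0 for v in chosen) for j in range(d)) else 0
    return F
```

Running Grover search over this oracle uses O(√(n^k)) = O(n^{k/2}) calls to F, each of cost O(dk).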

Definition 10 (Collision problem). Given a function f : [n] → [q] as an oracle, the Collision problem is to find two distinct inputs i and j such that f(i) = f(j), under the promise that such inputs exist. The two-to-one Collision problem has the promise that f is a two-to-one function.

The Collision problem has a lower bound of Ω(n^{1/3}) [45]. The reduction to 2-OV is fairly simple. First we define two bijective mappings m : [q] → V and m′ : [q] → W with V, W ⊆ {0, 1}^d. These mappings are chosen such that m(i) · m′(j) = 0 for i, j ∈ [q] if and only if i = j. The two input sets to the 2-OV problem are given by respectively applying m and m′ to the restriction of the oracle function to two random sets of O(√n) inputs. If the oracle is two-to-one, a collision will be found with high probability, by the Birthday Paradox.

2.2 Graph diameter

The Graph diameter is the maximal distance between any vertex and any other vertex in a graph. More formally it can be defined as follows:

Definition 11 (Graph Diameter). For a weighted graph G = (V, E) the diameter is d = max_{u,v∈V} δ(u, v), with δ(u, v) defined as the length of the shortest path between u and v in G.

This problem can be solved by solving all pairs shortest paths (APSP) and then finding the maximum, with a complexity of Õ(mn) in a graph with n nodes and m edges. To improve on this there has been an effort to find approximate solutions, but even here the range for improvements is rather small. The best classical algorithm has complexity Õ(m√n + n^2) [42]. It is also shown that if hypothesis 5 can be broken, SETH can be refuted [42].

Hypothesis 5. There is no O(m^{2−ε}) time (3/2 − δ)-approximation algorithm for the diameter of undirected unweighted graphs, for some constants ε, δ > 0.

Here we will give an O(n√m log^{3/2} n) quantum algorithm for finding the Graph diameter. To achieve this we combine an algorithm for single source shortest paths (SSSP) with the minimum finding algorithm discussed in section 1.4. [24] gives an O(√(nm) log^{3/2} n) quantum algorithm for SSSP. This algorithm can easily be modified into a function F which returns only the maximal distance from v_0: max_{u∈V} δ(v_0, u). Now we apply a slightly adjusted version of the minimum finding algorithm of section 1.4 to find the node for which F is maximal with O(√n) queries to F. Although the above algorithm is bounded-error, and not exact, we can get an arbitrarily close approximation independent of the size of G. This means that hypothesis 5 can be broken on a quantum computer.
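Classically, the structure of this algorithm looks as follows, with Dijkstra standing in for the quantum SSSP subroutine F and a plain max standing in for the O(√n)-query maximum-finding step (names are ours; this sketch has classical running time, not the quantum query count):

```python
import heapq

def eccentricity(adj, s):
    # Classical Dijkstra stand-in for the subroutine F: returns the largest
    # shortest-path distance from s.  adj maps a vertex to (neighbor, weight)
    # pairs; the graph is assumed connected.
    dist = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return max(dist.values())

def diameter(adj):
    # The quantum algorithm replaces this outer max over all sources by the
    # maximum-finding routine of section 1.4, using O(sqrt(n)) calls to F.
    return max(eccentricity(adj, s) for s in adj)
```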

2.3 Closest pair in d-Hamming space

The problem we will solve in this section is to find the closest pair in a d- Hamming space. Definition 12 gives a more formal description of the problem.

Definition 12 (Closest pair in d-Hamming space). Given Q, D ⊂ {0, 1}^d with |Q| = |D| = n, find u ∈ Q and v ∈ D such that h(u, v) is minimal. Here h(u, v) is the Hamming distance between u and v.

[5] solves this problem classically in O(n^{2 − 1/O((d/log n) log^2(d/log n))}) time. This is very close to the lower bound of n^{2−ε} 2^{O(d)} for ε > 0 under SETH [6]. We already know that SETH is broken on a quantum computer, thus we aren't necessarily restricted by the same lower bound. The problem is fairly similar to the 2-OV problem described in section 2.1. The only difference is that instead of searching for an exact value in a function of two vectors, we are searching for a minimal value. To solve this we can use the minimum finding algorithm described in section 1.4, which has exactly the same complexity as the standard search algorithm. Following the same reasoning as in section 2.1, we get an O(n) algorithm to find the closest pair in a d-Hamming space. We can also use the same idea as for 2-OV to prove a lower bound for Closest pair in a d-Hamming space. We reduce the Collision problem to Closest pair in a d-Hamming space. In this case the reduction is even easier as we only need one mapping m : [q] → {0, 1}^d, which can simply be the binary representation of [q]. h(m(f(i)), m(f(j))) = 0 if and only if f(i) = f(j), thus if such i, j exist then their corresponding pair of points will have a minimal Hamming distance. This means that if the Hamming distance of the pair returned by Closest pair in d-Hamming space is not 0, no collision was found. We again only need the restriction of the oracle function to two random sets of O(√n) inputs. If the oracle is two-to-one, a collision will be found with high probability, by the Birthday Paradox. This results in an Ω(n^{2/3}) lower bound for Closest pair in a d-Hamming space. With the same algorithm we can improve all point pair distance problems described in [49]. We also give a similar proof for the lower bound, using a mapping to the respective point space. We have thus given a quantum algorithm with complexity O(n) and a quantum query lower bound Ω(n^{2/3}) for all point pair distance problems.
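The structure of the closest-pair algorithm, with the quantum minimum finding over all n^2 pairs replaced by a plain min (a classical stand-in, names ours; the quantum version needs only O(√(n^2)) = O(n) queries to h):

```python
def closest_pair_hamming(Q, D):
    # Brute-force stand-in for the quantum routine: minimize the Hamming
    # distance h over all pairs in Q x D.  Vectors are 0/1 tuples.
    def h(u, v):
        return sum(a != b for a, b in zip(u, v))
    return min(((u, v) for u in Q for v in D), key=lambda p: h(*p))
```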

2.4 All pairs max flow

To define the All pairs max flow (APMF) problem, we should first define the max flow problem, see definition 13. The all pairs version extends the problem by searching for the maximal max flow(u, v) over all pairs u, v ∈ V.

Definition 13 (Max flow). Given a graph G = (V, E), a source s ∈ V, a sink t ∈ V and a capacity function c : E → R^+. A flow is a mapping f : E → R^+, denoted by f(u, v), subject to the following two constraints:

1. f(u, v) ≤ c(u, v) for each (u, v) ∈ E

2. Σ_{u:(u,v)∈E} f(u, v) = Σ_{u:(v,u)∈E} f(v, u) for each v ∈ V \ {s, t}

The value of the flow is defined by |f| = Σ_{v:(s,v)∈E} f(s, v), where s is the source of the network. It represents the amount of flow passing from the source to the sink. The problem of max flow is to find f such that |f| is maximal.

Krauthgamer and Trabelsi show in [38] that APMF on directed graphs cannot be solved significantly faster than Ω(n^3), even for sparse graphs with m = O(n). For more general m they show that Ω(mn^2) time is necessary. All these lower bounds are valid under SETH.

The current best classical algorithm to solve the max flow problem on directed graphs [40] does this in Õ(m√n log^2 U) time, where U is the capacity ratio. Using this algorithm we can create a quantum algorithm that solves APMF in Õ(mn^{3/2} log^2 U). This breaks the classical lower bound for sparse graphs with m = o(n^{3/2}). The quantum algorithm is fairly simple. We again use the minimum finding algorithm of section 1.4 to search for a pair (u, v) ∈ V × V such that max flow(u, v) is maximal. This requires O(n) calls to max flow.

2.5 Dynamic graph problems

Abboud and Williams list a series of dynamic graph problems and provide lower bounds for them in [4]. A dynamic graph problem is a graph problem for which we want to know the result under small variations of the graph; these could be edge or node insertions/deletions. The idea is that there is no need to recalculate the whole problem for each small adjustment. Some of the lower bounds they provide also make use of SETH. This again makes these lower bounds interesting for quantum complexity research. First we will look at the general setting of quantum algorithms for dynamic graph problems and then we discuss some problems in more detail. Dynamic problems teach us one of the weaknesses of quantum algorithms based upon Grover search. For this we first look at the general idea behind classical algorithms for dynamic problems: find an intermediate state such that finding the corresponding state for a small input change is easy and finding the problem solution from the intermediate state is easy. The strength of quantum algorithms based upon Grover search, such as we have been using thus far, is that we do not need the intermediate results, but only the end result. For example, in our algorithm for APMF discussed in section 2.4, not all flows between each pair of points are calculated. This is how the algorithm achieves a speedup over its classical counterpart, because fewer queries to this function are needed. But the strength of this type of quantum algorithm is also its weakness, as there is no intermediate state which can be used for further calculations. This does not mean that there can be no dynamic quantum algorithm, but a new approach is needed. In this study we look at non-dynamic quantum approaches to the problem; this means we do not reuse any information from previous calculations. Sometimes this results in lower bounds similar to the classical lower bounds, while in other cases we are able to break the classical lower bound by a small margin.
In all cases we give an algorithm which improves upon the best known classical algorithm.

2.5.1 Single source reachability count

Definition 14 (Single source reachability count). Given a directed graph G = (V, E) and a fixed source vertex s ∈ V, determine whether the number of vertices reachable from s is greater than l, under edge insertions and deletions.

The classical lower bound for dynamic Single source reachability count (SSR#) is Ω(n), with n the number of vertices [4]. The best known classical algorithm solves it in O(n^{1.575}) time using the full transitive closure [44]. We prove a lower bound of Ω(√(l(n − l))) on the quantum query complexity of the non-dynamic version of the problem, and we give an algorithm with quantum query complexity O(l√(n log n)).

We first prove a lower bound similar to the classical one, showing that we cannot improve upon the classical lower bound with a non-dynamic algorithm for l = Ω(n). For this we use the sensitivity method described in section 1.5. We describe a graph G = (V, E) using boolean variables x_ab for a, b ∈ V, with x_ab = 0 if (a, b) ∉ E and x_ab = 1 otherwise; the booleans thus encode whether an edge is in G.

Now we want to estimate the sensitivity of SSR#. Consider a graph G0 in which exactly l vertices are reachable from s. Adding one extra reachable vertex would change the result of SSR#, and there are l(n − l) edges that can connect such an extra vertex. SSR# is therefore sensitive on G0 in l(n − l) variables. This gives us a lower bound of Ω(√(l(n − l))) on the quantum query complexity of SSR#. If l = Θ(n) and n − l = Θ(n) we match the classical lower bound, but otherwise there is still room for improvement.

We now describe a quantum algorithm for SSR# with quantum query complexity O(l√(n log n)). [24] describes an algorithm to solve Single source reachability in O(√(nm log n)) queries. The algorithm works as follows:

1. Initially the edge set A is empty.

2. Let S = {s} be the set of reachable vertices found so far, and T = {s} a stack of vertices to be processed.

3. While T ≠ {} do:

   (a) Let u be the topmost vertex of stack T.
   (b) Search for a neighbor v of u not in S.
   (c) If this succeeds, add (u, v) to A, add v to S, and push v onto T.
   (d) Otherwise, remove u from T.
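The steps above can be sketched classically; the Grover neighbor search in step (b) is replaced here by a plain linear scan, and the function name and dictionary adjacency format are illustrative choices, not part of the thesis:

```python
def reachable_more_than(adj, s, l):
    """Classical sketch of the procedure above: do more than l
    vertices become reachable from s?  In the quantum algorithm of
    [24] the neighbor search below is a Grover search over the
    adjacency row of u; here it is a linear scan, so only the
    control flow is faithful."""
    A = []        # tree edges found so far (the edge set A)
    S = {s}       # reachable vertices found so far
    T = [s]       # stack of vertices still to be processed
    while T:
        u = T[-1]
        # step (b): look for a neighbor v of u that is not yet in S
        v = next((w for w in adj.get(u, []) if w not in S), None)
        if v is not None:                  # step (c)
            A.append((u, v))
            S.add(v)
            T.append(v)
            if len(S) > l:                 # early exit used for SSR#
                return True
        else:                              # step (d)
            T.pop()
    return len(S) > l
```

The early exit once |S| > l is what limits the tree A to at most l edges and yields the O(l√(n log n)) bound below.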

The advantage we have is that we can stop earlier, as we only need to know whether |S| > l. Now we can calculate the complexity. For any vertex u, let b_u⁺ be its out-degree in the tree A produced by the algorithm and d_u⁺ its out-degree in the original graph G. Then the total number of queries spent in finding the neighbors of u is of the order of ∑_{t=1}^{b_u⁺} √((d_u⁺/t) log n), which is in O(√(b_u⁺ d_u⁺ log n)).

∑_{u∈S} √(b_u⁺ d_u⁺ log n) ≤ √(∑_{u∈S} b_u⁺) · √(∑_{u∈S} d_u⁺) · √(log n) = O(l √(n log n))

For small l = o(√n) we can thus break the classical lower bound. For l = O(n) we still improve upon the best known classical algorithm.

2.5.2 2 Strong components

To define the dynamic 2 Strong components (2SC) problem we first need to define the concept of a strongly connected component in a directed graph.

Definition 15 (Strongly connected component). Given a directed graph G = (V, E), a subset C of V is called a strongly connected component iff for every x, y ∈ C there exist a path from x to y and a path from y to x in G that only use nodes in C, and there is no v ∈ V \ C such that for all x ∈ C there exist a path from x to v and a path from v to x in G.

Definition 16 (2 Strong components). Given a directed graph G = (V,E), determine if there are 2 or more strongly connected components in G under edge insertions and deletions.

Under SETH the lower bound for the complexity of 2SC is Ω(n) [4]. The best known classical algorithm solves it in O(n^{1.575}) using the full transitive closure [44]. Using the sensitivity method described in section 1.5 we can prove the same lower bound on the quantum query complexity for the non-dynamic case of this problem.

The sensitivity of 2SC can be determined similarly to that of SSR#. We describe our input graph G = (V, E) using boolean variables x_ab for a, b ∈ V, with x_ab = 0 if (a, b) ∉ E and x_ab = 1 otherwise. Now suppose we have an input G0 = (V0, E0) with the following conditions:

1. it has three strongly connected components A, B, C ⊂ V0, each of size Ω(n/3);

2. there exist an a ∈ A and a b ∈ B such that (a, b) ∈ E0.

Adding any edge (u, v) with u ∈ B and v ∈ A to G0 would reduce the number of strongly connected components to two, and there are Ω(n²) such edges. We thus find a lower bound of Ω(n) on the quantum query complexity, which matches the classical lower bound. It is, however, possible to prove an even higher lower bound: using a reduction from Parity, a lower bound of Ω(√(nm)) can be proven for strong connectivity [24]. Strong connectivity is easily reduced to 2SC by adding a new strongly connected component to the graph.

There exists a quantum algorithm for finding the number of strong components with complexity O(√(nm log n)) [24]. This improves upon the best known classical algorithm and is also close to optimal.
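The sensitivity construction can be checked on a small instance. The sketch below counts strongly connected components with Kosaraju's algorithm (an illustrative choice of subroutine, not part of the thesis): three 2-cycles A = {0,1}, B = {2,3}, C = {4,5} plus the required edge (0, 2) from A to B give three components, and any edge back from B to A merges A and B:

```python
def scc_count(n, edges):
    """Count strongly connected components (Kosaraju's algorithm)."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    order, seen = [], [False] * n
    def dfs1(u):                      # first pass: finish-time order
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs1(u)

    comp = [None] * n
    def dfs2(u, c):                   # second pass: reversed graph
        comp[u] = c
        for v in radj[u]:
            if comp[v] is None:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] is None:
            dfs2(u, c)
            c += 1
    return c

# G0: cycles A = {0,1}, B = {2,3}, C = {4,5} and the edge (a, b) = (0, 2)
edges = [(0, 1), (1, 0), (2, 3), (3, 2), (4, 5), (5, 4), (0, 2)]
assert scc_count(6, edges) == 3
for u in (2, 3):                      # any edge from B back to A
    for v in (0, 1):
        assert scc_count(6, edges + [(u, v)]) == 2
```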

2.5.3 Connected subgraph

The dynamic Connected subgraph problem has a linear lower bound under SETH [4]. We show the same lower bound on the quantum query complexity of non-dynamic solutions. Classically, the best solution for this problem is again calculating the transitive closure, giving an O(n^{1.575}) algorithm [44].

Definition 17 (Connected subgraph). Given an undirected graph G = (V, E) and a vertex subset S ⊂ V, is the subgraph of G containing only the vertices of S connected, under node insertions into and deletions from S?

Again we will use sensitivity to provide a lower bound for this problem, with the same description of the graph by boolean variables. As the graph is undirected, x_ab = x_ba for all a, b ∈ V.

Now consider an input with two vertex sets A, B ⊂ S of size n/2, with n = |S|, which are internally connected but with no edge between A and B. Adding any edge connecting these two components would make the subgraph connected, and there are Ω(n²) such edges. This results in a lower bound of Ω(n) on the quantum query complexity of the non-dynamic version of the problem, which is close to the classical lower bound for the dynamic problem if |S| = O(|V|).

[24] gives an algorithm for connectivity matching this lower bound; they also prove the lower bound itself using a reduction from parity to connectivity.
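The construction can be verified with a few lines of union-find; the helper below and the specific 4-vertex instance are illustrative only:

```python
def subgraph_connected(edges, S):
    """Is the subgraph induced by vertex set S connected? (union-find)"""
    S = list(S)
    parent = {v: v for v in S}
    def find(v):                       # find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        if u in parent and v in parent:   # keep only edges inside S
            parent[find(u)] = find(v)
    return len({find(v) for v in S}) <= 1

# Sensitivity instance: A = {0,1} and B = {2,3}, internally connected,
# with no edge between them; any single A-B edge makes it connected.
edges = [(0, 1), (2, 3)]
assert not subgraph_connected(edges, {0, 1, 2, 3})
assert all(subgraph_connected(edges + [(a, b)], {0, 1, 2, 3})
           for a in (0, 1) for b in (2, 3))
```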

2.5.4 S,T - reachability

Definition 18 (S, T-reachability). Given a directed graph G = (V, E) and two fixed vertex subsets S, T ⊆ V, determine if there exist a t ∈ T and an s ∈ S such that t is not reachable from s, under edge insertions and deletions.

The classical lower bound for dynamic S, T-reachability under SETH is Ω(n²), with n the number of vertices, and the best classical algorithm matches this lower bound [22][23]. We can give a quantum algorithm for the non-dynamic version with query complexity O(n√(m log n)).

We use the Single source reachability algorithm described in [24]. This algorithm builds a full tree of all vertices reachable from the given source vertex using O(√(nm log n)) queries. We can modify it to return 1 if all vertices of T are in this tree and 0 otherwise, without increasing the complexity. Then we simply do a Grover search over all s ∈ S for a vertex s on which our modified algorithm returns 0.

Because of the extra layer of Grover search, we need more repetitions of the inner algorithm to achieve a success probability independent of n. This causes an extra log n factor, resulting in a total complexity of O(n√(m log n)).
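A classical sketch of this procedure follows; both the outer Grover search over S and the inner quantum tree construction of [24] are replaced by plain loops, so only the logical structure is faithful:

```python
def st_reachability(adj, S, T):
    """Return True iff some t in T is unreachable from some s in S.
    Classical stand-in: the outer loop plays the role of the Grover
    search over S, the inner DFS the role of the O(sqrt(nm log n))
    reachability-tree subroutine of [24]."""
    for s in S:
        reached, stack = {s}, [s]
        while stack:                         # build the tree from s
            u = stack.pop()
            for v in adj.get(u, []):
                if v not in reached:
                    reached.add(v)
                    stack.append(v)
        if not set(T) <= reached:            # modified subroutine returns 0
            return True
    return False
```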

2.6 Other problems

Williams also lists some more problems which have a proven lower bound under SETH [51]. In this section we discuss why we were not able to provide any improvements for these problems.

Some of those problems can be grouped under string matching problems. A quadratic lower bound is provided for local alignment [3], LCS [1][17], edit distance [10] and Fréchet distance [16]. All these problems also have an algorithm matching the complexity lower bound, based on dynamic programming. It is hard to improve upon these classical algorithms using Grover-search-based quantum algorithms. The original search space for most of these problems is much larger than O(n⁴): for example, the search space of edit distance is of the order O(3ⁿ) if you have three edit operations (insert, substitute, delete). A simple Grover search for a solution over all outputs would result in an algorithm with at least the same complexity as the best known classical solutions, and most probably a much higher complexity. This is of course the most basic approach; the next step is to look at some of the classical algorithms and improve them, for example by applying Grover search where possible. The problem here is that the algorithms for the string matching problems are based on dynamic programming. This means that all partial results are reused for other partial results and for the end result. This is a mismatch with Grover search, which can only provide a speedup when only the end result is needed, for example the maximum of a function, and all other values of that function become obsolete.

Another problem is incremental and decremental max flow, which has an Ω(m^{1−ε}) lower bound on its amortized update time under SETH [21]. This is again a dynamic graph problem as discussed in section 2.5, which means the same issues apply for finding a good dynamic solution. This time we are not able to break that bound using a non-dynamic quantum algorithm.
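To make the dynamic-programming reuse for the string problems above concrete, here is the textbook edit-distance recurrence; every cell depends on three earlier cells and is itself reused by three later ones, which is exactly the intermediate-result sharing that Grover search cannot exploit:

```python
def edit_distance(a, b):
    """Standard dynamic program for edit distance with the three
    operations insert, delete, substitute."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i              # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j              # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # substitute/match
    return d[m][n]
```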
The best quantum algorithm for max flow, by Ambainis and Špalek, has complexity O(min(n^{7/6} √m U^{1/3}, √(nU) m) log n), where U is the capacity ratio [8]. The current best classical algorithm for max flow on directed graphs runs in Õ(m √n log²(U)) time [40], which is still slightly better.

Chapter 3

Span programs

Span programs were introduced by Karchmer and Wigderson [37] as a linear algebraic model for computing Boolean functions. They have been used to give quantum algorithms, for example for the Majority function [37] and the Clique problem [9].

Definition 19 (span program). A span program is defined on a linear space W over a field K. The input of the span program is a set of boolean variables x1, ..., xn and their negations. Each of these 2n literals has an associated set of vectors which span a subspace of W. Let w ≠ 0 be a specified target vector.

This span program defines a Boolean function f(x1, ..., xn) such that f(x1, ..., xn) = 1 iff w ∈ U(x1, ..., xn). Here U(x1, ..., xn) is the subspace spanned by the subspaces associated to all TRUE literals xᵢ or ¬xᵢ.

These span programs became particularly interesting for quantum algorithms after the paper by Reichardt and Špalek, who showed that any span program can be efficiently evaluated by a quantum algorithm [41]. More specifically, they showed that any span program has quantum complexity √(ws₊ · ws₋), where ws₊ and ws₋ are respectively the positive and negative witness size of the span program. Let D[W] be the dimension of W for any vector space W.

Definition 20 (positive witness size). A positive witness is a linear combination of basis vectors v1, ..., vk of the subspace U(x1, ..., xn) such that ∑ᵢ αᵢ vᵢ = w. Such a witness exists iff f(x1, ..., xn) = 1.
The positive witness size is ws₊(x1, ..., xn) = ∑_{i=0}^{D[U(x1,...,xn)]} |αᵢ|², and ws₊ = max_{x̄ ∈ f⁻¹(1)} ws₊(x̄).

Definition 21 (negative witness size). A negative witness is a vector β such that ⟨β, vᵢ⟩ = 0 for all i ∈ {0, ..., k} and ⟨β, w⟩ = 1, with v1, ..., vk basis vectors of the subspace U(x1, ..., xn). Such a witness exists iff f(x1, ..., xn) = 0.
The negative witness size is ws₋(x1, ..., xn) = ∑_{i=0}^{D[W]} |⟨β, vᵢ⟩|², and ws₋ = max_{x̄ ∈ f⁻¹(0)} ws₋(x̄).

3.1 st-connectivity

Here we will look at an interesting use case of span programs for defining quantum algorithms: the algorithm by Belovs and Reichardt to solve st-connectivity [12].

Definition 22 (st-connectivity). Given an undirected n-vertex graph G and two vertices s and t, determine whether there is a path from s to t in G.

Theorem 5. Consider the st-connectivity problem on a graph G given by its adjacency matrix. Assume there is a promise that if s and t are connected by a path, then they are connected by a path of length at most d. Then the problem can be decided in O(n√d) quantum queries [12].

Define a span program over the vector space Rⁿ with the vertex set of G as orthonormal basis. The target is w = |t⟩ − |s⟩. For each vertex pair {u, v} in G, add |u⟩ − |v⟩ as an input vector corresponding to the variable that indicates whether or not the edge (u, v) is in G.

If there exists a path t = u0, u1, ..., um = s in G then there is a positive witness: all vectors |uᵢ⟩ − |uᵢ₊₁⟩ are available and they sum up to |t⟩ − |s⟩. The witness size is at most m ≤ d.

If there exists no such path, s and t are in different components. We can define a negative witness w₋ by ⟨w₋|u⟩ = 1 if u is in the connected component of t and 0 otherwise. Then ⟨w₋|(|t⟩ − |s⟩)⟩ = 1 and ⟨w₋|v⟩ = 0 for all available input vectors v. We know that ⟨w₋|b⟩ ≤ 1 for all other potential input vectors b. As there are at most n² potential input vectors, the negative witness size is ws₋ ≤ n². This results in a total witness size of O(n²d) for this span program and an O(n√d) quantum algorithm for st-connectivity.
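The span program of Theorem 5 can be evaluated numerically on small instances: the target |t⟩ − |s⟩ lies in the span of the available edge vectors |u⟩ − |v⟩ exactly when s and t are connected. The sketch below does the membership test by exact Gaussian elimination over the rationals; the helper names are illustrative only:

```python
from fractions import Fraction

def in_span(vectors, target):
    """Is target in the linear span of vectors?  Exact Gaussian
    elimination over the rationals."""
    pivots = []                          # list of (pivot column, row)
    for v in vectors:
        r = [Fraction(x) for x in v]
        for p, pr in pivots:             # reduce against earlier pivots
            if r[p] != 0:
                f = r[p] / pr[p]
                r = [a - f * b for a, b in zip(r, pr)]
        nz = next((i for i, a in enumerate(r) if a != 0), None)
        if nz is not None:
            pivots.append((nz, r))
    t = [Fraction(x) for x in target]
    for p, pr in pivots:                 # reduce the target the same way
        if t[p] != 0:
            f = t[p] / pr[p]
            t = [a - f * b for a, b in zip(t, pr)]
    return all(a == 0 for a in t)

def st_connected(n, edges, s, t):
    """Span program for st-connectivity [12]: target |t> - |s>,
    one input vector |u> - |v> per available edge {u, v}."""
    vecs = []
    for u, v in edges:
        x = [0] * n
        x[u], x[v] = 1, -1
        vecs.append(x)
    w = [0] * n
    w[t] += 1
    w[s] -= 1
    return in_span(vecs, w)
```

The span of the edge-difference vectors consists of all vectors whose entries sum to zero on each connected component, so |t⟩ − |s⟩ is a member iff s and t share a component.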

3.2 st-distance

The algorithm described in section 3.1 gives us an interesting idea: use span programs to define quantum algorithms on graphs. Let us now extend upon this idea to try and find new quantum graph algorithms.

Definition 23 (st-distance). Given an undirected n-vertex graph G and two vertices s and t, determine whether there is a path from s to t of length k in G.

To solve st-distance we define a span program over R^{n+1}. The orthonormal basis is the vertex set of G extended with |l⟩, which will encode the distance. The target vector is w = |t⟩ − |s⟩ + k|l⟩. For each vertex pair {u, v} in G, add |u⟩ − |v⟩ + |l⟩ as an input vector corresponding to the variable that indicates whether or not the edge (u, v) is in G.

The idea is simple: if there exists a path t = u0, u1, ..., um = s in G then the vectors |uᵢ⟩ − |uᵢ₊₁⟩ + |l⟩ are available and, for m = k, they sum up to |t⟩ − |s⟩ + k|l⟩. Yet the described span program does not encode st-distance. Let us analyze what is wrong.

First of all, our encoding of edges is not symmetric: |u⟩ − |v⟩ + |l⟩ ≠ −(|v⟩ − |u⟩ + |l⟩). This means that we are actually encoding a directed graph, as we differentiate based upon the direction of the edge. But that is not a big problem, as it would already be nice to solve directed st-distance. So the question remains: does the span program encode directed st-distance?

We amend our previous statement: if there exists a directed path t = u0, u1, ..., um = s in G then the vectors |uᵢ⟩ − |uᵢ₊₁⟩ + |l⟩ are available and they sum up to |t⟩ − |s⟩ + k|l⟩. This statement is now true. But we still have to find a negative witness in the case where there is no path of length k.
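In fact, no negative witness can exist in general. Under one consistent reading of the counterexample of Figure 3.1 below (edges s→u1, s→u2, u1→u2, u2→t, which reproduces the combination w = 2U2 − U1 − U3 + U4 stated in the text; the basis order and helper names are assumptions), a few lines suffice to check that the target for k = 1 is already in the span, although the shortest s-t path has length 2:

```python
# Basis order (l, s, u1, u2, t); a directed edge (u, v) contributes
# the input vector |v> - |u> + |l>.  (Assumed reading of Figure 3.1.)
l, s, u1, u2, t = 0, 1, 2, 3, 4

def edge_vec(u, v):
    x = [0] * 5
    x[l] += 1          # the distance-counting |l> component
    x[v] += 1
    x[u] -= 1
    return x

# Assumed edge set: s->u1 (U1), s->u2 (U2), u1->u2 (U3), u2->t (U4)
U = [edge_vec(s, u1), edge_vec(s, u2), edge_vec(u1, u2), edge_vec(u2, t)]
w = [1, -1, 0, 0, 1]   # target |t> - |s> + 1*|l>, i.e. k = 1

# w = -U1 + 2*U2 - U3 + U4, even though no s-t path has length 1
coeffs = [-1, 2, -1, 1]
combo = [sum(c * vec[i] for c, vec in zip(coeffs, U)) for i in range(5)]
assert combo == w
```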

Figure 3.1: A graph together with the matrix U of input vectors and the target w = |t⟩ − |s⟩ + |l⟩. One reading consistent with the combination given below has edges s→u1 (U1), s→u2 (U2), u1→u2 (U3) and u2→t (U4); over the basis (l, s, u1, u2, t):

      U1  U2  U3  U4 |  w
l      1   1   1   1 |  1
s     −1  −1   0   0 | −1
u1     1   0  −1   0 |  0
u2     0   1   1  −1 |  0
t      0   0   0   1 |  1

Figure 3.1 shows an example where there is no path of length 1, but the vector w = |t⟩ − |s⟩ + |l⟩ is in the span of U: w = 2U2 − U1 − U3 + U4, with Uᵢ the i-th column of U. Again our algorithm does not perform as we hoped. In general, for any k, if there exist two paths p1 and p2 between s and t with l(p1) ≠ l(p2), then we can create p3 = a·p1 + b·p2 with a, b ∈ R such that l(p3) = k.

So now we only have an algorithm for directed layered graphs, where there are only connections between two consecutive layers. An example of such a graph can be found in figure 3.2. For these graphs there exists a function η : V → N such that η(v) = η(u) + 1 iff there exists a directed edge (u, v) in G, and min_{v∈V} η(v) = 0. This function maps a node to its layer number. Now we can define our negative witness w₋ by ⟨w₋|u⟩ = η(t) − η(u) and ⟨w₋|l⟩ = 1. Then w₋ is orthogonal to all input vectors, and ⟨w₋|(|t⟩ − |s⟩ + k|l⟩)⟩ = η(s) − η(t) + k, which is zero if and only if the number of layers between s and t is k. But this does not mean there is a path from s to t. To see this, look at figure 3.3: there η(t) − η(s) = 2, but there is no path from s to t. Still, the algorithm we defined gives a positive result, because w = U2 − U1 + U3 + U4.

Figure 3.2: Directed layered graph

Again the algorithm does not behave as expected, but by now the cause of this is quite clear: the inclusion of negative factors of the input vectors within the span allows for unwanted behaviour. This brings us back to the definition of span programs, which are defined on a linear space W over a field K. We would prefer them to be defined over an ordered field, closed under linear combinations with positive coefficients only. This brings us to the next section: cone programs.

3.3 Cone programs

To define cone programs we first need to define closed convex cones.

Definition 24 (closed convex cone). Let C be a nonempty subset of a Hilbert space W. C is a closed convex cone if the following two properties hold:

1. For all x ∈ C and all non-negative real numbers λ, we have λx ∈ C.

2. For all x, y ∈ C, we have x + y ∈ C.

A subset B of C is called a basis of C iff for all x ∈ C there exists a set of positive real numbers {λ1, ..., λk} such that ∑_{i=1}^{k} λᵢ bᵢ = x with b1, ..., bk ∈ B.

In classical computing and mathematics, cone programs are often defined as optimization problems, but here we will use a definition that is similar to the one we used for span programs.

Definition 25 (cone program). A cone program is defined on a Hilbert space W over an ordered field K. The input of the cone program is a set of boolean variables x1, ..., xn and their negations. Each of these 2n literals has an associated set of vectors in W. Let w ≠ 0 be a specified target vector.

This cone program defines a Boolean function f(x1, ..., xn) such that f(x1, ..., xn) = 1 iff w ∈ C(x1, ..., xn). Here C(x1, ..., xn) is the convex cone defined by the vectors associated to all TRUE literals xᵢ or ¬xᵢ.

Figure 3.3: An example where there is no path from s to t but the algorithm we defined gives a positive result. The edges are u1→s (U1), u1→u2 (U2), u2→u3 (U3) and u3→t (U4); over the basis (l, s, u1, u2, u3, t) the input vectors U and the target w = |t⟩ − |s⟩ + 2|l⟩ are:

      U1  U2  U3  U4 |  w
l      1   1   1   1 |  2
s      1   0   0   0 | −1
u1    −1  −1   0   0 |  0
u2     0   1  −1   0 |  0
u3     0   0   1  −1 |  0
t      0   0   0   1 |  1

We can again define a positive and negative witness and their associated sizes, similarly to those defined for span programs.

Definition 26 (positive witness size). A positive witness is a linear combination of basis vectors v1, ..., vk of the convex cone C(x1, ..., xn) such that ∑ᵢ αᵢ vᵢ = w. Such a witness exists iff f(x1, ..., xn) = 1.
The positive witness size is ws₊(x1, ..., xn) = ∑ᵢ |αᵢ|², and ws₊ = max_{x̄ ∈ f⁻¹(1)} ws₊(x̄).

Definition 27 (negative witness size). A negative witness is a vector β such that ⟨β, b⟩ ≥ 0 for all b ∈ B, with B a basis of C(x1, ..., xn), and ⟨β, w⟩ < 0. Such a witness exists iff f(x1, ..., xn) = 0.
The negative witness size is ws₋(x1, ..., xn) = ∑ᵢ |⟨β, bᵢ⟩|², and ws₋ = max_{x̄ ∈ f⁻¹(0)} ws₋(x̄).

We are now hoping to achieve a bound on the quantum query complexity of cone programs similar to the one defined for span programs by [41]. If we could show that cone programs also have a complexity of √(ws₊ · ws₋), this would be a good step towards many new quantum algorithms.

In this research we tried to adapt the quantum algorithm used for span programs so that it solves cone programs. First we describe the main idea of that algorithm. The algorithm defines a state |φ⟩ and two unitary transformations U1 and U2. These are defined in such a way that there exists a state |w⟩ with |φ⟩ ≈ |w⟩ and U1U2|w⟩ = |w⟩ if and only if the output of the span program is 1. The algorithm consists of applying U2U1 t times to |φ⟩ and then performing a measurement distinguishing |φ⟩ from its orthogonal complement. If the span program outputs 1, then (U2U1)^t |φ⟩ ≈ (U2U1)^t |w⟩ = |w⟩ ≈ |φ⟩, and the final measurement gives |φ⟩ with large probability. If the span program outputs 0, there is no such |w⟩, and if we choose t appropriately, we can achieve that (U2U1)^t |φ⟩ is sufficiently close to being orthogonal to |φ⟩.

U2 and U1 act on a (d + n)-dimensional space, where d is the dimension of the Hilbert space W and n is the number of variables of the span program. The state |φ⟩ is defined as the target vector of the span program. We won't go into further details of this algorithm, as we now have enough information to understand the fundamental problem which makes it unsuitable for cone programs.

Suppose we have the closed convex cone C defined by two basis vectors |0⟩ and |1⟩. The vector |w⟩ = |0⟩ + |1⟩ is in C while −|w⟩ is not. The only difference between |w⟩ and −|w⟩ is the global phase, so they are indistinguishable in a quantum measurement. It is thus impossible to define U1, U2 and t such that |w⟩ can be distinguished from −|w⟩ after t applications of U1U2.

Chapter 4

Conclusion

Research in complexity lower bounds, such as the fine-grained reductions, provides us with limitations of the classical computational model. These do not only bound our ability to solve exponentially hard problems; they also limit the class of "simpler" polynomial problems. Table 4.1 shows the classical lower bounds obtained through fine-grained reductions from SETH for all problems in chapter 2. Although not all of these lower bounds are tight, as there is still some room for improvement on some of the upper bounds, they clearly limit our expectations for future algorithms.

Quantum computing provides a framework to overcome those limitations. In this thesis we have provided a list of new quantum algorithms which improve

Problem                          | upper bound                           | lower bound
k-Orthogonal vectors             | n^{k − 1/Θ(log(d/log n))} [19][2]     | n^{k−ε} [47]
2-Orthogonal vectors             | n^{2 − 1/Θ(log(d/log n))} [19][2]     | n^{2−ε} [47]
Graph diameter                   | Õ(m√n + n²) [42]                      | m^{2−ε} [42]
Closest pair in d-Hamming space  | O(n^{2 − 1/(d·log²(d/log n))}) [5]    | n^{2−ε} · 2^{O(d)} [6]
All pairs max flow               | Õ(m n^{5/2} log²(U)) [40]             | Ω(n³) [38]
Single source reachability count | O(n^{1.575}) [44]                     | Ω(n) [4]
2 Strong components              | O(n^{1.575}) [44]                     | Ω(n) [4]
Connected subgraph               | O(n^{1.575}) [44]                     | Ω(n) [4]
S, T-reachability                | O(n²) [22][23]                        | Ω(n²) [4]

Table 4.1: Classical lower and upper bounds for the problems discussed in chapter 2. The upper bounds are given by the fastest currently known algorithm, the lower bounds are proven using reductions from SETH.

on the best known classical algorithms, and in most cases also on the classical lower bounds for those problems. The results can be found in table 4.2. This clearly separates quantum query complexity from classical complexity, assuming the strong exponential time hypothesis is valid classically.

Yet the quantum query model also has its own boundaries and limitations. This is why we have also tried to provide lower bounds on the quantum query complexity, shown in table 4.2, for most of the problems discussed. This gives a clearer picture of which algorithms are optimal and for which problems there is still room for improvement, either on the algorithm or on the lower bound.

In sections 2.5 and 2.6 we encountered dynamic problems, or problems which are classically solved using dynamic programming. These problems seem hard to solve using the currently known methods in quantum computing. Although some non-dynamic solutions are already better than the best known dynamic algorithms, there is still a lot of potential for truly dynamic quantum algorithms.

In chapter 3 we looked at span programs, which provide a new way of defining quantum algorithms. This technique has so far mostly been used for boolean functions, but there are some interesting cases where span programs for graph problems are provided. Starting from an example algorithm, we have also made a case for cone programs, as it seemed possible to describe more graph problems in this paradigm. To be able to use cone programs to define quantum algorithms, we needed a reduction to quantum algorithms similar to the one for span programs. When looking deeper into this proof we discovered that one of the

Problem                          | upper bound               | lower bound
k-Orthogonal vectors             | O(n^{k/2})                | /
2-Orthogonal vectors             | O(n)                      | Ω(n^{2/3})
Graph diameter                   | O(n √m log^{3/2} n)       | /
Closest pair in d-Hamming space  | O(n)                      | Ω(n^{2/3})
All pairs max flow               | Õ(m n^{3/2} log²(U))      | /
Single source reachability count | O(l √(n log n))           | Ω(√(l(n − l))) *
2 Strong components              | O(√(nm log n)) [24]       | Ω(√(nm)) [24] *
Connected subgraph               | O(n) [24]                 | Ω(n) *
S, T-reachability                | O(n √(m log n))           | /

Table 4.2: Bounds on the quantum query complexity of the problems discussed in chapter 2. Both the algorithms providing the upper bound as the proofs for the lower bounds can be found in that chapter. * These lower bounds are only valid for the non dynamic version of the problem. 4.1. FUTURE WORK 31 main backbones of the span program reduction was orthogonality. This led us to the conclusion that it is impossible to reduce cone programs upon quantum algorithms using a similar method as used for span programs. We were also unable to find another reduction and fear that the complexity of cone programs is harder than that of span programs due to the lack of orthogonality between correct and incorrect solutions.

4.1 Future work

Both in section 2.5 and in section 2.6 dynamic problems are listed. Up until now there are no general techniques for solving these problems any better in the quantum query model than in the classical model. An algorithm providing such a dynamic technique would allow improvements on a large set of dynamic problems.

It can also be interesting to expand upon the idea of using span programs to define quantum algorithms for graph problems. There are currently only a few such algorithms, but they could provide a good alternative to the more frequently used Grover-search-based algorithms.

In section 1.3.1 we have also seen that the 3-SUM and APSP hypotheses can be broken in the quantum query model. This creates a list of potentially interesting problems for new quantum research.

Bibliography

[1] Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. Tight hardness results for lcs and other sequence similarity measures. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 59–78. IEEE, 2015.

[2] Amir Abboud, Ryan Williams, and Huacheng Yu. More applications of the polynomial method to algorithm design. In Proceedings of the twenty-sixth annual ACM-SIAM symposium on Discrete algorithms, pages 218–230. Society for Industrial and Applied Mathematics, 2015.

[3] Amir Abboud, V Vassilevska Williams, and Oren Weimann. Consequences of faster sequence alignment. Proc. of 41st ICALP, pages 39–51, 2014.

[4] Amir Abboud and Virginia Vassilevska Williams. Popular conjectures imply strong lower bounds for dynamic problems. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 434–443. IEEE, 2014.

[5] Josh Alman, Timothy M Chan, and Ryan Williams. Polynomial representations of threshold functions and algorithmic applications. arXiv preprint arXiv:1608.04355, 2016.

[6] Josh Alman and Ryan Williams. Probabilistic polynomials and hamming nearest neighbors. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 136–150. IEEE, 2015.

[7] Andris Ambainis. Quantum search algorithms. ACM SIGACT News, 35(2):22–35, 2004.

[8] Andris Ambainis and Robert Špalek. Quantum algorithms for matching and network flows. In Bruno Durand and Wolfgang Thomas, editors, STACS 2006, pages 172–183, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg.

[9] László Babai, Anna Gál, and Avi Wigderson. Superpolynomial lower bounds for monotone span programs. Combinatorica, 19(3):301–319, 1999.


[10] Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic time (unless seth is false). In Proceedings of the forty-seventh annual ACM symposium on Theory of computing, pages 51–58. ACM, 2015.

[11] Robert Beals, Harry Buhrman, Richard Cleve, Michele Mosca, and Ronald De Wolf. Quantum lower bounds by polynomials. Journal of the ACM (JACM), 48(4):778–797, 2001.

[12] Aleksandrs Belovs and Ben W Reichardt. Span programs and quantum algorithms for st-connectivity and claw detection. In European Symposium on Algorithms, pages 193–204. Springer, 2012.

[13] Aleksandrs Belovs and Robert Spalek. Adversary lower bound for the k- sum problem. arXiv preprint arXiv:1206.6528, 2012.

[14] Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp. Tight bounds on quantum searching. Fortschritte der Physik: Progress of Physics, 46(4-5):493–505, 1998.

[15] Gilles Brassard, Peter Hoyer, Michele Mosca, and Alain Tapp. Quan- tum amplitude amplification and estimation. Contemporary Mathematics, 305:53–74, 2002.

[16] Karl Bringmann. Why walking the dog takes time: Frechet distance has no strongly subquadratic algorithms unless seth fails. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 661–670. IEEE, 2014.

[17] Karl Bringmann and Marvin Künnemann. Quadratic conditional lower bounds for string problems and dynamic time warping. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 79–97. IEEE, 2015.

[18] Timothy M Chan. All-pairs shortest paths for unweighted undirected graphs in O(mn) time. In Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, pages 514–523. Society for Industrial and Applied Mathematics, 2006.

[19] Timothy M Chan and Ryan Williams. Deterministic apsp, orthogonal vectors, and more: Quickly derandomizing razborov-smolensky. In Proceedings of the twenty-seventh annual ACM-SIAM symposium on Discrete algorithms, pages 1246–1255. Society for Industrial and Applied Mathematics, 2016.

[20] Marek Cygan, Fedor V Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized algorithms, volume 3. Springer, 2015.

[21] Søren Dahlgaard. On the hardness of partially dynamic graph problems and connections to diameter. CoRR, abs/1602.06705, 2016.

[22] Camil Demetrescu and Giuseppe F Italiano. Fully dynamic transitive closure: breaking through the O(n²) barrier. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages 381–389. IEEE, 2000.

[23] Camil Demetrescu and Giuseppe F Italiano. A new approach to dynamic all pairs shortest paths. Journal of the ACM (JACM), 51(6):968–992, 2004.

[24] Christoph Dürr, Mark Heiligman, Peter Høyer, and Mehdi Mhalla. Quantum query complexity of some graph problems. SIAM Journal on Computing, 35(6):1310–1328, 2006.

[25] Christoph Durr and Peter Hoyer. A quantum algorithm for finding the minimum. arXiv preprint quant-ph/9607014, 1996.

[26] Jeff Erickson. Lower bounds for linear satisfiability problems. In SODA, pages 388–395, 1995.

[27] Jeff Erickson and Raimund Seidel. Better lower bounds on detecting affine and spherical degeneracies. Discrete & Computational Geometry, 13(1):41–57, 1995.

[28] Mark Ettinger, Peter Høyer, and Emanuel Knill. The quantum query complexity of the hidden subgroup problem is polynomial. Information Processing Letters, 91(1):43–48, 2004.

[29] Fedor V Fomin, Daniel Lokshtanov, Saket Saurabh, Michał Pilipczuk, and Marcin Wrochna. Fully polynomial-time parameterized computations for graphs and matrices of low treewidth. ACM Transactions on Algorithms (TALG), 14(3):34, 2018.

[30] Anka Gajentaan and Mark H. Overmars. On a class of O(n²) problems in computational geometry. Computational Geometry, 5(3):165–185, 1995.

[31] Anka Gajentaan and Mark H. Overmars. On a class of O(n²) problems in computational geometry. Computational Geometry, 45(4):140–152, 2012.

[32] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-eighth annual ACM symposium on Theory of computing, pages 212–219. ACM, 1996.

[33] Peter Høyer and Robert Špalek. Lower bounds on quantum query complexity. arXiv preprint quant-ph/0509153, 2005.

[34] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. Journal of Computer and System Sciences, 62(2):367–375, 2001.

[35] Donald B. Johnson. Efficient algorithms for shortest paths in sparse networks. Journal of the ACM (JACM), 24(1):1–13, 1977.

[36] Daniel M. Kane, Shachar Lovett, and Shay Moran. Near-optimal linear decision trees for k-SUM and related problems. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, pages 554–563. ACM, 2018.

[37] Mauricio Karchmer and Avi Wigderson. On span programs. In Proceedings of the Eighth Annual Structure in Complexity Theory Conference, pages 102–111. IEEE, 1993.

[38] Robert Krauthgamer and Ohad Trabelsi. Conditional lower bounds for all-pairs max-flow. ACM Transactions on Algorithms (TALG), 14(4):42, 2018.

[39] Marvin Künnemann, Ramamohan Paturi, and Stefan Schneider. On the fine-grained complexity of one-dimensional dynamic programming. arXiv preprint arXiv:1703.00941, 2017.

[40] Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow. In Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 424–433. IEEE, 2014.

[41] Ben W. Reichardt and Robert Špalek. Span-program-based quantum algorithm for evaluating formulas. arXiv preprint arXiv:0710.2630, 2007.

[42] Liam Roditty and Virginia Vassilevska Williams. Fast approximation algorithms for the diameter and radius of sparse graphs. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 515–524. ACM, 2013.

[43] Liam Roditty and Uri Zwick. On dynamic shortest paths problems. In European Symposium on Algorithms, pages 580–591. Springer, 2004.

[44] Piotr Sankowski. Dynamic transitive closure via dynamic matrix inverse. In 45th Annual IEEE Symposium on Foundations of Computer Science, pages 509–517. IEEE, 2004.

[45] Yaoyun Shi. Quantum lower bounds for the collision and the element distinctness problems. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pages 513–519. IEEE, 2002.

[46] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Review, 41(2):303–332, 1999.

[47] Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theoretical Computer Science, 348(2-3):357–365, 2005.

[48] Ryan Williams. Faster all-pairs shortest paths via circuit complexity. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 664–673. ACM, 2014.

[49] Ryan Williams. On the difference between closest, furthest, and orthogonal pairs: nearly-linear vs barely-subquadratic complexity. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1207–1215. Society for Industrial and Applied Mathematics, 2018.

[50] Ryan Williams and Huacheng Yu. Finding orthogonal vectors in discrete structures. In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms, pages 1867–1877. SIAM, 2014.

[51] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the ICM, 2018.

[52] Virginia Vassilevska Williams and Ryan Williams. Subcubic equivalences between path, matrix and triangle problems. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 645–654. IEEE, 2010.

[53] Christof Zalka. Grover's quantum searching algorithm is optimal. Physical Review A, 60(4):2746, 1999.