Sampling-Based Sublinear Low-Rank Matrix Arithmetic Framework for Dequantizing Quantum Machine Learning
Nai-Hui Chia, University of Texas at Austin, Austin, Texas, USA
András Gilyén, California Institute of Technology, Pasadena, California, USA
Tongyang Li, University of Maryland, College Park, Maryland, USA
Han-Hsuan Lin, University of Texas at Austin, Austin, Texas, USA
Ewin Tang, University of Washington, Seattle, Washington, USA
Chunhao Wang, University of Texas at Austin, Austin, Texas, USA

ABSTRACT

We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC'19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC'19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ2-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.

CCS CONCEPTS

• Theory of computation → Quantum computation theory; Machine learning theory; Sketching and sampling.

KEYWORDS

quantum machine learning, low-rank approximation, sampling, quantum-inspired algorithms, dequantization

ACM Reference Format:
Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. 2020. Sampling-Based Sublinear Low-Rank Matrix Arithmetic Framework for Dequantizing Quantum Machine Learning. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (STOC '20), June 22–26, 2020, Chicago, IL, USA. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3357713.3384314

1 INTRODUCTION

1.1 Motivation

Quantum machine learning (QML) is a relatively new field of study with a rapidly growing number of proposals for how quantum computers could significantly speed up machine learning tasks [10, 19]. If any of these proposals yield substantial practical speedups, it could be the killer application motivating the development of scalable quantum computers [38]. Many of the proposals are based on Harrow, Hassidim, and Lloyd's algorithm (HHL) for solving sparse linear equation systems in time poly-logarithmic in input size [25]. However, QML applications are less likely to admit exponential speedups in practice compared to, say, Shor's algorithm for factoring [42], because unlike their classical counterparts, QML algorithms must make strong input assumptions and learn relatively little from their output [1]. These caveats arise because both loading input data into a quantum computer and extracting amplitude data from an output quantum state are hard in their most generic forms.

A recent line of research analyzes the speedups of QML algorithms by developing classical counterparts that carefully exploit these restrictive input and output assumptions. This began with a breakthrough 2018 paper by Tang [45] showing that the quantum recommendation systems algorithm [29], previously believed to be one of the strongest candidates for a practical exponential speedup in QML, does not give an exponential speedup. Specifically, Tang described a "dequantized" algorithm that solves the same problem as the quantum algorithm and only suffers from a polynomial slowdown. Tang's algorithm crucially exploits the structure of the input assumed by the quantum algorithm, which is used for efficiently preparing states. Subsequent work relies on similar techniques to dequantize a wide range of QML algorithms, including those for principal component analysis and supervised clustering [44], low-rank linear system solving [9, 21], low-rank semidefinite program solving [8], support vector machines [13], nonnegative matrix factorization [7], and minimal conical hull [18].

These results show that the advertised exponential speedups of many QML algorithms disappear if the corresponding classical algorithms can use input assumptions analogous to the state preparation assumptions of the quantum algorithms. Previous papers [9, 21, 44] have observed that these techniques can likely be used to dequantize all QML that operates on low-rank data. Apart from a few QML algorithms that assume sparse input data [25], much of QML depends on some low-rank assumption. As a consequence, these dequantization results have drastically changed our understanding of the landscape of potential QML algorithm speedups, by either providing strong barriers for or completely disproving the existence of exponential quantum speedups for the corresponding QML problems.

Recent works on quantum algorithms use the primitive of singular value transformation to unify many quantum algorithms ranging from quantum walks to QML, under a quantum linear algebra framework called quantum singular value transformation (QSVT) [6, 22, 34]. Since this framework effectively captures all known linear algebraic QML techniques, a natural question is what aspects of this framework can be dequantized. Understanding the quantum-inspired analogue of QSVT promises a unification of dequantization results and more intuition about potential quantum speedups, which helps to guide future quantum algorithms research.

1.2 Main Results

Our work gives a simple framework of quantum-inspired classical algorithms with wide applicability, grasping the capabilities and limitations of these techniques. We use sampling and query access to the input, a classical analogue of the quantum state preparation assumptions (Remark 2.13). For example, if input is loaded in the QRAM data structure, as commonly assumed in QML in order to satisfy state preparation assumptions [10, 37], then we have log-time sampling and query access to it. Consequently, a fast classical algorithm for a problem in this classical model implies lack of quantum speedup for the problem.
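To make this input model concrete, the sketch below (in Python) shows log-time sampling and query access to a vector, using the same complete-binary-tree layout that underlies the QRAM data structure: leaves store the squared magnitudes of the entries and internal nodes store subtree sums. The class name SQVector and all implementation details are ours, for illustration only; the point is that query takes O(1) time and each ℓ2-norm sample takes O(log n) time.

    import random

    class SQVector:
        """Sketch of sampling and query access SQ(v): a complete binary tree
        whose leaves hold |v[i]|^2 and whose internal nodes hold subtree sums.
        query(i) runs in O(1); sample() returns index i with probability
        |v[i]|^2 / ||v||^2 in O(log n) steps."""

        def __init__(self, v):
            self.v = list(v)
            n = 1
            while n < len(self.v):
                n *= 2
            self.size = n
            self.tree = [0.0] * (2 * n)  # tree[1] is the root
            for i, x in enumerate(self.v):
                self.tree[n + i] = abs(x) ** 2
            for k in range(n - 1, 0, -1):
                self.tree[k] = self.tree[2 * k] + self.tree[2 * k + 1]

        def query(self, i):
            """Entry access, as the input model assumes."""
            return self.v[i]

        def squared_norm(self):
            return self.tree[1]

        def sample(self):
            """l2-norm sample: descend from the root, branching into each
            subtree with probability proportional to its squared mass."""
            k = 1
            while k < self.size:
                if random.random() * self.tree[k] < self.tree[2 * k]:
                    k = 2 * k
                else:
                    k = 2 * k + 1
            return k - self.size

For a matrix, one keeps such a tree for every row together with one tree over the squared row norms, so that one can sample a row index and then an entry within that row; this mirrors the access that the QRAM data structure grants quantumly, which is why data loaded into it automatically comes with classical log-time sampling and query access.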
Matrix arithmetic. We make a conceptual contribution by defining the slightly more general notion of oversampling and query access to a vector or matrix (Definition 2.7), where we have sampling and query access to another vector/matrix that gives an entry-wise upper bound on the absolute values of the entries of the actual vector/matrix, which we can only query. With this definition comes the insight that this input model is closed under arithmetic operations. Though this closure property comes into play relatively little in applications to dequantizing QML, the essential power of quantum-inspired algorithms lies in their ability to use sampling and query access to input matrices to build oversampling and query access to increasingly complex arithmetic expressions on the input, possibly with some approximation error, without paying the (at least) linear time necessary to compute such expressions in conventional ways. As a simple example, if we have oversampling and query access to matrices A^(1), ..., A^(t), we have oversampling and query access to linear combinations ∑_{i=1}^t λ_i A^(i) as well (Lemma 2.12).
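As a toy illustration of this closure under linear combinations, in the spirit of Lemma 2.12, the sketch below draws one oversample of C = ∑_k λ_k A^(k). The function name is ours, and dense Python lists stand in for the sublinear data structures, so the point is the sampling distribution rather than the running time: entry positions are drawn with probability proportional to ∑_k |λ_k A^(k)(i,j)|², and by the Cauchy–Schwarz inequality t times these weights entrywise dominates |C(i,j)|², which is exactly what makes them valid oversamples.

    import random

    def oversample_linear_combination(mats, lams):
        """Draw one entry position of C = sum_k lams[k] * mats[k] with
        probability proportional to w(i, j) = sum_k |lams[k] * mats[k][i][j]|^2,
        then query the actual value C[i][j].  Since |C[i][j]|^2 <= t * w(i, j)
        by Cauchy-Schwarz, these positions are valid oversamples of C.  Dense
        toy version: real quantum-inspired algorithms never materialize the
        weights, drawing each sample in time polylogarithmic in m and n."""
        t = len(mats)
        m, n = len(mats[0]), len(mats[0][0])
        weights = [sum(abs(lams[k] * mats[k][i][j]) ** 2 for k in range(t))
                   for i in range(m) for j in range(n)]
        i, j = divmod(random.choices(range(m * n), weights=weights)[0], n)
        value = sum(lams[k] * mats[k][i][j] for k in range(t))  # t queries
        return i, j, value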
The "oversampling" input model is also closed under (approximate) matrix products — the key technique underlying our main results. Such results have been known in the classical literature for some time [14]; we now give an example illustrating the flavor of the main ideas. Suppose we are given sampling and query access to two matrices A ∈ C^{m×n} and B ∈ C^{m×p}, and desire (over)sampling and query access to A†B. A†B is a sum of outer products ∑_i A(i,·)† B(i,·), so we can randomly sample them to get a good estimator for A†B.
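Concretely, the numpy sketch below (the function name is ours) implements the ℓ2-norm sampling fact stated in the abstract, known from the classical literature [14]: sample row indices with probability proportional to ‖A(i,·)‖², and average the rescaled outer products. The estimator is unbiased, its Frobenius-norm error decays roughly like ‖A‖_F ‖B‖_F / √s with the number of samples s, and its cost is independent of the shared dimension m.

    import numpy as np

    def approx_matrix_product(A, B, s, rng=None):
        """Unbiased estimator of A^dagger B from s l2-norm row samples.  Row i
        is drawn with probability p[i] = ||A[i]||^2 / ||A||_F^2, and each
        sampled outer product A[i]^dagger B[i] is rescaled by 1 / (s * p[i]),
        so the expectation is sum_i A[i]^dagger B[i] = A^dagger B exactly.
        A and B are float or complex arrays with the same number of rows."""
        rng = rng or np.random.default_rng()
        row_mass = np.einsum('ij,ij->i', A.conj(), A).real  # ||A[i]||^2
        p = row_mass / row_mass.sum()
        est = np.zeros((A.shape[1], B.shape[1]), dtype=np.result_type(A, B))
        for i in rng.choice(A.shape[0], size=s, p=p):
            est += np.outer(A[i].conj(), B[i]) / (s * p[i])
        return est

For instance, approx_matrix_product(A, A, s) estimates the Gram matrix A†A from s sampled rows; this is the kind of primitive to which the paper reduces all of its main results.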