Quantum Hardness of Learning Shallow Classical Circuits

Electronic Colloquium on Computational Complexity, Report No. 41 (2019)

Srinivasan Arunachalam*   Alex B. Grilo†   Aarthi Sundaram‡

Abstract

In this paper, we study the quantum learnability of constant-depth classical circuits under the uniform distribution and in the distribution-independent framework of PAC learning. In order to attain our results, we establish connections between quantum learning and quantum-secure cryptosystems. We then achieve the following results.

1. Hardness of learning AC^0 and TC^0 under the uniform distribution. Our first result concerns the concept class TC^0 (resp. AC^0), the class of constant-depth and polynomial-sized circuits with unbounded fan-in majority gates (resp. AND, OR, NOT gates). We show that if there exists no quantum polynomial-time (resp. sub-exponential-time) algorithm to solve the Learning with Errors (LWE) problem, then there exists no polynomial-time quantum learning algorithm for TC^0 (resp. AC^0) under the uniform distribution (even with access to quantum membership queries). The main technique in this result uses explicit pseudo-random generators that are believed to be quantum-secure to construct concept classes that are hard to learn quantumly under the uniform distribution.

2. Hardness of learning TC^0_2 in the PAC setting. Our second result shows that if there exists no quantum polynomial-time algorithm for the LWE problem, then there exists no polynomial-time quantum-PAC learning algorithm for the class TC^0_2, i.e., depth-2 TC^0 circuits. The main technique in this result is to establish a connection between the quantum security of public-key cryptosystems and the learnability of a concept class that consists of decryption functions of the cryptosystem.
This gives a strong conditional negative answer to one of the "Ten Semi-Grand Challenges for Quantum Computing Theory" raised by Aaronson [Aar05], who asked if AC^0 and TC^0 can be PAC-learned in quantum polynomial time.

*Center for Theoretical Physics, MIT. Funded by the MIT-IBM Watson AI Lab under the project Machine Learning in Hilbert space. [email protected]
†CWI and QuSoft, Amsterdam, The Netherlands. Supported by ERC Consolidator Grant 615307-QPROGRESS. [email protected]
‡Joint Center for Quantum Information and Computer Science, University of Maryland, USA. Supported by the Department of Defense. [email protected]

ISSN 1433-8092

1 Introduction

Machine learning is a diverse field of research with many real-world applications, and it has received tremendous attention in the past decade. From a theoretical perspective, since the seminal paper of Valiant [Val84], there has been a sustained effort to formulate models that formalize what we mean by learning and to understand which problems can (or cannot) be learned efficiently within these models. In recent years, due to the considerable development of quantum computation (on both the theoretical and experimental fronts), there has been an increased focus on understanding the tasks for which quantum computers can offer speedups, and machine learning has emerged as a candidate in this respect. To this end, results in quantum learning theory aim to identify learning problems for which quantum computers provably provide a (significant) advantage.

More concretely, in learning theory the goal is to devise a learning algorithm (or learner) for a set of functions called a concept class C; the functions in the concept class are referred to as concepts. In this paper, we consider (without loss of generality) concepts that are Boolean functions c : {0,1}^n → {0,1}. The learner is provided with examples of the form (x, c(x)), where c is
an unknown concept lying in C and x is picked uniformly at random from {0,1}^n, and the goal of the learner is to learn c, i.e., it should output a hypothesis h that is close to c.¹ We say that a learner A learns C if, for every c ∈ C, A learns c. In learning theory, the intent is to devise efficient learning algorithms for an interesting concept class C, i.e., the learner should use few examples and not too much time in order to learn C.

In quantum learning theory, the goal is still to learn concept classes C, but now with access to quantum resources. Bshouty and Jackson [BJ99] introduced a quantum learning model wherein the learner is a quantum algorithm and is provided with quantum examples of the form

    (1/√(2^n)) Σ_{x ∈ {0,1}^n} |x, c(x)⟩

for some concept c ∈ C. To see why quantum examples generalize classical examples, observe that a quantum learner could choose to measure the quantum example in the computational basis, which results in a classical example (x, c(x)) for a uniformly random x ∈ {0,1}^n. The advantage of quantum learning usually comes from the fact that one can perform arbitrary unitary operations on these quantum examples, enabling one to improve the sample or time complexity of learning the concept class C.

The first example of a quantum advantage for a learning problem was shown by Bernstein and Vazirani [BV97]. They showed how to learn the concept class of linear functions C = {c(x) = Σ_i s_i x_i mod 2 : s ∈ {0,1}^n} with a constant number of quantum examples. Classically, it is necessary and sufficient to obtain Θ(n) examples in order to learn this concept class efficiently. With these many examples, a classical learner can use Gaussian elimination and learn the unknown target concept in polynomial time.
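The classical baseline in the last sentence can be sketched concretely. The toy learner below (illustrative code, not from the paper; all names are ours) recovers the hidden string s of a parity concept from Θ(n) uniformly random examples by Gaussian elimination over GF(2):

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_parity(examples):
    """Recover s for the concept c(x) = <s, x> mod 2 from labeled examples
    via Gaussian elimination over GF(2); return None if the examples do not
    yet determine s uniquely (i.e., the example matrix is not full rank)."""
    X = np.array([x for x, _ in examples], dtype=np.uint8)
    y = np.array([b for _, b in examples], dtype=np.uint8)
    n = X.shape[1]
    A = np.concatenate([X, y[:, None]], axis=1)   # augmented system [X | y]
    for col in range(n):
        pivot = np.nonzero(A[col:, col])[0]
        if pivot.size == 0:
            return None                           # s is underdetermined
        A[[col, col + pivot[0]]] = A[[col + pivot[0], col]]  # swap pivot up
        for r in range(A.shape[0]):
            if r != col and A[r, col]:
                A[r] ^= A[col]                    # XOR row = addition in GF(2)
    return A[:n, n].copy()                        # RREF leaves [I | s] on top

# Draw uniform examples of an unknown parity until s is pinned down.
n = 16
s = rng.integers(0, 2, size=n, dtype=np.uint8)
examples, out = [], None
while out is None:
    x = rng.integers(0, 2, size=n, dtype=np.uint8)
    examples.append((x, int(x @ s) % 2))
    if len(examples) >= n:
        out = learn_parity(examples)
assert np.array_equal(out, s)
```

On uniformly random inputs the example matrix becomes full rank after n + O(1) draws with high probability, matching the Θ(n) classical sample complexity, and each elimination runs in polynomial time.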
Subsequently, Bshouty and Jackson [BJ99] showed that the class of Boolean functions that can be represented as polynomial-sized DNF formulas can be learned in quantum polynomial time.² One crucial ingredient of their quantum learning algorithm is the ability to perform Fourier sampling using quantum examples (which we discuss in Section 1.2). Classically, Verbeurgt [Ver90] showed that DNFs can be learned in quasi-polynomial time from classical examples, and this has remained the state of the art for the last thirty years! This emphasizes the power of quantum examples for learning. There have been many other instances (which we discuss in Section 1.2) where quantum examples give an advantage to quantum learning algorithms.

A natural follow-up question to the quantum-efficient learnability of DNF formulas, i.e., depth-2 circuits, is: can the concept class of shallow, i.e., constant-depth, classical circuits be learned more efficiently using quantum resources as compared to classical resources? In particular, are there efficient (i.e., polynomial-time) quantum learning algorithms for Boolean functions that can be represented by AC^0 circuits, i.e., constant-depth circuits with unbounded fan-in AND, OR, NOT gates? More ambitiously, can we quantum-efficiently learn TC^0, i.e., constant-depth circuits with majority gates?³ This question was raised by Aaronson [Aar05] as one of the "Ten Semi-Grand Challenges for Quantum Computing Theory". In this work, we address this question and give evidence that efficient quantum learning algorithms do not exist for AC^0 and TC^0.

¹More formally, this is referred to as the uniform-distribution learning setting. In learning theory there are several variations of this model that we skip for the sake of simplicity. See Section 1.1 for a brief introduction and Section 2.4 for more details.
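The Fourier-sampling ingredient mentioned above can be made concrete with a small statevector simulation (a sketch for toy sizes with names of our choosing, not the Bshouty-Jackson DNF learner). Applying a Hadamard gate to each of the n+1 qubits of a quantum example and measuring yields outcome (s, 1) with probability F̂(s)²/2, where F̂(s) is the Fourier coefficient of F = (−1)^c; for a parity concept this reveals the hidden string half the time, recovering the Bernstein-Vazirani advantage:

```python
import numpy as np

def fourier_sample_probs(f, n):
    """Outcome distribution of Fourier sampling on one quantum example
    (2^{-n/2}) sum_x |x>|f(x)>: Hadamard all n+1 qubits and measure.
    Returns probs[s, b]; Pr[(s, 1)] = \hat{F}(s)^2 / 2 for F = (-1)^f.
    Toy statevector simulation, exponential in n."""
    N = 2 ** n
    psi = np.zeros(2 * N)
    for x in range(N):
        psi[2 * x + f(x)] = N ** -0.5            # basis index j = 2x + b
    # Walsh-Hadamard transform: H^{(n+1)}[j, k] = (-1)^{popcount(j & k)},
    # normalized by 2^{(n+1)/2}; insensitive to qubit ordering.
    J = np.arange(2 * N)
    signs = (-1.0) ** np.array([[bin(j & k).count("1") for k in J] for j in J])
    phi = signs @ psi / np.sqrt(2 * N)
    return (phi ** 2).reshape(N, 2)              # probs[s, b]

# Parity concept f(x) = <s_true, x> mod 2: one Fourier sample outputs
# (s_true, 1) with probability exactly 1/2.
n, s_true = 6, 0b101101
probs = fourier_sample_probs(lambda x: bin(x & s_true).count("1") % 2, n)
assert np.isclose(probs[s_true, 1], 0.5)
assert np.isclose(probs[0, 0], 0.5)
```

The remaining probability mass sits on the uninformative outcome (0, 0), so a constant number of quantum examples suffices to recover s with high probability.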
More concretely, under the assumption that the Learning with Errors (LWE) problem [Reg09] cannot be solved in polynomial time by quantum computers, the class of TC^0 functions cannot be learned in quantum polynomial time. Also, under the less standard assumption that LWE cannot be solved in sub-exponential time by quantum computers, we show that polynomial-time quantum learning algorithms for AC^0 do not exist. LWE is one of the leading candidates for public-key post-quantum cryptography and is believed to be hard to solve even for quantum computers; for instance, the current best-known quantum algorithms for it run in exponential time [BKW03, AG11].

1.1 Learning models

In this section, we first recall the definitions of the classical and quantum learning models. For a detailed introduction to these models, we refer the reader to Section 2.4.

Classical learning model. In the Probably Approximately Correct (PAC) model of learning [Val84], a concept class C is a subset of Boolean functions, i.e., C ⊆ {c : {0,1}^n → {0,1}}, and every element c : {0,1}^n → {0,1} in C is referred to as a concept. The goal of the learning algorithm A is to learn an unknown target concept c ∈ C given labeled examples of the form (x, c(x)), where x is drawn from an unknown distribution D : {0,1}^n → [0,1]. We say that a learning algorithm A learns C if it satisfies the following: for every c ∈ C and every distribution D : {0,1}^n → [0,1], with probability at least 2/3, A outputs a hypothesis h : {0,1}^n → {0,1} that satisfies Pr_{x∼D}[h(x) = c(x)] ≥ 1 − ε. The advantage of the learner over a random guess is β := 1/2 − ε, and β is called the bias of the learner. The learner A properly learns the concept class C if its output hypothesis always lies in the concept class, i.e., h ∈ C; otherwise, A is an improper learner for C. Similarly, A is a weak learner if β = n^{−c} for some constant c > 0.
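The quantities in this definition can be checked on a toy instance (illustrative code; the helper name is ours): enumerate {0,1}^n, compute Pr_{x∼D}[h(x) = c(x)] exactly, and read off the error ε and the bias β = 1/2 − ε:

```python
def agreement(h, c, D, n):
    """Pr_{x ~ D}[h(x) = c(x)], computed exactly by enumerating {0,1}^n.
    D maps each n-bit input (as a tuple) to its probability; exponential
    in n, so this is only a sanity check of the PAC criterion."""
    total = 0.0
    for x in range(2 ** n):
        bits = tuple((x >> i) & 1 for i in range(n))
        if h(bits) == c(bits):
            total += D[bits]
    return total

# Example: under the uniform distribution on {0,1}^3, the hypothesis
# h(x) = x_0 agrees with the concept c(x) = x_0 AND x_1 on 6 of 8 inputs,
# so its error is eps = 1/4 and its bias is beta = 1/2 - eps = 1/4.
n = 3
uniform = {tuple((x >> i) & 1 for i in range(n)): 1 / 2 ** n
           for x in range(2 ** n)}
p = agreement(lambda x: x[0], lambda x: x[0] & x[1], uniform, n)
assert abs(p - 0.75) < 1e-12           # Pr[h = c] = 1 - eps = 3/4
assert abs((p - 0.5) - 0.25) < 1e-12   # bias beta = 1/4
```

A strong learner must drive β up to a constant for every c and D; a weak learner only needs β = n^{−c}, i.e., an inverse-polynomially small edge over random guessing.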
