Quantum circuit-like learning: A fast and scalable classical machine-learning algorithm with similar performance to quantum circuit learning

Naoko Koide-Majima^{a,†}, Kei Majima^{b,†}
^a National Institute of Information and Communications Technology, Osaka 565-0871, Japan
^b Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
† These authors contributed equally to this work.

The application of near-term quantum devices to machine learning (ML) has attracted much attention. In one such attempt, Mitarai et al. (2018) proposed a framework to use a quantum circuit for supervised ML tasks, which is called quantum circuit learning (QCL). Owing to its use of a quantum circuit, QCL can employ an exponentially high-dimensional Hilbert space as its feature space. However, its efficiency compared to classical algorithms remains unexplored. In this study, using a statistical technique called "count sketch," we propose a classical ML algorithm that uses the same Hilbert space. In numerical simulations, our proposed algorithm demonstrates performance similar to that of QCL on several ML tasks. This provides a new perspective from which to consider the computational and memory efficiency of quantum ML algorithms.

I. INTRODUCTION

Because quantum computers with tens or hundreds of qubits are becoming available, their application to machine learning has attracted much attention. Such quantum devices are called noisy intermediate-scale quantum (NISQ) devices [1], and many quantum-classical hybrid algorithms have been proposed to use NISQ devices efficiently. For example, the variational quantum eigensolver (VQE) has recently been used to find the ground state of a given Hamiltonian [2–5], and the quantum approximate optimization algorithm (QAOA) enables us to obtain approximate solutions to combinatorial optimization problems [6–8]. More recently, several studies have proposed algorithms that use NISQ devices for machine-learning tasks [9–29], some of which have been experimentally tested using actual quantum devices [30–36].

In one such attempt, Mitarai et al. [26] proposed a framework to train a parameterized quantum circuit for supervised classification and regression tasks; this is called quantum circuit learning (QCL). In QCL, input data (i.e., feature vectors) are nonlinearly mapped into a $2^Q$-dimensional Hilbert space, where $Q$ is the number of qubits, and a parameterized unitary transformation is then applied to the mapped data (see Section II for details). The parameters of the unitary transformation are tuned to minimize a given cost function (i.e., a prediction error). QCL thus employs the high-dimensional Hilbert spaces implemented by quantum circuits; however, its computational and memory efficiency compared to classical algorithms remains uncertain.

In this study, we present a machine-learning algorithm with properties similar to those of QCL, whose computational time and required memory size are linear with respect to the hyperparameter corresponding to the number of qubits; we refer to it as quantum circuit-like learning (QCLL). To implement this algorithm, we use a statistical technique known as "count sketch" [37–40]. Given a high-dimensional vector space, the count sketch technique provides a projection from that space to a lower-dimensional space that approximately preserves the inner product of the original space (see Section II for details). This enables us to use the same $2^Q$-dimensional Hilbert space as the feature space at a low computational cost. To demonstrate the similarities between QCL and QCLL as machine-learning algorithms, we perform numerical simulations in which the two algorithms are applied to several machine-learning tasks. Our numerical results demonstrate that the behavior and performance of QCLL are very similar to those of QCL. We believe that our proposed algorithm and simulation results provide a new perspective from which to consider the computational and memory efficiency of quantum machine-learning algorithms.

II. ALGORITHM

This section is organized as follows. First, we introduce QCL based on the work of Mitarai et al. [26]. In the following subsection, we explain the count sketch technique, which is used in QCLL. Finally, we explain QCLL. Note that a Python implementation of QCLL is available at our GitHub repository [https://github.com/nkmjm/].
FIG. 1. Quantum circuit learning (QCL) and quantum circuit-like learning (QCLL). (a) Flowchart of QCL. In QCL, an input vector is encoded (mapped) into a quantum state, and a parameterized unitary transformation is then applied to it. (b) The count sketch technique. An example of a $3 \times 5$ count sketch matrix is shown; it provides a projection from a five-dimensional space to a three-dimensional space. (c) Flowchart of QCLL. In QCLL, the count sketch of the quantum state vector that encodes an input vector is computed by the tensor sketch algorithm, which performs this computation without accessing the quantum state vector itself (which is typically very high dimensional). The inner product between the resulting count sketch and a parameterized unit vector is then computed as the output.

A. Quantum circuit learning

Here, we introduce the QCL framework proposed by Mitarai et al. [26]. In supervised learning, an algorithm is given a training dataset $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ of $N$ pairs of $D$-dimensional real input vectors $\mathbf{x}_i \in \mathbb{R}^D$ and target outputs $y_i$. The outputs $y_i$ take continuous values in the case of regression and discrete values in the case of classification. The algorithm is required to predict the target output for a new input vector $\mathbf{x}_{\mathrm{new}} \in \mathbb{R}^D$. In QCL, the prediction model is constructed using a quantum circuit with multiple learnable parameters as follows (see also Figure 1(a)):

1. Encode the input vectors $\mathbf{x}_i$ ($i = 1, \ldots, N$) into quantum states $|\psi_{\mathrm{in}}(\mathbf{x}_i)\rangle = U_{\mathrm{enc}}(\mathbf{x}_i)\,|0\rangle^{\otimes Q}$ by applying unitary gates $U_{\mathrm{enc}}(\mathbf{x}_i)$ to the initialized state $|0\rangle^{\otimes Q}$.

2. Apply a $\theta$-parameterized unitary $U(\theta)$ to the states $|\psi_{\mathrm{in}}(\mathbf{x}_i)\rangle$ to generate the output states $|\psi_{\mathrm{out}}(\mathbf{x}_i, \theta)\rangle = U(\theta)\,|\psi_{\mathrm{in}}(\mathbf{x}_i)\rangle$.

3. Measure the expectation values of a predefined observable $B$ for the output states $|\psi_{\mathrm{out}}(\mathbf{x}_i, \theta)\rangle$. Using a predefined function $F(\cdot)$, a scaling parameter $a$, and an intercept parameter $b$, the predictions of this machine-learning model are defined as

$\hat{y}_i = F(a\,\langle\psi_{\mathrm{out}}(\mathbf{x}_i, \theta)|\,B\,|\psi_{\mathrm{out}}(\mathbf{x}_i, \theta)\rangle + b)$.  (1)

4. Minimize the cost function

$L(\theta, a, b) = \sum_{i=1}^{N} l(y_i, \hat{y}_i)$  (2)

with respect to the parameters $(\theta, a, b)$, where $l(\cdot,\cdot): \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a function that measures the prediction error.

5. Make a prediction for a new input vector $\mathbf{x}_{\mathrm{new}}$ by computing

$F(a\,\langle\psi_{\mathrm{out}}(\mathbf{x}_{\mathrm{new}}, \theta)|\,B\,|\psi_{\mathrm{out}}(\mathbf{x}_{\mathrm{new}}, \theta)\rangle + b)$.  (3)

In step 1 above, the input vectors are nonlinearly mapped into a $2^Q$-dimensional Hilbert space, where $Q$ is the number of qubits used for encoding. In Mitarai et al. [26], the unitary gates $U_{\mathrm{enc}}(\mathbf{x}_i)$ were constructed from rotations of individual qubits around the y-axis. Following their procedure, in this study we treat the case in which a given input vector $\mathbf{x} = (x_1, \ldots, x_D) \in \mathbb{R}^D$ is encoded as

$|\psi_{\mathrm{in}}(\mathbf{x})\rangle = \bigotimes_{d=1}^{D} \Bigl[ \bigotimes_{q=1}^{Q_d} \bigl( x_d, \sqrt{1 - x_d^2} \bigr)^{\mathrm{T}} \Bigr]$,  (4)

where $Q_d$ is the number of qubits used to encode the $d$-th element of $\mathbf{x}$; note that $Q = \sum_{d=1}^{D} Q_d$. The elements of the vector $|\psi_{\mathrm{in}}(\mathbf{x})\rangle$ include high-order polynomials such as $x_d^{Q_d}$, which introduces nonlinearity into the prediction model.
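To make Eq. (4) concrete, the following is a minimal NumPy sketch of the encoding step (our own illustration of the stated encoding, not code taken from the QCLL repository); it builds the $2^Q$-dimensional state vector explicitly via Kronecker products, which is feasible only for small $Q$:

```python
import numpy as np

def encode_state(x, qubits_per_dim):
    """Build the 2^Q-dimensional state vector of Eq. (4).

    x             : 1-D array with entries in [-1, 1].
    qubits_per_dim: list of Q_d values, one per input dimension.
    """
    state = np.array([1.0])
    for x_d, Q_d in zip(x, qubits_per_dim):
        single = np.array([x_d, np.sqrt(1.0 - x_d**2)])  # (x_d, sqrt(1 - x_d^2))^T
        for _ in range(Q_d):
            state = np.kron(state, single)  # tensor product over the Q_d qubits
    return state

# Example: D = 2 inputs with Q_1 = Q_2 = 2, so Q = 4 and dim = 2^4 = 16.
psi_in = encode_state(np.array([0.3, -0.5]), [2, 2])
print(psi_in.shape)            # (16,)
print(np.dot(psi_in, psi_in))  # ~1.0: each factor is a unit vector, so the state is normalized
```

The nonlinearity mentioned above is visible directly in the entries; for instance, the first entry of `psi_in` equals $x_1^{Q_1} x_2^{Q_2}$.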
In step 2, a parameterized unitary transformation $U(\theta)$ is applied to the states prepared in step 1. In Mitarai et al. [26], this unitary transformation was constructed as a chain of rotations of individual qubits and random unitary transformations. Following their procedure, we assume here that $U(\theta)$ takes the form

$U(\theta) = R_K(\theta)\,T_K\,R_{K-1}(\theta)\,T_{K-1} \cdots R_1(\theta)\,T_1$,  (5)

where $K$ is the depth of the quantum circuit, $R_1(\theta), \ldots, R_K(\theta)$ are rotations of individual qubits whose angles are specified by elements of $\theta$, and $T_1, \ldots, T_K$ are random unitary transformations. As a result, each element of the output vector (i.e., the output state) $|\psi_{\mathrm{out}}(\mathbf{x}, \theta)\rangle$ has the form of an inner product between $|\psi_{\mathrm{in}}(\mathbf{x})\rangle$ and a unit vector randomly parameterized by $\theta$. In other words, when we denote $|\psi_{\mathrm{in}}(\mathbf{x})\rangle$ and $|\psi_{\mathrm{out}}(\mathbf{x}, \theta)\rangle$ by

$\psi_{\mathrm{in}}(\mathbf{x}) = \bigl( \psi_{\mathrm{in},1}(\mathbf{x}), \ldots, \psi_{\mathrm{in},2^Q}(\mathbf{x}) \bigr)^{\mathrm{T}}$

and

$\psi_{\mathrm{out}}(\mathbf{x}, \theta) = \bigl( \psi_{\mathrm{out},1}(\mathbf{x}, \theta), \ldots, \psi_{\mathrm{out},2^Q}(\mathbf{x}, \theta) \bigr)^{\mathrm{T}}$,

each element of $|\psi_{\mathrm{out}}(\mathbf{x}, \theta)\rangle$ has the form

$\psi_{\mathrm{out},j}(\mathbf{x}, \theta) = \mathbf{u}_j(\theta) \cdot \psi_{\mathrm{in}}(\mathbf{x}) \quad (j = 1, \ldots, 2^Q)$,  (6)

where $\mathbf{u}_j(\theta)$ is a unit vector randomly parameterized by $\theta$ (the $j$-th row of $U(\theta)$).

The predefined function $F(\cdot)$ and the loss function $l(\cdot,\cdot)$ used in steps 3 and 4 are defined manually. In a regression task, $F(x) = x$ and $l(y, \hat{y}) = (y - \hat{y})^2$ are often used; in a classification task, the softmax and cross-entropy functions are often used. To minimize the cost function $L(\theta, a, b)$, gradient-based methods [26] or gradient-free optimization methods, such as Nelder–Mead [2], are used. Optionally, a recently proposed, more efficient method can be used under some conditions [41].

B. The count sketch technique

Count sketches are random vectors computed using count sketch matrices, i.e., sparse random matrices in which each column contains exactly one nonzero entry, equal to $+1$ or $-1$ with equal probability, in a row chosen uniformly at random (Figure 1(b) shows a $3 \times 5$ example). According to convention, given a count sketch matrix $C$ and a vector $\mathbf{v}$, the random vector $C\mathbf{v}$ and a sample (i.e., an observation) from $C\mathbf{v}$ are both called the count sketch of $\mathbf{v}$. Count sketch matrices have the property shown in Theorem 3, which allows us to use count sketches as compact representations of high-dimensional vectors.

Theorem 3. Assume that $C$ is a $D' \times D$ count sketch matrix. Given two $D$-dimensional column vectors $\mathbf{v}_1$ and $\mathbf{v}_2$,

$\mathbb{E}_C[C\mathbf{v}_1 \cdot C\mathbf{v}_2] = \mathbf{v}_1 \cdot \mathbf{v}_2$,  (8)

$\mathrm{Var}_C[C\mathbf{v}_1 \cdot C\mathbf{v}_2] \le \frac{1}{D'} \bigl\{ (\mathbf{v}_1 \cdot \mathbf{v}_2)^2 + \|\mathbf{v}_1\|^2 \|\mathbf{v}_2\|^2 \bigr\}$,  (9)

where $\mathbb{E}_C[f(C)]$ and $\mathrm{Var}_C[f(C)]$ denote the expectation and variance, respectively, of a function $f(C)$ with respect to the randomness of $C$.
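As a numerical illustration of Theorem 3 (our own sketch, assuming the standard count sketch construction described above), the following code draws many independent count sketch matrices and checks that the sketched inner product is unbiased, Eq. (8), with variance within the bound of Eq. (9):

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sketch_matrix(d_out, d_in):
    """Sample a d_out x d_in count sketch matrix: one ±1 per column, in a random row."""
    C = np.zeros((d_out, d_in))
    rows = rng.integers(0, d_out, size=d_in)    # hash each input coordinate to a row
    signs = rng.choice([-1.0, 1.0], size=d_in)  # independent random sign per coordinate
    C[rows, np.arange(d_in)] = signs
    return C

D, D_prime = 1024, 64
v1 = rng.standard_normal(D)
v2 = rng.standard_normal(D)

samples = []
for _ in range(2000):
    C = count_sketch_matrix(D_prime, D)  # fresh draw of C for each trial
    samples.append(np.dot(C @ v1, C @ v2))
samples = np.array(samples)

true_ip = np.dot(v1, v2)
bound = (true_ip**2 + (v1 @ v1) * (v2 @ v2)) / D_prime
print("true inner product:", true_ip)
print("mean of sketches  :", samples.mean())                     # ~ true_ip, per Eq. (8)
print("variance          :", samples.var(), "<= bound:", bound)  # per Eq. (9)
```

In QCLL, this property is combined with the tensor sketch algorithm (Figure 1(c)), which computes the count sketch of the tensor-product state of Eq. (4) directly from its low-dimensional factors, so the $2^Q$-dimensional state vector never has to be formed explicitly.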

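Finally, as a toy classical simulation connecting Eqs. (1)–(6) (again our own illustrative sketch, not the authors' implementation: a random orthogonal matrix from SciPy stands in for the parameterized circuit $U(\theta)$ of Eq. (5), and the observable $B$ is taken to be Pauli-Z on the first qubit), the full QCL-style forward pass for small $Q$ reads:

```python
import numpy as np
from scipy.stats import ortho_group

def encode_state(x, qubits_per_dim):
    """Eq. (4): tensor-product encoding of x into a 2^Q-dimensional state vector."""
    state = np.array([1.0])
    for x_d, Q_d in zip(x, qubits_per_dim):
        single = np.array([x_d, np.sqrt(1.0 - x_d**2)])
        for _ in range(Q_d):
            state = np.kron(state, single)
    return state

qubits_per_dim = [2, 2]   # Q_1 = Q_2 = 2, hence Q = 4
Q = sum(qubits_per_dim)

# Stand-in for U(theta) in Eq. (5): a random real orthogonal matrix.
U = ortho_group.rvs(2**Q, random_state=1)

# Observable B = Z (x) I (x) ... (x) I, i.e., Pauli-Z on the first qubit.
B = np.kron(np.diag([1.0, -1.0]), np.eye(2**(Q - 1)))

def predict(x, a=1.0, b=0.0, F=lambda t: t):
    psi_in = encode_state(x, qubits_per_dim)  # step 1: Eq. (4)
    psi_out = U @ psi_in                      # step 2: apply the circuit
    expectation = psi_out @ B @ psi_out       # step 3: <psi_out|B|psi_out>
    return F(a * expectation + b)             # Eq. (1)

print(predict(np.array([0.3, -0.5])))
```

In actual QCL, the rotation angles inside $U(\theta)$, together with $a$ and $b$, are then optimized in step 4; QCLL replaces the explicit $2^Q$-dimensional computation above with inner products evaluated in the count-sketched space, keeping time and memory costs linear in the number of qubits.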