A Quantum Implementation Model for Artificial Neural Networks

Ammar Daskin

arXiv:1609.05884v2 [quant-ph] 2 Feb 2018

Abstract—The learning process for multi-layered neural networks with many nodes makes heavy demands on computational resources. In some neural network models, the learning formulas, such as the Widrow-Hoff formula, do not change the eigenvectors of the weight matrix while flattening the eigenvalues. In the limit of infinitely many iterations, these iterative formulas result in terms formed by the principal components of the weight matrix: i.e., the eigenvectors corresponding to the non-zero eigenvalues. In quantum computing, the phase estimation algorithm is known to provide speed-ups over the conventional algorithms for eigenvalue-related problems. Combining quantum amplitude amplification with the phase estimation algorithm, a quantum implementation model for artificial neural networks using the Widrow-Hoff learning rule is presented. The complexity of the model is found to be linear in the size of the weight matrix. This provides a quadratic improvement over the classical algorithms.

A. Daskin is with the Computer Engineering Department, Istanbul Medeniyet University, Istanbul, Turkey; email: [email protected]
I. INTRODUCTION AND BACKGROUND

Artificial neural networks (ANN) [1–3] are adaptive statistical models which mimic the neural structure of the human brain to find optimal solutions for multivariate problems. In the design of an ANN, the following are determined: the structure of the network, the input-output variables, the local activation rules, and a learning algorithm. Learning algorithms are generally linked to the activities of the neurons and describe a mathematical cost function. Often, a minimization of this cost function, composed of the weights and biases, describes the learning process in artificial neural networks. Moreover, the learning rule in this process specifies how the synaptic weights should be updated at each iteration. In general, learning rules can be categorized as supervised and unsupervised: in supervised learning rules, the distance between the response of the neuron and a specified response, called the target t, is considered; in unsupervised learning rules, no target is required.
The Hebbian learning rule [4] is a typical example of unsupervised learning, in which the weight vector at the (j+1)th iteration is updated by the following formula (we will mainly follow Ref. [2] to describe the learning rules):

w[j+1] = w[j] − ηtx. (1)

Here, x is the input vector, η is a positive learning constant, w[j] represents the weights at the jth iteration, and t is the target response. Learning is defined as getting an output closer to the target response.

On the other hand, the Widrow-Hoff learning rule [5], which is the main interest of this paper, illustrates a typical supervised learning rule [2, 3, 6]:

w[j+1] = w[j] − ησ′(v)(t − y)x, (2)

where v = x^T w is the activation of the output cell and σ′(v) is the derivative of the activation function which specifies the output of a cell in the considered network, y = σ(v): e.g., the sigmoid function σ(v) = 1/(1 + exp(−v)). While in the Hebbian iteration the weight vector is moved in the direction of the input vector by an amount proportional to the target, in the Widrow-Hoff iteration the change is proportional to the error (t − y). If we consider multiple neurons, the activations, the outputs, and the target values become vectors: viz., v, y, and t, respectively. When there are several input-target associations, the sets of inputs, targets, activations, and outputs can be represented by the matrices X, T, V, and Y, respectively. Then, the above equations take the following matrix forms:

W[j+1] = W[j] − ηXT^T, (3)

W[j+1] = W[j] − η(σ′(V) ⊛ X)(T − Y)^T, (4)

where W represents the matrix of synaptic weights.
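A small numerical sketch may make the matrix update of Eq. (4) concrete. The dimensions, random data, and learning constant below are arbitrary illustrative choices, and the step is written in the common textbook sign convention, in which the σ′(V)-weighted error term moves the outputs toward the targets:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Arbitrary illustrative sizes: 4 input cells, 3 output cells, 5 associations.
X = rng.standard_normal((4, 5))    # columns are the input vectors x
T = rng.random((3, 5))             # columns are the target vectors t
W = rng.standard_normal((3, 4))    # matrix of synaptic weights
eta = 0.5                          # positive learning constant

err0 = np.abs(T - sigmoid(W @ X)).mean()
for _ in range(200):
    V = W @ X                      # activations
    Y = sigmoid(V)                 # outputs
    # Widrow-Hoff step: the change is proportional to the error (T - Y),
    # weighted by the sigmoid derivative sigma'(V) = Y * (1 - Y).
    W = W + eta * ((Y * (1.0 - Y)) * (T - Y)) @ X.T
err1 = np.abs(T - sigmoid(W @ X)).mean()
```

After the loop, the mean absolute error err1 is smaller than the initial err0, i.e., the outputs have moved toward the targets.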
It is known that the learning task for multi-layered neural networks with many nodes makes heavy demands on computational resources. Algorithms in the quantum computational model provide computational speed-ups over their classical counterparts for some particular problems: e.g., Shor's factoring algorithm [7] and Grover's search algorithm [8]. Using adiabatic quantum computation [9, 10] or mapping data sets to quantum random access memory [11, 12], speed-ups in big-data analysis have been shown to be possible [13–15]. Furthermore, Lloyd et al. [16] have described a quantum version of principal component analysis.

In recent decades, particularly by relating the neurons in the networks to qubits [17], a few different quantum analogues of artificial neural networks have also been developed: e.g., [18–23] (for a complete review and list of references, please refer to Ref. [24]). These models should not be confused with the classical algorithms inspired by quantum computing (e.g., see Refs. [25, 26]). Furthermore, using the Grover search algorithm [8], a quantum associative memory has been introduced [27]. Despite some promising results, there is still a need for further research on new models [24].

The quantum phase estimation algorithm (PEA) [28] provides computational speed-ups over the known classical algorithms in eigenvalue-related problems. The algorithm mainly finds the phase of an eigenvalue of a unitary matrix (considered as the time evolution operator of a quantum Hamiltonian) for a given approximate eigenvector. Because of this property, PEA is ubiquitously used as a subcomponent of other algorithms. While in the general case PEA requires a good initial estimate of an eigenvector to produce the phase, in some cases it is able to find the phase by using an initial equal superposition state: e.g., Shor's factoring algorithm [7]. In Ref. [29], it is shown that a flag register can be used in the phase estimation algorithm to eliminate the ill-conditioned part of a matrix by processing only the eigenvalues greater than some threshold value.

The amplitude amplification algorithm [8, 30–32] is used to amplify the amplitudes of certain chosen quantum states. In the definition of quantum reinforcement learning [33, 34], states and actions are represented as quantum states, and, based on the observation of the states, a reward is applied to the register representing the actions. Later, quantum amplitude amplification is applied to amplify the amplitudes of the rewarded states. In addition, in a prior work [35] combining amplitude amplification with the phase estimation algorithm, we have shown a framework to obtain the eigenvalues in a given interval, and their corresponding eigenvectors, from an initial equal superposition state. This framework can be used as a way of doing quantum principal component analysis (QPCA).
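Amplitude amplification itself can be illustrated with a few lines of classical linear algebra. The sketch below runs the Grover iteration (an oracle sign flip followed by a reflection about the mean amplitude) on an equal superposition over 16 basis states; the marked index is an arbitrary choice:

```python
import numpy as np

N = 16                              # size of the state space (4 qubits)
marked = 3                          # arbitrarily chosen "good" basis state
psi = np.full(N, 1 / np.sqrt(N))    # initial equal superposition

def grover_iteration(psi):
    psi = psi.copy()
    psi[marked] = -psi[marked]      # oracle: flip the sign of the marked amplitude
    return 2 * psi.mean() - psi     # diffusion: reflect all amplitudes about the mean

for _ in range(3):                  # about (pi/4) * sqrt(N) iterations for one marked state
    psi = grover_iteration(psi)

prob_marked = abs(psi[marked]) ** 2  # ~0.96, up from the initial 1/16
```

Each iteration rotates the state toward the marked basis state while preserving the norm, which is why the number of iterations matters: too many rotate past the target.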
For a given weight matrix W in linear auto-associators using the Widrow-Hoff learning rule, the eigenvectors do not change during the learning process while the eigenvalues go to one [2, 6]: i.e., as j → ∞, W[j] converges to QQ^T, where Q represents the eigenvectors of W. Therefore, for a given input x, the considered network produces the output QQ^T x.
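This convergence can be checked numerically in the linear auto-associative case (identity activation, targets equal to inputs). The input patterns below are a hypothetical, deliberately well-conditioned choice, and the update uses the usual textbook sign convention:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build input patterns with a known correlation structure: 3 orthonormal
# directions in R^5, scaled by arbitrary non-zero factors (columns of X).
Q0, _ = np.linalg.qr(rng.standard_normal((5, 5)))
X = Q0[:, :3] * np.array([2.0, 1.5, 1.0])

# Linear auto-associator trained with the Widrow-Hoff rule:
# targets equal inputs and sigma is the identity, so sigma'(v) = 1.
eta = 0.1
W = np.zeros((5, 5))
for _ in range(500):
    W = W + eta * (X - W @ X) @ X.T

# Principal components: eigenvectors of X X^T with non-zero eigenvalues.
lam, Qfull = np.linalg.eigh(X @ X.T)
Q = Qfull[:, lam > 1e-9]

# The learned weight matrix converges to the projector Q Q^T.
print(np.allclose(W, Q @ Q.T, atol=1e-8))  # -> True
```

In the eigenbasis of XX^T, each eigenvalue of W[j] obeys φ[j+1] = φ[j] + ηλ(1 − φ[j]), which tends to 1 for every λ > 0 while the eigenvectors stay fixed, matching the claim above.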
In this paper, we present a quantum implementation model for artificial neural networks by employing the algorithm in Ref. [35]. In particular, we show how to construct QQ^T x on quantum computers in linear time. In the following section, we give the necessary description of the Widrow-Hoff learning rule and of the QPCA framework of Ref. [35]. In Sec. III, we show how to apply the QPCA to the neural networks given by the Widrow-Hoff learning rule and discuss possible implementation issues such as the circuit implementation of W, the preparation of the input x as a quantum circuit, and the determination of the number of iterations in the algorithm. In Sec. IV, we analyze the complexity of the whole application. Finally, in Sec. V, an illustrative example is presented.

Because the eigenvectors of W[j] stay fixed while its eigenvalue matrix Φ_W[j] flattens toward the identity, lim_{j→∞} Φ_W[j] = I. Thus, in infinity, the learning process ends up as W[∞] = QQ^T.

B. Quantum Algorithms Used in the Model

In the following, we shall first explain two well-known quantum algorithms and then describe how they are used in Ref. [35] to obtain the linear combination of the eigenvectors.

1) Quantum Phase Estimation Algorithm: The phase estimation algorithm (PEA) [28, 36] finds an estimate of the phase of an eigenvalue of a given operator. In mathematical terms, the algorithm, shown as a circuit in Fig. 1, works as follows:

• An estimated eigenvector |ϕ_j⟩ associated with the jth eigenvalue e^{iφ_j} of a unitary matrix U of order N is assumed given. U is considered as a time evolution operator of the Hamiltonian H representing the dynamics of the quantum system:

U = e^{itH/ℏ}, (6)

where t represents the time and ℏ is the reduced Planck constant. As a result, the eigenvalues of U and H are related: while e^{iφ_j} is the eigenvalue of U, its phase φ_j is the eigenvalue of H.

• The algorithm uses two quantum registers, |reg_1⟩ and |reg_2⟩, dedicated to the eigenvalue and the eigenvector, respectively, with m and n = log_2 N qubits. The initial state of the system is set to |reg_1⟩|reg_2⟩ = |0⟩|ϕ_j⟩, where |0⟩ is the first standard basis vector.

• Then, the quantum Fourier transform is applied to |reg_1⟩, which produces the following equal superposition state:

U_QFT |reg_1⟩|reg_2⟩ = (1/√M) Σ_{k=0}^{M−1} |k⟩|ϕ_j⟩, (7)

where M = 2^m and |k⟩ is the kth standard basis vector.

• For each kth qubit in the first register, a quantum operator U^{2^{k−1}}, controlled by this qubit, is applied to the second register. This operation leads the first register to hold the discrete Fourier transform of the phase φ_j.
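The state of the first register after these steps, and the effect of the inverse quantum Fourier transform that PEA applies next, can be simulated with plain linear algebra. The register size and phase below are arbitrary choices, with the phase picked to be exactly representable in m bits so that the result is a single peak:

```python
import numpy as np

m = 6                         # qubits in the phase register (arbitrary)
M = 2 ** m
phi = 2 * np.pi * 20 / M      # eigenvalue phase, chosen exactly representable

# After the QFT and the controlled powers of U, the first register holds
# amplitudes (1/sqrt(M)) * e^{i x phi} for x = 0, ..., M-1.
x = np.arange(M)
reg1 = np.exp(1j * x * phi) / np.sqrt(M)

# The inverse quantum Fourier transform (the standard next step of PEA)
# concentrates the probability at k = M * phi / (2 * pi).
k = np.arange(M)
inv_qft = np.exp(-2j * np.pi * np.outer(k, x) / M) / np.sqrt(M)
probs = np.abs(inv_qft @ reg1) ** 2

print(int(np.argmax(probs)))  # -> 20, i.e., phi = 2*pi * 20/M
```

For a phase that is not an exact multiple of 2π/M, the probability instead spreads over the neighboring values of k, peaking at the nearest m-bit approximation of φ_j.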
