
arXiv:2105.13273v1 [quant-ph] 27 May 2021

Developments of Neural Networks in Quantum Physics

1st Yue Ban, Department of Physical Chemistry, University of the Basque Country UPV/EHU, Bilbao, Spain, [email protected]
2nd Javier Echanobe, Department of Electricity and Electronics, University of the Basque Country UPV/EHU, Bilbao, Spain, [email protected]
3rd Erik Torrontegui, Instituto de Fisica Fundamental IFF-CSIC and Departamento de Fisica, Universidad Carlos III de Madrid, Madrid, Spain, [email protected]
4th Jorge Casanova, Department of Physical Chemistry, University of the Basque Country UPV/EHU, and IKERBASQUE, Basque Foundation for Science, Bilbao, Spain, [email protected]

Abstract—Quantum machine learning emerges from the symbiosis of quantum mechanics and machine learning. In particular, the latter gets displayed in quantum sciences as: (i) the use of classical machine learning as an optimization tool in quantum physics problems, or (ii) the use of quantum resources such as superposition or entanglement to enhance the performance of classification and regression tasks with respect to their classical counterparts. This paper reviews examples in these two scenarios. On the one hand, the application of a classical neural network to a quantum sensing protocol. On the other hand, the design of a quantum perceptron whose dynamics, optimized with the aid of shortcuts to adiabaticity, gives rise to a short operation time and robust performance. These examples demonstrate the mutual reinforcement of both neural networks and quantum physics.

Index Terms—Neural Network, Quantum Sensing, Machine Learning, Quantum Perceptron

I. INTRODUCTION

Artificial neural networks [1] are one of the most fruitful computational models of machine learning (ML) [2], thanks to the blooming of deep learning in recent years [4]. Inspired by biological brains, artificial neurons (NNs) are organized in layers allowing parallel-processed and universal computing [3], [5], [6]. Introduced first in 1943 as a discrete threshold computational model based on feeding signals and logic algorithms into a biological learning theory [1], the perceptron –i.e. the computational neuron– was later characterized and linked to other activation mechanisms [8], such as graded response activation potentials [6], [9]. When nested and generalized with other perceptrons, the resulting multilayer network becomes universal, with the capacity to approximate any continuous function [7], [10].

The topology of NNs has increased in parallel with the continuous computer hardware improvements [11]. As a result, the enlargement of the depth has promoted the ability of a NN to process the exponentially increasing amount and complexity of data [12], [13] in the information explosion era [14]. Thus, the versatility of NNs covers diverse fields such as economy, industry, transportation, or science, among other examples [15]. More recently, the so-called Deep Learning paradigm [16]–[18], and particularly the Convolutional Neural Networks (CNN) [19], have shown incredible capabilities in applications such as speech recognition [20], [21], spam filters [22], vehicle control [23], [24], or decision making [25]. Neural networks can be trained to perform tasks without the programmer necessarily detailing how to do it. Due to this property, CNN are also expected to be employed to develop artificial intelligence.
Over the last two decades, quantum physics has experienced its second revolution, giving rise to new quantum technologies. Thanks to quantum control, matter can be manipulated at the single particle level by exploiting quantum resources such as entanglement or squeezing, and registers of states can be prepared with high fidelity in a variety of platforms [26]–[30]. All this progress indicates that in the coming future quantum physics will offer an outbreak of applications in a wide variety of forms, from quantum sensing [32] and quantum cryptography [31] to quantum computing [33], among others. Some examples of the quantum technologies emerged in this second quantum revolution are atomic sensors providing unprecedented high resolution and efficiency in the detection of external fields [34]–[38], quantum channels that use entanglement [39] to make communications unbreakable [40], or quantum computers that use superposition to perform parallel-processing computations and execute specific tasks with higher efficiency than their classical counterparts [41].

The meet of ML and quantum physics gives birth to a novel field, quantum machine learning (QML) [42], [43], bringing a lot of progress to both fields. On the one hand, the universality of NNs allows one to enhance, with the aid of ML, the accuracy and the efficiency of quantum protocols. ML is diversely useful in measurement protocols [44]–[46], state preparation [47]–[50], entanglement and state classification [51]–[53], quantum communication [54], learning Hamiltonians [55] and handling open quantum system dynamics [56]–[59]. Recently, ML has also started to attract attention in quantum sensing and metrology, in particular in adaptive protocols for phase [60]–[62] and parameter estimation [63]–[66], or for calibrating quantum sensors [67]. On the other hand, the use of quantum resources allows improvements of the NN accuracy in classification and regression tasks compared to their classical counterparts [68]–[72]. Since the seminal publication by Kak [73], different efforts have been made to reproduce the nonlinear response of the perceptron at the quantum level and to nest it into the design of artificial quantum neural networks (QNNs) [74]–[84].

In this article, we review two examples of QML, demonstrating the uses of NNs in quantum physics. This article is structured as follows. In Sec. II, we review the use of a classical NN applied to quantum sensing [85]. Such a NN is used for parameter predictions of an external field, showing its performance in a wide range of working regimes, even beyond the rotating wave approximation (RWA) [86], [87], with a training/validation/test dataset that can be fully obtained from experimental measurements. The establishment, training and operation of the NN require a minimal knowledge of the physical system, in contrast with Bayesian inference methods [38]. In Sec. III, we review the quantum perceptron, where the sigmoid-like activation is mimicked by the excitation probability of a qubit. Implemented as an efficient and reversible unitary operation, the resulting QNN is encoded by an Ising model [82]. With the aid of shortcuts to adiabaticity (STA) [83], the perceptron is optimized, providing faster and more robust nonlinear responses. The results of these works indicate the mutual reinforcement of machine learning and quantum physics.

II. APPLICATION OF NNS IN QUANTUM SENSING

Target parameters have been usually encoded in harmonic responses of quantum sensors [86], [87]. However, this approach limits the working regime of the sensor, as the deviation from harmonic responses leads to a failure of the parameter estimation. In addition, being in this limited working regime requires enough pre-knowledge of the physical model. In this scenario, a well-trained feed-forward NN (such as the one proposed in Ref. [85]) can predict the parameters of the external field just from the experimental acquisitions, without the necessity of learning the underlying physical model. In particular, in Ref. [85] we focused on a 171Yb+ atomic quantum sensor [34]–[38] and demonstrated that its working regime was extended to complex scenarios.

The atomic sensor works as follows: In the 2S1/2 manifold of a 171Yb+ ion, there are four hyperfine levels |0⟩, |0'⟩, |1⟩ and |−1⟩, with a Zeeman splitting in the latter three states induced by a static magnetic field Bz. Two microwave drivings with the same amplitude Ω and resonant with |0⟩ → |1⟩ and |0⟩ → |−1⟩ are applied to cancel potential noisy fluctuations in Bz. Via the detection of the state transition |0'⟩ → |1⟩ (or |0'⟩ → |−1⟩), a target electromagnetic field with frequency ωtg and amplitude Ωtg, of the form Ωtg cos(ωtg t), can be probed based on the responses that the sensor gives. In the dressed basis {|u⟩, |d⟩, |D⟩, |0'⟩}, where |u⟩ = (|B⟩ + |0'⟩)/√2, |d⟩ = (|B⟩ − |0'⟩)/√2 and |D⟩ = (|1⟩ − |−1⟩)/√2, with |B⟩ = (|1⟩ + |−1⟩)/√2, the state probability PD(t) is considered as the response given by the sensor, with PD(0) = 1. Under the condition that the RWA is valid, Ωtg ≪ Ω ≪ ωtg, the sensor releases the harmonic response. However, when the condition of the RWA is not satisfied, i.e., with large Ωtg, small ωtg, or small detuning ξ in ωtg = ω1 − ω0' + ξ, the sensor offers complex responses that can be analyzed with a feed-forward NN.

We demonstrated our method in the following manner: Although in reality the training/validation/test data are taken from experimental measurements, their acquisition in equivalent conditions can be numerically simulated. At every ti, the result of each measurement is either 0 or 1. With the shot times Nm, n ∈ {1, 2, ..., Nm}, the results become Pi = Σ_{n=1}^{Nm} zn;i /Nm, where zn;i is drawn from a Bernoulli distribution deriving from the theoretical probability PD(ti). The NN takes as input the measured data X = {P1, P2, ..., PNp} and sends the outputs Y = {y1, y2} after processing, aiming at the targets A = {a1, a2} = {Ωtg, ξ}. The elements Pi in the input data set X are collected at every time instant t = ti with shot times Nm, in the time interval [0.5t0, t0], where we choose the period of the ideal harmonic response as t0.
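To make the data-generation step concrete, the following Python sketch mimics the procedure just described: it draws Nm = 100 Bernoulli shots at each of Np = 101 time instants in [0.5t0, t0] and averages them into the probabilities Pi. The function pd_theory is a hypothetical placeholder standing in for the theoretical response PD(ti) of the 171Yb+ sensor model of Ref. [85], which is not reproduced here.

import numpy as np

rng = np.random.default_rng(seed=0)

def pd_theory(t, omega_tg, xi):
    # Placeholder for the theoretical sensor response PD(t); the actual
    # expression follows from the dressed-state model of Ref. [85].
    return 0.5 * (1.0 + np.cos(omega_tg * t + xi * t))

def simulate_acquisition(omega_tg, xi, t0, n_points=101, n_shots=100):
    # Build the input vector X = {P_1, ..., P_Np} from Nm Bernoulli shots
    # at each time instant t_i in the interval [0.5 t0, t0].
    times = np.linspace(0.5 * t0, t0, n_points)
    probs = pd_theory(times, omega_tg, xi)      # theoretical PD(t_i)
    shots = rng.binomial(n_shots, probs)        # z_{n;i} summed over the Nm shots
    return shots / n_shots                      # estimated probabilities P_i

# one simulated example with target parameters (Omega_tg, xi)
X = simulate_acquisition(omega_tg=2 * np.pi * 10e3, xi=2 * np.pi * 0.1e3, t0=1e-3)

Repeating such an acquisition over a grid of (Ωtg, ξ) values, several times per parameter pair, yields the dataset described below.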
The parameters of the NN are listed in Table I, in order to obtain a high accuracy for the approximation F(X) = Y ≈ A. Levenberg-Marquardt backpropagation, one of the gradient descent algorithms, is applied to train the NN. To build a training/validation/test dataset, 241 examples in Ωtg/(2π) ∈ [1, 25] kHz multiplied by 11 examples chosen in ξ/(2π) ∈ [−0.3, 0.3] kHz are extracted. In Ref. [85], we consider Nm = 100 shot times for each expectation value. The finite measurement times give rise to shot noise, bringing statistical fluctuations. To this end, each simulated acquisition is repeated 20 times. Therefore, the total training/validation/test dataset contains 241 × 11 × 20 examples, from which 70%, 15% and 15% randomly constitute the training, validation and test sets.

TABLE I
PARAMETERS FOR TRAINING THE NN TO PREDICT Y = {y1, y2} APPROXIMATING A = {Ωtg, ξ}

Number of neurons in the input layer Np: 101
Number of hidden layers: 5
Number of neurons in each hidden layer: 40, 20, 12, 6, 3
Activation function in each hidden layer: y = tanh(x)
Activation function in the output layer: y = x
Learning rate: 5 × 10−3
Shot times for each expectation value: 100
Repetition times for each simulated acquisition: 20
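A minimal training sketch consistent with the hyperparameters of Table I is given below. The tensors X_train and A_train are hypothetical names for the simulated probability vectors and the (rescaled) targets generated as above; since Levenberg-Marquardt backpropagation is not available in PyTorch, a standard gradient-based optimizer is used here as a stand-in, and the number of epochs is an illustrative choice.

import torch
import torch.nn as nn

# feed-forward NN from Table I: 101 inputs, hidden layers 40-20-12-6-3
# with tanh activations, and a linear output layer predicting Y = {y1, y2}
model = nn.Sequential(
    nn.Linear(101, 40), nn.Tanh(),
    nn.Linear(40, 20), nn.Tanh(),
    nn.Linear(20, 12), nn.Tanh(),
    nn.Linear(12, 6), nn.Tanh(),
    nn.Linear(6, 3), nn.Tanh(),
    nn.Linear(3, 2),               # linear output, y = x
)

loss_fn = nn.MSELoss()
# Adam replaces the Levenberg-Marquardt scheme of the paper; lr from Table I
optimizer = torch.optim.Adam(model.parameters(), lr=5e-3)

def train(X_train, A_train, epochs=500):
    # X_train: (n_examples, 101) measured probabilities,
    # A_train: (n_examples, 2) rescaled targets {Omega_tg, xi}.
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X_train), A_train)
        loss.backward()
        optimizer.step()
    return loss.item()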

With the presence of shot noise, the estimations of Ωtg and ξ behave in different manners. While the accuracy of the estimation of Ωtg keeps high, it is more difficult for the NN to learn ξ at small Ωtg. We divide the whole dataset into the following intervals: Ωtg/(2π) ∈ [1, 25] kHz, [3.4, 25] kHz, [8.2, 25] kHz, [13, 25] kHz, [17.8, 25] kHz and [22.6, 25] kHz, while keeping the same range for ξ/(2π) ∈ [−0.3, 0.3] kHz. For all the above ranges, the correlation coefficient R measures the linear dependence between the outputs from the training/validation/test dataset and the targets, while the linear regression of the outputs as a function of the targets, y = αa + β, is compatible with the perfectly matching linear relation y^r = a^r, where y^r = {y1^r, y2^r} and a^r = {Ωtg^r, ξ^r}, with the superscript r representing the rescaling of each parameter.

TABLE II
NN training results illustrated by R, α and β in different intervals of Ωtg/(2π), with ξ/(2π) ∈ [−0.3, 0.3] kHz. The correlation coefficient R measures the linear dependence between the outputs from the training/validation/test dataset and the targets, while the linear regression of the outputs is expressed as a function of the targets, y = αa + β.

A high accuracy, in which the outputs approach the targets, is obtained when Ωtg is much larger than the value at which the harmonic response is achieved, i.e., Ωtg = 2π × 1 kHz. Near this regime, it is more difficult for the NN to learn different values of ξ, as the fluctuation of the responses caused by shot noise can even exceed the difference between the ideal sensor responses obtained from the theoretical model.

One may choose the dataset with the range Ωtg/(2π) ∈ [8.2, 25] kHz and ξ/(2π) ∈ [−0.3, 0.3] kHz, randomly pick N examples with target parameters outside the dataset and check the outputs from the NN. The average value of the accuracy Fi = 1 − (1/N) Σ_{j=1}^{N} |yi^j − ai^j|/ai^j, (i = 1, 2), is above 0.97. Of course, one can detect responses at longer times or increase the shot times Nm to improve the detection. However, this implies more energy and time cost, with more experimental resources, so a balance between the gains and the cost is always needed. In Ref. [85], we demonstrate that, in a 171Yb+ atomic sensor, adequately trained neural networks enable the estimation of the parameters of the external field in regimes presenting complex responses under the shot noise due to a finite number of measurements.
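The figures of merit used above can be evaluated directly from the network outputs. The sketch below computes, for one output channel and hypothetical arrays y (NN outputs) and a (targets), the correlation coefficient R, the regression coefficients α and β of y = αa + β, and the average accuracy Fi as defined above.

import numpy as np

def figures_of_merit(y, a):
    # y: NN outputs, a: targets (1D arrays for one output channel).
    # Returns the correlation coefficient R, the coefficients (alpha, beta)
    # of the linear regression y = alpha * a + beta, and the accuracy F.
    R = np.corrcoef(y, a)[0, 1]
    alpha, beta = np.polyfit(a, y, deg=1)
    F = 1.0 - np.mean(np.abs(y - a) / np.abs(a))
    return R, alpha, beta, F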
III. IMPROVED QUANTUM PERCEPTRON WITH STA

In Ref. [82], a quantum perceptron labelled by the subscript j, offering a nonlinear response to an input field, is constructed via a unitary transformation, written as

Û(x̂j; f)|0j⟩ = √(1 − f(x̂j)) |0j⟩ + √(f(x̂j)) |1j⟩.  (1)

This transformation is motivated by the resting and active states of a classical neuron that, in the quantum description, are encoded by the ground |0⟩ and excited |1⟩ states of a qubit. The sigmoidal activation potential of the neuron is codified by the excitation probability of the qubit, f(x̂j) = |⟨1j|Û(x̂j; f)|0j⟩|². In a feed-forward network, the perceptron gate is conditioned on the field generated by neurons in earlier layers, x̂j = Σk wjk σ̂z_k + bj, with synaptic weights wjk and biases bj. The unitary is generated by a single-qubit Hamiltonian that combines the input field x̂j along σ̂z_j with a time-dependent transverse control field Ω(t) along σ̂x_j [82], [83], and the activation corresponds to driving the qubit into the final state

|Φ(xj/Ω(t))⟩ = √(1 − f(xj/Ω(t))) |0⟩ + √(f(xj/Ω(t))) |1⟩,  (3)

with a sigmoid-like excitation probability of the form

f(x) = (1/2) [1 + x/√(1 + x²)].  (4)

Originally, the final state (3) is achieved in three steps: (i) set the perceptron to the superposition |+⟩ = (|0⟩ + |1⟩)/√2 with a Hadamard gate; (ii) instantaneously boost the magnetic field to Ω(0) = Ω0 ≫ |x̂j|; (iii) adiabatically ramp down the transverse field to Ω(tf) = Ωf in a time tf, to perform the transformation Â(x̂j)|+⟩ ≃ |Φ(x̂j/Ωf)⟩. However, a faster dynamics is always desirable, as a shorter operation time to construct the perceptron response (4) can enhance the performance and reduce the decoherence in the quantum registers. Inverse engineering (IE) methods [88] can accelerate the dynamics, providing different sigmoid-like excitation probabilities of the qubit and leading to a smoother external driving profile Ω(t) that favors its experimental realization. Moreover, as universality does not depend on the specific shape of the sigmoid-like activation function [7], the resulting improved QNN is still universal [83].

More in detail, we parameterize the dynamical state with undetermined polar θ and azimuthal β auxiliary angles on the Bloch sphere,

|Ψ(t)⟩ = cos(θ/2) e^{iβ/2} |0⟩ + sin(θ/2) e^{−iβ/2} |1⟩.  (5)

Substituting this state |Ψ(t)⟩ into the time-dependent Schrödinger equation driven by the perceptron Hamiltonian, we obtain that the auxiliary angles satisfy the coupled differential equations

Ω(t) = θ̇ / sin β,  (6)
xj = θ̇ cot θ cot β − β̇.  (7)

Setting the wavefunction |Ψ(0)⟩ = |+⟩ and |Ψ(tf)⟩ = |Φ(xj/Ωf)⟩ at the initial and final times imposes boundary conditions on θ(0) and θ(tf). Moreover, Eq. (6) also imposes boundary conditions on θ̇(0) and θ̇(tf) once the initial and final values of the external transverse field, Ω(0) and Ω(tf), are specified. The set of Eqs. (6) and (7) is solved by first interpolating the function θ(t) with a polynomial ansatz so that the boundary conditions on θ and θ̇ are fulfilled at t = 0 and tf; subsequently, deriving β(t) from Eq. (7); and finally, once θ(t) and β(t) are fully determined, deducing Ω(t) from Eq. (6), see [83] for more details. As a result, the initial state |Ψ(0)⟩ is not necessarily an eigenstate of the Hamiltonian, which avoids large Ω(0) values and leads to a smooth and less experimentally demanding profile Ω(t) compared to fast-quasi-adiabatic implementations [82]. Moreover, the IE protocol allows a shorter operation time to construct a quantum perceptron and a stable performance with respect to timing errors and variations of the input potential. In Ref. [83], by using STA we propose a speed-up quantum perceptron which has faster performance compared to fast-quasi-adiabatic protocols and enhanced robustness against imperfections in the controls. The neural potential can also include multi-qubit interactions, deviating from the current network paradigm of additive activations, so that one can avoid internal hidden layers in a QNN without sacrificing approximative power for information processing tasks, see Ref. [84].
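As a consistency check of the construction above, the following sketch integrates the Schrödinger equation for the ramp-down protocol of steps (i)–(iii) and compares the final excitation probability with the sigmoid-like activation of Eq. (4). The sign convention of the single-qubit Hamiltonian used here is an assumption made for illustration only (chosen so that |Φ(xj/Ω)⟩ of Eq. (3) is the instantaneous ground state); it is not taken from Refs. [82], [83], and the linear ramp stands in for the fast-quasi-adiabatic or inverse-engineered drivings discussed there.

import numpy as np
from scipy.integrate import solve_ivp

# Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def activation(x):
    # sigmoid-like excitation probability of Eq. (4)
    return 0.5 * (1.0 + x / np.sqrt(1.0 + x**2))

def final_excitation(x_j, omega_0=20.0, omega_f=1.0, t_f=100.0):
    # Steps (i)-(iii): start in |+>, ramp the transverse field from
    # omega_0 >> |x_j| down to omega_f slowly, return the final P(|1>).
    # Assumed convention: H(t) = (x_j sz - Omega(t) sx) / 2, for which
    # |Phi(x_j/Omega)> of Eq. (3) is the instantaneous ground state.
    omega = lambda t: omega_0 + (omega_f - omega_0) * t / t_f   # linear ramp-down
    def schrodinger(t, psi):
        H = 0.5 * (x_j * sz - omega(t) * sx)
        return -1j * (H @ psi)
    psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)         # |+> after the Hadamard
    sol = solve_ivp(schrodinger, (0.0, t_f), psi0, rtol=1e-8, atol=1e-10)
    return abs(sol.y[1, -1])**2

for x_j in (-2.0, -0.5, 0.5, 2.0):
    # adiabatic result vs f(x_j / Omega_f) with Omega_f = 1
    print(x_j, final_excitation(x_j), activation(x_j))

According to Ref. [83], replacing the slow linear ramp by the inverse-engineered Ω(t) reproduces the same final activation in a much shorter operation time tf.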
IV. CONCLUSION

In this paper, we introduce the application of NNs to quantum sensing and the development of QNNs by using quantum resources. Particularly, we review the neural-network-based atomic quantum sensor of a 171Yb+ ion, working in the regime of complex responses beyond harmonic ones. In general, the protocol is applicable to arbitrary quantum detection scenarios. We also review the construction of a quantum perceptron with inverse engineering strategies, an efficient method of quantum control, providing faster perceptron performance as well as enhanced robustness against imperfections in the controls. We hope that more complex quantum problems with different environmental noise will be addressed by NNs, and that the performance and training of NNs will be improved by quantum resources.

ACKNOWLEDGMENT

We acknowledge financial support from the Spanish Government via PGC2018-095113-B-I00 (MCIU/AEI/FEDER, UE), the Basque Government via IT986-16, as well as from QMiCS (820505) and OpenSuperQ (820363) of the EU Flagship on Quantum Technologies, and the EU FET Open Grant Quromorphic (828826). J. C. acknowledges the Ramón y Cajal program (RYC2018-025197-I) and the EUR2020-112117 project of the Spanish MICINN, as well as support from the UPV/EHU through the grant EHUrOPE. E. T. acknowledges financial support from Project PGC2018-094792-B-I00 (MCIU/AEI/FEDER, UE), CSIC Research Platform PTI-001, CAM/FEDER Project No. S2018/TCS-4342 (QUITEMAD-CM), and by Comunidad de Madrid-EPUC3M14.

REFERENCES

[1] W. S. McCulloch and W. Pitts, "A Logical Calculus of Ideas Immanent in Nervous Activity," Bull. Math. Biol., vol. 5, pp. 115-133, 1943.
[2] C. M. Bishop, "Pattern Recognition and Machine Learning," Springer, 2006.
[3] K.-S. Oh and K. Jung, "GPU implementation of neural networks," Pattern Recognit., vol. 37, pp. 1311-1314, 2004.
[4] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436-444, 2015.
[5] F. Rosenblatt, "The perceptron - A perceiving and recognizing automaton," Tech. Rep. 85-460-1, Cornell Aeronautical Laboratory, 1957.
[6] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons," Proc. Natl. Acad. Sci., vol. 81, pp. 3088-3092, 1984.
[7] G. Cybenko, "Approximation by superpositions of a sigmoidal function," Math. Control Signals Syst., vol. 2, pp. 303-314, 1989.
[8] D. O. Hebb, "The organization of behavior," Wiley, New York, 1949.
[9] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychol. Rev., vol. 65, pp. 386-408, 1958.
[10] K. Hornik, M. Stinchcombe and H. White, "Multilayer feedforward networks are universal approximators," Neural Netw., vol. 2, pp. 359-366, 1989.
[11] D. Steinkraus, P. Y. Simard, and I. Buck, "Using GPUs for machine learning algorithms," Eighth International Conference on Document Analysis and Recognition (ICDAR'05), 2005.
[12] C. Walter, "Kryder's Law," Sci. Am., vol. 293, pp. 32-33, 2005.
[13] A. Kolmogorov, "On Tables of Random Numbers," Sankhya Ser. A, vol. 25, pp. 369, 1963.
[14] M. Castells, "The Information Age: Economy, Society and Culture," Blackwell, Oxford, 1996.
[15] A. Shukla, R. Tiwari and R. Kala, "Real life applications of soft computing," CRC Press, 2010.
[16] I. Goodfellow, Y. Bengio and A. Courville, "Deep Learning," MIT Press, 2016.
[17] J. Schmidhuber, "Deep Learning in Neural Networks: An Overview," Neural Networks, vol. 61, pp. 85-117, 2015.
[18] Y. Bengio, Y. LeCun and G. Hinton, "Deep Learning," Nature, vol. 521, pp. 436-444, 2015.
[19] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[20] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury, "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups," IEEE Signal Process. Mag., vol. 29, pp. 82-97, 2012.
[21] G. E. Hinton, S. Osindero, and Y. Teh, "A fast learning algorithm for deep belief nets," Neural Comput., vol. 18, pp. 1527-1554, 2006.
[22] E. G. Dada, J. S. Bassi, H. Chiroma, S. M. Abdulhamid, A. O. Adetunmbi, and O. E. Ajibuwa, "Machine learning for email spam filtering: review, approaches and open research problems," Heliyon, vol. 5, pp. e01802, 2019.
[23] M. Buehler, K. Iagnemma, and S. Singh, "The DARPA Urban Challenge: Autonomous Vehicles in City Traffic," Springer, vol. 56, 2009.
[24] S. Devi, P. Malarvezhi, R. Dayana and K. Vadivukkarasi, "A Comprehensive Survey on Autonomous Driving Cars: A Perspective View," Wirel. Pers. Commun., vol. 114, pp. 2121-2133, 2020.
[25] P. Kashyap, "Machine Learning for Decision Makers: Cognitive Computing Fundamentals for Better Decision Making," Apress, US, 2018.
[26] D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, "Quantum dynamics of single trapped ions," Rev. Mod. Phys., vol. 75, pp. 281, 2003.
[27] M. H. Devoret and R. J. Schoelkopf, "Superconducting Circuits for Quantum Information: An Outlook," Science, vol. 339, pp. 1169-1174, 2013.
[28] I. Bloch, "Ultracold quantum gases in optical lattices," Nat. Phys., vol. 1, pp. 23-30, 2005.
[29] J. L. O'Brien, A. Furusawa, and J. Vučković, "Photonic quantum technologies," Nat. Photonics, vol. 3, pp. 687-695, 2009.
[30] R. Hanson, L. P. Kouwenhoven, J. R. Petta, S. Tarucha, and L. M. K. Vandersypen, "Spins in few-electron quantum dots," Rev. Mod. Phys., vol. 79, pp. 1217, 2007.
[31] M. Dušek, N. Lütkenhaus, and M. Hendrych, "Quantum cryptography," Prog. Opt., vol. 49, pp. 381-454, 2006.
[32] C. L. Degen, F. Reinhard, and P. Cappellaro, "Quantum sensing," Rev. Mod. Phys., vol. 89, pp. 035002, 2017.
[33] D. Loss and D. P. DiVincenzo, "Quantum computation with quantum dots," Phys. Rev. A, vol. 57, pp. 120, 1998.
[34] I. Baumgart, J. M. Cai, A. Retzker, M. B. Plenio and Ch. Wunderlich, "Ultrasensitive Magnetometer using a Single Atom," Phys. Rev. Lett., vol. 116, pp. 240801, 2016.
[35] N. Timoney, I. Baumgart, M. Johanning, A. F. Varón, M. B. Plenio, A. Retzker and Ch. Wunderlich, "Quantum gates and memory using microwave-dressed states," Nature, vol. 476, pp. 185, 2011.
[36] S. Weidt, J. Randall, S. C. Webster, K. Lake, A. E. Webb, I. Cohen, T. Navickas, B. Lekitsch, A. Retzker and W. K. Hensinger, "Trapped-Ion Quantum Logic with Global Radiation Fields," Phys. Rev. Lett., vol. 117, pp. 220501, 2016.
[37] S. Olmschenk, K. C. Younge, D. L. Moehring, D. N. Matsukevich, P. Maunz, and C. Monroe, "Manipulation and detection of a trapped Yb+ hyperfine qubit," Phys. Rev. A, vol. 76, pp. 052314, 2007.
[38] R. Puebla, Y. Ban, J. F. Haase, M. B. Plenio, M. Paternostro, and J. Casanova, "Versatile Atomic Magnetometry Assisted by Bayesian Inference," arXiv:2003.02151, 2020.
[39] C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, "Gaussian quantum information," Rev. Mod. Phys., vol. 84, pp. 621, 2012.
[40] N. Gisin and R. Thew, "Quantum communication," Nat. Photonics, vol. 1, pp. 165-171, 2007.
[41] F. Arute, K. Arya, R. Babbush, et al., "Quantum supremacy using a programmable superconducting processor," Nature, vol. 574, pp. 505-510, 2019.
[42] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe and S. Lloyd, "Quantum machine learning," Nature, vol. 549, pp. 195-202, 2017.
[43] M. Schuld, I. Sinayskiy, and F. Petruccione, "An introduction to quantum machine learning," Contemp. Phys., vol. 56, pp. 172-185, 2015.
[44] E. Magesan, J. M. Gambetta, A. D. Córcoles and J. M. Chow, "Machine Learning for Discriminating Quantum Measurement Trajectories and Improving Readout," Phys. Rev. Lett., vol. 114, pp. 200501, 2015.
[45] W. Liu, J. Huang, Y. Li, H. Li, C. Fang, Y. Yu and G. Zeng, "Parameter estimation via weak measurement with machine learning," J. Phys. B: At. Mol. Opt. Phys., vol. 52, pp. 045504, 2019.
[46] G. Torlai, G. Mazzola, J. Carrasquilla, M. Troyer, R. Melko, and G. Carleo, "Neural-network quantum state tomography," Nat. Phys., vol. 14, pp. 447, 2018.
[47] A. Rocchetto, S. Aaronson, S. Severini, G. Carvacho, D. Poderini, I. Agresti, M. Bentivegna, and F. Sciarrino, "Experimental learning of quantum states," Sci. Adv., vol. 5, pp. eaau1946, 2019.
[48] S. Yu, F. Albarrán-Arriagada, J. C. Retamal, Y.-T. Wang, W. Liu, Z.-J. Ke, Y. Meng, Z.-P. Li, J.-S. Tang, E. Solano, L. Lamata, C.-F. Li, and G.-C. Guo, "Reconstruction of a photonic qubit state with reinforcement learning," Adv. Quantum Technol., vol. 2, pp. 1800074, 2019.
[49] N. Spagnolo, E. Maiorino, C. Vitelli, M. Bentivegna, A. Crespi, R. Ramponi, P. Mataloni, R. Osellame, and F. Sciarrino, "Learning an unknown transformation via a genetic approach," Sci. Rep., vol. 7, pp. 14316, 2017.
[50] M. Ostaszewski, J. A. Miszczak, L. Banchi, and P. Sadowski, "Approximation of quantum control correction scheme using deep neural networks," Quantum Inf. Process., vol. 18, pp. 126, 2019.
[51] T. Giordani, A. Suprano, E. Polino, F. Acanfora, L. Innocenti, A. Ferraro, M. Paternostro, N. Spagnolo and F. Sciarrino, "Machine learning-based classification of vector vortex beams," Phys. Rev. Lett., vol. 124, pp. 160401, 2020.
[52] C. Harney, S. Pirandola, A. Ferraro and M. Paternostro, "Entanglement classification via neural network quantum states," New J. Phys., vol. 22, pp. 045001, 2020.
[53] J. Gray, L. Banchi, A. Bayat and S. Bose, "Machine-learning-assisted many-body entanglement measurement," Phys. Rev. Lett., vol. 121, pp. 150503, 2018.
[54] J. Wallnöfer, A. A. Melnikov, W. Dür and H. J. Briegel, "Machine learning for long-distance quantum communication," PRX Quantum, vol. 1, pp. 010301, 2020.
[55] J. Wang, S. Paesani, R. Santagati, S. Knauer, A. A. Gentile, N. Wiebe, M. Petruzzella, J. L. O'Brien, J. G. Rarity, A. Laing and M. G. Thompson, "Experimental quantum Hamiltonian learning," Nat. Phys., vol. 13, pp. 551, 2017.
[56] A. Nagy and V. Savona, "Variational quantum Monte Carlo method with a neural-network ansatz for open quantum systems," Phys. Rev. Lett., vol. 122, pp. 250501, 2019.
[57] M. J. Hartmann and G. Carleo, "Neural-network approach to dissipative quantum many-body dynamics," Phys. Rev. Lett., vol. 122, pp. 250502, 2019.
[58] F. Vicentini, A. Biella, N. Regnault and C. Ciuti, "Variational Neural-Network Ansatz for Steady States in Open Quantum Systems," Phys. Rev. Lett., vol. 122, pp. 250503, 2019.
[59] N. Yoshioka and R. Hamazaki, "Constructing neural stationary states for open quantum many-body systems," Phys. Rev. B, vol. 99, pp. 214306, 2019.
[60] A. Lumino, E. Polino, A. S. Rab, G. Milani, N. Spagnolo, N. Wiebe and F. Sciarrino, "Experimental Phase Estimation Enhanced by Machine Learning," Phys. Rev. Appl., vol. 10, pp. 044033, 2018.
[61] T. Xiao, J. Huang, J. Fan and G. Zeng, "Continuous-variable Quantum Phase Estimation based on Machine Learning," Sci. Rep., vol. 9, pp. 12410, 2019.
[62] P. Palittapongarnpim and B. Sanders, "Robustness of quantum-enhanced adaptive phase estimation," Phys. Rev. A, vol. 100, pp. 012106, 2019.
[63] H. Xu, J. Li, L. Liu, Y. Wang, H. Yuan and X. Wang, "Generalizable control for quantum parameter estimation through reinforcement learning," npj Quantum Inf., vol. 5, pp. 82, 2019.
[64] Y. Peng and H. Fan, "Feedback ansatz for adaptive-feedback quantum metrology training with machine learning," Phys. Rev. A, vol. 101, pp. 022107, 2020.
[65] J. Schuff, L. J. Fiderer and D. Braun, "Improving the dynamics of quantum sensors with reinforcement learning," New J. Phys., vol. 22, pp. 035001, 2020.
[66] L. J. Fiderer, J. Schuff and D. Braun, "Neural-Network Heuristics for Adaptive Bayesian Quantum Estimation," arXiv:2003.02183, 2020.
[67] V. Cimini, I. Gianani, N. Spagnolo, F. Leccese, F. Sciarrino and M. Barbieri, "Calibration of Quantum Sensors by Neural Networks," Phys. Rev. Lett., vol. 123, pp. 230502, 2019.
[68] S. Gupta and R. K. P. Zia, "Quantum Neural Networks," J. Comput. Syst. Sci., vol. 63, pp. 355-383, 2001.
[69] G. D. Paparo, V. Dunjko, A. Makmal, M. A. Martin-Delgado, and H. J. Briegel, "Quantum Speedup for Active Learning Agents," Phys. Rev. X, vol. 4, pp. 031002, 2014.
[70] M. Benedetti, J. Realpe-Gómez, R. Biswas, and A. Perdomo-Ortiz, "Quantum-assisted learning of graphical models with arbitrary pairwise connectivity," Phys. Rev. X, vol. 7, pp. 041052, 2017.
[71] G. Sentís, M. Guta, and G. Adesso, "Quantum learning of coherent states," EPJ Quantum Technol., vol. 2, pp. 17, 2015.
[72] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, "Supervised learning with quantum-enhanced feature spaces," Nature, vol. 567, pp. 209-212, 2019.
[73] S. Kak, "On Quantum Neural Computing," Inform. Sci., vol. 83, pp. 143-160, 1995.
[74] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, "Quantum circuit learning," Phys. Rev. A, vol. 98, pp. 032309, 2018.
[75] M. Schuld, A. Bocharov, K. Svore, and N. Wiebe, "Circuit-centric quantum classifiers," Phys. Rev. A, vol. 101, pp. 032308, 2020.
[76] E. Farhi and H. Neven, "Classification with Quantum Neural Networks on Near Term Processors," arXiv:1802.06002, 2018.
[77] Y. Cao, G. G. Guerreschi, and A. Aspuru-Guzik, "Quantum Neuron: an elementary building block for machine learning on quantum computers," arXiv:1711.11240, 2017.
[78] A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster, and J. I. Latorre, "Data re-uploading for a universal quantum classifier," Quantum, vol. 4, pp. 226, 2020.
[79] K. H. Wan, O. Dahlsten, H. Kristjánsson, R. Gardner and M. S. Kim, "Quantum generalisation of feedforward neural networks," npj Quantum Inf., vol. 3, pp. 36, 2017.
[80] E. Farhi and H. Neven, "Classification with Quantum Neural Networks on Near Term Processors," arXiv:1802.06002, 2018.
[81] N. Killoran, T. R. Bromley, J. M. Arrazola, M. Schuld, N. Quesada and S. Lloyd, "Continuous-variable quantum neural networks," Phys. Rev. Research, vol. 1, pp. 033063, 2019.
[82] E. Torrontegui and J. J. García-Ripoll, "Unitary quantum perceptron as efficient universal approximator," Europhys. Lett., vol. 125, pp. 30004, 2019.
[83] Y. Ban, X. Chen, E. Torrontegui, E. Solano and J. Casanova, "Speeding up quantum perceptron via shortcuts to adiabaticity," Sci. Rep., vol. 11, pp. 5783, 2021.
[84] Y. Ban, E. Torrontegui, and J. Casanova, "Quantum neural networks with multi-qubit potentials," arXiv:2105.02756, 2021.
[85] Y. Ban, J. Echanobe, Y. Ding, R. Puebla and J. Casanova, "Neural-network-based parameter estimation for quantum detection," arXiv:2012.07677, 2020.
[86] J. R. Maze, P. L. Stanwix, J. S. Hodges, S. Hong, J. M. Taylor, P. Cappellaro, L. Jiang, M. V. Gurudev Dutt, E. Togan, A. S. Zibrov, A. Yacoby, R. L. Walsworth and M. D. Lukin, "Nanoscale magnetic sensing with an individual electronic spin in diamond," Nature, vol. 455, pp. 644-647, 2008.
[87] J. M. Taylor, P. Cappellaro, L. Childress, L. Jiang, D. Budker, P. R. Hemmer, A. Yacoby, R. Walsworth and M. D. Lukin, "High-sensitivity diamond magnetometer with nanoscale resolution," Nat. Phys., vol. 4, pp. 810-816, 2008.
[88] X. Chen, E. Torrontegui and J. G. Muga, "Lewis-Riesenfeld invariants and transitionless quantum driving," Phys. Rev. A, vol. 83, pp. 062116, 2011.