Revisiting Online Quantum State Learning
The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Feidiao Yang, Jiaqing Jiang, Jialin Zhang, Xiaoming Sun
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
University of Chinese Academy of Sciences, Beijing, China
{yangfeidiao, jiangjiaqing, zhangjialin, sunxiaoming}@ict.ac.cn

Abstract

In this paper, we study the online quantum state learning problem recently proposed by Aaronson et al. (2018). In this problem, the learning algorithm sequentially predicts quantum states based on observed measurements and losses, and the goal is to minimize the regret. In previous work, the existing algorithms may output mixed quantum states. However, in many scenarios, the prediction of a pure quantum state is required. In this paper, we first propose a Follow-the-Perturbed-Leader (FTPL) algorithm that is guaranteed to predict pure quantum states. Theoretical analysis shows that our algorithm can achieve an O(√T) expected regret under some reasonable settings. In the case that pure state prediction is not mandatory, we propose another deterministic learning algorithm which is simpler and more efficient. That algorithm is based on the online gradient descent (OGD) method and can also achieve an O(√T) regret bound. The main technical contribution of this result is an algorithm for projecting an arbitrary Hermitian matrix onto the set of density matrices with respect to the Frobenius norm. We believe this subroutine is of independent interest and can be widely used in many other problems in quantum computing. In addition to the theoretical analysis, we evaluate the algorithms with a series of simulation experiments. The experimental results show that our FTPL method and OGD method outperform the existing RFTL approach proposed by Aaronson et al. (2018) in almost all settings. In our implementation of the RFTL approach, we give a closed-form solution to the algorithm. This provides an efficient, accurate, and completely executable solution to the RFTL method.

Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Introduction

The interdisciplinary research between quantum computing and machine learning has become an attractive area in recent years (Biamonte et al. 2017; Lloyd, Mohseni, and Rebentrost 2014). On one hand, people expect to exploit the great power of quantum computers to improve the efficiency of algorithms in big data processing and machine learning. One representative example of this idea is the HHL algorithm (Harrow, Hassidim, and Lloyd 2009). On the other hand, machine learning algorithms and theories may also help solve interesting problems in quantum computation and quantum information (Carleo and Troyer 2017). In this paper, we apply online learning theory to the problem of learning an unknown quantum state.

Learning an unknown quantum state is a fundamental problem in quantum computation and quantum information. The basic version is the quantum state tomography problem (Vogel and Risken 1989), which aims to fully recover the classical description of an unknown quantum state. Although quantum state tomography gives a complete characterization of the target state, it is quite costly: recent work showed that fully reconstructing an unknown quantum state in the worst case requires exponentially many copies of the state (Haah et al. 2016; O'Donnell and Wright 2016).

However, in some applications it is unnecessary to fully reconstruct an unknown quantum state; some side information is sufficient. Therefore, some learning tasks instead aim to learn the success probabilities of applying a collection of two-outcome measurements to an unknown state, with respect to some metric. Among these, the shadow tomography problem (Aaronson 2018) requires estimating the success probabilities uniformly over all measurements in the collection. Aaronson (2018) showed that the number of copies of the unknown state required for shadow tomography is nearly linear in the number of qubits and poly-logarithmic in the number of measurements.

More generally, one may not need to estimate the success probabilities within an error uniformly over all two-outcome measurements. Following the idea of statistical learning theory, we may assume that there is a distribution over the possible two-outcome measurements, and the goal is to learn a quantum state such that, for a measurement sampled from this distribution, the expected difference between its success probabilities on the learned state and on the target state is within a specific error. This is called the statistical learning model, or the PAC-learning model, of quantum states. Aaronson (2007) proved that the number of samples for PAC-learning quantum states grows only linearly with the number of qubits of the state, which is surprisingly an exponential reduction compared with full quantum state tomography.

However, the assumption that there is a distribution over the two-outcome measurements and that the data are i.i.d. samples from this distribution does not always hold. The environment may change over time, or it may even be adversarial. Complementary to statistical learning theory, online learning theory is good at coping with arbitrary or even adversarial sequential data. Therefore, Aaronson et al. (2018) further proposed the model of online quantum state learning. In this model, the data, such as measurements and losses, are provided sequentially, and the learning algorithm predicts a series of quantum states interactively. Its goal is to minimize the regret, which is the difference in total loss between the learning algorithm and the best fixed quantum state in hindsight.

Although the existing theory provides helpful ideas for this problem, it is still a challenge to design and analyze algorithms for online quantum state learning. For example, a feasible solution in conventional online learning is often a vector in a real Euclidean space, whereas a feasible solution in quantum state learning is a complex matrix with special constraints. Besides, a direct adaptation of the existing techniques cannot exploit the properties of the quantum setting. If we can take advantage of these unique features, or leverage techniques from quantum computing, we may obtain better results or different solutions.

In (Aaronson et al. 2018), the authors proposed three very different approaches and evaluated them with two metrics: the regret in online learning and the number of errors. First, they adapted the Regularized Follow-the-Leader (RFTL) algorithm (Abernethy, Hazan, and Rakhlin 2008; Shalev-Shwartz and Singer 2007) to online quantum state learning. In particular, they employed the negative von Neumann entropy as the regularizer. The RFTL method achieves an O(L√(nT)) regret, where L is the Lipschitz coefficient of the loss functions, n is the number of qubits of a state, and T is the time horizon of the learning process. Its number of errors is O(n/ε²) under some assumptions. The RFTL method has the best theoretical guarantee of the three. Their second method employs a postselection-based learning procedure (Aaronson 2005). Its number of errors is bounded by O((n/ε³) log(n/ε)), and no regret bound is available.

In this paper, we revisit online quantum state learning and make three main contributions.

First, predicting a pure quantum state is of special interest in quantum state learning (Lee, Lee, and Bang 2018; Benedetti et al. 2019), since a pure state has unique value in theory and practice. However, the existing RFTL method cannot make such a guarantee, since its prediction is always a full-rank matrix, which corresponds to a mixed state. It is still a challenge to predict pure states in online quantum state learning. In this paper, we propose a Follow-the-Perturbed-Leader (FTPL) algorithm (Kalai and Vempala 2005) that is guaranteed to predict a pure quantum state in every round. The key idea is to formulate the optimization objective with a stochastic linear regularization as a special semi-definite program (SDP), and we show that this SDP always has a rank-1 solution, which corresponds to a pure quantum state. Our analysis shows that the regret with respect to the expected prediction is bounded by O(√T). We further adapt the FTPL method to a typical and reasonable setting with L1 loss; in this case, our FTPL method achieves an O(√T) expected regret.

Second, if pure states are not mandatory, the online gradient descent (OGD) method (Zinkevich 2003) is a simple and efficient approach for this problem, and it is widely used in practice for online learning. However, the OGD method relies on a projection subroutine when the feasible solutions are constrained. In this paper, we propose an algorithm for projecting an arbitrary Hermitian matrix onto the set of quantum states with respect to the Frobenius norm. The key idea is to reduce the problem to projecting a vector onto the probability simplex. Our algorithm is exact and efficient, and it could be widely used as a subroutine in many other problems in quantum computing. We apply our method to the projected online gradient descent algorithm for quantum state learning and show that this method achieves an O(√T) regret.

Third, the RFTL method of Aaronson et al. (2018) relies on an offline oracle for solving a linear optimization with negative von Neumann entropy regularization, which is not fully discussed in the original work. In this paper, we give a closed-form solution to this offline convex optimization problem. Our result provides an efficient, accurate, and completely executable solution to the RFTL method. We also implement this solution in our experiments.
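For concreteness, the regret in online quantum state learning can be written as follows. The notation here is ours, based on the description above: x_t is the density matrix predicted in round t, E_t is the two-outcome measurement revealed in round t, ℓ_t is the convex loss applied to the success probability Tr(E_t x_t), and the benchmark ranges over all fixed density matrices φ.

```latex
R_T \;=\; \sum_{t=1}^{T} \ell_t\bigl(\operatorname{Tr}(E_t x_t)\bigr)
\;-\; \min_{\varphi \succeq 0,\ \operatorname{Tr}\varphi = 1}\;
\sum_{t=1}^{T} \ell_t\bigl(\operatorname{Tr}(E_t \varphi)\bigr)
```

An O(√T) bound on R_T means the per-round excess loss vanishes as T grows.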
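The reason FTPL can always output a pure state is that a linear objective over the set of density matrices is minimized at an extreme point, which is a rank-1 projector. The following sketch illustrates one round of this idea; the function name, the Gaussian perturbation, and the scale parameter are our illustrative choices, not necessarily the exact construction in the paper.

```python
import numpy as np

def ftpl_pure_state(loss_matrices, rng, sigma=1.0):
    """One FTPL round (illustrative sketch): minimize <X, sum of past
    loss matrices + random Hermitian noise> over density matrices X.
    A linear objective over density matrices is minimized by the rank-1
    projector onto the eigenvector of the smallest eigenvalue, so the
    prediction is always a pure state."""
    d = loss_matrices[0].shape[0]
    # Random Hermitian perturbation (entrywise Gaussian, symmetrized).
    G = rng.normal(scale=sigma, size=(d, d)) + 1j * rng.normal(scale=sigma, size=(d, d))
    N = (G + G.conj().T) / 2
    M = sum(loss_matrices, np.zeros((d, d), dtype=complex)) + N
    w, V = np.linalg.eigh(M)          # eigenvalues in ascending order
    psi = V[:, 0]                     # eigenvector of the smallest eigenvalue
    return np.outer(psi, psi.conj())  # rank-1 density matrix (pure state)
```

The returned matrix is Hermitian, positive semi-definite, trace-1, and idempotent, i.e. a valid pure-state density matrix.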
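The projection subroutine described above (Hermitian matrix onto density matrices, in Frobenius norm) reduces to a vector projection onto the probability simplex: diagonalize, project the eigenvalue vector, and reassemble. A minimal sketch, with our own function names and the classic sort-based simplex projection:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex,
    via the standard sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]                  # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def project_to_density_matrices(H):
    """Frobenius-norm projection of a Hermitian matrix onto the set of
    density matrices: the projection keeps H's eigenvectors and replaces
    its eigenvalues by their simplex projection."""
    w, V = np.linalg.eigh(H)              # H = V diag(w) V^dagger
    w_proj = project_to_simplex(w)        # eigenvalues -> probability simplex
    return (V * w_proj) @ V.conj().T
```

In projected OGD, each round would then update X ← project_to_density_matrices(X − η∇ℓ), since the gradient step generally leaves the feasible set.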
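Regarding the closed form for the RFTL oracle: minimizing a linear term plus the negative von Neumann entropy over density matrices is the matrix analogue of exponentiated gradient, whose standard solution is the Gibbs state exp(−ηH)/Tr exp(−ηH), where H accumulates the loss gradients and η is the learning rate. The paper's exact closed form is not given in this excerpt; the sketch below shows the standard Gibbs-state computation under that assumption, with a spectral shift for numerical stability.

```python
import numpy as np

def gibbs_state(H, eta):
    """Compute exp(-eta * H) / Tr(exp(-eta * H)) for Hermitian H via
    eigendecomposition. Subtracting the smallest eigenvalue before
    exponentiating avoids overflow without changing the result."""
    w, V = np.linalg.eigh(H)
    e = np.exp(-eta * (w - w.min()))      # shift-invariant up to normalization
    e /= e.sum()                          # normalize to trace 1
    return (V * e) @ V.conj().T
```

As η grows, the state concentrates on the bottom eigenspace of H, matching the intuition that a large learning rate commits to the current leader.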