Numerical Ranges and Applications in Quantum Information
Numerical Ranges and Applications in Quantum Information

by

Ningping Cao

A Thesis presented to
The University of Guelph

In partial fulfilment of requirements
for the degree of
Doctor of Philosophy
in
Mathematics & Statistics

Guelph, Ontario, Canada
© Ningping Cao, August, 2021

ABSTRACT

NUMERICAL RANGES AND APPLICATIONS IN QUANTUM INFORMATION

Ningping Cao
University of Guelph, 2021

Advisors:
Dr. Bei Zeng
Dr. David Kribs

The numerical range (NR) of a matrix is a concept that first arose in the early 1900s as part of efforts to build a rigorous mathematical framework for quantum mechanics and other challenges of the time. Quantum information science (QIS) is a young field that has risen to prominence over the past two decades, combining quantum physics and information science. In this thesis, we connect the NR and its extensions with QIS on several topics and apply this knowledge to solve related QIS problems.

We utilize the insight offered by NRs and apply it to quantum state inference and Hamiltonian reconstruction. We propose a new, general deep learning method for quantum state inference in chapter 3 and employ it in two scenarios: maximum entropy estimation of unknown states and ground state reconstruction. The method exhibits high fidelity, exceptional efficiency, and noise tolerance on both numerical and experimental data. A new optimization algorithm for recovering Hamiltonians is presented in chapter 4. It takes partial measurements on any eigenstate of an unknown Hamiltonian H and provides an estimate of H. Our algorithm almost perfectly recovers generic and local generic Hamiltonians.

Inspired by hybrid classical-quantum error correction (Hybrid QEC), the higher rank (k : p)-matricial range is defined and studied in chapter 5. This concept is a new extension of the NR. We use it to study Hybrid QEC and to investigate the advantage of Hybrid QEC over standard QEC under certain conditions.

Dedication

To my mother, Ailan Peng.

“月明闻杜宇，南北总关心。”
(“Under a bright moon I hear the cuckoo’s call; north and south, our hearts are always with each other.”)

Acknowledgements

My advisors, Bei Zeng and David Kribs, have been great role models and guides for me. Bei is extremely enthusiastic about scientific problems and has a broad network of collaborators; nevertheless, she still finds plenty of time to answer my questions promptly and to guide my research along a productive path. Bei's energy and focus on research set an excellent example for me. She encourages me to try a wide range of problems and teaches me that the only way to learn things concretely is by doing them. She also tries to help people as much as possible, even those she meets by chance. David is always kind and supportive. He provides a variety of opportunities to all of his students, and his patience and care helped me through difficult times. Looking at them always makes me think about what kind of person I want to become. My life will forever benefit from their supervision during my PhD studies.

I also would like to thank Rajesh Pereira and Allan Willms. They taught me valuable courses, and their office doors are always open to students who need help. I wish to thank Zeny Feng for her friendship and guidance; I had a lot of fun listening and talking to her about everything from statistics to cooking. I am also very grateful to Raymond Laflamme for providing me with a wide range of advice, from improving my presentation slides to general principles for becoming a better researcher. I also would like to thank him for tolerating my dry sense of humour.
I thank my collaborators for teaching me many things through our collaborations: Yiu-Tung Poon, Chi-Kwong Li, Lijian Zhang, Shi-Yao Hou, Jun Li, Chenfeng Cao, Jie Xie, Aonan Zhang, Junan Lin... Everyone in my office, Comfort Mintah, Conner Gregor, and Thomas Kielstra, deserves my thanks; my PhD journey would have been much greyer without you. Every graduate student in our department is indebted to Susan McCormick; she is the "superhero" who takes good care of us. Sirui Lu, Shilin Huang, and Zhu-Xi Luo, though we do not live in the same country, often share academic news, thoughts, and viewpoints with me through instant-messaging tools. They continually remind me of what a better graduate student should be. I also wish to thank my boyfriend, Maxwell Fitzsimmons, for his encouragement, support, and company. (Max, thank you also for the tasteless joke in the acknowledgements of your master's thesis; I look forward to a better one in your PhD thesis.) And, of course, I wish to thank my family. None of them work in academia, and none of them can deeply understand my work, but their earnestness, decency, integrity, and care nurture me. I would not have been able to get here without them. I thank the couch in the MacNaughton building and the coffee machines at IQC; the sun and the snow; the days and the nights.

Contents

Abstract  ii
Dedication  iv
Acknowledgements  v
List of Tables  ix
List of Figures  xiii
List of Abbreviations  xiv

1 Introduction  1
  1.1 Numerical Range and its Extensions in Quantum Information  1
  1.2 Thesis Organization and Contribution  3

2 Preliminary  5
  2.1 Quantum Information and Quantum Error Correction  6
    2.1.1 Quantum Information Basics  6
    2.1.2 Introduction to Quantum Error Correction  8
  2.2 Numerical Range and its Extensions  9
    2.2.1 Definitions and Geometric Properties  10
    2.2.2 Interpolations in Quantum Information  14

3 Quantum State Inference via Machine Learning  18
  3.1 A Neural Network Method for Quantum State Inference  20
    3.1.1 Numerical Ranges and Quantum State Inference  20
    3.1.2 The Neural Network Method for Quantum State Inference  25
  3.2 Maximum Entropy Estimation of Quantum States  29
    3.2.1 Methodology  29
    3.2.2 Data Preparation and Network Training  32
    3.2.3 Experimental Verification and the Effect of Error  37
    3.2.4 Comparing with Other Methods  39
  3.3 Ground States from Partial Measurements  42
    3.3.1 Method  43
    3.3.2 Numerical and Experimental Results  48
    3.3.3 In Comparison with Other Methods  54

4 Determining System Hamiltonian from Eigenstate Measurements  57
  4.1 Numerical Ranges and Hamiltonian  58
  4.2 The Optimization Algorithm for Reconstructing Hamiltonian  60
    4.2.1 Method  61
  4.3 Numerical Results  65
    4.3.1 Results with General Operators  66
    4.3.2 Results with Local Operators  68
  4.4 Further Analysis of the Algorithm  69
    4.4.1 In Comparison with the Correlation Matrix Method  73

5 Higher Rank Matricial Ranges and Hybrid Quantum Error Correction  78
  5.1 Operator Algebra Quantum Error Correction and Hybrid QEC  79
  5.2 Joint Rank (k:p)-Matricial Range  82
  5.3 Application to Hybrid Quantum Error Correction  86
  5.4 Exploring Advantages of Hybrid Quantum Error Correction  90

6 Conclusion and Outlook  94

Appendices  96
  .1 Influences of Training Data Distribution  97
  .2 Experimental Details  98
  .3 The Value of β  100
  .4 Calculation of the Gradient of $f(\vec{c})$  101
    .4.1 General Method  101
    .4.2 Derivative of Matrix Exponentials  102
  .5 Software Implementations  104

Bibliography  108

List of Tables

2.1 Convexity and star-shapedness of the NR and its extensions.  13

3.1 Statistics of numerical results.  36

3.2 Average fidelities on the test set using different numbers of training data and epochs. The batch size is 512, and the size of the test dataset is 5,000. As the amount of training data increases, the average fidelity between the estimated states and the true test states goes up, and the neural network reaches a certain level of performance once it has been fed sufficient training data. We also observe that more training data requires more training epochs.  47

3.3 Average fidelities on the test set using different batch sizes. The size of the test dataset is 5,000. The optimal batch size is 1024 for the 4-qubit case and 512 for the 7-qubit case.  48

3.4 The statistical performance of our neural networks for the 4-qubit and 7-qubit cases.  50

4.1 Time consumption per trial and the number of trials. For the general operator cases and the 4-qubit local operator case, the averages are taken over 200 examples. For the 7-qubit local operator case, the result is obtained from 40 examples.  73

1 Relative errors of different operators.  100

2 The types of the variables and values of the intermediate functions.  103

List of Figures

1.1 The thesis structure.  3

2.1 Three examples of numerical ranges.  11

2.2 The joint algebraic numerical range (JANR) of two randomly generated Hermitian matrices $\{F_1, F_2\}$.  15

3.1 The joint algebraic numerical range of two generic 9 × 9 Hermitian matrices $\{F_1, F_2\}$.  21

3.2 Bloch sphere (image credit: Wikipedia).  22

3.3 The schematic diagram of our method. The green arrow depicts the quantum forward problem; the red arrow depicts quantum state inference. We tackle the problem by supervised learning of a deep neural network.  26

3.4 Training and testing data generation. The parameters β and $\vec{a}$ are randomly generated. The Hermitian operator set F associated with the forward model A provides information about the distribution of β. The generated pairs $(\vec{c}, \beta\vec{a})$ are the training data: $\vec{c}$ is the input of the neural network and $\beta\vec{a}$ is the ideal output. The left block is the parametrization function P, and the right block is the forward model A for this problem.  31

3.5 The inference process. The trained network $\Phi_{NN}$ is an approximation of $P^{-1} \circ A^{-1}$ and can produce a parameter estimate $\beta'\vec{a}'$ from an input $\vec{c}$. The right block is the parametrization function P, which maps the estimated parameters $\beta'\vec{a}'$ to the estimate of the MEE $\rho_{est}$.
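Taken together, the captions of Figures 3.4 and 3.5 outline the training pipeline of chapter 3: parameters $\beta\vec{a}$ are mapped by the parametrization function P to a state and by the forward model A to measurement data $\vec{c}$, and the network $\Phi_{NN}$ is trained on $(\vec{c}, \beta\vec{a})$ pairs so that it approximates $P^{-1} \circ A^{-1}$. The snippet below is only a minimal sketch of that data-generation step, not the thesis implementation; the helper names (`parametrize_state`, `forward_measure`, `generate_training_pair`) are invented here, and the exponential (maximum-entropy) form assumed for P is an assumption based on the MEE setting.

```python
import numpy as np
from scipy.linalg import expm

def random_hermitian(d, rng):
    """Random d x d Hermitian matrix, standing in for an observable F_i."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

def parametrize_state(params, F):
    """Stand-in for P: map parameters to a maximum-entropy-style state
    rho = exp(sum_i params_i F_i) / Z (exponential form assumed here)."""
    H = sum(p * Fi for p, Fi in zip(params, F))
    rho = expm(H)
    return rho / np.trace(rho)

def forward_measure(rho, F):
    """Stand-in for A: expectation values c_i = Tr(rho F_i)."""
    return np.real(np.array([np.trace(rho @ Fi) for Fi in F]))

def generate_training_pair(F, rng):
    """One (c, params) pair: c is the network input, params the target output."""
    params = rng.normal(size=len(F))   # plays the role of beta * a
    rho = parametrize_state(params, F)
    return forward_measure(rho, F), params

# Example: two random single-qubit observables.
rng = np.random.default_rng(0)
F = [random_hermitian(2, rng) for _ in range(2)]
c, params = generate_training_pair(F, rng)
```

Many such pairs would then be fed to an ordinary supervised-learning loop, with $\vec{c}$ as the input and the parameters as the regression target, which is how the network comes to approximate the inverse map described in Figure 3.5.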