
The Tensor-Train Format and Its Applications

Modeling and Analysis of Chemical Reaction Networks, Catalytic Processes, Fluid Flows, and Brownian Dynamics

Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Natural Sciences (Dr. rer. nat.) to the Department of Mathematics and Computer Science of Freie Universität Berlin

Submitted by Patrick Gelß, Berlin 2017

First reviewer: Prof. Dr. Christof Schütte, Freie Universität Berlin, Fachbereich Mathematik und Informatik, Arnimallee 6, 14195 Berlin
Second reviewer: Prof. Dr. Reinhold Schneider, Technische Universität Berlin, Institut für Mathematik, Straße des 17. Juni 136, 10623 Berlin
Date of defense: 28 June 2017

Copyright © 2017 by Patrick Gelß

Abstract

The simulation and analysis of high-dimensional problems is often infeasible due to the curse of dimensionality. In this thesis, we investigate the potential of tensor decompositions for mitigating this curse when considering systems from several application areas. Using tensor-based solvers, we directly compute numerical solutions of master equations associated with Markov processes on extremely large state spaces. Furthermore, we exploit the tensor-train format to approximate eigenvalues and corresponding eigentensors of linear tensor operators. In order to analyze the dominant dynamics of high-dimensional stochastic processes, we propose several decomposition techniques for highly diverse problems. These include tensor representations for operators based on nearest-neighbor interactions, the construction of pseudoinverses for tensor-based reformulations of dimensionality reduction methods, and the approximation of transfer operators of dynamical systems. The results show that the tensor-train format enables us to compute low-rank approximations for various numerical problems and to significantly reduce memory consumption and computational costs compared to classical approaches.
We demonstrate that tensor decompositions are a powerful tool for solving high-dimensional problems from various application areas.

Acknowledgements

I would like to take this opportunity to express my gratitude to all those who encouraged me to write this thesis. First and foremost, I would like to thank my supervisor Christof Schütte for his continuous support and guidance as well as for offering me the possibility of writing this thesis. I wish to express my sincere appreciation to Stefan Klus for proofreading this thesis and for providing valuable comments and suggestions. I have greatly benefited from his support and advice. My gratitude also goes to Sebastian Matera for drawing my attention to various application areas for tensors since the beginning of my PhD. Special thanks go to Thomas von Larcher for providing me with CFD data and to Sebastian Peitz for his visualizations. Additionally, I want to thank all the people from the Biocomputing Group at the FU Berlin, the members of the CRC 1114, and the research group around Reinhold Schneider at TU Berlin for the interesting discussions and valuable input. Finally, I would like to thank my family and friends for their support during the preparation of this work. In particular, I am deeply grateful to Nadja. Without her love and help during the last years, all of this would not have been possible.

This research has been funded by the Berlin Mathematical School and the Einstein Center for Mathematics.

To my son, Finn.

Contents

1. Introduction

Part I: Foundations of Tensor Approximation

2. Tensors in Full Format
   2.1. Definition and Notation
   2.2. Tensor Calculus
        2.2.1. Addition and Scalar Multiplication
        2.2.2. Index Contraction
        2.2.3. Tensor Multiplication
        2.2.4. Tensor Product
   2.3. Graphical Representation
   2.4. Matricization and Vectorization
   2.5. Norms
   2.6. Orthonormality

3. Tensor Decomposition
   3.1. Rank-One Tensors
   3.2. Canonical Format
   3.3. Tucker and Hierarchical Tucker Format
   3.4. Tensor-Train Format
        3.4.1. Core Notation
        3.4.2. Addition and Multiplication
        3.4.3. Orthonormalization
        3.4.4. Calculating Norms
        3.4.5. Conversion
   3.5. Modified Tensor-Train Formats
        3.5.1. Quantized Tensor-Train Format
        3.5.2. Block Tensor-Train Format
        3.5.3. Cyclic Tensor-Train Format

4. Optimization Problems in the Tensor-Train Format
   4.1. Overview
   4.2. (M)ALS for Systems of Linear Equations
        4.2.1. Problem Statement
        4.2.2. Retraction Operators
        4.2.3. Computational Scheme
        4.2.4. Algorithmic Aspects
   4.3. (M)ALS for Eigenvalue Problems
        4.3.1. Problem Statement
        4.3.2. Computational Scheme
   4.4. Properties of (M)ALS
   4.5. Methods for Solving Initial Value Problems

Part II: Progress in Tensor-Train Decompositions

5. Tensor Representation of Markovian Master Equations
   5.1. Markov Jump Processes
   5.2. Tensor-Based Representation of Infinitesimal Generators

6. Nearest-Neighbor Interaction Systems in the Tensor-Train Format
   6.1. Nearest-Neighbor Interaction Systems
   6.2. General SLIM Decomposition
   6.3. SLIM Decomposition for Markov Generators

7. Dynamic Mode Decomposition in the Tensor-Train Format
   7.1. Moore–Penrose Inverse
   7.2. Computation of the Pseudoinverse
   7.3. Tensor-Based Dynamic Mode Decomposition

8. Tensor-Train Approximation of the Perron–Frobenius Operator
   8.1. Perron–Frobenius Operator
   8.2. Ulam's Method

Part III: Applications of the Tensor-Train Format

9. Chemical Reaction Networks
   9.1. Elementary Reactions
   9.2. Chemical Master Equation
   9.3. Numerical Experiments
        9.3.1. Signaling Cascade
        9.3.2. Two-Step Destruction

10. Heterogeneous Catalysis
    10.1. Heterogeneous Catalytic Processes
    10.2. Reduced Model for the CO Oxidation at RuO2
    10.3. Numerical Experiments
         10.3.1. Scaling with System Size
         10.3.2. Varying the CO Pressure
         10.3.3. Increasing the Oxygen Desorption Rate

11. Fluid Dynamics
    11.1. Computational Fluid Dynamics
    11.2. Numerical Examples
         11.2.1. Rotating Annulus
         11.2.2. Flow Around a Blunt Body

12. Brownian Dynamics
    12.1. Langevin Equation
    12.2. Numerical Experiments
         12.2.1. Two-Dimensional Triple-Well Potential
         12.2.2. Three-Dimensional Quadruple-Well Potential

13. Summary and Conclusion

14. References

A. Appendix
   A.1. Proofs
        A.1.1. Inverse Function for Little-Endian Convention
        A.1.2. Equivalence of the Master Equation Formulations
        A.1.3. Equivalence of SLIM Decomposition and Canonical Representation
        A.1.4. Equivalence of SLIM Decomposition and Canonical Representation for Markovian Master Equations
        A.1.5. Functional Correctness of Pseudoinverse Algorithm
   A.2. Algorithms
        A.2.1. Orthonormalization of Tensor Trains
        A.2.2. ALS for Systems of Linear Equations
        A.2.3. MALS for Systems of Linear Equations
        A.2.4. ALS for Eigenvalue Problems
        A.2.5. MALS for Eigenvalue Problems
        A.2.6. Compression of Two-Dimensional TT Operators
        A.2.7. Construction of SLIM Decompositions for Markovian Master Equations
   A.3. Deutsche Zusammenfassung (German Summary)
   A.4. Eidesstattliche Erklärung (Declaration)

List of Figures

2.1. Low-dimensional tensors represented by arrays
2.2. Graphical representation of tensors
2.3. Graphical representation of tensor contractions
2.4. Orthonormal tensors
2.5. QR decompositions of a tensor
2.6. Singular value decomposition of a tensor
3.1. Graphical representation of the Tucker format and the HT format
3.2. Graphical representation of tensor trains
3.3. The TT format as a special case of the HT format
3.4. Multiplication of two tensor-train operators
3.5. Orthonormal tensor trains
3.6. Left-orthonormalization of a tensor train
3.7. Calculating the 2-norm of a tensor train
3.8. Conversion from full format into TT format
3.9. Conversion from TT into QTT format