The Tensor-Train Format and Its Applications

Total pages: 16

File type: PDF, size: 1020 KB

The Tensor-Train Format and Its Applications

Modeling and Analysis of Chemical Reaction Networks, Catalytic Processes, Fluid Flows, and Brownian Dynamics

Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Natural Sciences (Dr. rer. nat.) to the Department of Mathematics and Computer Science of the Freie Universität Berlin, by Patrick Gelß, Berlin 2017. First reviewer: Prof. Dr. Christof Schütte, Freie Universität Berlin, Fachbereich Mathematik und Informatik, Arnimallee 6, 14195 Berlin. Second reviewer: Prof. Dr. Reinhold Schneider, Technische Universität Berlin, Institut für Mathematik, Straße des 17. Juni 136, 10623 Berlin. Date of defense: 28 June 2017. Copyright © 2017 by Patrick Gelß.

Abstract

The simulation and analysis of high-dimensional problems is often infeasible due to the curse of dimensionality. In this thesis, we investigate the potential of tensor decompositions for mitigating this curse when considering systems from several application areas. Using tensor-based solvers, we directly compute numerical solutions of master equations associated with Markov processes on extremely large state spaces. Furthermore, we exploit the tensor-train format to approximate eigenvalues and corresponding eigentensors of linear tensor operators. In order to analyze the dominant dynamics of high-dimensional stochastic processes, we propose several decomposition techniques for highly diverse problems. These include tensor representations for operators based on nearest-neighbor interactions, construction of pseudoinverses for tensor-based reformulations of dimensionality reduction methods, and the approximation of transfer operators of dynamical systems. The results show that the tensor-train format enables us to compute low-rank approximations for various numerical problems and to significantly reduce memory consumption and computational costs compared to classical approaches.
We demonstrate that tensor decompositions are a powerful tool for solving high-dimensional problems from various application areas.

Acknowledgements

I would like to take this opportunity to express my gratitude to all those who encouraged me to write this thesis. First and foremost, I would like to thank my supervisor Christof Schütte for his continuous support and guidance as well as for offering me the possibility of writing this thesis. I wish to express my sincere appreciation to Stefan Klus for proofreading this thesis and for providing valuable comments and suggestions. I have greatly benefited from his support and advice. My gratitude also goes to Sebastian Matera for drawing my attention to various application areas for tensors since the beginning of my PhD. Special thanks should be given to Thomas von Larcher for providing me with CFD data and to Sebastian Peitz for his visualizations. Additionally, I want to thank all the people from the Biocomputing Group at the FU Berlin, the members of the CRC 1114, and the research group around Reinhold Schneider at TU Berlin for the interesting discussions and valuable inputs. Finally, I would like to thank my family and friends for their support during the preparation of this work. In particular, I am deeply grateful to Nadja. Without her love and help during the last years this all would not have been possible. This research has been funded by the Berlin Mathematical School and the Einstein Center for Mathematics.

To my son, Finn.

Contents

1. Introduction  1

Part I: Foundations of Tensor Approximation  5

2. Tensors in Full Format  7
   2.1. Definition and Notation  7
   2.2. Tensor Calculus  9
        2.2.1. Addition and Scalar Multiplication  9
        2.2.2. Index Contraction  10
        2.2.3. Tensor Multiplication  10
        2.2.4. Tensor Product  12
   2.3. Graphical Representation  13
   2.4. Matricization and Vectorization  15
   2.5. Norms  17
   2.6. Orthonormality  19
3. Tensor Decomposition  23
   3.1. Rank-One Tensors  23
   3.2. Canonical Format  24
   3.3. Tucker and Hierarchical Tucker Format  27
   3.4. Tensor-Train Format  29
        3.4.1. Core Notation  31
        3.4.2. Addition and Multiplication  32
        3.4.3. Orthonormalization  35
        3.4.4. Calculating Norms  36
        3.4.5. Conversion  38
   3.5. Modified Tensor-Train Formats  41
        3.5.1. Quantized Tensor-Train Format  42
        3.5.2. Block Tensor-Train Format  43
        3.5.3. Cyclic Tensor-Train Format  44
4. Optimization Problems in the Tensor-Train Format  47
   4.1. Overview  47
   4.2. (M)ALS for Systems of Linear Equations  48
        4.2.1. Problem Statement  48
        4.2.2. Retraction Operators  49
        4.2.3. Computational Scheme  52
        4.2.4. Algorithmic Aspects  54
   4.3. (M)ALS for Eigenvalue Problems  57
        4.3.1. Problem Statement  57
        4.3.2. Computational Scheme  58
   4.4. Properties of (M)ALS  59
   4.5. Methods for Solving Initial Value Problems  61

Part II: Progress in Tensor-Train Decompositions  63

5. Tensor Representation of Markovian Master Equations  65
   5.1. Markov Jump Processes  65
   5.2. Tensor-Based Representation of Infinitesimal Generators  66
6. Nearest-Neighbor Interaction Systems in the Tensor-Train Format  69
   6.1. Nearest-Neighbor Interaction Systems  69
   6.2. General SLIM Decomposition  71
   6.3. SLIM Decomposition for Markov Generators  74
7. Dynamic Mode Decomposition in the Tensor-Train Format  79
   7.1. Moore–Penrose Inverse  79
   7.2. Computation of the Pseudoinverse  80
   7.3. Tensor-Based Dynamic Mode Decomposition  82
8. Tensor-Train Approximation of the Perron–Frobenius Operator  87
   8.1. Perron–Frobenius Operator  87
   8.2. Ulam's Method  88

Part III: Applications of the Tensor-Train Format  93

9. Chemical Reaction Networks  95
   9.1. Elementary Reactions  95
   9.2. Chemical Master Equation  96
   9.3. Numerical Experiments  97
        9.3.1. Signaling Cascade  97
        9.3.2. Two-Step Destruction  103
10. Heterogeneous Catalysis  109
    10.1. Heterogeneous Catalytic Processes  109
    10.2. Reduced Model for the CO Oxidation at RuO2  110
    10.3. Numerical Experiments  113
         10.3.1. Scaling with System Size  113
         10.3.2. Varying the CO Pressure  114
         10.3.3. Increasing the Oxygen Desorption Rate  117
11. Fluid Dynamics  119
    11.1. Computational Fluid Dynamics  119
    11.2. Numerical Examples  120
         11.2.1. Rotating Annulus  120
         11.2.2. Flow Around a Blunt Body  123
12. Brownian Dynamics  125
    12.1. Langevin Equation  125
    12.2. Numerical Experiments  126
         12.2.1. Two-Dimensional Triple-Well Potential  126
         12.2.2. Three-Dimensional Quadruple-Well Potential  128
13. Summary and Conclusion  131
14. References  133
A. Appendix  145
   A.1. Proofs  145
        A.1.1. Inverse Function for Little-Endian Convention  145
        A.1.2. Equivalence of the Master Equation Formulations  147
        A.1.3. Equivalence of SLIM Decomposition and Canonical Representation  148
        A.1.4. Equivalence of SLIM Decomposition and Canonical Representation for Markovian Master Equations  149
        A.1.5. Functional Correctness of Pseudoinverse Algorithm  150
   A.2. Algorithms  152
        A.2.1. Orthonormalization of Tensor Trains  152
        A.2.2. ALS for Systems of Linear Equations  153
        A.2.3. MALS for Systems of Linear Equations  154
        A.2.4. ALS for Eigenvalue Problems  155
        A.2.5. MALS for Eigenvalue Problems  156
        A.2.6. Compression of Two-Dimensional TT Operators  157
        A.2.7. Construction of SLIM Decompositions for Markovian Master Equations  158
   A.3. Deutsche Zusammenfassung (German Summary)  159
   A.4. Eidesstattliche Erklärung (Declaration)  160

List of Figures

2.1. Low-dimensional tensors represented by arrays  7
2.2. Graphical representation of tensors  14
2.3. Graphical representation of tensor contractions  14
2.4. Orthonormal tensors  20
2.5. QR decompositions of a tensor  21
2.6. Singular value decomposition of a tensor  21
3.1. Graphical representation of the Tucker format and the HT format  28
3.2. Graphical representation of tensor trains  30
3.3. The TT format as a special case of the HT format  31
3.4. Multiplication of two tensor-train operators  34
3.5. Orthonormal tensor trains  35
3.6. Left-orthonormalization of a tensor train  36
3.7. Calculating the 2-norm of a tensor train  37
3.8. Conversion from full format into TT format  39
3.9. Conversion from TT into QTT format
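The conversion from full format into TT format listed above (Section 3.4.5, Figure 3.8) can be illustrated with a minimal TT-SVD sketch: a tensor is decomposed into a chain of third-order cores by repeated reshaping and truncated SVD. This is a hypothetical NumPy illustration of the general technique, not code from the thesis; the names `tt_svd` and `tt_to_full` are mine.

```python
import numpy as np

def tt_svd(tensor, eps=1e-12):
    """Decompose a full tensor into tensor-train cores via sequential SVDs."""
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank = 1
    unfolding = tensor.reshape(rank * dims[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(unfolding, full_matrices=False)
        new_rank = max(1, int(np.sum(s > eps)))  # drop negligible singular values
        cores.append(u[:, :new_rank].reshape(rank, dims[k], new_rank))
        unfolding = (np.diag(s[:new_rank]) @ vt[:new_rank]).reshape(
            new_rank * dims[k + 1], -1)
        rank = new_rank
    cores.append(unfolding.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor."""
    result = cores[0]
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))
```

For tensors with rapidly decaying singular values in each unfolding, the TT ranks stay small and the cores need far less memory than the full array, which is the compression effect the abstract describes.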
Recommended publications
  • Tensor-Tensor Products with Invertible Linear Transforms
    Tensor-Tensor Products with Invertible Linear Transforms. Eric Kernfeld, Misha Kilmer (Department of Mathematics, Tufts University, Medford, MA), and Shuchin Aeron (Department of Electrical and Computer Engineering, Tufts University, Medford, MA). Abstract: Research in tensor representation and analysis has been rising in popularity in direct response to a) the increased ability of data collection systems to store huge volumes of multidimensional data and b) the recognition of potential modeling accuracy that can be provided by leaving the data and/or the operator in its natural, multidimensional form. In comparison to matrix theory, the study of tensors is still in its early stages. In recent work [1], the authors introduced the notion of the t-product, a generalization of matrix multiplication for tensors of order three, which can be extended to multiply tensors of arbitrary order [2]. The multiplication is based on a convolution-like operation, which can be implemented efficiently using the Fast Fourier Transform (FFT). The corresponding linear algebraic framework from the original work was further developed in [3], and it allows one to elegantly generalize all classical algorithms from linear algebra. In the first half of this paper, we develop a new tensor-tensor product for third-order tensors that can be implemented by using a discrete cosine transform, showing that a similar linear algebraic framework applies to this new tensor operation as well. The second half of the paper is devoted to extending this development in such a way that tensor-tensor products can be defined in a so-called transform domain for any invertible linear transform.
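The FFT-based t-product mentioned in the abstract can be sketched in a few lines of NumPy: transform along the third mode, multiply the frontal slices, transform back. This is a minimal illustration of the standard construction from [1], under my own naming; it is not code from the paper.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x m x n3):
    circular convolution along the third mode, computed facewise in the
    Fourier domain."""
    n3 = A.shape[2]
    assert B.shape[2] == n3 and A.shape[1] == B.shape[0]
    Ahat = np.fft.fft(A, axis=2)
    Bhat = np.fft.fft(B, axis=2)
    Chat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):  # multiply the frontal slices in the transform domain
        Chat[:, :, k] = Ahat[:, :, k] @ Bhat[:, :, k]
    return np.real(np.fft.ifft(Chat, axis=2))
```

The paper's DCT-based variant follows the same facewise pattern with the FFT replaced by an invertible linear transform along the third mode.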
  • The Pattern Calculus for Tensor Operators in Quantum Groups
    The pattern calculus for tensor operators in quantum groups. Mark Gould (Department of Applied Mathematics, University of Queensland, St. Lucia, Queensland, Australia 4072) and L. C. Biedenharn (Department of Physics, Duke University, Durham, North Carolina 27706). Journal of Mathematical Physics 33, 3613 (1992); doi: 10.1063/1.529909. (Received 28 February 1992; accepted for publication 1 June 1992.) An explicit algebraic evaluation is given for all q-tensor operators (with unit norm) belonging to the quantum group U_q(u(n)) and having extremal operator shift patterns acting on arbitrary U_q(u(n)) irreps.
  • Bra-Ket Representation of the Inertia Tensor (arXiv:2012.13347 [physics.class-ph], 15 Dec 2020)
    KPOP E-2020-04. Bra-Ket Representation of the Inertia Tensor. U-Rae Kim, Dohyun Kim, and Jungil Lee (KPOPE Collaboration, Department of Physics, Korea University, Seoul 02841, Korea). (Dated: June 18, 2021.) Abstract: We employ Dirac's bra-ket notation to define the inertia tensor operator that is independent of the choice of bases or coordinate system. The principal axes and the corresponding principal values for the elliptic plate are determined only based on the geometry. By making use of a general symmetric tensor operator, we develop a method of diagonalization that is convenient and intuitive in determining the eigenvector. We demonstrate that the bra-ket approach greatly simplifies the computation of the inertia tensor with an example of an N-dimensional ellipsoid. The exploitation of the bra-ket notation to compute the inertia tensor in classical mechanics should provide undergraduate students with a strong background necessary to deal with abstract quantum mechanical problems. PACS numbers: 01.40.Fk, 01.55.+b, 45.20.dc, 45.40.Bb. Keywords: classical mechanics, inertia tensor, bra-ket notation, diagonalization, hyperellipsoid. I. INTRODUCTION: The inertia tensor is one of the essential ingredients in classical mechanics with which one can investigate the rotational properties of rigid-body motion [1]. The symmetric nature of the rank-2 Cartesian tensor guarantees that it is described by three fundamental parameters called the principal moments of inertia I_i, each of which is the moment of inertia along a principal axis.
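The principal moments and axes described above come from diagonalizing the symmetric inertia tensor. A minimal coordinate-based sketch (standard classical mechanics, not the paper's bra-ket machinery; function and variable names are mine):

```python
import numpy as np

def inertia_tensor(masses, positions):
    """Inertia tensor I_ij = sum_k m_k (|r_k|^2 delta_ij - x_i x_j)."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, np.asarray(positions, dtype=float)):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# Four point masses forming a flat "plate" in the xy-plane.
masses = [1.0, 1.0, 1.0, 1.0]
positions = [(2, 0, 0), (-2, 0, 0), (0, 1, 0), (0, -1, 0)]
I = inertia_tensor(masses, positions)

# Principal moments/axes = eigenvalues/eigenvectors of the symmetric tensor.
principal_moments, principal_axes = np.linalg.eigh(I)
```

For this planar configuration the perpendicular-axis theorem holds: the moment about the z-axis equals the sum of the two in-plane principal moments.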
  • Multivector Differentiation and Linear Algebra (17th Santaló Summer School 2016)
    Multivector differentiation and Linear Algebra. 17th Santaló Summer School 2016, Santander. Joan Lasenby (Signal Processing Group, Engineering Department, Cambridge, UK, and Trinity College Cambridge). [email protected], www-sigproc.eng.cam.ac.uk. 23 August 2016. Overview: the multivector derivative; examples of differentiation with respect to multivectors; linear algebra: matrices and tensors as linear functions mapping between elements of the algebra; functional differentiation (very briefly); summary.

    The Multivector Derivative. Recall the definition of the directional derivative in the a direction:

        a · ∇F(x) = lim_{ε→0} [F(x + εa) − F(x)] / ε

    We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction', where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X). Let us use ∗ to denote taking the scalar part, i.e. P ∗ Q ≡ ⟨PQ⟩. Then, provided A has the same grades as X, it makes sense to define:

        A ∗ ∂_X F(X) = lim_{t→0} [F(X + tA) − F(X)] / t
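The limit definition of the directional derivative can be checked numerically. A minimal sketch (my own helper, not from the slides) approximates a·∇F(x) by a central difference and compares it with the known closed form for F(x) = |x|²:

```python
import numpy as np

def directional_derivative(F, x, a, eps=1e-6):
    """Numerical a·∇F(x) via the limit definition (central difference)."""
    return (F(x + eps * a) - F(x - eps * a)) / (2 * eps)

F = lambda x: np.dot(x, x)          # F(x) = |x|^2, so a·∇F(x) = 2 a·x
x = np.array([1.0, 2.0, 3.0])
a = np.array([0.5, -1.0, 2.0])
```

For a quadratic F the central difference is exact up to rounding, so the numerical value matches 2 a·x essentially to machine precision.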
  • Appendix A: Spinors in Four Dimensions
    Appendix A: Spinors in Four Dimensions. In this appendix we collect the conventions used for spinors in both Minkowski and Euclidean spaces. In Minkowski space the flat metric has the form η_µν = diag(−1, 1, 1, 1), and the coordinates are labelled (x^0, x^1, x^2, x^3). The analytic continuation into Euclidean space is made through the replacement x^0 = ix^4 (and in momentum space, p^0 = −ip^4), the coordinates in this case being labelled (x^1, x^2, x^3, x^4). The Lorentz group in four dimensions, SO(3, 1), is not simply connected and therefore, strictly speaking, has no spinorial representations. To deal with these types of representations one must consider its double covering, the spin group Spin(3, 1), which is isomorphic to SL(2, C). The group SL(2, C) possesses a natural complex two-dimensional representation. Let us denote this representation by S and let us consider an element ψ ∈ S with components ψ_α = (ψ_1, ψ_2) relative to some basis. The action of an element M ∈ SL(2, C) is

        (Mψ)_α = M_α^β ψ_β.    (A.1)

    This is not the only action of SL(2, C) which one could choose. Instead of M we could have used its complex conjugate M̄, its inverse transpose (M^T)^{−1}, or its inverse adjoint (M^†)^{−1}. All of them satisfy the same group multiplication law. These choices would correspond to the complex conjugate representation S̄, the dual representation S*, and the dual complex conjugate representation S̄*. We will use the following conventions for elements of these representations: ψ_α ∈ S, ψ̄_α̇ ∈ S̄, ψ^α ∈ S*, ψ̄^α̇ ∈ S̄*.
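The claim that M̄, (M^T)^{−1}, and (M^†)^{−1} "satisfy the same group multiplication law" (i.e., each assignment is a homomorphism) can be checked numerically on random SL(2, C) elements. A small sketch with my own helper names:

```python
import numpy as np

def random_sl2c(rng):
    """Random 2x2 complex matrix rescaled to determinant 1 (an SL(2,C) element)."""
    M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return M / np.sqrt(np.linalg.det(M))

rng = np.random.default_rng(3)
M, N = random_sl2c(rng), random_sl2c(rng)

# Each variant action respects group multiplication: rho(MN) = rho(M) rho(N).
reps = {
    "conjugate": lambda X: X.conj(),
    "inverse transpose": lambda X: np.linalg.inv(X.T),
    "inverse adjoint": lambda X: np.linalg.inv(X.conj().T),
}
for name, rho in reps.items():
    assert np.allclose(rho(M @ N), rho(M) @ rho(N)), name
```

The transpose alone would reverse the order of multiplication, which is why the dual representation uses the inverse transpose.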
  • Handout 9 More Matrix Properties; the Transpose
    Handout 9: More matrix properties; the transpose.

    Square matrix properties. These properties only apply to a square matrix, i.e. n × n.
    • The leading diagonal is the diagonal line consisting of the entries a11, a22, a33, ..., ann.
    • A diagonal matrix has zeros everywhere except the leading diagonal.
    • The identity matrix I has zeros off the leading diagonal, and 1 for each entry on the diagonal. It is a special case of a diagonal matrix, and AI = IA = A for any n × n matrix A.
    • An upper triangular matrix has all its non-zero entries on or above the leading diagonal.
    • A lower triangular matrix has all its non-zero entries on or below the leading diagonal.
    • A symmetric matrix has the same entries below and above the diagonal: aij = aji for any values of i and j between 1 and n.
    • An antisymmetric or skew-symmetric matrix has the opposite entries below and above the diagonal: aij = −aji for any values of i and j between 1 and n. This automatically means the diagonal entries must all be zero.

    Transpose. To transpose a matrix, we reflect it across the line given by the leading diagonal a11, a22, etc. In general the result is a different shape to the original matrix:

        A = [a11 a12 a13; a21 a22 a23],   A^T = [a11 a21; a12 a22; a13 a23],   [A^T]_ij = A_ji.

    • If A is m × n then A^T is n × m.
    • The transpose of a symmetric matrix is itself: A^T = A (recalling that only square matrices can be symmetric).
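The transpose rules above translate directly into NumPy; a short sketch (example matrices are mine) checking [A^T]_ij = A_ji and the skew-symmetric property:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3, so A.T is 3 x 2

# The transpose reflects across the leading diagonal: [A^T]_ij = A_ji.
assert A.T.shape == (3, 2)
for i in range(3):
    for j in range(2):
        assert A.T[i, j] == A[j, i]

# A skew-symmetric matrix satisfies S^T = -S and has a zero diagonal.
S = np.array([[0, 2, -1],
              [-2, 0, 3],
              [1, -3, 0]])
assert np.array_equal(S.T, -S)
assert np.all(np.diag(S) == 0)
```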
  • Geometric-Algebra Adaptive Filters
    Geometric-Algebra Adaptive Filters. Wilder B. Lopes (Member, IEEE) and Cassio G. Lopes (Senior Member, IEEE). Abstract: This paper presents a new class of adaptive filters, namely Geometric-Algebra Adaptive Filters (GAAFs). They are generated by formulating the underlying minimization problem (a deterministic cost function) from the perspective of Geometric Algebra (GA), a comprehensive mathematical language well-suited for the description of geometric transformations. Also, differently from standard adaptive-filtering theory, Geometric Calculus (the extension of GA to differential calculus) allows for applying the same derivation techniques regardless of the type (subalgebra) of the data, i.e., real, complex numbers, quaternions, etc. Relying on those characteristics (among others), a deterministic quadratic cost function is posed, from which the GAAFs are devised, providing a generalization of regular adaptive filters to subalgebras of GA. From the obtained update rule, it is shown how to recover the following least-mean squares (LMS) adaptive filter variants: real-entries LMS, complex LMS, and quaternion LMS. Mean-square analysis and simulations in a system identification scenario are provided, showing very good agreement for different levels of measurement noise. Index Terms: adaptive filtering, geometric algebra, quaternions. [Fig. 1: A polyhedron (3-dimensional polytope) can be completely described by the geometric multiplication of its edges (oriented lines, vectors), which generate the faces and hypersurfaces (in the case of a general n-dimensional polytope).] GA allows one to perform calculus with hypercomplex quantities, i.e., elements that generalize complex numbers for higher dimensions [2]–[10]. GA-based AFs were first introduced in [11], [12], where they were successfully employed to estimate the geometric transformation (rotation and translation) that aligns a pair of
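The real-entries LMS variant that the GAAFs recover as a special case can be sketched in a system-identification scenario like the one the abstract mentions. This is my own minimal implementation of classic LMS, not the GAAF algorithm from the paper:

```python
import numpy as np

def lms_identify(x, d, num_taps, mu=0.05):
    """Real-entries LMS: adapt w so that w . [x[n], x[n-1], ...] tracks d[n]."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        window = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ window                      # a priori estimation error
        w += mu * e * window                       # stochastic-gradient update
    return w

rng = np.random.default_rng(2)
true_w = np.array([0.8, -0.4, 0.2])                # unknown FIR system to identify
x = rng.standard_normal(5000)
d = np.convolve(x, true_w)[:len(x)]                # noiseless system output
w = lms_identify(x, d, num_taps=3)
```

In the noiseless case the weight vector converges to the true system; with measurement noise added to d, the steady-state misadjustment grows with the step size mu, which is what the paper's mean-square analysis quantifies for the general GA setting.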
  • Matrices and Tensors
    APPENDIX: MATRICES AND TENSORS. A.1. INTRODUCTION AND RATIONALE. The purpose of this appendix is to present the notation and most of the mathematical techniques that are used in the body of the text. The audience is assumed to have been through several years of college-level mathematics, which included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors, and thus represent these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects in the text of the book, but a minor one. This is described in §A.11, along with the rationale for this approach. A.2. DEFINITION OF SQUARE, COLUMN, AND ROW MATRICES. An r-by-c matrix, M, is a rectangular array of numbers consisting of r rows and c columns.
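The six-dimensional representation of fourth-order elasticity tensors mentioned above can be sketched with the classical Voigt index mapping. This is a hypothetical illustration of the general idea (the appendix's own §A.11 convention may differ, e.g. in scaling factors); the dictionary `VOIGT` and function `to_voigt` are mine:

```python
import numpy as np

# A symmetric index pair (i, j), i, j in {0, 1, 2}, maps to one index in
# {0, ..., 5}, so C_ijkl with the usual symmetries becomes a 6 x 6 matrix.
VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3,
         (0, 2): 4, (2, 0): 4,
         (0, 1): 5, (1, 0): 5}

def to_voigt(C):
    """Collapse a symmetric fourth-order tensor (3x3x3x3) to a 6x6 matrix."""
    M = np.zeros((6, 6))
    for (i, j), a in VOIGT.items():
        for (k, l), b in VOIGT.items():
            M[a, b] = C[i, j, k, l]
    return M

# Isotropic elasticity tensor as a check (lam, mu are the Lame parameters).
lam, mu = 2.0, 1.0
delta = np.eye(3)
C = (lam * np.einsum('ij,kl->ijkl', delta, delta)
     + mu * (np.einsum('ik,jl->ijkl', delta, delta)
             + np.einsum('il,jk->ijkl', delta, delta)))
M = to_voigt(C)
```

For the isotropic tensor the resulting 6 x 6 matrix has lam + 2 mu on the first three diagonal entries, lam on their off-diagonal couplings, and mu on the shear diagonal, which is the familiar stiffness-matrix pattern.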
  • 5 The Dirac Equation and Spinors
    5 The Dirac Equation and Spinors. In this section we develop the appropriate wavefunctions for fundamental fermions and bosons.

    5.1 Notation Review. The three-dimensional differential operator is:

        ∇ = (∂/∂x, ∂/∂y, ∂/∂z)    (5.1)

    We can generalise this to four dimensions, ∂_µ:

        ∂_µ = (1/c ∂/∂t, ∂/∂x, ∂/∂y, ∂/∂z)    (5.2)

    5.2 The Schrödinger Equation. First consider a classical non-relativistic particle of mass m in a potential U. The energy-momentum relationship is:

        E = p²/2m + U    (5.3)

    We can substitute the differential operators:

        Ê → i ∂/∂t,   p̂ → −i∇    (5.4)

    to obtain the non-relativistic Schrödinger equation (with ħ = 1):

        i ∂ψ/∂t = (−(1/2m)∇² + U) ψ    (5.5)

    For U = 0, the free-particle solutions are:

        ψ(x, t) ∝ e^{−iEt} ψ(x)    (5.6)

    and the probability density ρ and current j are given by:

        ρ = |ψ(x)|²,   j = −(i/2m)(ψ* ∇ψ − ψ ∇ψ*)    (5.7)

    with conservation of probability giving the continuity equation:

        ∂ρ/∂t + ∇·j = 0    (5.8)

    or in covariant notation:

        ∂_µ j^µ = 0  with  j^µ = (ρ, j)    (5.9)

    The Schrödinger equation is 1st order in ∂/∂t but second order in ∂/∂x. However, as we are going to be dealing with relativistic particles, space and time should be treated equally.

    5.3 The Klein-Gordon Equation. For a relativistic particle the energy-momentum relationship is:

        p·p = p_µ p^µ = E² − |p|² = m²    (5.10)

    Substituting the operators of equation (5.4) leads to the relativistic Klein-Gordon equation:

        (−∂²/∂t² + ∇²) ψ = m² ψ    (5.11)

    The free-particle solutions are plane waves:

        ψ ∝ e^{−ip·x} = e^{−i(Et − p·x)}    (5.12)

    The Klein-Gordon equation successfully describes spin-0 particles in relativistic quantum field theory.
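The plane-wave solution (5.12) satisfies the Klein-Gordon equation (5.11) exactly when the dispersion relation (5.10) holds. A finite-difference check in 1+1 dimensions (pure Python, ħ = c = 1; the helper names are mine):

```python
import cmath

def psi(x, t, E, p):
    """Plane-wave solution psi = exp(-i(Et - px)) with hbar = c = 1."""
    return cmath.exp(-1j * (E * t - p * x))

def kg_residual(x, t, E, p, m, h=1e-4):
    """(-d^2/dt^2 + d^2/dx^2 - m^2) psi via central differences."""
    d2t = (psi(x, t + h, E, p) - 2 * psi(x, t, E, p) + psi(x, t - h, E, p)) / h**2
    d2x = (psi(x + h, t, E, p) - 2 * psi(x, t, E, p) + psi(x - h, t, E, p)) / h**2
    return -d2t + d2x - m**2 * psi(x, t, E, p)

p, m = 0.7, 1.2
E = (p**2 + m**2) ** 0.5     # on-shell energy from the dispersion relation
```

Analytically the residual is (E² − p² − m²)ψ, so it vanishes on shell and equals the off-shell mismatch otherwise.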
  • Chapter 5: Angular Momentum and Rotations
    Chapter 5: ANGULAR MOMENTUM AND ROTATIONS. In classical mechanics the total angular momentum L of an isolated system about any fixed point is conserved. The existence of a conserved vector L associated with such a system is itself a consequence of the fact that the associated Hamiltonian (or Lagrangian) is invariant under rotations, i.e., if the coordinates and momenta of the entire system are rotated "rigidly" about some point, the energy of the system is unchanged and, more importantly, is the same function of the dynamical variables as it was before the rotation. Such a circumstance would not apply, e.g., to a system lying in an externally imposed gravitational field pointing in some specific direction. Thus, the invariance of an isolated system under rotations ultimately arises from the fact that, in the absence of external fields of this sort, space is isotropic; it behaves the same way in all directions. Not surprisingly, therefore, in quantum mechanics the individual Cartesian components L_i of the total angular momentum operator L of an isolated system are also constants of the motion. The different components of L are not, however, compatible quantum observables. Indeed, as we will see, the operators representing the components of angular momentum along different directions do not generally commute with one another. Thus, the vector operator L is not, strictly speaking, an observable, since it does not have a complete basis of eigenstates (which would have to be simultaneous eigenstates of all of its non-commuting components). This lack of commutativity often seems, at first encounter, somewhat of a nuisance but, in fact, it intimately reflects the underlying structure of the three-dimensional space in which we are immersed, and has its source in the fact that rotations in three dimensions about different axes do not commute with one another.
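The non-commutativity described above is concrete in any finite-dimensional representation. A sketch with the standard spin-1 matrices (ħ = 1), verifying [L_x, L_y] = iL_z; the matrices are the textbook spin-1 representation, not taken from this chapter:

```python
import numpy as np

# Spin-1 angular momentum matrices (hbar = 1) in the |m = 1, 0, -1> basis.
sqrt2 = np.sqrt(2)
Lx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / sqrt2
Ly = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / sqrt2
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def commutator(A, B):
    return A @ B - B @ A

# The components do not commute: [Lx, Ly] = i Lz (and cyclic permutations),
# which is why they have no complete set of simultaneous eigenstates.
assert np.allclose(commutator(Lx, Ly), 1j * Lz)
```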
  • Some Applications of Eigenvalue Problems for Tensor and Tensor–Block Matrices for Mathematical Modeling of Micropolar Thin Bodies
    Mathematical and Computational Applications, Article. Some Applications of Eigenvalue Problems for Tensor and Tensor-Block Matrices for Mathematical Modeling of Micropolar Thin Bodies. Mikhail Nikabadze (Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, 119991 Moscow, Russia; Department of Computational Mathematics and Mathematical Physics, Bauman Moscow State Technical University, 105005 Moscow, Russia; [email protected]) and Armine Ulukhanyan (Bauman Moscow State Technical University). Correspondence: [email protected]. Received: 26 January 2019; Accepted: 21 March 2019; Published: 22 March 2019. Abstract: The statement of the eigenvalue problem for a tensor-block matrix (TBM) of any order and of any even rank is formulated, and some of its special cases are considered. In particular, using the canonical presentation of the TBM of the tensor of elastic modules of the micropolar theory, the specific deformation energy and the constitutive relations are written in canonical form. With the help of the introduced TBM operator, the equations of motion of a micropolar arbitrarily anisotropic medium are written, and the boundary conditions are written down by means of the introduced TBM operators of the stress and the couple stress vectors. The formulations of initial-boundary value problems in these terms for an arbitrary anisotropic medium are given. The questions on the decomposition of initial-boundary value problems of elasticity and thin body theory for some anisotropic media are considered. In particular, the initial-boundary problems of the micropolar (classical) theory of elasticity are presented with the help of the introduced TBM operators (tensors-operators). In the case of an isotropic micropolar elastic medium (isotropic and transversely isotropic classical media), the TBM operators of cofactors to the TBM operators of the initial-boundary value problems are constructed, which allow decomposing the initial-boundary value problems.
  • Simply Riemann
    Simply Riemann. Jeremy Gray. Simply Charly, New York. Copyright © 2020 by Jeremy Gray. Cover illustration by José Ramos; cover design by Scarlett Rugers. All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher at the address below. [email protected]. ISBN: 978-1-943657-21-6. Brought to you by http://simplycharly.com. Contents: Praise for Simply Riemann vii; Other Great Lives x; Series Editor's Foreword xi; Preface xii; Introduction 1; 1. Riemann's life and times 7; 2. Geometry 41; 3. Complex functions 64; 4. Primes and the zeta function 87; 5. Minimal surfaces 97; 6. Real functions 108; 7. And another thing... 124; 8. Riemann's Legacy 126; References 143; Suggested Reading 150; About the Author 152; A Word from the Publisher 153. Praise for Simply Riemann: "Jeremy Gray is one of the world's leading historians of mathematics, and an accomplished author of popular science. In Simply Riemann he combines both talents to give us clear and accessible insights into the astonishing discoveries of Bernhard Riemann, a brilliant but enigmatic mathematician who laid the foundations for several major areas of today's mathematics, and for Albert Einstein's General Theory of Relativity. Readable, organized, and simple. Highly recommended." (Ian Stewart, Emeritus Professor of Mathematics at Warwick University and author of Significant Figures.) "Very few mathematicians have exercised an influence on the later development of their science comparable to Riemann's, whose work reshaped whole fields and created new ones.