Pseudospectral Shattering, the Sign Function, and Diagonalization in Nearly Matrix Multiplication Time

Total Pages: 16

File Type: PDF, Size: 1020 KB

2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS)

Pseudospectral Shattering, the Sign Function, and Diagonalization in Nearly Matrix Multiplication Time

Jess Banks, Jorge Garza-Vargas, Archit Kulkarni, Nikhil Srivastava (Mathematics, UC Berkeley)

2575-8454/20/$31.00 ©2020 IEEE. DOI 10.1109/FOCS46700.2020.00056

Abstract—We exhibit a randomized algorithm which, given a square matrix A ∈ C^{n×n} with ‖A‖ ≤ 1 and δ > 0, computes with high probability an invertible V and diagonal D such that

‖A − VDV^{-1}‖ ≤ δ

in O(T_MM(n) log^2(n/δ)) arithmetic operations on a floating point machine with O(log^4(n/δ) log n) bits of precision. The computed similarity V additionally satisfies ‖V‖ ‖V^{-1}‖ ≤ O(n^{2.5}/δ). Here T_MM(n) is the number of arithmetic operations required to multiply two n × n complex matrices numerically stably, known to satisfy T_MM(n) = O(n^{ω+η}) for every η > 0, where ω is the exponent of matrix multiplication [1]. The algorithm is a variant of the spectral bisection algorithm in numerical linear algebra [2] with a crucial Gaussian perturbation preprocessing step. Our running time is optimal up to polylogarithmic factors, in the sense that verifying that a given similarity diagonalizes a matrix requires at least matrix multiplication time. It significantly improves the previously best known provable running times of O(n^{10}/δ^2) arithmetic operations for diagonalization of general matrices [3], and (with regards to the dependence on n) O(n^3) arithmetic operations for Hermitian matrices [4], and is the first algorithm to achieve nearly matrix multiplication time for diagonalization in any model of computation (real arithmetic, rational arithmetic, or finite arithmetic).

The proof rests on two new ingredients. (1) We show that adding a small complex Gaussian perturbation to any matrix splits its pseudospectrum into n small well-separated components. In particular, this implies that the eigenvalues of the perturbed matrix have a large minimum gap, a property of independent interest in random matrix theory. (2) We give a rigorous analysis of Roberts' [5] Newton iteration method for computing the sign function of a matrix in finite arithmetic, itself an open problem in numerical analysis since at least 1986 [6]. This is achieved by controlling the evolution of the pseudospectra of the iterates using a carefully chosen sequence of shrinking contour integrals in the complex plane.

Keywords - Numerical Analysis, Random Matrix Theory.

I. INTRODUCTION

We study the algorithmic problem of approximately finding all of the eigenvalues and eigenvectors of a given arbitrary n × n complex matrix. While this problem is quite well-understood in the special case of Hermitian matrices (see, e.g., [4]), the general non-Hermitian case has remained mysterious from a theoretical standpoint even after several decades of research. In particular, the currently best known provable algorithms for this problem run in time O(n^{10}/δ^2) [3] or O(n^c log(1/δ)) [7] with c ≥ 12, where δ > 0 is an error parameter, depending on the model of computation and notion of approximation considered.¹ To be sure, the non-Hermitian case is well-motivated: coupled systems of differential equations, linear dynamical systems in control theory, transfer operators in mathematical physics, and the nonbacktracking matrix in spectral graph theory are but a few situations where finding the eigenvalues and eigenvectors of a non-Hermitian matrix is important.

The key difficulties in dealing with non-normal matrices are the interrelated phenomena of non-orthogonal eigenvectors and spectral instability, the latter referring to extreme sensitivity of the eigenvalues and invariant subspaces to perturbations of the matrix. Non-orthogonality slows down convergence of standard algorithms such as the power method, and spectral instability can force the use of very high precision arithmetic, also leading to slower algorithms. Both phenomena together make it difficult to reduce the eigenproblem to a subproblem by "removing" an eigenvector or invariant subspace, since this can only be done approximately and one must control the spectral stability of the subproblem.

In this paper, we overcome these difficulties by identifying and leveraging a phenomenon we refer to as pseudospectral shattering: adding a small complex Gaussian perturbation to any matrix yields a matrix with well-conditioned eigenvectors and a large minimum gap between the eigenvalues, implying spectral stability. This result builds on the recent solution of Davies' conjecture [8], and is of independent interest in random matrix theory, where minimum eigenvalue gap bounds in the non-Hermitian case were previously only known for i.i.d. models [9], [10].

We complement the above by proving that a variant of the well-known spectral bisection algorithm in numerical linear algebra [2] is both fast and numerically stable (i.e., can be implemented using a polylogarithmic number of bits of precision) when run on a pseudospectrally shattered matrix. The key step in the bisection algorithm is computing the sign function of a matrix, a problem of independent interest in many areas such as control theory and approximation theory [11]. Our main algorithmic contribution is a rigorous analysis of the well-known Newton iteration method [5] for computing the sign function in finite arithmetic, showing that it converges quickly and numerically stably on matrices for which the sign function is well-conditioned, in particular on pseudospectrally shattered ones.

The end result is an algorithm which reduces the general diagonalization problem to a polylogarithmic (in the desired accuracy and dimension n) number of invocations of standard numerical linear algebra routines (multiplication, inversion, and QR factorization), each of which is reducible to matrix multiplication [12], yielding a nearly matrix multiplication runtime for the whole algorithm. This improves on the previously best known running time of O(n^3 + n^2 log(1/δ)) arithmetic operations even in the Hermitian case [4].

We now proceed to give precise mathematical formulations of the eigenproblem and computational model, followed by statements of our results and a detailed discussion of related work.

A. Problem Statement

An eigenpair of a matrix A ∈ C^{n×n} is a tuple (λ, v) ∈ C × C^n such that

Av = λv,

and v is normalized to be a unit vector. The eigenproblem is the problem of finding a maximal set of linearly independent eigenpairs (λ_i, v_i) of a given matrix A; note that an eigenvalue may appear more than once if it has geometric multiplicity greater than one. In the case when A is diagonalizable, the solution consists of exactly n eigenpairs, and if A has distinct eigenvalues then the solution is unique, up to the phases of the v_i.

1) Accuracy and Conditioning: Due to the Abel-Ruffini theorem, it is impossible to have a finite-time algorithm which solves the eigenproblem exactly using arithmetic operations and radicals. Thus, all we can hope for is approximate eigenvalues and eigenvectors, up to a desired accuracy δ > 0. There are two standard notions of approximation. We assume ‖A‖ ≤ 1 for normalization, where throughout this work, ‖·‖ denotes the spectral norm (the ℓ^2 → ℓ^2 operator norm).

Forward Approximation. Compute pairs (λ'_i, v'_i) such that

|λ_i − λ'_i| ≤ δ and ‖v_i − v'_i‖ ≤ δ

for the true eigenpairs (λ_i, v_i), i.e., find a solution close to the exact solution. This makes sense in contexts where the exact solution is meaningful; e.g., the matrix is of theoretical/mathematical origin, and unstable (in the entries) quantities such as eigenvalue multiplicity can have a significant meaning.

Backward Approximation. Compute (λ'_i, v'_i) which are the exact eigenpairs of a matrix A' satisfying

‖A' − A‖ ≤ δ,

i.e., find the exact solution to a nearby problem. This is the appropriate and standard notion in scientific computing, where the matrix is of physical or empirical origin and is not assumed to be known exactly (and even if it were, roundoff error would destroy this exactness). Note that since diagonalizable matrices are dense in C^{n×n}, one can hope to always find a complete set of eigenpairs for some nearby A' = VDV^{-1}, yielding an approximate diagonalization of A:

‖A − VDV^{-1}‖ ≤ δ.   (1)

Note that the eigenproblem in either of the above formulations is not easily reducible to the problem of computing eigenvalues, since they can only be computed approximately and it is not clear how to obtain approximate eigenvectors from approximate eigenvalues. We now introduce a condition number for the eigenproblem, which measures the sensitivity of the eigenpairs of a matrix to perturbations and allows us to relate its forward and backward approximate solutions.

Condition Numbers. For diagonalizable A, the eigenvector condition number of A, denoted κ_V(A), is defined as:

κ_V(A) := inf_V ‖V‖ ‖V^{-1}‖,   (2)

where the infimum is over all invertible V such that A = VDV^{-1} for some diagonal D, and its minimum eigenvalue gap is defined as:

gap(A) := min_{i≠j} |λ_i(A) − λ_j(A)|,

where the λ_i are the eigenvalues of A (with multiplicity). We define the condition number of the eigenproblem to be²:

κ_eig(A) := κ_V(A)/gap(A) ∈ [0, ∞].   (3)

It follows from the following proposition (whose proof appears in [14]) that a δ-backward approximate solution of the eigenproblem is a 6nκ_eig(A)δ-forward approximate solution³.

¹ A detailed discussion of these and other related results appears in Section I-C.
² This quantity is inspired by but not identical to the "reciprocal of the distance to ill-posedness" for the eigenproblem considered by Demmel [13], to which it is polynomially related.
³ In fact, it can be shown that κ_eig(A) is related by a poly(n) factor to the smallest constant for which (4) holds for all sufficiently small δ > 0.
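To make the quantities in (2)-(3) and the Newton iteration for the sign function concrete, here is a minimal NumPy sketch. It is not the paper's finite-arithmetic algorithm (there is no shattering grid and no control of bit precision); the names newton_sign and kappa_eig, the matrix size, and the perturbation scale gamma are illustrative choices only.

```python
import numpy as np

def newton_sign(A, iters=60, tol=1e-12):
    """Newton iteration S <- (S + S^{-1})/2 for the matrix sign function,
    assuming A has no eigenvalues on the imaginary axis (Roberts' method [5])."""
    S = A.astype(complex)
    for _ in range(iters):
        S_new = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_new - S, 2) <= tol * np.linalg.norm(S, 2):
            return S_new
        S = S_new
    return S

def kappa_eig(A):
    """kappa_V(A)/gap(A) as in (2)-(3), using the eigenvector matrix returned
    by numpy.linalg.eig (which only upper-bounds the infimum in (2))."""
    lam, V = np.linalg.eig(A)
    kappa_V = np.linalg.cond(V)                       # ||V|| ||V^{-1}|| in the spectral norm
    diff = np.abs(lam[:, None] - lam[None, :])
    gap = diff[~np.eye(len(lam), dtype=bool)].min()   # min_{i != j} |lambda_i - lambda_j|
    return kappa_V / gap

rng = np.random.default_rng(0)
n, gamma = 8, 1e-3
A = rng.standard_normal((n, n))
A /= np.linalg.norm(A, 2)                             # normalize so ||A|| <= 1
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
A_pert = A + gamma * G                                # small complex Gaussian perturbation

print("kappa_eig(A + gamma G) =", kappa_eig(A_pert))
S = newton_sign(A_pert)
print("||S^2 - I|| =", np.linalg.norm(S @ S - np.eye(n), 2))   # the sign matrix is an involution
```

The sign matrix computed this way yields the spectral projectors (I ± S)/2 onto the invariant subspaces for eigenvalues in the right and left half-planes, which is the splitting step that spectral bisection repeats.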
Recommended publications
  • Operator-Valued Inner Product and Operator Inequalities (Banach J. Math. Anal. 2 (2008), No. 2, 59–67)
    Banach J. Math. Anal. 2 (2008), no. 2, 59–67. Banach Journal of Mathematical Analysis, ISSN: 1735-8787 (electronic), http://www.math-analysis.org

    OPERATOR-VALUED INNER PRODUCT AND OPERATOR INEQUALITIES
    JUN ICHI FUJII
    This paper is dedicated to Professor Josip E. Pečarić. Submitted by M. S. Moslehian.

    Abstract. The Schwarz inequality and Jensen's one are fundamental in a Hilbert space. Regarding a sesquilinear map B(X, Y) = Y*X as an operator-valued inner product, we discuss operator versions for the above inequalities and give simple conditions that the equalities hold.

    1. Introduction. Inequality plays a basic role in analysis and consequently in Mathematics. As surveyed briefly in [6], operator inequalities on a Hilbert space have been discussed particularly since Furuta inequality was established. But it is not easy in general to give a simple condition that the equality in an operator inequality holds. In this note, we observe basic operator inequalities and discuss the equality conditions. To show this, we consider simple linear algebraic structure in operator spaces: For Hilbert spaces H and K, the symbol B(H, K) denotes all the (bounded linear) operators from H to K and B(H) ≡ B(H, H). Then, consider an operator matrix A = (A_ij) ∈ B(H^n), a vector X = (X_j) ∈ B(H, H^n) with operator entries X_j ∈ B(H), and an operator-valued inner product

    Y*X = Σ_{j=1}^{n} Y_j* X_j.

    Date: Received: 29 March 2008; Accepted: 13 May 2008.
    2000 Mathematics Subject Classification. Primary 47A63; Secondary 47A75, 47A80.
    Key words and phrases. Schwarz inequality, Jensen inequality, Operator inequality.
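As a concrete finite-dimensional illustration of the operator-valued inner product above, the following sketch (dimensions and variable names are arbitrary choices, not taken from the paper) realizes Y*X = Σ_j Y_j*X_j as an ordinary block-matrix product.

```python
import numpy as np

# "Vectors" X, Y in B(H, H^n) are block columns with operator entries X_j, Y_j in B(H).
rng = np.random.default_rng(1)
d, n = 3, 4   # dim(H) = d (finite-dimensional stand-in), n operator entries
Xs = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(n)]
Ys = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(n)]

X = np.vstack(Xs)          # block column, an (n*d) x d matrix
Y = np.vstack(Ys)

inner_block = Y.conj().T @ X                                  # Y*X as one matrix product
inner_sum = sum(Yj.conj().T @ Xj for Xj, Yj in zip(Xs, Ys))   # sum_j Y_j* X_j
print(np.allclose(inner_block, inner_sum))                    # True: the two agree
```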
  • Computing the Polar Decomposition---With Applications
    This 1984 technical report was published as: N. J. Higham. Computing the polar decomposition---with applications. SIAM J. Sci. Stat. Comput., 7(4):1160-1174, 1986. The appendix contains an alternative proof omitted in the published paper. Pages 19-20 are missing from the copy that was scanned. The report is mainly of interest for how it was produced.
    - The cover was prepared in Vizawrite 64 on a Commodore 64 and was printed on an Epson FX-80 dot matrix printer.
    - The main text was prepared in Vuwriter on an Apricot PC and printed on an unknown dot matrix printer.
    - The bibliography was prepared in Superbase on a Commodore 64 and printed on an Epson FX-80 dot matrix printer.

    COMPUTING THE POLAR DECOMPOSITION - WITH APPLICATIONS
    N.J. Higham
    Numerical Analysis Report No. 94, November 1984. This work was carried out with the support of a SERC Research Studentship.
    Department of Mathematics, University of Manchester, Manchester M13 9PL, ENGLAND
    University of Manchester/UMIST Joint Numerical Analysis Reports, Department of Mathematics, The Victoria University of Manchester, and Department of Mathematics, University of Manchester Institute of Science and Technology. Requests for individual technical reports may be addressed to Dr C.T.H. Baker, Department of Mathematics, The University, Manchester M13 9PL.

    ABSTRACT. A quadratically convergent Newton method for computing the polar decomposition of a full-rank matrix is presented and analysed. Acceleration parameters are introduced so as to enhance the initial rate of convergence and it is shown how reliable estimates of the optimal parameters may be computed in practice.
  • Geometry of Matrix Decompositions Seen Through Optimal Transport and Information Geometry
    Published in: Journal of Geometric Mechanics, doi:10.3934/jgm.2017014, Volume 9, Number 3, September 2017, pp. 335–390

    GEOMETRY OF MATRIX DECOMPOSITIONS SEEN THROUGH OPTIMAL TRANSPORT AND INFORMATION GEOMETRY
    Klas Modin
    Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, SE-412 96 Gothenburg, Sweden

    Abstract. The space of probability densities is an infinite-dimensional Riemannian manifold, with Riemannian metrics in two flavors: Wasserstein and Fisher–Rao. The former is pivotal in optimal mass transport (OMT), whereas the latter occurs in information geometry, the differential geometric approach to statistics. The Riemannian structures restrict to the submanifold of multivariate Gaussian distributions, where they induce Riemannian metrics on the space of covariance matrices. Here we give a systematic description of classical matrix decompositions (or factorizations) in terms of Riemannian geometry and compatible principal bundle structures. Both Wasserstein and Fisher–Rao geometries are discussed. The link to matrices is obtained by considering OMT and information geometry in the category of linear transformations and multivariate Gaussian distributions. This way, OMT is directly related to the polar decomposition of matrices, whereas information geometry is directly related to the QR, Cholesky, spectral, and singular value decompositions. We also give a coherent description of gradient flow equations for the various decompositions; most flows are illustrated in numerical examples. The paper is a combination of previously known and original results. As a survey it covers the Riemannian geometry of OMT and polar decompositions (smooth and linear category), entropy gradient flows, and the Fisher–Rao metric and its geodesics on the statistical manifold of multivariate Gaussian distributions.
  • Computing the Polar Decomposition—With Applications
    Computing the Polar Decomposition—with Applications. Higham, Nicholas J. 1986. MIMS EPrint: 2007.9, Manchester Institute for Mathematical Sciences, School of Mathematics, The University of Manchester. Reports available from: http://eprints.maths.manchester.ac.uk/ and by contacting: The MIMS Secretary, School of Mathematics, The University of Manchester, Manchester, M13 9PL, UK. ISSN 1749-9097

    SIAM J. Sci. Stat. Comput., Vol. 7, No. 4, October 1986. (C) 1986 Society for Industrial and Applied Mathematics. 007

    COMPUTING THE POLAR DECOMPOSITION---WITH APPLICATIONS*
    NICHOLAS J. HIGHAM

    Abstract. A quadratically convergent Newton method for computing the polar decomposition of a full-rank matrix is presented and analysed. Acceleration parameters are introduced so as to enhance the initial rate of convergence and it is shown how reliable estimates of the optimal parameters may be computed in practice. To add to the known best approximation property of the unitary polar factor, the Hermitian polar factor H of a nonsingular Hermitian matrix A is shown to be a good positive definite approximation to A, and 1/2(A + H) is shown to be a best Hermitian positive semi-definite approximation to A. Perturbation bounds for the polar factors are derived. Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix.

    Key words. polar decomposition, singular value decomposition, Newton's method, matrix square root
    AMS(MOS) subject classifications. 65F25, 65F30, 65F35

    1. Introduction. The polar decomposition is a generalisation to matrices of the familiar complex number representation z = r e^{iθ}, r ≥ 0.
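The Newton method described in this abstract can be sketched in a few lines of NumPy. The stopping rule and the Frobenius-norm scaling factor below are common textbook choices and are not necessarily the acceleration parameters analysed by Higham; the function name polar_newton is illustrative.

```python
import numpy as np

def polar_newton(A, tol=1e-12, max_iter=100, scale=True):
    """Scaled Newton iteration X <- (g X + (g X)^{-*}) / 2 for the unitary polar
    factor of a nonsingular square A. Returns (U, H) with A = U @ H, U unitary
    and H Hermitian positive definite."""
    X = A.astype(complex)
    for _ in range(max_iter):
        Xinv = np.linalg.inv(X)
        g = np.sqrt(np.linalg.norm(Xinv, 'fro') / np.linalg.norm(X, 'fro')) if scale else 1.0
        X_new = 0.5 * (g * X + Xinv.conj().T / g)
        if np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X_new, 'fro'):
            X = X_new
            break
        X = X_new
    U = X
    H = U.conj().T @ A
    H = 0.5 * (H + H.conj().T)   # symmetrize away roundoff
    return U, H

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
U, H = polar_newton(A)
print(np.linalg.norm(U.conj().T @ U - np.eye(5)),   # U is numerically unitary
      np.linalg.norm(A - U @ H))                    # and A = U H
```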
  • New Formulas for the Spectral Radius via Aluthge Transform (Fadil Chabbabi, Mostafa Mbekhta)
    NEW FORMULAS FOR THE SPECTRAL RADIUS VIA ALUTHGE TRANSFORM
    Fadil Chabbabi, Mostafa Mbekhta

    To cite this version: Fadil Chabbabi, Mostafa Mbekhta. NEW FORMULAS FOR THE SPECTRAL RADIUS VIA ALUTHGE TRANSFORM. 2016. hal-01334044. HAL Id: hal-01334044, https://hal.archives-ouvertes.fr/hal-01334044, preprint submitted on 20 Jun 2016. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    Abstract. In this paper we give several expressions of the spectral radius of a bounded operator on a Hilbert space, in terms of iterates of the Aluthge transformation, the numerical radius and the asymptotic behavior of the powers of this operator. Also we obtain several characterizations of normaloid operators.

    1. Introduction. Let H be a complex Hilbert space and B(H) be the Banach space of all bounded linear operators from H into itself. For T ∈ B(H), the spectrum of T is denoted by σ(T) and r(T) its spectral radius. We denote also by W(T) and w(T) the numerical range and the numerical radius of T. As usual, for T ∈ B(H) we denote the module of T by |T| = (T*T)^{1/2} and we shall always write, without further mention, T = U|T| to be the unique polar decomposition of T, where U is the appropriate partial isometry satisfying N(U) = N(T).
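One classical expression of this kind (often attributed to Yamazaki) states that r(T) = lim_n ‖Δ^n(T)‖, where Δ(T) = |T|^{1/2} U |T|^{1/2} is the Aluthge transform of T = U|T|. The finite-dimensional sketch below, which is not taken from the paper and uses an SVD in place of the abstract polar decomposition, iterates the transform and compares the norm with the spectral radius.

```python
import numpy as np

def aluthge(T):
    """Aluthge transform D(T) = |T|^{1/2} U |T|^{1/2}, with T = U|T| computed
    here from the SVD T = W diag(s) V* (T assumed invertible)."""
    W, s, Vh = np.linalg.svd(T)
    U = W @ Vh                                           # unitary polar factor
    sqrt_absT = Vh.conj().T @ np.diag(np.sqrt(s)) @ Vh   # |T|^{1/2}
    return sqrt_absT @ U @ sqrt_absT

rng = np.random.default_rng(3)
T = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
r = max(abs(np.linalg.eigvals(T)))       # spectral radius of T

X = T
for _ in range(50):
    X = aluthge(X)                       # the Aluthge transform preserves the spectrum
print(np.linalg.norm(X, 2), r)           # ||Delta^k(T)|| approaches r(T)
```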
  • Spectral and Polar Decomposition in AW*-Algebras
    Spectral and Polar Decomposition in AW*-Algebras
    M. Frank
    arXiv:funct-an/9303001v1, 11 Mar 1993

    We show the possibility and the uniqueness of polar decomposition of elements of arbitrary AW*-algebras inside them. We prove that spectral decomposition of normal elements of normal AW*-algebras is possible and unique inside them. The possibility of spectral decomposition of normal elements does not depend on the normality of the AW*-algebra under consideration.

    Key words: Operator algebras, monotone complete C*-algebras, AW*-algebras, spectral decomposition and polar decomposition of operators
    AMS subject classification: 46L05, 46L35, 47C15

    The spectral decomposition of normal linear (bounded) operators and the polar decomposition of arbitrary linear (bounded) operators on Hilbert spaces have been interesting and technically useful results in operator theory [3, 9, 13, 20]. The development of the concept of von Neumann algebras on Hilbert spaces has shown that both these decompositions are always possible inside of the appropriate von Neumann algebra [14]. New light on these assertions was shed identifying the class of von Neumann algebras among all C*-algebras by the class of C*-algebras which possess a pre-dual Banach space, the W*-algebras. The possibility of C*-theoretical representationless descriptions of spectral and polar decomposition of elements of von Neumann algebras (and may be of more general C*-algebras) inside them has been opened up. Steps to get results in this direction were made by several authors. The W*-case was investigated by S. Sakai in 1958-60, [18, 19]. Later on J. D. M. Wright has considered spectral decomposition of normal elements of embeddable AW*-algebras, i.e., of AW*-algebras possessing a faithful von Neumann type representation on a self-dual Hilbert A-module over a commutative AW*-algebra A (on a so called Kaplansky–Hilbert module), [23, 24].
  • Note on the Spectral Theorem (arXiv:1211.0058v1 [math-ph], 31 Oct 2012)
    NOTE ON THE SPECTRAL THEOREM
    T. L. GILL AND D. WILLIAMS
    arXiv:1211.0058v1 [math-ph], 31 Oct 2012

    Abstract. In this note, we show that the spectral theorem has two representations: the Stone-von Neumann representation and one based on the polar decomposition of linear operators, which we call the deformed representation. The deformed representation has the advantage that it provides an easy extension to all closed densely defined linear operators on Hilbert space. Furthermore, the deformed representation can also be extended to all separable reflexive Banach spaces and has a limited extension to non-reflexive Banach spaces.

    Introduction. Let C[B] be the closed densely defined linear operators on a Banach space. By definition an operator A, defined on a separable Banach space B, is of Baire class one if it can be approximated by a sequence {A_n} ⊂ L[B] of bounded linear operators. If B is a Hilbert space, then every A ∈ C[B] is of Baire class one. However, it turns out that, if B is not a Hilbert space, there may be operators A ∈ C[B] that are not of Baire class one.

    1991 Mathematics Subject Classification. Primary (46B03), (47D03); Secondary (47H06), (47F05), (35Q80).
    Key words and phrases. spectral theorem, vector measures, vector-valued functions, Reflexive Banach spaces.

    A Banach space B is said to be quasi-reflexive if dim{B′′/B} < ∞, and nonquasi-reflexive if dim{B′′/B} = ∞. Vinokurov, Petunin and Pliczko [VPP] have shown that, for every nonquasi-reflexive Banach space B, there is a closed densely defined linear operator A which is not of Baire class one (for example, C[0, 1] or L^1[R^n], n ∈ N).
  • On Compatibility Properties of Quantum Observables Represented by Positive Operator-Valued Measures
    On Compatibility Properties of Quantum Observables represented by Positive Operator-Valued Measures
    Bachelor Thesis in Mathematics, Radboud University Nijmegen
    Yari Kraak. Supervisor: Klaas Landsman. July 2018

    Abstract. In a rigorous post-von Neumann mathematical formalism of quantum mechanics, observables are represented by normalized positive operator-valued measures instead of self-adjoint operators. In this formalism, the notion of joint measurability of two observables is more complex than in the von Neumann formalism, where observables are jointly measurable if and only if they commute. We look into various notions of compatibility, among which joint measurability, coexistence, commutativity, and joint measurability of binarizations, and investigate for which classes of observables these are equivalent.

    Contents: 1 Introduction; 2 Positive Operator-Valued Measures (2.1 Naimark dilation theorem); 3 States, Effects and Observables (3.1 Classes of observables); 4 Compatibility properties (4.1 Coexistence and Joint Measurability; 4.2 Joint Measurability of Binarizations and Coexistence); 5 Conclusion; Appendices: A Functional Analysis (A.1 Trace-class operators; A.2 Spectral Theorem); B Measure theory; C Convexity; References

    1 Introduction. In the literature, quantum-mechanical observables are usually described by self-adjoint operators, as introduced by von Neumann [20]. But in certain cases there are observables that cannot be fully described by self-adjoint operators [7, 9, 15, 17]. An example of such an observable is the covariant phase space observable in a single-mode optical field. Therefore, a new formalism of quantum mechanics has been formulated, where observables are represented by positive operator-valued measures (POVMs).
  • Polar Decomposition
    Polar decomposition
    November 25, 2019

    Contents: 1 Introduction; 2 Orthogonal decompositions; 3 Length of Tv; 4 Proof of the polar decomposition

    1 Introduction. The polar decomposition proven in the book (Theorem 7.45 on page 233) concerns a linear map T ∈ L(V) from a single inner product space to itself. Exactly the same ideas treat the case of a map

    T ∈ L(V, W)   (V, W finite-dimensional inner product spaces).   (1.1a)

    I stated the result in class; the point of these notes is to write down some details of the proof, as well as to talk a bit more about why the result is useful. To get to the statement, we need some notation. I'll think of the linear map T as an arrow going from V to W:

    T : V → W;   (1.1b)

    this is just another notation for saying that T is a function that takes anything v ∈ V and gives you something T(v) ∈ W. Because these are inner product spaces, we get also the adjoint of T,

    T* : W → V,   ⟨Tv, w⟩ = ⟨v, T*w⟩   (v ∈ V, w ∈ W).   (1.1c)

    The property written on the right defines the linear map T*. Using these two maps, we immediately get two subspaces of each of V and W:

    Null(T), Range(T*) ⊂ V;   Null(T*), Range(T) ⊂ W.   (1.1d)

    The first basic fact is that these spaces provide orthogonal direct sum decompositions of V and W.
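As a numerical companion to these notes, here is a short sketch (real matrices, full column rank, hypothetical variable names) that checks the defining property of the adjoint in (1.1c) and produces a polar-type factorization T = UP of a rectangular map from its singular value decomposition.

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((5, 3))    # T : V = R^3 -> W = R^5, full column rank a.s.
v, w = rng.standard_normal(3), rng.standard_normal(5)

# Adjoint property (1.1c): <T v, w> = <v, T* w>, with T* the transpose in the real case.
print(np.isclose((T @ v) @ w, v @ (T.T @ w)))

# Polar-type factorization T = U P with P = (T^T T)^{1/2} positive semidefinite on V
# and U an isometry from V into W, built from the SVD T = W_ diag(s) Vh.
W_, s, Vh = np.linalg.svd(T, full_matrices=False)
P = Vh.T @ np.diag(s) @ Vh         # (T^T T)^{1/2}
U = W_ @ Vh                        # isometry: U^T U = I on V
print(np.allclose(T, U @ P), np.allclose(U.T @ U, np.eye(3)))
```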
  • Mathematical Introduction to Quantum Information Processing
    Mathematical Introduction to Quantum Information Processing
    (growing lecture notes, SS2019)
    Michael M. Wolf
    June 22, 2019

    Contents:
    1 Mathematical framework: 1.1 Hilbert spaces; 1.2 Bounded Operators (Ideals of operators; Convergence of operators; Functional calculus); 1.3 Probabilistic structure of Quantum Theory (Preparation; Measurements; Probabilities; Observables and expectation values); 1.4 Convexity (Convex sets and extreme points; Mixtures of states; Majorization; Convex functionals; Entropy); 1.5 Composite systems and tensor products (Direct sums; Tensor products; Partial trace; Composite and reduced systems; Entropic quantities); 1.6 Quantum channels and operations (Schrödinger & Heisenberg picture; Kraus representation and environment; Choi-matrices; Instruments; Commuting dilations); 1.7 Unbounded operators and spectral measures
    2 Basic trade-offs: 2.1 Uncertainty relations (Variance-based preparation uncertainty relations; Joint measurability); 2.2 Information-disturbance (No information without disturbance); 2.3 Time-energy (Mandelstam-Tamm inequalities; Evolution to orthogonal states)

    These are (incomplete but hopefully growing) lecture notes of a course taught in summer 2019 at the department of mathematics at the Technical University of Munich.

    Chapter 1. Mathematical framework. 1.1 Hilbert spaces. This section will briefly summarize relevant concepts and properties of Hilbert spaces. A complex Hilbert space is a vector space over the complex numbers, equipped with an inner product ⟨·,·⟩ : H × H → C and an induced norm ‖ψ‖ := ⟨ψ, ψ⟩^{1/2} w.r.t.
  • The Interplay of the Polar Decomposition Theorem and the Lorentz Group
    preprint - UTM 631
    The interplay of the polar decomposition theorem and the Lorentz group
    Valter Moretti
    Department of Mathematics, University of Trento and I.N.F.N. Gruppo Collegato di Trento, via Sommarive 14, I-38050 Povo (TN), Italy. E-mail: [email protected]
    November 2002
    arXiv:math-ph/0211047v1, 20 Nov 2002

    Abstract: It is shown that the polar decomposition theorem of operators in (real) Hilbert spaces gives rise to the known decomposition in boost and spatial rotation part of any matrix of the orthochronous proper Lorentz group SO(1,3)↑. This result is not trivial because the polar decomposition theorem is referred to a positive definite scalar product while the Lorentz-group decomposition theorem deals with the indefinite Lorentz metric. A generalization to infinite dimensional spaces can be given. It is finally shown that the polar decomposition of SL(2, C) is preserved by the covering homomorphism of SL(2, C) onto SO(1,3)↑.

    I. Introduction and notation. If H is a, either real or complex, Hilbert space, a bounded bijective operator T : H → H can be uniquely decomposed as both T = UP and T = P′U′, where U, U′ are orthogonal/unitary operators and P, P′ are bounded self-adjoint positive operators. These decompositions are called the polar decompositions of T. Consider the special orthochronous Lorentz group [1, 2, 3]

    SO(1,3)↑ := { Λ ∈ M(4, R) | ΛηΛ^t = η, det Λ = 1, Λ^0_0 > 0 },   (1)

    where M(n, R) denotes the real vector space of real n × n matrices, the 0 in Λ^0_0 is referred to the first element of the canonical basis of R^4, e0, e1, e2, e3, and η = diag(−1, 1, 1, 1).
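A quick numerical check of the abstract's main claim, offered as a sketch rather than anything from the paper: build Λ = (boost)·(rotation) in SO(1,3)↑ and verify that the Euclidean polar decomposition returns exactly the boost as its positive factor and the spatial rotation as its orthogonal factor. The rapidity and angle values are arbitrary.

```python
import numpy as np

chi, theta = 0.7, 0.4
B = np.eye(4)
B[0, 0] = B[1, 1] = np.cosh(chi)
B[0, 1] = B[1, 0] = np.sinh(chi)        # boost along x: symmetric positive definite

R = np.eye(4)
R[1, 1] = R[2, 2] = np.cos(theta)
R[1, 2], R[2, 1] = -np.sin(theta), np.sin(theta)   # spatial rotation in the x-y plane

Lam = B @ R                             # an element of SO(1,3) with Lambda^0_0 > 0
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
print(np.allclose(Lam @ eta @ Lam.T, eta))   # Lorentz condition: Lambda eta Lambda^t = eta

# Left polar decomposition Lam = P O via the SVD: P = (Lam Lam^t)^{1/2}, O orthogonal.
W, s, Vh = np.linalg.svd(Lam)
P = W @ np.diag(s) @ W.T
O = W @ Vh
print(np.allclose(P, B), np.allclose(O, R))   # boost and rotation are recovered exactly
```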
  • Decompositions of Linear Maps
    TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 230, 1977

    DECOMPOSITIONS OF LINEAR MAPS(1)
    BY SZE-KAI J. TSUI

    Abstract. In the first part we show that the decomposition of a bounded selfadjoint linear map from a C*-algebra into a given von Neumann algebra as a difference of two bounded positive linear maps is always possible if and only if that range algebra is a "strictly finite" von Neumann algebra of type I. In the second part we define a "polar decomposition" for some bounded linear maps and show that polar decomposition is possible if and only if the map satisfies a certain "norm condition". We combine the concepts of polar and positive decompositions to show that polar decomposition for a selfadjoint map is equivalent to a strict Hahn-Jordan decomposition (see Theorems 2.2.4 and 2.2.8).

    0. Introduction. In this paper we study bounded linear maps between C*-algebras. We are particularly concerned with decompositions of such maps. These decompositions are analogues of Hahn-Jordan and polar decompositions for linear functionals on C*-algebras. The study of positive decomposition of bounded linear functionals of partially ordered normed linear spaces may be traced back to M. Kreïn [12] and J. Grosberg [6] around 1939. Later Z. Takeda [18] worked out the same problem on the C*-algebra setting. Recently some independent efforts were made to study the positive decomposition for bounded linear maps between two partially ordered normed linear spaces [20]. The major result in Chapter 1 is that the decomposition of any selfadjoint linear map as a difference of positive maps into a given von Neumann algebra is always possible if and only if that algebra is a "strictly finite" von Neumann algebra of type I (see Theorem 1.4.6).