Chapter 6: The Singular Value Decomposition, Ax = b (version of 11 April 2019)


Matrix Methods for Computational Modeling and Data Analytics, Virginia Tech, Spring 2019 · Mark Embree, [email protected]

The singular value decomposition (SVD) is among the most important and widely applicable matrix factorizations. It provides a natural way to untangle a matrix into its four fundamental subspaces, and reveals the relative importance of each direction within those subspaces. Thus the singular value decomposition is a vital tool for analyzing data, and it provides a slick way to understand (and prove) many fundamental results in matrix theory. It is the perfect tool for solving least squares problems, and provides the best way to approximate a matrix with one of lower rank. These notes construct the SVD in various forms, then describe a few of its most compelling applications.

6.1 Eigenvalues and eigenvectors of symmetric matrices

To derive the singular value decomposition of a general (rectangular) matrix $A \in \mathbb{R}^{m \times n}$, we shall rely on several special properties of the square, symmetric matrix $A^T A$. While this course assumes you are well acquainted with eigenvalues and eigenvectors, we will recall some fundamental concepts, especially pertaining to symmetric matrices.

6.1.1 A passing nod to complex numbers

Recall that even if a matrix has real number entries, it could have eigenvalues that are complex numbers; the corresponding eigenvectors will also have complex entries. Consider, for example, the matrix
$$ S = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}. $$
To find the eigenvalues of $S$, form the characteristic polynomial
$$ \det(\lambda I - S) = \det\!\begin{bmatrix} \lambda & 1 \\ -1 & \lambda \end{bmatrix} = \lambda^2 + 1. $$
Factor this polynomial (e.g., using the quadratic formula) to get
$$ \det(\lambda I - S) = \lambda^2 + 1 = (\lambda - i)(\lambda + i), $$
where $i = \sqrt{-1}$. Thus we conclude that $S$ (a matrix with real entries) has the complex eigenvalues
$$ \lambda_1 = i, \qquad \lambda_2 = -i, $$
with corresponding eigenvectors
$$ v_1 = \begin{bmatrix} i \\ 1 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} -i \\ 1 \end{bmatrix}. $$
To find the eigenvector associated with $\lambda_1$, we need to find some nonzero $v \in N(\lambda_1 I - S)$. To do so, solve the consistent but underdetermined system
$$ \begin{bmatrix} i & 1 \\ -1 & i \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}. $$
The first row requires $i v_1 + v_2 = 0$, while the second row requires $-v_1 + i v_2 = 0$. Multiply that last equation by $-i$ and you obtain the first equation: so if you satisfy the second equation ($v_1 = i v_2$), you satisfy them both. Thus let
$$ v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} i v_2 \\ v_2 \end{bmatrix}; $$
the specific eigenvector $v_1$ given above follows from picking $v_2 = 1$.

Suppose we want to compute the norm of the eigenvector $v_1$. Using our usual method, we would have
$$ \|v_1\|^2 = v_1^T v_1 = \begin{bmatrix} i & 1 \end{bmatrix} \begin{bmatrix} i \\ 1 \end{bmatrix} = i^2 + 1 = -1 + 1 = 0. $$
This result seems strange, no? How could the norm of a nonzero vector be zero?

This example reveals a crucial shortcoming in our definition of the norm, when applied to complex vectors. Instead of
$$ \|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{x^T x}, $$
we want
$$ \|x\| = \sqrt{|x_1|^2 + |x_2|^2 + \cdots + |x_n|^2}. $$
(If $z = a + ib \in \mathbb{C}$ with $a, b \in \mathbb{R}$, then $|z| = \sqrt{a^2 + b^2}$; we call $|z|$ the magnitude of the complex number $z$.) For real vectors $x \in \mathbb{R}^n$, both definitions are the same. For complex vectors $x \in \mathbb{C}^n$ they can be very different. Just as the norm of a real vector has the compact notation $\|v\| = \sqrt{v^T v}$, so too does the norm of a complex vector:
$$ \|v\| = \sqrt{\overline{v}^{\,T} v}, $$
where $\overline{v}$ denotes the complex conjugate of $v$. (The complex conjugate of $z = a + ib$ is $\overline{z} = a - ib$, allowing us to write $|z|^2 = \overline{z} z = (a - ib)(a + ib) = a^2 + iab - iab + b^2 = a^2 + b^2$.) Now apply this definition of the norm to $v_1$:
$$ \|v_1\|^2 = \overline{v}_1^{\,T} v_1 = \begin{bmatrix} -i & 1 \end{bmatrix} \begin{bmatrix} i \\ 1 \end{bmatrix} = -i^2 + 1 = 1 + 1 = 2, $$
a much more reasonable answer than we had before.
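The gap between $v^T v$ and $\overline{v}^{\,T} v$ is easy to see numerically. Below is a minimal NumPy sketch (our illustration, not part of the original notes) that computes the eigenpairs of $S$ and compares the transpose-only "norm" with the conjugate-based one. NumPy may return the eigenvector for $\lambda_1 = i$ as a complex scalar multiple of the $v_1$ written above, which does not affect the point being made.

```python
import numpy as np

S = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# A real matrix can have complex eigenvalues and eigenvectors.
evals, evecs = np.linalg.eig(S)
print(evals)                 # the eigenvalues i and -i (order may vary)

v1 = evecs[:, 0]             # one complex eigenvector; NumPy scales it to unit norm

# Transpose-only "norm": v1^T v1 vanishes even though v1 is nonzero.
print(v1 @ v1)               # approximately 0 (up to rounding)

# Conjugate-based norm: conj(v1)^T v1 gives the true squared length.
print(np.vdot(v1, v1).real)  # approximately 1, since the eigenvector is unit length
print(np.linalg.norm(v1))    # approximately 1, consistent with the conjugate definition
```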
We will wrap up this complex interlude by proving that the real symmetric matrices that will be our focus in this course can never have complex eigenvalues.

6.1.2 The spectral theorem for symmetric matrices

Theorem 8. All eigenvalues of a real symmetric matrix are real.

Proof. Let $S$ denote a symmetric matrix with real entries, so $S^T = S$ (since $S$ is symmetric) and $\overline{S} = S$ (since $S$ is real). Let $(\lambda, v)$ be an arbitrary eigenpair of $S$, so that $S v = \lambda v$. Without loss of generality, we can assume that $v$ is scaled so that $\|v\| = 1$, i.e., $\overline{v}^{\,T} v = \|v\|^2 = 1$. (Since we do not yet know that $v$ is real-valued, we must use the norm definition for complex vectors discussed in the previous subsection.) Thus
$$ \lambda = \lambda \|v\|^2 = \lambda (\overline{v}^{\,T} v) = \overline{v}^{\,T} (\lambda v) = \overline{v}^{\,T} (S v). $$
Since $S$ is real and symmetric, $S = \overline{S}^{\,T}$, and so
$$ \overline{v}^{\,T} (S v) = \overline{v}^{\,T} \overline{S}^{\,T} v = (\overline{S v})^T v = (\overline{\lambda v})^T v = \overline{\lambda}\, \overline{v}^{\,T} v = \overline{\lambda}\, \|v\|^2 = \overline{\lambda}. $$
We have shown that $\lambda = \overline{\lambda}$, which is only possible if $\lambda$ is real. (If $z = a + ib$ and $\overline{z} = z$, then $a + ib = a - ib$, i.e., $b = -b$, which is only possible if $b = 0$.)

It immediately follows that if $\lambda$ is an eigenvalue of the real symmetric matrix $S$, then we can always find a real-valued eigenvector $v$ of $S$ corresponding to $\lambda$, simply by finding a real-valued vector in the null space $N(\lambda I - S)$, since $\lambda I - S$ is a real-valued matrix.

Crucially, eigenvectors of a real symmetric matrix $S$ associated with distinct eigenvalues must be orthogonal.

Theorem 9. Eigenvectors of a real symmetric matrix associated with distinct eigenvalues are orthogonal.

Proof. Suppose $\lambda$ and $\gamma$ are distinct eigenvalues of a real symmetric matrix $S$ associated with eigenvectors $v \in \mathbb{R}^n$ and $w \in \mathbb{R}^n$:
$$ S v = \lambda v, \qquad S w = \gamma w, \qquad \text{with } \lambda \neq \gamma. $$
Now consider
$$ \lambda w^T v = w^T (\lambda v) = w^T (S v) = w^T S^T v, $$
where we have used the fact that $S = S^T$. Now
$$ w^T S^T v = (S w)^T v = (\gamma w)^T v = \gamma w^T v. $$
We have thus shown that
$$ \lambda w^T v = \gamma w^T v. $$
Since $\lambda \neq \gamma$, this statement can only be true if $w^T v = 0$, i.e., if $v$ and $w$ are orthogonal.

What if the eigenvalues are not distinct? Consider the simple $2 \times 2$ identity matrix $I$. Any nonzero $x \in \mathbb{R}^2$ is an eigenvector of $I$ associated with the eigenvalue $\lambda = 1$, since $I x = 1 x$, so we have many eigenvectors that are not orthogonal. However, we can always find eigenvectors, like
$$ v_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, $$
that are orthogonal.

We are ready to collect the relevant facts in the Spectral Theorem.

Theorem 10 (Spectral Theorem). Suppose $S \in \mathbb{R}^{n \times n}$ is symmetric, $S^T = S$. Then there exist $n$ (not necessarily distinct) eigenvalues $\lambda_1, \ldots, \lambda_n$ and corresponding unit-length eigenvectors $v_1, \ldots, v_n$ such that
$$ S v_j = \lambda_j v_j. $$
The eigenvectors form an orthonormal basis for $\mathbb{R}^n$:
$$ \mathbb{R}^n = \operatorname{span}\{v_1, \ldots, v_n\}, $$
with $v_j^T v_k = 0$ when $j \neq k$, and $v_j^T v_j = \|v_j\|^2 = 1$.

For example, when
$$ S = \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix}, $$
we have $\lambda_1 = 4$ and $\lambda_2 = 2$, with
$$ v_1 = \begin{bmatrix} \sqrt{2}/2 \\ -\sqrt{2}/2 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} \sqrt{2}/2 \\ \sqrt{2}/2 \end{bmatrix}. $$
Note that these eigenvectors are unit vectors, and they are orthogonal.

As a consequence of the Spectral Theorem, we can write any symmetric matrix $S \in \mathbb{R}^{n \times n}$ in the form
$$ S = \sum_{j=1}^{n} \lambda_j v_j v_j^T. \tag{6.1} $$
For the example above, we can write
$$ S = \lambda_1 v_1 v_1^T + \lambda_2 v_2 v_2^T = 4 \begin{bmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{bmatrix} + 2 \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}. $$
Equation (6.1) expresses $S$ as the sum of the special rank-1 matrices $\lambda_j v_j v_j^T$. The singular value decomposition will provide a similar way to tease apart a rectangular matrix.
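Equation (6.1) is easy to verify numerically for the $2 \times 2$ example above. The following sketch (ours, not from the notes) uses `numpy.linalg.eigh`, which is intended for symmetric matrices and returns real eigenvalues with orthonormal eigenvectors, and then rebuilds $S$ from its rank-1 pieces $\lambda_j v_j v_j^T$.

```python
import numpy as np

S = np.array([[ 3.0, -1.0],
              [-1.0,  3.0]])

# eigh assumes a symmetric matrix: real eigenvalues (ascending order)
# and orthonormal eigenvectors in the columns of V.
lam, V = np.linalg.eigh(S)
print(lam)                                # [2. 4.]

# The eigenvectors form an orthonormal basis: V^T V = I.
print(np.allclose(V.T @ V, np.eye(2)))    # True

# Rebuild S as the sum of rank-1 matrices lambda_j * v_j v_j^T, as in (6.1).
S_rebuilt = sum(lam[j] * np.outer(V[:, j], V[:, j]) for j in range(2))
print(np.allclose(S_rebuilt, S))          # True
```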
Definition 21. A symmetric matrix $S \in \mathbb{R}^{n \times n}$ is positive definite provided $x^T S x > 0$ for all nonzero $x \in \mathbb{R}^n$; if $x^T S x \geq 0$ for all $x \in \mathbb{R}^n$, we say $S$ is positive semidefinite.

For the example above,
$$ x^T S x = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 3 & -1 \\ -1 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 3 x_1^2 - 2 x_1 x_2 + 3 x_2^2 = 2 (x_1 - x_2)^2 + (x_1 + x_2)^2. $$
This last expression, a sum of squares, is clearly positive for all nonzero $x$, so $S$ is positive definite.

Theorem 11. All eigenvalues of a symmetric positive definite matrix are positive; all eigenvalues of a symmetric positive semidefinite matrix are nonnegative.

Proof. Let $(\lambda_j, v_j)$ denote an eigenpair of the symmetric positive definite matrix $S \in \mathbb{R}^{n \times n}$ with $\|v_j\|^2 = v_j^T v_j = 1$. Since $S$ is symmetric, $\lambda_j$ must be real. We conclude that
$$ \lambda_j = \lambda_j v_j^T v_j = v_j^T (\lambda_j v_j) = v_j^T S v_j, $$
which must be positive since $S$ is positive definite and $v_j \neq 0$. The proof for positive semidefinite matrices is the same, except we can only conclude that $\lambda_j = v_j^T S v_j \geq 0$.

Can you prove the converse of this theorem? (A symmetric matrix with positive eigenvalues is positive definite.) Hint: use the Spectral Theorem. With this result, we can check whether $S$ is positive definite by just looking at its eigenvalues, rather than working out a formula for $x^T S x$, as done above.

6.2 Derivation of the singular value decomposition: Full rank case

We seek to derive the singular value decomposition of a general rectangular matrix.
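As a numerical preview of where this derivation is headed (our illustration, not part of the original notes; the random matrix $A$ below is a hypothetical example), the sketch checks that $A^T A$ is symmetric and positive semidefinite, so Theorem 11 guarantees nonnegative eigenvalues, and compares those eigenvalues with the singular values that `numpy.linalg.svd` returns for $A$: the singular values are their square roots, which is precisely why $A^T A$ is the right object to study.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))       # a generic tall rectangular matrix

# A^T A is square and symmetric, so the results of Section 6.1 apply to it.
G = A.T @ A
print(np.allclose(G, G.T))            # True

# A^T A is positive semidefinite: x^T (A^T A) x = ||Ax||^2 >= 0 for every x,
# so by Theorem 11 its eigenvalues are nonnegative.
lam = np.linalg.eigvalsh(G)           # ascending order
print(np.all(lam >= -1e-12))          # True, up to rounding

# The SVD factors A = U Sigma V^T with orthogonal U, V and nonnegative
# "diagonal" Sigma of the same shape as A.
U, sigma, Vt = np.linalg.svd(A)
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, sigma)
print(np.allclose(U @ Sigma @ Vt, A))                                # True

# The singular values are the square roots of the eigenvalues of A^T A.
print(np.allclose(np.sort(sigma), np.sqrt(np.maximum(lam, 0.0))))    # True
```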