Math 480: Diagonalization and the Singular Value Decomposition


These notes cover diagonalization and the Singular Value Decomposition.

1. Diagonalization.

Recall that a diagonal matrix is a square matrix with all off-diagonal entries equal to zero. Here are a few examples of diagonal matrices:
\[
\begin{pmatrix} -6 & 0 \\ 0 & 2 \end{pmatrix}, \qquad
\begin{pmatrix} 4 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
\begin{pmatrix} 4 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.
\]

Definition 1.1. We say that an $n \times n$ matrix $A$ is diagonalizable if there exists an invertible matrix $S$ such that $S^{-1}AS$ is diagonal.

Note that if $D = S^{-1}AS$ is diagonal, then we can equally well write $A = SDS^{-1}$. So diagonalizable matrices are those that admit a factorization $A = SDS^{-1}$ with $D$ diagonal.

Example: If $D$ is a diagonal $n \times n$ matrix and $S$ is an invertible $n \times n$ matrix, then $A = SDS^{-1}$ is diagonalizable, since $S^{-1}AS = S^{-1}(SDS^{-1})S = D$. For instance, the matrix $S = \begin{pmatrix} -1 & 2 \\ 2 & 4 \end{pmatrix}$ is invertible, so
\[
S \begin{pmatrix} -6 & 0 \\ 0 & 2 \end{pmatrix} S^{-1} = \begin{pmatrix} -2 & 2 \\ 8 & -2 \end{pmatrix}
\]
is diagonalizable.

Fact 1.2. If $A$ is a diagonalizable $n \times n$ matrix, with $S^{-1}AS = D$, then the columns of $S$ are eigenvectors of $A$, and the diagonal entries of $D$ are eigenvalues of $A$. In particular, if $A$ is diagonalizable then there must exist a basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$.

This follows from a simple computation: since $S^{-1}AS = D$, multiplying both sides by $S$ yields $AS = SD$. Write $S = [\vec{v}_1 \cdots \vec{v}_n]$ and set
\[
D = \begin{pmatrix}
\lambda_1 & 0 & 0 & \cdots & 0 \\
0 & \lambda_2 & 0 & \cdots & 0 \\
0 & 0 & \lambda_3 & & \vdots \\
\vdots & & & \ddots & 0 \\
0 & 0 & \cdots & 0 & \lambda_n
\end{pmatrix}.
\]
Since multiplying $S$ by a diagonal matrix (on the right) just scales the columns, $SD = [\lambda_1 \vec{v}_1 \cdots \lambda_n \vec{v}_n]$. On the other hand, $AS = A[\vec{v}_1 \cdots \vec{v}_n] = [A\vec{v}_1 \cdots A\vec{v}_n]$. So the equation $AS = SD$ tells us that $A\vec{v}_i = \lambda_i \vec{v}_i$ (for each $i$), which precisely says that $\vec{v}_i$ is an eigenvector with eigenvalue $\lambda_i$.

The previous discussion also works in reverse, and yields the following conclusion.

Fact 1.3. If $A$ is an $n \times n$ matrix and there exists a basis $\vec{v}_1, \ldots, \vec{v}_n$ for $\mathbb{R}^n$ such that $\vec{v}_i$ is an eigenvector of $A$ with eigenvalue $\lambda_i$, then $A$ is diagonalizable. More specifically, if $S = [\vec{v}_1 \cdots \vec{v}_n]$, then $S^{-1}AS = D$, where $D$ is the $n \times n$ diagonal matrix with diagonal entries $\lambda_1, \ldots, \lambda_n$.

Example. I claim that the matrix
\[
A = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{pmatrix}
\]
has eigenvalues 2 and 8. To find the corresponding eigenvectors, you can analyze $N(A - 2I)$ and $N(A - 8I)$. By considering the parametric form for the homogeneous systems $(A - 2I)\vec{x} = \vec{0}$ and $(A - 8I)\vec{x} = \vec{0}$, you'll find that the vectors
\[
\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}
\]
form a basis for the eigenspace associated to the eigenvalue 2, and
\[
\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
\]
forms a basis for the eigenspace associated with the eigenvalue 8. We can then conclude that $S^{-1}AS = D$, where
\[
S = \begin{pmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}
\quad \text{and} \quad
D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{pmatrix}.
\]
Note that order is important here: since we put eigenvectors corresponding to 2 into the first two columns of $S$, we have to put the eigenvalue 2 into the first two diagonal entries of $D$. We could, however, have switched the order of the eigenvectors corresponding to 2 without changing $D$, giving a second way of diagonalizing $A$. A third way of diagonalizing $A$ would be to set
\[
T = \begin{pmatrix} 1 & -1 & -1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}
\quad \text{and} \quad
E = \begin{pmatrix} 8 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix},
\]
and again we have $T^{-1}AT = E$.

Exercise 1: Check these formulas without computing $S^{-1}$ and $T^{-1}$. (Multiply both sides of the equations $S^{-1}AS = D$ and $T^{-1}AT = E$ by $S$ or $T$ and check instead that $AS = SD$ and $AT = TE$.)
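If you want to confirm the arithmetic on a computer, a minimal NumPy sketch along the following lines carries out the check from Exercise 1: it verifies $AS = SD$ for the $3 \times 3$ example above without ever forming $S^{-1}$ by hand.

```python
import numpy as np

# The 3x3 matrix from the example above, with eigenvalues 2 and 8.
A = np.array([[4, 2, 2],
              [2, 4, 2],
              [2, 2, 4]])

# Columns of S are the eigenvectors found above; D holds the matching eigenvalues.
S = np.array([[-1, -1, 1],
              [ 1,  0, 1],
              [ 0,  1, 1]])
D = np.diag([2, 2, 8])

# Exercise 1's shortcut: check AS = SD instead of computing S^{-1} A S.
print(np.allclose(A @ S, S @ D))                   # True

# The same data also confirms S^{-1} A S = D directly.
print(np.allclose(np.linalg.inv(S) @ A @ S, D))    # True
```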
Example: The matrix $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ is not diagonalizable. If you compute the characteristic polynomial $\det(A - \lambda I)$, you'll see that it is simply $(1 - \lambda)^2$, so the only eigenvalue is $\lambda = 1$. The corresponding eigenspace is $N(A - 1 \cdot I) = N(A - I)$. This space is 1-dimensional (why?), so there cannot be a basis for $\mathbb{R}^2$ consisting of eigenvectors of $A$. So Fact 1.2 tells us that we can't diagonalize $A$.

Exercise 2: Determine whether or not the following matrices are diagonalizable. For the ones that are diagonalizable, write them in the form $SDS^{-1}$ with $D$ diagonal.
\[
\begin{pmatrix} -3 & 1 \\ -1 & -1 \end{pmatrix}, \qquad
\begin{pmatrix} -4 & 6 \\ -8 & 10 \end{pmatrix}, \qquad
\begin{pmatrix} 5 & -1 & -4 \\ -2 & 4 & -2 \\ -3 & -3 & 6 \end{pmatrix}.
\]

2. Diagonalization of Symmetric Matrices

In general, it's hard to tell if a matrix is diagonalizable, because it's hard to find eigenvalues exactly: they're roots of a complicated polynomial. However, in some cases one can tell very quickly that a matrix is diagonalizable.

Theorem 2.1 (The Spectral Theorem). If $A$ is an $n \times n$ symmetric matrix, then $A$ is diagonalizable. In other words, there is a basis for $\mathbb{R}^n$ consisting of eigenvectors of $A$.

This is hard to prove, and we'll simply take it for granted. However, some additional information is much easier to establish.

Fact 2.2. If $A$ is an $n \times n$ symmetric matrix, and $\vec{v}$ and $\vec{w}$ are eigenvectors of $A$ with different eigenvalues, then $\vec{v}$ and $\vec{w}$ are perpendicular.

This is relatively easy to check using our understanding of orthogonality. Say $A\vec{v} = \lambda_1 \vec{v}$ and $A\vec{w} = \lambda_2 \vec{w}$ with $\lambda_1 \neq \lambda_2$. We need to check that $\langle \vec{v}, \vec{w} \rangle = 0$. Since $\langle \vec{v}, \vec{w} \rangle = \vec{v}^T \vec{w}$,
\[
\lambda_1 \vec{v}^T \vec{w} = (\lambda_1 \vec{v})^T \vec{w} = (A\vec{v})^T \vec{w} = \vec{v}^T A^T \vec{w} = \vec{v}^T A \vec{w} = \vec{v}^T \lambda_2 \vec{w} = \lambda_2 \vec{v}^T \vec{w}.
\]
So $\lambda_1 \vec{v}^T \vec{w} = \lambda_2 \vec{v}^T \vec{w}$, and since $\lambda_1 \neq \lambda_2$, we conclude that $\vec{v}^T \vec{w} = 0$. In this computation, we used the fact that $A$ is symmetric (where?) and the fact that $\vec{w}$ is an eigenvector (where?).

We'll mostly be interested in symmetric matrices of the form $A^T A$, where $A$ is any $m \times n$ matrix. Remember that all such matrices are symmetric, because $(A^T A)^T = A^T (A^T)^T = A^T A$.

Fact 2.3. For any $m \times n$ matrix $A$, the eigenvalues of the symmetric matrix $A^T A$ are all non-negative (real) numbers.

This is again easy to check. If $\lambda$ is an eigenvalue of $A^T A$, then we can always find a (non-zero) eigenvector $\vec{v}$ associated with $\lambda$, and dividing $\vec{v}$ by $\|\vec{v}\|$ yields a length-one eigenvector. So let's just assume that $\|\vec{v}\| = 1$ and $A^T A \vec{v} = \lambda \vec{v}$. Then we have
\[
\|A\vec{v}\|^2 = \langle A\vec{v}, A\vec{v} \rangle = (A\vec{v})^T A\vec{v} = \vec{v}^T (A^T A \vec{v}) = \vec{v}^T (\lambda \vec{v}) = \lambda \langle \vec{v}, \vec{v} \rangle = \lambda.
\]
So $\lambda = \|A\vec{v}\|^2$, which is a non-negative real number. In this computation, we used the fact that $\vec{v}$ is an eigenvector of $A^T A$ (where?) and the fact that $\|\vec{v}\| = 1$ (where?).

Exercise 3: Write each of the following symmetric matrices in the form $SDS^{-1}$ with $D$ diagonal. In the second case, the eigenvalues are $-1$ and $11$.
\[
\begin{pmatrix} 5/2 & 1/2 & 0 \\ 1/2 & 5/2 & 0 \\ 0 & 0 & 5 \end{pmatrix}, \qquad
\begin{pmatrix} 3 & 4 & 4 \\ 4 & 3 & 4 \\ 4 & 4 & 3 \end{pmatrix}.
\]

3. The Singular Value Decomposition

Lots of matrices that arise in practice are not diagonalizable, and are often not even square. However, there is something sort of similar to diagonalization that works for any $m \times n$ matrix. We will call a square matrix orthogonal if its columns are orthonormal.

Exercise 4: Explain the following statement: if $A$ is an orthogonal $n \times n$ matrix, then $A$ is invertible and $A^T = A^{-1}$. (This came up when we discussed the QR factorization.)
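A short NumPy sketch can illustrate the facts from Section 2 and Exercise 4 on the second matrix from Exercise 3. It assumes NumPy's symmetric eigensolver `np.linalg.eigh`, which returns real eigenvalues together with an orthonormal set of eigenvectors; that orthonormality is exactly the Spectral Theorem at work, and the eigenvector matrix $Q$ satisfies $Q^T = Q^{-1}$ as in Exercise 4.

```python
import numpy as np

# The second matrix from Exercise 3; its eigenvalues should be -1 and 11.
B = np.array([[3, 4, 4],
              [4, 3, 4],
              [4, 4, 3]])

# eigh is meant for symmetric matrices: real eigenvalues (ascending order)
# and an orthonormal set of eigenvectors in the columns of Q.
evals, Q = np.linalg.eigh(B)
print(evals)                                          # approximately [-1, -1, 11]

# Q is orthogonal, so Q^T = Q^{-1} (Exercise 4), and Q^T B Q is diagonal.
print(np.allclose(Q.T @ Q, np.eye(3)))                # True
print(np.allclose(Q.T @ B @ Q, np.diag(evals)))       # True

# Fact 2.3: for any m x n matrix A, the eigenvalues of A^T A are non-negative.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
print(np.all(np.linalg.eigvalsh(A.T @ A) >= -1e-12))  # True, up to round-off
```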
Definition 3.1. A Singular Value Decomposition of an $m \times n$ matrix $A$ is an expression
\[
A = U \Sigma V^T
\]
where $U$ is an $m \times m$ matrix with orthonormal columns, $V$ is an $n \times n$ matrix with orthonormal columns, and $\Sigma = (\sigma_{i,j})$ is an $m \times n$ matrix with $\sigma_{i,j} = 0$ for $i \neq j$ and
\[
\sigma_{1,1} \geq \sigma_{2,2} \geq \sigma_{3,3} \geq \cdots \geq 0.
\]

Example: Here is an example of a SVD:
\[
\begin{pmatrix} 6 & 30 & -21 \\ 17 & 10 & -22 \end{pmatrix}
=
\begin{pmatrix} 4/5 & -3/5 \\ 3/5 & 4/5 \end{pmatrix}
\begin{pmatrix} 45 & 0 & 0 \\ 0 & 15 & 0 \end{pmatrix}
\begin{pmatrix} 1/3 & 2/3 & -2/3 \\ 2/3 & -2/3 & -1/3 \\ 2/3 & 1/3 & 2/3 \end{pmatrix}.
\]

Exercise 5: Check that the above decomposition is a Singular Value Decomposition. (You need to check that the left-hand matrix in the decomposition has orthonormal columns, that the rows of the right-hand matrix are orthonormal, and that the middle matrix is "diagonal" with decreasing, positive entries on the diagonal. Of course no work is required to check this third condition.)

Here are the key facts about the SVD:

Theorem 3.2. Every $m \times n$ matrix $A$ admits (many) Singular Value Decompositions.

Fact 3.3. If $A = U \Sigma V^T$ is a Singular Value Decomposition of an $m \times n$ matrix $A$, then
• The numbers $\sigma_{i,i}$ are the square roots of the eigenvalues of $A^T A$, repeated according to their multiplicities as roots of the characteristic polynomial of $A^T A$.
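To see Theorem 3.2 and Fact 3.3 in action, a sketch like the following (using NumPy's `np.linalg.svd`) computes an SVD of the $2 \times 3$ matrix from the example above and checks that the singular values 45 and 15 are the square roots of the eigenvalues of $A^T A$. The signs of the columns of $U$ and $V$ chosen by the routine may differ from the hand-worked factorization, but the product $U \Sigma V^T$ still recovers $A$.

```python
import numpy as np

A = np.array([[ 6, 30, -21],
              [17, 10, -22]])

# svd returns U (2x2), the singular values in decreasing order, and V^T (3x3).
U, s, Vt = np.linalg.svd(A)
print(s)                                      # approximately [45., 15.]

# Fact 3.3: the singular values are square roots of the eigenvalues of A^T A.
# Here A^T A is 3x3 of rank 2, so its eigenvalues are 2025, 225, and 0.
evals = np.linalg.eigvalsh(A.T @ A)           # ascending: [0, 225, 2025]
print(np.sqrt(evals[::-1][:2]))               # approximately [45., 15.]

# Reassemble A from the factors: U @ Sigma @ V^T with Sigma the 2x3 "diagonal" matrix.
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)
print(np.allclose(U @ Sigma @ Vt, A))         # True
```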