Linear Algebra Review
Appendix A  Linear Algebra Review

In this appendix, we review results from linear algebra that are used in the text. The results quoted here are mostly standard, and the proofs are mostly omitted. For more information, the reader is encouraged to consult such standard linear algebra textbooks as [HK] or [Axl]. Throughout this appendix, we let $M_n(\mathbb{C})$ denote the space of $n \times n$ matrices with entries in $\mathbb{C}$.

A.1 Eigenvectors and Eigenvalues

For any $A \in M_n(\mathbb{C})$, a nonzero vector $v$ in $\mathbb{C}^n$ is called an eigenvector for $A$ if there is some complex number $\lambda$ such that $Av = \lambda v$. An eigenvalue for $A$ is a complex number $\lambda$ for which there exists a nonzero $v \in \mathbb{C}^n$ with $Av = \lambda v$. Thus, $\lambda$ is an eigenvalue for $A$ if the equation $Av = \lambda v$ or, equivalently, the equation $(A - \lambda I)v = 0$, has a nonzero solution $v$. This happens precisely when $A - \lambda I$ fails to be invertible, which is precisely when $\det(A - \lambda I) = 0$. For any $A \in M_n(\mathbb{C})$, the characteristic polynomial $p$ of $A$ is given by

$$p(\lambda) = \det(A - \lambda I), \quad \lambda \in \mathbb{C}.$$

This polynomial has degree $n$. In light of the preceding discussion, the eigenvalues of $A$ are precisely the zeros of the characteristic polynomial. We can define, more generally, the notion of eigenvector and eigenvalue for any linear operator on a vector space.

(Publisher's note: a previous version of this book was inadvertently published without the middle initial of the author's name, as "Brian Hall". An erratum correcting the name to Brian C. Hall has been published; see DOI http://dx.doi.org/10.1007/978-3-319-13467-3_14. The version readers currently see is the corrected version.)

© Springer International Publishing Switzerland 2015
B.C. Hall, Lie Groups, Lie Algebras, and Representations, Graduate Texts in Mathematics 222, DOI 10.1007/978-3-319-13467-3
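The zeros of the characteristic polynomial can be computed directly for a small matrix. The following is a minimal pure-Python sketch for the $2 \times 2$ case, where $p(\lambda) = \lambda^2 - (\operatorname{tr} A)\,\lambda + \det A$; the example matrix and helper names are illustrative choices for this sketch, not from the text, and the root formula assumes real eigenvalues.

```python
import math

def char_poly_2x2(A):
    """Coefficients (1, -trace, det) of p(lam) = det(A - lam*I) for 2x2 A."""
    (a, b), (c, d) = A
    return (1.0, -(a + d), a * d - b * c)

def eigenvalues_2x2(A):
    """Eigenvalues of a 2x2 matrix with real spectrum: the roots of p."""
    _, p1, p0 = char_poly_2x2(A)
    disc = p1 * p1 - 4.0 * p0      # discriminant of lam^2 + p1*lam + p0
    r = math.sqrt(disc)            # assumes the eigenvalues are real
    return sorted(((-p1 - r) / 2.0, (-p1 + r) / 2.0))

# Illustrative example: A has p(lam) = lam^2 - 4*lam + 3 = (lam - 1)(lam - 3).
A = [[2.0, 1.0], [1.0, 2.0]]
lams = eigenvalues_2x2(A)
print(lams)  # [1.0, 3.0]
```

For this matrix, $v = (1, -1)$ satisfies $Av = v$, so $\lambda = 1$ is indeed an eigenvalue, matching the smaller root found above.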
If $V$ is a finite-dimensional vector space over $\mathbb{C}$ (or over any algebraically closed field), every linear operator on $V$ will have at least one eigenvalue. If $A$ is a linear operator on a vector space $V$ and $\lambda$ is an eigenvalue for $A$, the $\lambda$-eigenspace for $A$, denoted $V_\lambda$, is the set of all vectors $v \in V$ (including the zero vector) that satisfy $Av = \lambda v$. The $\lambda$-eigenspace for $A$ is a subspace of $V$. The dimension of this space is called the multiplicity of $\lambda$. (More precisely, this is the "geometric multiplicity" of $\lambda$. In the finite-dimensional case, there is also a notion of the "algebraic multiplicity" of $\lambda$, which is the number of times $\lambda$ occurs as a root of the characteristic polynomial. The geometric multiplicity of $\lambda$ cannot exceed the algebraic multiplicity.)

Proposition A.1. Suppose that $A$ is a linear operator on a vector space $V$ and $v_1, \ldots, v_k$ are eigenvectors with distinct eigenvalues $\lambda_1, \ldots, \lambda_k$. Then $v_1, \ldots, v_k$ are linearly independent.

Note that here $V$ does not have to be finite dimensional.

Proposition A.2. Suppose that $A$ and $B$ are linear operators on a finite-dimensional vector space $V$ and suppose that $AB = BA$. Then for each eigenvalue $\lambda$ of $A$, the operator $B$ maps the $\lambda$-eigenspace of $A$ into itself.

Proof. Let $\lambda$ be an eigenvalue of $A$ and let $V_\lambda$ be the $\lambda$-eigenspace of $A$. Let $v$ be an element of $V_\lambda$ and consider $Bv$. Since $B$ commutes with $A$, we have $A(Bv) = BAv = B(\lambda v) = \lambda Bv$, showing that $Bv$ is in $V_\lambda$. ∎

A.2 Diagonalization

Two matrices $A, B \in M_n(\mathbb{C})$ are said to be similar if there exists an invertible matrix $C$ such that $A = CBC^{-1}$. The operation $B \mapsto CBC^{-1}$ is called conjugation of $B$ by $C$. A matrix is said to be diagonalizable if it is similar to a diagonal matrix.
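Proposition A.2 can be checked numerically on a small example. In the sketch below (matrices and helper functions are illustrative, not from the text), we take $B = A^2$, which automatically commutes with $A$, and verify that $B$ maps an eigenvector of $A$ to another vector in the same eigenspace.

```python
def matvec(M, v):
    """Matrix-vector product for a square matrix given as nested lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(M, N):
    """Matrix-matrix product for square matrices given as nested lists."""
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0], [1.0, 2.0]]
B = matmul(A, A)                       # B = A^2, so AB = BA automatically
assert matmul(A, B) == matmul(B, A)

lam = 1.0
v = [1.0, -1.0]                        # eigenvector of A with Av = 1*v
assert matvec(A, v) == [lam * x for x in v]

# Proposition A.2: Bv again lies in the lam-eigenspace, i.e. A(Bv) = lam*(Bv).
Bv = matvec(B, v)
print(matvec(A, Bv) == [lam * x for x in Bv])  # True
```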
A matrix $A \in M_n(\mathbb{C})$ is diagonalizable if and only if there exist $n$ linearly independent eigenvectors for $A$. Specifically, if $v_1, \ldots, v_n$ are linearly independent eigenvectors, let $C$ be the matrix whose $k$th column is $v_k$. Then $C$ is invertible and we will have

$$A = C \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} C^{-1}, \tag{A.1}$$

where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues associated to the eigenvectors $v_1, \ldots, v_n$, in that order. To verify (A.1), note that $C$ maps the standard basis element $e_j$ to $v_j$. Thus, $C^{-1}$ maps $v_j$ to $e_j$, the diagonal matrix on the right-hand side of (A.1) then maps $e_j$ to $\lambda_j e_j$, and $C$ maps $\lambda_j e_j$ to $\lambda_j v_j$. Thus, both sides of (A.1) map $v_j$ to $\lambda_j v_j$, for all $j$.

If $A \in M_n(\mathbb{C})$ has $n$ distinct eigenvalues (i.e., $n$ distinct roots of the characteristic polynomial), $A$ is necessarily diagonalizable, by Proposition A.1. If the characteristic polynomial of $A$ has repeated roots, $A$ may or may not be diagonalizable.

For $A \in M_n(\mathbb{C})$, the adjoint of $A$, denoted $A^*$, is the conjugate transpose of $A$:

$$(A^*)_{jk} = \overline{A_{kj}}. \tag{A.2}$$

A matrix $A$ is said to be self-adjoint (or Hermitian) if $A^* = A$. A matrix $A$ is said to be skew self-adjoint (or skew Hermitian) if $A^* = -A$. A matrix is said to be unitary if $A^* = A^{-1}$. More generally, $A$ is said to be normal if $A$ commutes with $A^*$. If $A$ is normal, $A$ is necessarily diagonalizable, and, indeed, it is possible to find an orthonormal basis of eigenvectors for $A$. In such cases, the matrix $C$ in (A.1) may be taken to be unitary. If $A$ is self-adjoint, all of its eigenvalues are real. If $A$ is real and self-adjoint (or, equivalently, real and symmetric), the eigenvectors may be taken to be real as well, which means that in this case the matrix $C$ may be taken to be orthogonal. If $A$ is skew self-adjoint, then its eigenvalues are pure imaginary. If $A$ is unitary, then its eigenvalues are complex numbers of absolute value 1. We summarize the results of the previous paragraphs in the following theorem.

Theorem A.3.
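The factorization (A.1) can be verified concretely. The sketch below (example matrices chosen for illustration, not taken from the text) builds $C$ from two eigenvectors, forms the diagonal matrix of the corresponding eigenvalues, and checks that $C D C^{-1}$ reproduces $A$ exactly.

```python
def matmul(M, N):
    """Matrix-matrix product for square matrices given as nested lists."""
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv_2x2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 2.0]]
# Columns of C are the eigenvectors (1, -1) and (1, 1) of A.
C = [[1.0, 1.0], [-1.0, 1.0]]
# Eigenvalues 1 and 3, in the same order as the columns of C.
D = [[1.0, 0.0], [0.0, 3.0]]

A_rebuilt = matmul(matmul(C, D), inv_2x2(C))   # the right-hand side of (A.1)
print(A_rebuilt)  # [[2.0, 1.0], [1.0, 2.0]]
```

Note that this $A$ is real and symmetric, so, as stated above, its eigenvalues are real and its eigenvectors can be taken real; normalizing the columns of $C$ would make it orthogonal.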
Suppose that $A \in M_n(\mathbb{C})$ has the property that $A^* A = A A^*$ (e.g., if $A^* = A$, $A^* = A^{-1}$, or $A^* = -A$). Then $A$ is diagonalizable and it is possible to find an orthonormal basis for $\mathbb{C}^n$ consisting of eigenvectors for $A$. If $A^* = A$, all the eigenvalues of $A$ are real; if $A^* = -A$, all the eigenvalues of $A$ are pure imaginary; and if $A^* = A^{-1}$, all the eigenvalues of $A$ have absolute value 1.

A.3 Generalized Eigenvectors and the SN Decomposition

Not all matrices are diagonalizable, even over $\mathbb{C}$. If, for example,

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \tag{A.3}$$

then the only eigenvalue of $A$ is 1, and every eigenvector with eigenvalue 1 is of the form $(c, 0)$. Thus, we cannot find two linearly independent eigenvectors for $A$. It is not hard, however, to prove the following result. Recall that a matrix $A$ is nilpotent if $A^k = 0$ for some positive integer $k$.

Theorem A.4. Every matrix is similar to an upper triangular matrix. Every nilpotent matrix is similar to an upper triangular matrix with zeros on the diagonal.

While Theorem A.4 is sufficient for some purposes, we will in general need something that comes a bit closer to a diagonal representation. If $A \in M_n(\mathbb{C})$ does not have $n$ linearly independent eigenvectors, we may consider the more general concept of generalized eigenvectors. A nonzero vector $v \in \mathbb{C}^n$ is called a generalized eigenvector for $A$ if there is some complex number $\lambda$ and some positive integer $k$ such that

$$(A - \lambda I)^k v = 0. \tag{A.4}$$

If (A.4) holds for some $v \neq 0$, then $A - \lambda I$ cannot be invertible. Thus, the number $\lambda$ must be an (ordinary) eigenvalue for $A$. However, for a fixed eigenvalue $\lambda$, there may be generalized eigenvectors $v$ that are not ordinary eigenvectors. In the case of the matrix $A$ in (A.3), for example, the vector $(0, 1)$ is a generalized eigenvector with eigenvalue 1 (with $k = 2$). It can be shown that every $A \in M_n(\mathbb{C})$ has a basis of generalized eigenvectors.
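The claim about the matrix in (A.3) is easy to check by hand or by machine: $(A - I)v \neq 0$ but $(A - I)^2 v = 0$ for $v = (0, 1)$, so $v$ is a generalized eigenvector that is not an ordinary one. A minimal sketch (helper function named for this example only):

```python
def matvec(M, v):
    """Matrix-vector product for a square matrix given as nested lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# The matrix A of (A.3) and the nilpotent part N = A - 1*I.
A = [[1.0, 1.0], [0.0, 1.0]]
N = [[0.0, 1.0], [0.0, 0.0]]
v = [0.0, 1.0]

step1 = matvec(N, v)      # (A - I)v = (1, 0): nonzero, so v is NOT an
                          # ordinary eigenvector with eigenvalue 1
step2 = matvec(N, step1)  # (A - I)^2 v = (0, 0): v IS a generalized
                          # eigenvector with eigenvalue 1 and k = 2
print(step1, step2)
```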
For any matrix $A$ and any eigenvalue $\lambda$ for $A$, let $W_\lambda$ be the generalized eigenspace with eigenvalue $\lambda$:

$$W_\lambda = \{\, v \in \mathbb{C}^n \mid (A - \lambda I)^k v = 0 \text{ for some } k \,\}.$$

Then $\mathbb{C}^n$ decomposes as a direct sum of the $W_\lambda$'s, as $\lambda$ ranges over all the eigenvalues of $A$. Furthermore, the subspace $W_\lambda$ is easily seen to be invariant under the matrix $A$. Let $A_\lambda$ denote the restriction of $A$ to the subspace $W_\lambda$, and let $N_\lambda = A_\lambda - \lambda I$, so that $A_\lambda = \lambda I + N_\lambda$. Then $N_\lambda$ is nilpotent; that is, $N_\lambda^k = 0$ for some positive integer $k$. We summarize the preceding discussion in the following theorem.

Theorem A.5. Let $A$ be an $n \times n$ complex matrix. Then there exists a basis for $\mathbb{C}^n$ consisting of generalized eigenvectors for $A$. Furthermore, $\mathbb{C}^n$ is the direct sum of the generalized eigenspaces $W_\lambda$. Each $W_\lambda$ is invariant under $A$, and the restriction of $A$ to $W_\lambda$ is of the form $\lambda I + N_\lambda$, where $N_\lambda$ is nilpotent.

The preceding result is the basis for the following decomposition.

Theorem A.6. Each $A \in M_n(\mathbb{C})$ has a unique decomposition as $A = S + N$, where $S$ is diagonalizable, $N$ is nilpotent, and $SN = NS$.

The expression $A = S + N$, with $S$ and $N$ as in the theorem, is called the SN decomposition of $A$. The existence of an SN decomposition follows from the previous theorem: we define $S$ to be the operator equal to $\lambda I$ on each generalized eigenspace $W_\lambda$ of $A$, and we set $N$ to be the operator equal to $N_\lambda$ on each $W_\lambda$. For example, if $A$ is the matrix in (A.3), then we have

$$S = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

A.4 The Jordan Canonical Form

The Jordan canonical form may be viewed as a refinement of the SN decomposition, based on a further analysis of the nilpotent matrices $N_\lambda$ in Theorem A.5.
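The three defining properties of the SN decomposition ($A = S + N$, $SN = NS$, $N$ nilpotent) can be verified directly for the example from (A.3). A minimal sketch, with helper functions written only for this check:

```python
def matmul(M, P):
    """Matrix-matrix product for square matrices given as nested lists."""
    n = len(M)
    return [[sum(M[i][k] * P[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(M, P):
    """Entrywise sum of two matrices given as nested lists."""
    return [[M[i][j] + P[i][j] for j in range(len(M))] for i in range(len(M))]

A = [[1.0, 1.0], [0.0, 1.0]]
S = [[1.0, 0.0], [0.0, 1.0]]   # diagonal (the identity), hence diagonalizable
N = [[0.0, 1.0], [0.0, 0.0]]   # strictly upper triangular

print(matadd(S, N))                   # A = S + N
print(matmul(S, N) == matmul(N, S))   # True: S and N commute
print(matmul(N, N))                   # N^2 = 0, so N is nilpotent
```

Here $S = I$ because $A$ has the single eigenvalue $\lambda = 1$, so $W_1 = \mathbb{C}^2$ and $S = 1 \cdot I$ on all of $\mathbb{C}^2$.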