Linear Algebra: Graduate Level Problems and Solutions
Igor Yanovsky, 2005

Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. I cannot be held responsible for any inaccuracies contained in this handbook.

Contents

1 Basic Theory
  1.1 Linear Maps
  1.2 Linear Maps as Matrices
  1.3 Dimension and Isomorphism
  1.4 Matrix Representations Redux
  1.5 Subspaces
  1.6 Linear Maps and Subspaces
  1.7 Dimension Formula
  1.8 Matrix Calculations
  1.9 Diagonalizability
2 Inner Product Spaces
  2.1 Inner Products
  2.2 Orthonormal Bases
    2.2.1 Gram-Schmidt procedure
    2.2.2 QR Factorization
  2.3 Orthogonal Complements and Projections
3 Linear Maps on Inner Product Spaces
  3.1 Adjoint Maps
  3.2 Self-Adjoint Maps
  3.3 Polarization and Isometries
  3.4 Unitary and Orthogonal Operators
  3.5 Spectral Theorem
  3.6 Normal Operators
  3.7 Unitary Equivalence
  3.8 Triangulability
4 Determinants
  4.1 Characteristic Polynomial
5 Linear Operators
  5.1 Dual Spaces
  5.2 Dual Maps
6 Problems

1 Basic Theory

1.1 Linear Maps

Lemma. If $A \in \mathrm{Mat}_{m \times n}(\mathbb{F})$ and $B \in \mathrm{Mat}_{n \times m}(\mathbb{F})$, then $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.

Proof. Note that the $(i,i)$ entry in $AB$ is $\sum_{j=1}^{n} \alpha_{ij}\beta_{ji}$, while the $(j,j)$ entry in $BA$ is $\sum_{i=1}^{m} \beta_{ji}\alpha_{ij}$. Thus
$$\mathrm{tr}(AB) = \sum_{i=1}^{m} \sum_{j=1}^{n} \alpha_{ij}\beta_{ji}, \qquad \mathrm{tr}(BA) = \sum_{j=1}^{n} \sum_{i=1}^{m} \beta_{ji}\alpha_{ij}. \qquad \square$$

1.2 Linear Maps as Matrices

Example. Let $P_n = \{\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n : \alpha_0, \alpha_1, \ldots, \alpha_n \in \mathbb{F}\}$ be the space of polynomials of degree $\leq n$ and $D : V \to$
$V$ the differential map
$$D(\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n) = \alpha_1 + \cdots + n\alpha_n t^{n-1}.$$
If we use the basis $1, t, \ldots, t^n$ for $V$, then we see that $D(t^k) = k t^{k-1}$, and thus the $(n+1) \times (n+1)$ matrix representation is computed via
$$[D(1)\;\; D(t)\;\; D(t^2)\;\; \cdots\;\; D(t^n)] = [0\;\; 1\;\; 2t\;\; \cdots\;\; n t^{n-1}] = [1\;\; t\;\; t^2\;\; \cdots\;\; t^n]
\begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 2 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & n \\
0 & 0 & \cdots & 0 & 0
\end{pmatrix}$$

1.3 Dimension and Isomorphism

A linear map $L : V \to W$ is an isomorphism if we can find $K : W \to V$ such that $LK = I_W$ and $KL = I_V$.
$$\begin{array}{ccc}
V & \xrightarrow{\;L\;} & W \\
{\scriptstyle I_V}\big\uparrow & & \big\uparrow{\scriptstyle I_W} \\
V & \xleftarrow{\;K\;} & W
\end{array}$$

Theorem. $V$ and $W$ are isomorphic $\iff$ there is a bijective linear map $L : V \to W$.

Proof. ($\Rightarrow$) If $V$ and $W$ are isomorphic, we can find linear maps $L : V \to W$ and $K : W \to V$ so that $LK = I_W$ and $KL = I_V$. Then for any $y \in W$ we have $y = I_W(y) = L(K(y))$, so we can let $x = K(y)$, which means $L$ is onto. If $L(x_1) = L(x_2)$, then $x_1 = I_V(x_1) = KL(x_1) = KL(x_2) = I_V(x_2) = x_2$, which means $L$ is 1-1.

($\Leftarrow$) Assume $L : V \to W$ is linear and a bijection. Then we have an inverse map $L^{-1}$ which satisfies $L \circ L^{-1} = I_W$ and $L^{-1} \circ L = I_V$. In order for this inverse map to be allowable as $K$, we need to check that it is linear. Select $\alpha_1, \alpha_2 \in \mathbb{F}$ and $y_1, y_2 \in W$. Let $x_i = L^{-1}(y_i)$ so that $L(x_i) = y_i$. Then we have
$$L^{-1}(\alpha_1 y_1 + \alpha_2 y_2) = L^{-1}(\alpha_1 L(x_1) + \alpha_2 L(x_2)) = L^{-1}(L(\alpha_1 x_1 + \alpha_2 x_2)) = I_V(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 x_1 + \alpha_2 x_2 = \alpha_1 L^{-1}(y_1) + \alpha_2 L^{-1}(y_2). \qquad \square$$

Theorem. If $\mathbb{F}^m$ and $\mathbb{F}^n$ are isomorphic over $\mathbb{F}$, then $n = m$.

Proof. Suppose we have $L : \mathbb{F}^m \to \mathbb{F}^n$ and $K : \mathbb{F}^n \to \mathbb{F}^m$ such that $LK = I_{\mathbb{F}^n}$ and $KL = I_{\mathbb{F}^m}$. Then $L \in \mathrm{Mat}_{n \times m}(\mathbb{F})$ and $K \in \mathrm{Mat}_{m \times n}(\mathbb{F})$. Thus
$$n = \mathrm{tr}(I_{\mathbb{F}^n}) = \mathrm{tr}(LK) = \mathrm{tr}(KL) = \mathrm{tr}(I_{\mathbb{F}^m}) = m. \qquad \square$$

Define the dimension of a vector space $V$ over $\mathbb{F}$ as $\dim_{\mathbb{F}} V = n$ if $V$ is isomorphic to $\mathbb{F}^n$.

Remark. $\dim_{\mathbb{C}} \mathbb{C} = 1$, $\dim_{\mathbb{R}} \mathbb{C} = 2$, $\dim_{\mathbb{Q}} \mathbb{R} = \infty$.

The set of all linear maps $\{L : V \to W\}$ over $\mathbb{F}$ forms a vector space, denoted by $\hom_{\mathbb{F}}(V, W)$.

Corollary.
If $V$ and $W$ are finite dimensional vector spaces over $\mathbb{F}$, then $\hom_{\mathbb{F}}(V, W)$ is also finite dimensional and
$$\dim_{\mathbb{F}} \hom_{\mathbb{F}}(V, W) = (\dim_{\mathbb{F}} W) \cdot (\dim_{\mathbb{F}} V).$$

Proof. By choosing bases for $V$ and $W$ there is a natural mapping
$$\hom_{\mathbb{F}}(V, W) \to \mathrm{Mat}_{(\dim_{\mathbb{F}} W) \times (\dim_{\mathbb{F}} V)}(\mathbb{F}) \cong \mathbb{F}^{(\dim_{\mathbb{F}} W) \cdot (\dim_{\mathbb{F}} V)}.$$
This map is both 1-1 and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map. $\square$

1.4 Matrix Representations Redux

$L : V \to W$, bases $x_1, \ldots, x_m$ for $V$ and $y_1, \ldots, y_n$ for $W$. The matrix for $L$ interpreted as a linear map is $[L] : \mathbb{F}^m \to \mathbb{F}^n$. The basis isomorphisms defined by the choices of basis for $V$ and $W$ are
$[x_1 \cdots x_m] : \mathbb{F}^m \to V$,¹ and
$[y_1 \cdots y_n] : \mathbb{F}^n \to W$.
$$\begin{array}{ccc}
V & \xrightarrow{\;L\;} & W \\
{\scriptstyle [x_1 \cdots x_m]}\big\uparrow & & \big\uparrow{\scriptstyle [y_1 \cdots y_n]} \\
\mathbb{F}^m & \xrightarrow{\;[L]\;} & \mathbb{F}^n
\end{array}$$
$$L \circ [x_1 \cdots x_m] = [y_1 \cdots y_n]\,[L]$$

1.5 Subspaces

A nonempty subset $M \subset V$ is a subspace if $\alpha, \beta \in \mathbb{F}$ and $x, y \in M$ imply $\alpha x + \beta y \in M$. Also, $0 \in M$. If $M, N \subset V$ are subspaces, then we can form two new subspaces, the sum and the intersection:
$$M + N = \{x + y : x \in M,\; y \in N\}, \qquad M \cap N = \{x : x \in M,\; x \in N\}.$$
$M$ and $N$ have trivial intersection if $M \cap N = \{0\}$. $M$ and $N$ are transversal if $M + N = V$. Two subspaces are complementary if they are transversal and have trivial intersection. $M, N$ form a direct sum of $V$ if $M \cap N = \{0\}$ and $M + N = V$; write $V = M \oplus N$.

Example. $V = \mathbb{R}^2$, $M = \{(x, 0) : x \in \mathbb{R}\}$, the x-axis, and $N = \{(0, y) : y \in \mathbb{R}\}$, the y-axis.

Example. $V = \mathbb{R}^2$, $M = \{(x, 0) : x \in \mathbb{R}\}$, the x-axis, and $N = \{(y, y) : y \in \mathbb{R}\}$, a diagonal. Note $(x, y) = (x - y, 0) + (y, y)$, which gives $V = M \oplus N$.

If we have a direct sum decomposition $V = M \oplus N$, then we can construct the projection of $V$ onto $M$ along $N$. The map $E : V \to V$ is defined using that each $z = x + y$ with $x \in M$, $y \in N$, and mapping $z$ to $x$: $E(z) = E(x + y) = E(x) + E(y) = E(x) = x$. Thus $\mathrm{im}(E) = M$ and $\ker(E) = N$.

Definition. If $V$ is a vector space, a projection of $V$ is a linear operator $E$ on $V$ such that $E^2 = E$.

¹ $[x_1 \cdots x_m] : \mathbb{F}^m \to V$ means $[x_1 \cdots x_m] \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_m \end{pmatrix}$
$= \alpha_1 x_1 + \cdots + \alpha_m x_m$.

1.6 Linear Maps and Subspaces

$L : V \to W$ is a linear map over $\mathbb{F}$. The kernel, or nullspace, of $L$ is
$$\ker(L) = N(L) = \{x \in V : L(x) = 0\}.$$
The image, or range, of $L$ is
$$\mathrm{im}(L) = R(L) = L(V) = \{L(x) \in W : x \in V\}.$$

Lemma. $\ker(L)$ is a subspace of $V$ and $\mathrm{im}(L)$ is a subspace of $W$.

Proof. Assume that $\alpha_1, \alpha_2 \in \mathbb{F}$ and that $x_1, x_2 \in \ker(L)$; then $L(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 L(x_1) + \alpha_2 L(x_2) = 0$, so $\alpha_1 x_1 + \alpha_2 x_2 \in \ker(L)$. Assume $\alpha_1, \alpha_2 \in \mathbb{F}$ and $x_1, x_2 \in V$; then $\alpha_1 L(x_1) + \alpha_2 L(x_2) = L(\alpha_1 x_1 + \alpha_2 x_2) \in \mathrm{im}(L)$. $\square$

Lemma. $L$ is 1-1 $\iff$ $\ker(L) = \{0\}$.

Proof. ($\Rightarrow$) We know that $L(0 \cdot 0) = 0 \cdot L(0) = 0$, so if $L$ is 1-1, then $L(x) = 0 = L(0)$ implies that $x = 0$. Hence $\ker(L) = \{0\}$.

($\Leftarrow$) Assume that $\ker(L) = \{0\}$. If $L(x_1) = L(x_2)$, then linearity of $L$ tells us that $L(x_1 - x_2) = 0$. Then $\ker(L) = \{0\}$ implies $x_1 - x_2 = 0$, which shows that $x_1 = x_2$, as desired. $\square$

Lemma. Let $L : V \to W$ with $\dim V = \dim W$. Then $L$ is 1-1 $\iff$ $L$ is onto $\iff$ $\dim \mathrm{im}(L) = \dim V$.

Proof. From the dimension formula, we have
$$\dim V = \dim \ker(L) + \dim \mathrm{im}(L).$$
$L$ is 1-1 $\iff$ $\ker(L) = \{0\}$ $\iff$ $\dim \ker(L) = 0$ $\iff$ $\dim \mathrm{im}(L) = \dim V$ $\iff$ $\dim \mathrm{im}(L) = \dim W$ $\iff$ $\mathrm{im}(L) = W$, that is, $L$ is onto. $\square$

1.7 Dimension Formula

Theorem. Let $V$ be finite dimensional and $L : V \to W$ a linear map, all over $\mathbb{F}$. Then $\mathrm{im}(L)$ is finite dimensional and
$$\dim_{\mathbb{F}} V = \dim_{\mathbb{F}} \ker(L) + \dim_{\mathbb{F}} \mathrm{im}(L).$$

Proof. We know that $\dim \ker(L) \leq \dim V$ and that $\ker(L)$ has a complement $M$ of dimension $k = \dim V - \dim \ker(L)$. Since $M \cap \ker(L) = \{0\}$, the linear map $L$ must be 1-1 when restricted to $M$. Thus $L|_M : M \to \mathrm{im}(L)$ is an isomorphism, i.e. $\dim \mathrm{im}(L) = \dim M = k$. $\square$

1.8 Matrix Calculations

Change of Basis Matrix. Given the two bases of $\mathbb{R}^2$, $\beta_1 = \{x_1 = (1, 1),\; x_2 = (1, 0)\}$ and $\beta_2 = \{y_1 = (4, 3),\; y_2 = (3, 2)\}$, we find the change-of-basis matrix $P$ from $\beta_1$ to $\beta_2$.
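The statement of this change-of-basis problem can be checked numerically. The sketch below is not from the handbook; it assumes the common convention that the columns of $P$ are the $\beta_2$-coordinates of the $\beta_1$ basis vectors, i.e. $[y_1\; y_2]\,P = [x_1\; x_2]$, so $P = [y_1\; y_2]^{-1}[x_1\; x_2]$ (other texts call the transpose or inverse of this matrix the change-of-basis matrix):

```python
def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] with nonzero determinant."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(m, n):
    """Product of two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Basis vectors as columns: X = [x1 x2] for beta_1, Y = [y1 y2] for beta_2.
X = [[1, 1],
     [1, 0]]
Y = [[4, 3],
     [3, 2]]

# Solving Y P = X gives the change-of-basis matrix P from beta_1 to beta_2.
P = matmul2(inv2(Y), X)
print(P)  # [[1.0, -2.0], [-1.0, 3.0]]
```

The first column says $x_1 = 1 \cdot y_1 - 1 \cdot y_2$, which checks out: $(4,3) - (3,2) = (1,1)$; likewise the second column gives $x_2 = -2 y_1 + 3 y_2 = (-8,-6) + (9,6) = (1,0)$.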