
Chapter 5

Linear Algebra

It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix. Because they form the foundation on which we later work, we want an explicit method for analyzing these subspaces. That method will be the Singular Value Decomposition (SVD). It is unfortunate that most first courses in linear algebra do not cover this material, so we do it here. Again, we cannot stress the importance of this decomposition enough. We will apply this technique throughout the rest of this text.

5.1 Representation, Basis and Dimension

Let us quickly review some notation and basic ideas from linear algebra. First, a basis is a linearly independent spanning set, and the dimension of a subspace is the number of basis vectors it needs. Suppose we have a subspace $H \subset \mathbb{R}^n$, and a basis for $H$ in the columns of a matrix $V$.

By the definition of a basis, every vector in the subspace $H$ can be written as a linear combination of the basis elements. In particular, if $x \in H$, then we can write:
$$x = c_1 v_1 + \cdots + c_k v_k = Vc = V[x]_V \qquad (5.1)$$
where $c$ is sometimes denoted as $[x]_V$, and is referred to as the coordinates of $x$ with respect to the basis in $V$. Therefore, every vector in our subspace of $\mathbb{R}^n$ can be identified with a point in $\mathbb{R}^k$, which gives us a function that we'll call the coordinate mapping:
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \in \mathbb{R}^n \quad \longleftrightarrow \quad \begin{pmatrix} c_1 \\ \vdots \\ c_k \end{pmatrix} = c \in \mathbb{R}^k$$
If $k$ is small (with respect to $n$), we think of $c$ as the low dimensional representation of the vector $x$, and we say that $H$ is isomorphic to $\mathbb{R}^k$ (isomorphic meaning that there is a one-to-one and onto linear map between the two spaces; that map is the isomorphism).

Example 5.1.1. If $v_i, v_j$ are two linearly independent vectors, then the subspace created by their span is isomorphic to the plane $\mathbb{R}^2$, but is not equal to the plane. The isomorphism is the coordinate mapping.

Generally, finding the coordinates of $x$ with respect to an arbitrary basis (given as the columns of a matrix $V$) means that we have to solve Equation 5.1. However, if the columns of $V$ are orthogonal, then it is very simple to compute the coordinates. Start with Equation 5.1:
$$x = c_1 v_1 + \cdots + c_k v_k$$
Now take the inner product of both sides with $v_j$:
$$x \cdot v_j = c_1\,(v_1 \cdot v_j) + \cdots + c_j\,(v_j \cdot v_j) + \cdots + c_k\,(v_k \cdot v_j)$$
All of the dot products on the right are 0 (due to orthogonality) except for $v_j \cdot v_j$, leading to the formula:
$$c_j = \frac{x \cdot v_j}{v_j \cdot v_j}$$
We recognize $c_j$ as the coefficient in the projection of $x$ onto $v_j$. Recall the projection formula from Calc III:
$$\mathrm{Proj}_v(u) = \frac{u \cdot v}{v \cdot v}\, v$$
Therefore, we can think of the linear combination as a sum of projections,
$$x = \mathrm{Proj}_{v_1}(x) + \mathrm{Proj}_{v_2}(x) + \cdots + \mathrm{Proj}_{v_k}(x) \qquad (5.2)$$
which, if the basis vectors are orthonormal, simplifies to
$$x = (x \cdot v_1)\, v_1 + (x \cdot v_2)\, v_2 + \cdots + (x \cdot v_k)\, v_k$$
IMPORTANT NOTE: In the event that $x$ is NOT in $H$, then Equation 5.2 gives the (orthogonal) projection of $x$ into $H$.

Projections

Consider the following example. If a matrix $U = [u_1, \ldots, u_k]$ has orthonormal columns (so if $U$ is $n \times k$, then that requires $k \le n$), then
$$U^T U = \begin{pmatrix} u_1^T \\ u_2^T \\ \vdots \\ u_k^T \end{pmatrix} [u_1, \ldots, u_k] = \begin{pmatrix} u_1^T u_1 & u_1^T u_2 & \cdots & u_1^T u_k \\ u_2^T u_1 & u_2^T u_2 & \cdots & u_2^T u_k \\ \vdots & & \ddots & \vdots \\ u_k^T u_1 & u_k^T u_2 & \cdots & u_k^T u_k \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = I_k$$
But $UU^T$ (which is $n \times n$) is NOT the identity if $k < n$. (If $k = n$, then the previous computation proves that the inverse of $U$ is its transpose.)

Here is a computation one might make for $UU^T$ (these are OUTER products):
$$UU^T = [u_1, \ldots, u_k] \begin{pmatrix} u_1^T \\ \vdots \\ u_k^T \end{pmatrix} = u_1 u_1^T + u_2 u_2^T + \cdots + u_k u_k^T$$

Example 5.1.2. Consider the following computations:
$$U = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad U^T U = 1, \qquad UU^T = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$
If $UU^T$ is not the identity, what is it?
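Before answering in general, it can help to look at a slightly bigger case numerically. The sketch below is our own illustration (Python with NumPy, not part of the original text); the two orthonormal vectors in $\mathbb{R}^3$ and the test vector $x$ are arbitrary choices. It confirms that $U^TU = I_k$ while $UU^T$ is not $I_n$, and compares $UU^Tx$ with the sum of projections from Equation 5.2.

```python
import numpy as np

# Two orthonormal vectors in R^3 (illustrative choice, not from the text).
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])
U = np.column_stack([u1, u2])          # U is 3 x 2 with orthonormal columns

print(U.T @ U)        # 2 x 2 identity, as the computation above shows
print(U @ U.T)        # 3 x 3, but NOT the 3 x 3 identity

# Compare U U^T x with the sum of projections in Equation 5.2.
x = np.array([3.0, -1.0, 2.0])
print(U @ (U.T @ x))
print((x @ u1) * u1 + (x @ u2) * u2)   # same vector
```

The last two lines print the same vector, which is the clue to the question above.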
Consider the following computation:
$$UU^T x = u_1 u_1^T x + u_2 u_2^T x + \cdots + u_k u_k^T x = u_1 (u_1^T x) + u_2 (u_2^T x) + \cdots + u_k (u_k^T x)$$
which we recognize as the projection of $x$ into the space spanned by the orthonormal columns of $U$. In summary, when $U$ has orthonormal columns we can think of:

• the matrix form for the coordinates, $[x]_U = c = U^T x$, and
• the projection matrix $P$ that takes a vector $x$ and produces the projection of $x$ into the space spanned by the orthonormal columns of $U$, namely $P = UU^T$.

Exercises

1. Let the subspace $H$ be formed by the span of the vectors $v_1, v_2$ given below. Given the points $x_1, x_2$ below, find which one belongs to $H$, and if it does, give its coordinates. (NOTE: The basis vectors are NOT orthonormal.)
$$v_1 = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 2 \\ 1 \\ -1 \end{pmatrix}, \quad x_1 = \begin{pmatrix} 7 \\ 4 \\ 0 \end{pmatrix}, \quad x_2 = \begin{pmatrix} 4 \\ 3 \\ -1 \end{pmatrix}$$

2. Show that the plane $H$ defined by
$$H = \left\{ \alpha_1 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} + \alpha_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \text{ such that } \alpha_1, \alpha_2 \in \mathbb{R} \right\}$$
is isomorphic to $\mathbb{R}^2$.

3. Let the subspace $G$ be the plane defined below, and consider the vector $x$, where:
$$G = \left\{ \alpha_1 \begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix} + \alpha_2 \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix} \text{ such that } \alpha_1, \alpha_2 \in \mathbb{R} \right\}, \qquad x = \begin{pmatrix} 1 \\ 0 \\ -2 \end{pmatrix}$$
(a) Find the projector $P$ that takes an arbitrary vector and projects it (orthogonally) to the plane $G$.
(b) Find the orthogonal projection of the given $x$ onto the plane $G$.
(c) Find the distance from the plane $G$ to the vector $x$.

4. If the low dimensional representation of a vector $x$ is $[9, -1]^T$ and the basis vectors are $[1, 0, 1]^T$ and $[3, 1, 1]^T$, then what was the original vector $x$? (HINT: it is easy to compute it directly.)

5. If the vector $x = [10, 4, 2]^T$ and the basis vectors are $[1, 0, 1]^T$ and $[3, 1, 1]^T$, then what is the low dimensional representation for $x$?

6. Let $a = [-1, 3]^T$. Find a square matrix $P$ so that $Px$ is the orthogonal projection of $x$ onto the span of $a$.

7. To prove that we have an orthogonal projection, the vector $\mathrm{Proj}_u(x) - x$ should be orthogonal to $u$. Use this definition to show that our earlier formula was correct, that is, that
$$\mathrm{Proj}_u(x) = \frac{x \cdot u}{u \cdot u}\, u$$
is the orthogonal projection of $x$ onto $u$.

8. Continuing with the last exercise, show that $UU^T x$ is the orthogonal projection of $x$ into the space spanned by the columns of $U$ by showing that $(UU^T x - x)$ is orthogonal to $u_i$ for each $i = 1, 2, \ldots, k$.

5.2 The Four Fundamental Subspaces

Given any $m \times n$ matrix $A$, we consider the mapping $A : \mathbb{R}^n \to \mathbb{R}^m$ given by
$$x \mapsto Ax = y$$
The four subspaces allow us to completely understand the domain and range of the mapping. We will first define them, then look at some examples.

Definition 5.2.1. The Four Fundamental Subspaces

• The row space of $A$ is a subspace of $\mathbb{R}^n$ formed by taking all possible linear combinations of the rows of $A$. Formally,
$$\mathrm{Row}(A) = \{ x \in \mathbb{R}^n \mid x = A^T y,\; y \in \mathbb{R}^m \}$$
• The null space of $A$ is a subspace of $\mathbb{R}^n$ formed by
$$\mathrm{Null}(A) = \{ x \in \mathbb{R}^n \mid Ax = 0 \}$$
• The column space of $A$ is a subspace of $\mathbb{R}^m$ formed by taking all possible linear combinations of the columns of $A$:
$$\mathrm{Col}(A) = \{ y \in \mathbb{R}^m \mid y = Ax,\; x \in \mathbb{R}^n \}$$
The column space is also the image of the mapping. Notice that $Ax$ is simply a linear combination of the columns of $A$:
$$Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n$$
• Finally, the null space of $A^T$ can be defined in the obvious way (see the Exercises).
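The chapter opened by promising that the SVD gives an explicit way to analyze these four subspaces. As a preview, here is a hedged sketch (Python with NumPy, our own illustration; the matrix and the rank tolerance are arbitrary choices) that reads off orthonormal bases for all four subspaces from the SVD of a small matrix.

```python
import numpy as np

# Illustrative 2 x 3 matrix of rank 1 (not an example from the text).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

# Full SVD: A = U S V^T.  Splitting U and V at the rank r gives
# orthonormal bases for the four fundamental subspaces.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))     # numerical rank

col_space  = U[:, :r]      # basis for Col(A),     a subspace of R^m
null_At    = U[:, r:]      # basis for Null(A^T),  a subspace of R^m
row_space  = Vt[:r, :].T   # basis for Row(A),     a subspace of R^n
null_space = Vt[r:, :].T   # basis for Null(A),    a subspace of R^n

print(np.allclose(A @ null_space, 0))            # Null(A) really maps to 0
print(np.allclose(row_space.T @ null_space, 0))  # Row(A) is orthogonal to Null(A)
```

The last check previews the theorem below.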
The fundamental subspaces subdivide the domain and range of the mapping in a particularly nice way:

Theorem 5.2.1. Let $A$ be an $m \times n$ matrix. Then

• The null space of $A$ is orthogonal to the row space of $A$.
• The null space of $A^T$ is orthogonal to the column space of $A$.

Proof: We'll prove the first statement; the proof of the second is almost identical. To prove the first statement, we have to show that if we take any vector $x$ from the null space of $A$ and any vector $y$ from the row space of $A$, then $x \cdot y = 0$. Alternatively, if we can show that $x$ is orthogonal to each and every row of $A$, then we're done as well (since $y$ is a linear combination of the rows of $A$). In fact, now we see a strategy: write out what it means for $x$ to be in the null space using the rows of $A$. For ease of notation, let $a_j$ denote the $j$th row of $A$, which has size $1 \times n$. Then:
$$Ax = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix} x = \begin{pmatrix} a_1 x \\ a_2 x \\ \vdots \\ a_m x \end{pmatrix} = 0$$
Therefore, the dot product between any row of $A$ and $x$ is zero, so that $x$ is orthogonal to every row of $A$. Therefore, $x$ must be orthogonal to any linear combination of the rows of $A$, so that $x$ is orthogonal to the row space of $A$. $\square$

Before going further, let us recall how to construct a basis for the column space, row space and null space of a matrix $A$. We'll do it with a particular matrix:

Example 5.2.1. Construct a basis for the column space, row space and null space of the matrix $A$ below, which is row equivalent to the matrix beside it, RREF($A$):
$$A = \begin{pmatrix} 2 & 0 & 2 & -2 \\ 2 & 5 & 7 & 3 \\ -3 & -5 & -8 & -2 \end{pmatrix}, \qquad \mathrm{RREF}(A) = \begin{pmatrix} 1 & 0 & 1 & -1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

The first two columns of the original matrix form a basis for the column space (which is a subspace of $\mathbb{R}^3$):
$$\mathrm{Col}(A) = \mathrm{span}\left\{ \begin{pmatrix} 2 \\ 2 \\ -3 \end{pmatrix}, \begin{pmatrix} 0 \\ 5 \\ -5 \end{pmatrix} \right\}$$

A basis for the row space is found by using the row-reduced rows corresponding to the pivots (and is a subspace of $\mathbb{R}^4$).
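For a computation like Example 5.2.1, it is easy to check the hand work with a computer algebra system. The sketch below is our own illustration using SymPy (not part of the text), entering the matrix as reconstructed above; rref() returns the reduced form together with the pivot columns, and nullspace() returns a basis for Null($A$).

```python
import sympy as sp

# The matrix from Example 5.2.1, entered as reconstructed above.
A = sp.Matrix([[ 2,  0,  2, -2],
               [ 2,  5,  7,  3],
               [-3, -5, -8, -2]])

R, pivot_cols = A.rref()
print(R)            # the RREF shown in the example
print(pivot_cols)   # (0, 1): the first two columns are the pivot columns

# Basis for Col(A): the pivot columns of the ORIGINAL matrix A.
print([A[:, j] for j in pivot_cols])

# Basis for Row(A): the nonzero rows of R.  Basis for Null(A): nullspace().
print([R[i, :] for i in range(len(pivot_cols))])
print(A.nullspace())
```

The pivot columns of the original matrix give the column space basis, and the nonzero rows of $R$ give the row space basis, matching the construction described above.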