Linear Algebra: Final Exam Review
1. Show that $\mathrm{Row}(A) \perp \mathrm{Null}(A)$.

SOLUTION: We can write matrix-vector multiplication in terms of the rows of the matrix $A$. If $A$ is $m \times n$, then:
\[
Ax = \begin{bmatrix} \mathrm{Row}_1(A) \\ \mathrm{Row}_2(A) \\ \vdots \\ \mathrm{Row}_m(A) \end{bmatrix} x
   = \begin{bmatrix} \mathrm{Row}_1(A)\,x \\ \mathrm{Row}_2(A)\,x \\ \vdots \\ \mathrm{Row}_m(A)\,x \end{bmatrix}
\]
Each of these products is the "dot product" of a row of $A$ with the vector $x$. To show the desired result, let $x \in \mathrm{Null}(A)$. Then each of the products shown above must be zero, since $Ax = 0$, so $x$ is orthogonal to each row of $A$. Since the rows form a spanning set for the row space, $x$ is orthogonal to every vector in the row space.

2. Let $A$ be invertible. Show that, if $v_1, v_2, v_3$ are linearly independent vectors, so are $Av_1, Av_2, Av_3$. (NOTE: It should be clear from your answer that you know the definition.)

SOLUTION: We need to show that the only solution to
\[
c_1 A v_1 + c_2 A v_2 + c_3 A v_3 = 0
\]
is the trivial solution. Factoring out the matrix $A$,
\[
A(c_1 v_1 + c_2 v_2 + c_3 v_3) = 0
\]
Think of this in the form $A\hat{x} = 0$. Since $A$ is invertible, the only solution is $\hat{x} = 0$, which implies that the only solution to the equation above is the solution to
\[
c_1 v_1 + c_2 v_2 + c_3 v_3 = 0,
\]
which is (only) the trivial solution, since the vectors are linearly independent. (NOTE: If the original vectors had been linearly dependent, this last equation would have non-trivial solutions.)

3. Find the line of best fit for the data:

x: 0 1 2 3
y: 1 1 2 2

SOLUTION: Let $A$ be the matrix whose first column is the $x$ data and whose second column is all ones. Then we form the normal equations $A^T A \hat{c} = A^T y$ and solve:
\[
A^T A \hat{c} = A^T y \quad\Longrightarrow\quad
\begin{bmatrix} 14 & 6 \\ 6 & 4 \end{bmatrix} \hat{c} = \begin{bmatrix} 11 \\ 6 \end{bmatrix}
\]
The solution is $\hat{c} = (A^T A)^{-1} A^T y = \frac{1}{10}[4, 9]^T$, so the slope is $2/5$ and the intercept is $9/10$.

4. Let $A = \begin{bmatrix} -3 & 0 \\ 0 & 0 \end{bmatrix}$. (a) Is $A$ orthogonally diagonalizable? If so, orthogonally diagonalize it! (b) Find the SVD of $A$.

SOLUTION: For part (a), the matrix $A$ is symmetric, so it is orthogonally diagonalizable. It is also a diagonal matrix, so the eigenvalues are $\lambda = -3$ and $\lambda = 0$.
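The normal-equation computation in Problem 3 is easy to check numerically. Here is a minimal sketch in plain Python (no libraries; the variable names are my own), building $A^TA$ and $A^Ty$ entrywise and solving the resulting $2 \times 2$ system by Cramer's rule:

```python
# Least-squares line fit for Problem 3: x = [0,1,2,3], y = [1,1,2,2].
# A has columns [x, 1], so the normal equations A^T A c = A^T y
# involve only a few sums.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 1.0, 2.0, 2.0]

sxx = sum(x * x for x in xs)              # 14
sx  = sum(xs)                             # 6
n   = len(xs)                             # 4
sxy = sum(x * y for x, y in zip(xs, ys))  # 11
sy  = sum(ys)                             # 6

# Solve [[sxx, sx], [sx, n]] c = [sxy, sy] by Cramer's rule.
det = sxx * n - sx * sx                   # 14*4 - 36 = 20
slope     = (sxy * n - sx * sy) / det     # (44 - 36)/20 = 2/5
intercept = (sxx * sy - sx * sxy) / det   # (84 - 66)/20 = 9/10

print(slope, intercept)  # 0.4 0.9
```

This reproduces the slope $2/5$ and intercept $9/10$ from the solution.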
The eigenvectors are the standard basis vectors, so $P = I$ and $D = A$.

For the SVD, the eigenvalues of $A^T A$ are 9 and 0, so the singular values are $\sigma_1 = 3$ and $\sigma_2 = 0$. The column space is spanned by $[1, 0]^T$, as is the row space. We also see that
\[
Av_1 = \sigma_1 u_1 \quad\Longrightarrow\quad \begin{bmatrix} -3 \\ 0 \end{bmatrix} = 3 \begin{bmatrix} -1 \\ 0 \end{bmatrix}
\]
This brings up a good point: you may use either
\[
U = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad V = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\]
or the reverse:
\[
U = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad V = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}
\]
This issue with the $\pm 1$ is something we don't run into with the usual diagonalization.

5. Let $V$ be the vector space spanned by the functions $1, t, t^2$ on the interval $[-1, 1]$. Use Gram-Schmidt to find an orthonormal basis, if we define the inner product:
\[
\langle f(t), g(t) \rangle = \int_{-1}^{1} 2 f(t) g(t)\, dt
\]
SOLUTION: Let $v_1 = 1$ (which is not normalized; we'll normalize later). Then
\[
v_2 = t - \mathrm{Proj}_{v_1}(t) = t - \frac{\int_{-1}^{1} 2t\, dt}{\int_{-1}^{1} 2\, dt} = t - 0 = t
\]
\[
v_3 = t^2 - \mathrm{Proj}_{v_1}(t^2) - \mathrm{Proj}_{v_2}(t^2)
    = t^2 - \frac{\int_{-1}^{1} 2t^2\, dt}{\int_{-1}^{1} 2\, dt} \cdot 1 - \frac{\int_{-1}^{1} 2t^3\, dt}{\int_{-1}^{1} 2t^2\, dt} \cdot t
\]
We note that the integral of any odd function over $[-1, 1]$ is zero, so the last term drops:
\[
v_3 = t^2 - \frac{4/3}{4} = t^2 - \frac{1}{3}
\]

6. Suppose that the vectors $u, v, w$ are linearly independent in $\mathbb{R}^n$. Determine whether the set $\{u + v,\; u - v,\; u - 2v + w\}$ is also linearly independent.

SOLUTION: We want to show that the only solution to the following equation is the trivial solution:
\[
C_1(u + v) + C_2(u - v) + C_3(u - 2v + w) = 0
\]
Regrouping this using the original vectors, we have:
\[
(C_1 + C_2 + C_3)u + (C_1 - C_2 - 2C_3)v + C_3 w = 0
\]
Since these vectors are linearly independent, each coefficient must be zero:
\[
C_1 + C_2 + C_3 = 0, \qquad C_1 - C_2 - 2C_3 = 0, \qquad C_3 = 0
\]
from which we get that $C_1 = C_2 = C_3 = 0$ is the only solution, so the set is linearly independent.

7. Let $v_1, \dots, v_p$ be orthonormal. If
\[
x = c_1 v_1 + c_2 v_2 + \cdots + c_p v_p,
\]
then show that $\|x\|^2 = |c_1|^2 + \cdots + |c_p|^2$. (Hint: Write the norm squared as the dot product.)
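The Gram-Schmidt result in Problem 5 can be sanity-checked by verifying that $1$, $t$, and $t^2 - \tfrac{1}{3}$ are mutually orthogonal under the weighted inner product. A small sketch in plain Python, representing a polynomial by its coefficient list and integrating monomials exactly (the helper names are my own):

```python
# Check that {1, t, t^2 - 1/3} are orthogonal under
# <f, g> = integral_{-1}^{1} 2 f(t) g(t) dt  (Problem 5's inner product).
# A polynomial is a list [c0, c1, c2, ...] meaning c0 + c1*t + c2*t^2 + ...

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(p, q):
    """Weighted inner product: integral of t^k over [-1,1] is
    (1 - (-1)^(k+1)) / (k+1), then scaled by the weight 2."""
    prod = poly_mul(p, q)
    return 2.0 * sum(c * (1.0 - (-1.0) ** (k + 1)) / (k + 1)
                     for k, c in enumerate(prod))

v1 = [1.0]                   # 1
v2 = [0.0, 1.0]              # t
v3 = [-1.0 / 3.0, 0.0, 1.0]  # t^2 - 1/3

print(inner(v1, v2), inner(v1, v3), inner(v2, v3))  # all zero
```

All three pairwise inner products come out to zero, and $\langle v_1, v_1 \rangle = 4$ matches the denominator used in the projection for $v_3$.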
SOLUTION: Compute $x \cdot x$, and use the property that the vectors $v_i$ are orthonormal:
\[
\|x\|^2 = (c_1 v_1 + c_2 v_2 + \cdots + c_p v_p) \cdot (c_1 v_1 + c_2 v_2 + \cdots + c_p v_p)
\]
Since all dot products of the form $c_i c_k\, v_i \cdot v_k$ vanish for $i \neq k$, the dot product simplifies to
\[
c_1^2\, v_1 \cdot v_1 + c_2^2\, v_2 \cdot v_2 + \cdots + c_p^2\, v_p \cdot v_p
\]
And since the vectors are normalized ($v_i \cdot v_i = 1$), this gives the result:
\[
\|x\|^2 = c_1^2 + \cdots + c_p^2
\]
(We don't need the magnitudes here, since we're working with real numbers.)

8. Short answer:

(a) If $\|u + v\|^2 = \|u\|^2 + \|v\|^2$, then $u, v$ are orthogonal.

SOLUTION: True. This is the Pythagorean Theorem.

(b) Let $H$ be the subset of vectors in $\mathbb{R}^3$ consisting of those vectors whose first element is the sum of the second and third elements. Is $H$ a subspace?

SOLUTION: One way of showing that a subset is a subspace is to show that it can be represented as the span of some set of vectors. In this case,
\[
\begin{bmatrix} a + b \\ a \\ b \end{bmatrix}
= a \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} + b \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}
\]
Because $H$ is the span of the given vectors, it is a subspace.

(c) Explain why the image of a linear transformation $T : V \to W$ is a subspace of $W$.

SOLUTION: Maybe "Prove" would have been better than "Explain", since we want to go through the three parts:

i. $0 \in T(V)$, since $0 \in V$ and $T(0) = 0$.

ii. Let $u, v$ be in $T(V)$. Then there are $x, y$ in $V$ so that $T(x) = u$ and $T(y) = v$. Since $V$ is a subspace, $x + y \in V$, and therefore $T(x + y) = T(x) + T(y) = u + v$, so that $u + v \in T(V)$.

iii. Let $u \in T(V)$; we show that $cu \in T(V)$ for all scalars $c$. If $u \in T(V)$, there is an $x$ in $V$ so that $T(x) = u$. Since $V$ is a subspace, $cx \in V$, and by linearity $T(cx) = cT(x) = cu$, so $cu \in T(V)$.

(OK, that probably should not have been in the short answer section.)

(d) Is the following matrix diagonalizable? Explain.
\[
A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 5 & 8 \\ 0 & 0 & 13 \end{bmatrix}
\]
SOLUTION: Yes. The eigenvalues (1, 5, and 13) are all distinct, so the corresponding eigenvectors are linearly independent.
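The identity in Problem 7 can be checked numerically with any concrete orthonormal set. A sketch in plain Python (the rotated basis below is my own example, chosen because the columns of a rotation matrix are orthonormal):

```python
import math

# An orthonormal pair in R^2: the columns of a rotation matrix.
theta = 0.7
v1 = (math.cos(theta), math.sin(theta))
v2 = (-math.sin(theta), math.cos(theta))

# Build x = c1*v1 + c2*v2 and compare ||x||^2 with c1^2 + c2^2.
c1, c2 = 3.0, -2.0
x = (c1 * v1[0] + c2 * v2[0], c1 * v1[1] + c2 * v2[1])

norm_sq = x[0] ** 2 + x[1] ** 2   # ||x||^2 = x . x
coef_sq = c1 ** 2 + c2 ** 2       # c1^2 + c2^2 = 13

print(norm_sq, coef_sq)
```

Both quantities agree (up to floating-point roundoff), as the cross terms $c_1 c_2\, v_1 \cdot v_2$ vanish exactly as in the proof.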
(e) If the column space of an $8 \times 4$ matrix $A$ is 3-dimensional, give the dimensions of the other three fundamental subspaces. Given these numbers, is it possible that the mapping $x \mapsto Ax$ is one-to-one? Onto?

SOLUTION: If the column space is 3-dimensional, so is the row space. Therefore the null space (as a subspace of $\mathbb{R}^4$) is 1-dimensional, and the null space of $A^T$ is 5-dimensional. Since the null space contains more than the zero vector, $Ax = 0$ has non-trivial solutions, so the matrix mapping is not one-to-one. Since the column space is a 3-dimensional subspace of $\mathbb{R}^8$, the mapping cannot be onto.

(f) i. Suppose the matrix $Q$ has orthonormal columns. Must $Q^T Q = I$?

SOLUTION: Yes, $Q^T Q = I$.

ii. True or False: If $Q$ is $m \times n$ with $m > n$, then $QQ^T = I$.

SOLUTION: False. If $m \neq n$, then $QQ^T$ is the projection matrix that takes a vector $x$ and projects it onto the column space of $Q$.

iii. Suppose $Q$ is an orthogonal matrix. Prove that $\det(Q) = \pm 1$.

SOLUTION: If $Q$ is orthogonal, then $Q^T Q = I$, and if we take determinants of both sides, we get $(\det(Q))^2 = 1$. Therefore, the determinant of $Q$ is $\pm 1$.

9. Find a basis for the null space, row space and column space of $A$, if
\[
A = \begin{bmatrix} 1 & 1 & 2 & 2 \\ 2 & 2 & 5 & 5 \\ 0 & 0 & 3 & 3 \end{bmatrix}
\]
SOLUTION: A basis for the column space is the set containing the first and third columns of $A$. A basis for the row space is $\{[1, 1, 0, 0]^T,\; [0, 0, 1, 1]^T\}$. A basis for the null space of $A$ is $\{[-1, 1, 0, 0]^T,\; [0, 0, -1, 1]^T\}$.

10. Find an orthonormal basis for $W = \mathrm{Span}\{x_1, x_2, x_3\}$ using Gram-Schmidt (you might wait until the very end to normalize all vectors at once):
\[
x_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \qquad
x_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 2 \end{bmatrix}, \qquad
x_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}
\]
SOLUTION: Using Gram-Schmidt (before normalization, which is OK if doing this by hand), we get
\[
v_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \qquad
v_2 = \begin{bmatrix} 0 \\ 2 \\ -1 \\ 1 \end{bmatrix}, \qquad
v_3 = \begin{bmatrix} 3 \\ 1 \\ 1 \\ -1 \end{bmatrix}
\]

11.
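The hand computation in Problem 10 can be reproduced with a short classical Gram-Schmidt routine. A sketch in plain Python, left unnormalized to match the solution (the function names are my own):

```python
# Classical Gram-Schmidt on the vectors of Problem 10, without normalization:
# v_k = x_k - sum over previous u of proj_u(x_k).

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def gram_schmidt(vectors):
    basis = []
    for x in vectors:
        v = list(x)
        for u in basis:
            coef = dot(x, u) / dot(u, u)   # projection coefficient <x,u>/<u,u>
            v = [vi - coef * ui for vi, ui in zip(v, u)]
        basis.append(v)
    return basis

x1 = [0.0, 0.0, 1.0, 1.0]
x2 = [0.0, 1.0, 1.0, 2.0]
x3 = [1.0, 1.0, 1.0, 1.0]

v1, v2, v3 = gram_schmidt([x1, x2, x3])
print(v2)  # [0.0, 1.0, -0.5, 0.5] : a multiple of [0, 2, -1, 1]
print(v3)  # approximately [1, 1/3, 1/3, -1/3] : a multiple of [3, 1, 1, -1]
```

The routine returns scalar multiples of the hand-computed $v_2$ and $v_3$ (the scaling by 2 and 3 in the solution just clears the fractions), and all three outputs are mutually orthogonal.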