Math 432 - Real Analysis II Solutions to Final Examination
Question 1. For each of the following statements, decide whether it is True or False. If True, provide a proof or reason. If False, provide a counterexample or disproof.

(a) The vector space M₂(ℝ) of 2 × 2 matrices with real entries is an inner product space with inner product ⟨A, B⟩ = Tr(AB), where Tr is the trace of a matrix.

(b) Let V be an n-dimensional inner product space. For any x ∈ V, consider the subspace U = span(x). Then dim(U⊥) = n − 1.

(c) Let V be a normed vector space. Then V is an inner product space whose norm is induced by the inner product if and only if the norm satisfies the parallelogram law.

Solution 1. (a) False. To see this, consider the matrix

A = [0 −1; 1 0].

Notice that

A·A = [−1 0; 0 −1].

However, Tr(A·A) = −1 − 1 = −2 < 0, which contradicts the positive-definiteness axiom. Thus this definition of ⟨·, ·⟩ is not an inner product.

(b) False. Consider the zero vector x = 0. Then U = span{0} = {0}, the zero subspace. However, {0}⊥ = V, so the dimension of the orthogonal complement is n, not n − 1. (Note: x = 0 is the only counterexample to this claim; for any nonzero x, dim(U) = 1, so dim(U⊥) = n − 1.)

(c) True. Any inner product space is a normed vector space under the norm induced by the inner product, and such a norm always satisfies the parallelogram law. Conversely, by the Fréchet–von Neumann–Jordan Theorem, any norm satisfying the parallelogram law is induced by an inner product.

Question 2. In this question, we investigate the inner product space M₂(ℝ) of 2 × 2 matrices with inner product

⟨A, B⟩ = Tr(ABᵀ),

where ᵀ denotes the transpose of a matrix and Tr denotes its trace (the sum of its diagonal entries). We have previously proven that M₂(ℝ) is a 4-dimensional inner product space with this inner product.

(a) Consider the subset W ⊂ M₂(ℝ) of traceless matrices; that is,

W = {A ∈ M₂(ℝ) | Tr(A) = 0}.

In other words, [a b; c d] ∈ W if and only if a + d = 0. Show that W is a subspace of M₂(ℝ).
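The counterexample in Solution 1(a) is easy to confirm numerically. The sketch below (using NumPy; the helper name `pairing` is ours, not from the exam) evaluates the candidate pairing Tr(AB) on the rotation matrix:

```python
import numpy as np

# The candidate "inner product" <A, B> = Tr(AB) from part (a).
def pairing(A, B):
    return np.trace(A @ B)

# The 90-degree rotation matrix used as the counterexample.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Positive-definiteness would require <A, A> > 0 for A != 0,
# but A*A = -I, so <A, A> = Tr(-I) = -2.
print(pairing(A, A))  # -2.0
```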
(b) Show that the following three matrices form a basis for W:

[1 0; 0 −1],  [0 1; 0 0],  [0 0; 1 0].

(c) Use (b) to compute dim W.

(d) Use the Orthogonal Decomposition Theorem to compute dim W⊥.

(e) Let {e₁, e₂, …, eₙ} be a basis for a subspace U of an inner product space V. Show that v ∈ U⊥ if and only if ⟨v, eᵢ⟩ = 0 for all i.

(f) Compute W⊥ and give a basis for W⊥. Part (e) may be helpful in this problem.

(g) Run the Gram–Schmidt process on the basis vectors from (b) (in the order they are presented) to obtain an orthonormal basis for W.

(h) Consider the matrix [1 0; −2 2] ∈ M₂(ℝ). Write this matrix as A + A′ where A ∈ W and A′ ∈ W⊥. Is this decomposition unique?

Solution 2. (a) First, note that the zero matrix is in W, so W is non-empty. Let A, B ∈ W. Then Tr(A) = 0 = Tr(B). Since the trace is additive, we have

Tr(A + B) = Tr(A) + Tr(B) = 0,

so A + B ∈ W. Similarly, since Tr(cA) = c·Tr(A) for any c ∈ ℝ, we have, for any A ∈ W,

Tr(cA) = c·Tr(A) = c · 0 = 0,

so cA ∈ W. Thus W is non-empty, closed under addition, and closed under scalar multiplication, so it is a subspace.

(b) First, we show linear independence. Assume that for a, b, c ∈ ℝ,

a·[1 0; 0 −1] + b·[0 1; 0 0] + c·[0 0; 1 0] = [0 0; 0 0].

Simplifying the left-hand side, we get

[a b; c −a] = [0 0; 0 0].

Equating entries of the matrices, we get a = b = c = 0. Thus this collection is linearly independent. For span, consider an arbitrary matrix in W. Since it is traceless, it must be of the form

[a b; c −a],

which by the computation above is exactly the linear combination with coefficients a, b, c. Thus this collection of 3 matrices is a basis for W.

(c) Since W has a basis of 3 elements, dim(W) = 3.

(d) By the Orthogonal Decomposition Theorem, M₂(ℝ) = W ⊕ W⊥. Thus

4 = dim(M₂(ℝ)) = dim(W ⊕ W⊥) = dim(W) + dim(W⊥) = 3 + dim(W⊥),

so dim(W⊥) = 1.

(e) If v ∈ U⊥, then ⟨v, u⟩ = 0 for all u ∈ U. Since each eᵢ ∈ U, we get ⟨v, eᵢ⟩ = 0 for all i.
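As an illustrative check of parts (b)–(c) (not part of the original solutions), one can flatten the three candidate basis matrices into vectors and confirm that they are linearly independent, and that a traceless matrix [a b; c −a] is the expected linear combination:

```python
import numpy as np

# The three candidate basis matrices for W from part (b).
E1 = np.array([[1, 0], [0, -1]])
E2 = np.array([[0, 1], [0, 0]])
E3 = np.array([[0, 0], [1, 0]])

# Stack the flattened matrices as rows; linear independence of the three
# matrices is equivalent to this 3x4 matrix having rank 3, so dim(W) = 3.
B = np.stack([E1.ravel(), E2.ravel(), E3.ravel()])
print(np.linalg.matrix_rank(B))  # 3

# An arbitrary traceless matrix [a, b; c, -a] is a*E1 + b*E2 + c*E3.
a, b, c = 2.0, -5.0, 7.0
M = a * E1 + b * E2 + c * E3
print(M)            # [[ 2. -5.] [ 7. -2.]]
print(np.trace(M))  # 0.0
```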
Conversely, let u ∈ U. Since the eᵢ form a basis for U, we can write u = α₁e₁ + ⋯ + αₙeₙ for some αᵢ ∈ F. Thus

⟨v, u⟩ = ⟨v, α₁e₁ + ⋯ + αₙeₙ⟩ = α₁⟨v, e₁⟩ + ⋯ + αₙ⟨v, eₙ⟩ = 0.

Since this is true for any u ∈ U, we conclude v ∈ U⊥.

(f) Let A = [a b; c d] ∈ W⊥. Then A is orthogonal to every element of W and, in particular, to the basis elements from (b). Thus

⟨A, [1 0; 0 −1]⟩ = Tr([a b; c d]·[1 0; 0 −1]) = Tr([a −b; c −d]) = a − d = 0,

so a = d. Similarly, pairing with the other two basis elements (recalling that ⟨A, B⟩ = Tr(ABᵀ)) gives the two equations

⟨A, [0 1; 0 0]⟩ = Tr([a b; c d]·[0 0; 1 0]) = Tr([b 0; d 0]) = b = 0,
⟨A, [0 0; 1 0]⟩ = Tr([a b; c d]·[0 1; 0 0]) = Tr([0 a; 0 c]) = c = 0.

Using the facts that a = d and b = c = 0, every matrix in W⊥ is of the form

[a 0; 0 a].

Thus W⊥ is spanned by the identity matrix, which is therefore a basis for W⊥.

(g) First, notice that this collection of vectors is already pairwise orthogonal, so the Gram–Schmidt process only needs to normalize the basis elements. Since ||[1 0; 0 −1]|| = √2 while the other two basis elements already have norm 1, we obtain the orthonormal basis

[1/√2 0; 0 −1/√2],  [0 1; 0 0],  [0 0; 1 0].

(h) Using the orthonormal basis above, we compute the projection of [1 0; −2 2] onto W to be

P_W([1 0; −2 2]) = [−1/2 0; −2 1/2].

This matrix is traceless and thus in W. The orthogonal component is given by

[1 0; −2 2] − [−1/2 0; −2 1/2] = [3/2 0; 0 3/2].

Thus we have the following decomposition of our matrix:

[1 0; −2 2] = [−1/2 0; −2 1/2] + [3/2 0; 0 3/2].

The decomposition is unique: by the Orthogonal Decomposition Theorem, M₂(ℝ) = W ⊕ W⊥, and every vector in a direct sum decomposes uniquely into its components.

Question 3. Consider C([0, 1]), the space of continuous functions on [0, 1]. Several norms can be placed on this vector space. Three of the most popular are the L¹, L², and L∞ norms, denoted by ||·||₁, ||·||₂, and ||·||∞, respectively.
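Since W⊥ = span{I}, the decomposition in part (h) amounts to splitting off the component (⟨M, I⟩/⟨I, I⟩)·I = (Tr(M)/2)·I and keeping the traceless remainder. A minimal numerical sketch of this (the helper name `ip` is ours):

```python
import numpy as np

# Inner product on M2(R) from Question 2: <A, B> = Tr(A B^T).
def ip(A, B):
    return np.trace(A @ B.T)

M = np.array([[1.0,  0.0],
              [-2.0, 2.0]])
I = np.eye(2)

# W^perp is spanned by I, so the W^perp-component of M is
# (<M, I> / <I, I>) * I = (Tr(M) / 2) * I; the W-component is the rest.
A_perp = (ip(M, I) / ip(I, I)) * I
A = M - A_perp

print(A)            # [[-0.5  0. ] [-2.   0.5]]  -- traceless part, in W
print(A_perp)       # [[ 1.5  0. ] [ 0.   1.5]]  -- multiple of I, in W^perp
print(np.trace(A))  # 0.0
```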
They are given by

||f||₁ = ∫₀¹ |f(x)| dx,
||f||₂ = ( ∫₀¹ [f(x)]² dx )^(1/2),
||f||∞ = sup{|f(x)| : x ∈ [0, 1]}.

(a) Explain why we can replace the definition of ||·||∞ by ||f||∞ = max{|f(x)| : x ∈ [0, 1]}.

(b) Use the Cauchy–Schwarz Inequality to prove that for any f ∈ C([0, 1]), ||f||₁ ≤ ||f||₂.

(c) Use properties of the integral to show that for any f ∈ C([0, 1]), ||f||₁ ≤ ||f||∞.

(d) As with any normed vector space, we can obtain a metric space from the norm in a canonical way. In particular, the L¹ and L∞ norms give two different metrics on C([0, 1]), defined respectively by

d₁(f, g) = ||f − g||₁,  d∞(f, g) = ||f − g||∞.

Use (c) to show that if fₙ → f in the d∞ metric, then fₙ → f in the d₁ metric. Provide an ε–N proof.

(e) Consider the sequence of functions given by fₙ(x) = xⁿ in C([0, 1]). Compute ||fₙ||₁ and use this to show that fₙ → 0 in the d₁ metric.

(f) For fₙ(x) = xⁿ, compute ||fₙ||∞. Use this to show that fₙ does not converge to 0 in the d∞ metric, and hence that the converse of the statement in (d) is false.

(g) Use (b) to provide a statement (similar to the one in (d)) relating convergence of fₙ → f in the metrics induced by the L¹ and L² norms.

Solution 3. (a) Since f is continuous (and thus |f| is continuous) and [0, 1] is compact, |f| must attain a maximum on [0, 1]. This maximum equals the supremum, so we may replace "sup" with "max".

(b) We apply the Cauchy–Schwarz inequality to the vectors |f| and 1, the constant function. Doing so, we get

||f||₁ = ∫₀¹ |f(x)| · 1 dx = |⟨|f|, 1⟩| ≤ || |f| ||₂ · ||1||₂ = ||f||₂ · 1 = ||f||₂.

(c) By definition of the L∞ norm, |f(x)| ≤ ||f||∞ for all x ∈ [0, 1]. Thus

||f||₁ = ∫₀¹ |f(x)| dx ≤ ∫₀¹ ||f||∞ dx = ||f||∞.

(d) Let ε > 0. Since fₙ → f in the d∞ metric, there exists an N such that for all n > N, ||fₙ − f||∞ < ε. For this N and all n > N, part (c) gives

||fₙ − f||₁ ≤ ||fₙ − f||∞ < ε.

Thus fₙ → f in the d₁ metric.
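The inequalities of parts (b) and (c) can be spot-checked numerically with a midpoint-rule approximation of the integrals. This is only a sanity check for one arbitrarily chosen sample function, not a proof:

```python
import numpy as np

# Midpoint-rule grid on [0, 1].
N = 100_000
x = (np.arange(N) + 0.5) / N   # midpoints of N equal subintervals
dx = 1.0 / N

f = x**2 - x                   # sample function f(x) = x^2 - x

norm_1 = np.sum(np.abs(f)) * dx          # approximates ||f||_1 = 1/6
norm_2 = np.sqrt(np.sum(f**2) * dx)      # approximates ||f||_2 = sqrt(1/30)
norm_inf = np.max(np.abs(f))             # approximates ||f||_inf = 1/4

# ||f||_1 <= ||f||_2 (part b) and ||f||_1 <= ||f||_inf (part c).
print(norm_1 <= norm_2, norm_1 <= norm_inf)  # True True
```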
(e) Computing, we get

||fₙ||₁ = ∫₀¹ |fₙ(x)| dx = ∫₀¹ xⁿ dx = 1/(n + 1).

Thus d₁(fₙ, 0) = ||fₙ − 0||₁ = 1/(n + 1) → 0 as n → ∞, so fₙ → 0 in the d₁ metric.
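The computation ||fₙ||₁ = 1/(n + 1) in part (e) can likewise be spot-checked with a midpoint-rule approximation of the integral:

```python
import numpy as np

# Midpoint-rule approximation of ||f_n||_1 = integral of x^n over [0, 1].
N = 100_000
x = (np.arange(N) + 0.5) / N   # midpoints of N equal subintervals
dx = 1.0 / N

for n in [1, 5, 50]:
    approx = np.sum(x**n) * dx
    exact = 1.0 / (n + 1)
    # The two values agree closely, and both shrink toward 0 as n grows.
    print(n, approx, exact)
```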