
Math 432 - Real Analysis II: Solutions to Final Examination

Question 1. For the below statements, decide if they are True or False. If True, provide a proof or reason. If False, provide a counterexample or disproof.

(a) The vector space M2(R) of 2 × 2 matrices with real entries is an inner product space with inner product given by ⟨A, B⟩ = Tr(AB), where Tr is the trace of a matrix.

(b) Let V be an n-dimensional inner product space. For any x ∈ V, consider the subspace U = span(x). Then, dim(U⊥) = n − 1.

(c) Let V be a normed vector space. V is an inner product space with norm induced by the inner product if and only if the norm satisfies the parallelogram law.

Solution 1.

(a) False. To see this, consider the matrix

A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.

Notice that

A · A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.

However, ⟨A, A⟩ = Tr(A · A) = −1 − 1 = −2 < 0, which contradicts the positive-definiteness axiom. Thus, this definition of ⟨·, ·⟩ is not an inner product.
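As a quick numerical sanity check of this counterexample (a sketch only, assuming NumPy is available):

```python
import numpy as np

# The rotation matrix from the counterexample in part (a).
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])

# Candidate "inner product" <A, B> = Tr(AB), evaluated at B = A.
value = np.trace(A @ A)
print(value)  # -2.0, so <A, A> < 0 and positive-definiteness fails
```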

(b) False. Consider the zero vector x = 0. Then, U = span{0} = {0}, the zero subspace. However, {0}⊥ = V, so the dimension of its orthogonal complement is n, not n − 1. Note: this is the only counterexample to this claim.

(c) True. It is always true that the norm induced by an inner product satisfies the parallelogram law. The converse is true by the Jordan-Fréchet-von Neumann Theorem: if the norm satisfies the parallelogram law, then it is induced by an inner product.
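For illustration only (not a substitute for the algebraic argument), the parallelogram law can be spot-checked numerically for the norm induced by the standard inner product on R^4, which is the same as the Frobenius norm on M2(R). The sketch below assumes NumPy; the random vectors are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Check ||x + y||^2 + ||x - y||^2 = 2||x||^2 + 2||y||^2 for random vectors,
# using the norm induced by the standard inner product on R^4.
for _ in range(5):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    lhs = np.linalg.norm(x + y) ** 2 + np.linalg.norm(x - y) ** 2
    rhs = 2 * np.linalg.norm(x) ** 2 + 2 * np.linalg.norm(y) ** 2
    print(np.isclose(lhs, rhs))  # True on every trial
```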

Question 2. In this question, we will investigate the inner product space M2(R) of 2 × 2 matrices with dot product given by ⟨A, B⟩ = Tr(AB^T), where T indicates the transpose of a matrix and Tr denotes its trace (sum of diagonal entries). We have previously proven that M2(R) is a 4-dimensional inner product space with this dot product.

(a) Consider the subset W ⊂ M2(R) of traceless matrices. That is,

W = {A ∈ M2(R) | Tr(A) = 0}. In other words,

\begin{pmatrix} a & b \\ c & d \end{pmatrix} ∈ W if and only if a + d = 0.

Show that W is a subspace of M2(R).

(b) Show that the following three matrices are a basis for W:

\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.

(c) Use (b) to compute dim W.

(d) Use the Orthogonal Decomposition Theorem to compute dim W⊥.

(e) Let {e1, e2, . . . , en} be a basis for a subspace U in an inner product space V. Show that v ∈ U⊥ if and only if ⟨v, ei⟩ = 0 for all i.

(f) Compute W⊥. Give a basis for W⊥. Part (e) may be helpful in this problem.

(g) Run the Gram-Schmidt process on the basis vectors (in the order they are presented) to obtain an orthonormal basis for W.

(h) Consider the matrix

\begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} ∈ M2(R).

Write the above matrix as A + A′ where A ∈ W and A′ ∈ W⊥. Is this decomposition unique?

Solution 2.

(a) First, note that the zero matrix is in W. Thus, W is non-empty. Let A, B ∈ W. Then, Tr(A) = 0 = Tr(B). Since the trace is additive, we have that

Tr(A + B) = Tr(A) + Tr(B) = 0.

Thus, A + B ∈ W. Similarly, since the trace is homogeneous, Tr(cA) = c Tr(A) for any c ∈ R. Thus, for any A ∈ W, we have that Tr(cA) = c Tr(A) = c · 0 = 0, so cA ∈ W. Therefore, W is non-empty, closed under addition, and closed under scalar multiplication, so it is a subspace.
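The two trace identities used above can be spot-checked numerically; this is a sketch assuming NumPy, and the random matrices and the scalar c = 3.7 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
c = 3.7

# Additivity and homogeneity of the trace, as used in the subspace proof.
print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))  # True
print(np.isclose(np.trace(c * A), c * np.trace(A)))            # True
```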

(b) First, we show linear independence. Assume that for a, b, c ∈ R, we have that

a \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} + b \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.

Simplifying the left-hand side, we get that

\begin{pmatrix} a & b \\ c & -a \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.

Equating components of the matrices, we get that a = b = c = 0. Thus, this collection is linearly independent. For span, consider an arbitrary matrix in W. Since it is traceless, it must be of the form

\begin{pmatrix} a & b \\ c & -a \end{pmatrix}.

By the above, we can clearly write it as a linear combination of the basis elements.

Thus, this collection of 3 vectors is a basis for W.

(c) Since W has a basis of 3 elements, dim(W) = 3.
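A quick numerical cross-check of independence and spanning (a sketch assuming NumPy; the coefficients a, b, c below are arbitrary):

```python
import numpy as np

# The proposed basis for W, with each matrix flattened to a vector in R^4.
E1 = np.array([[1.0, 0.0], [0.0, -1.0]])
E2 = np.array([[0.0, 1.0], [0.0, 0.0]])
E3 = np.array([[0.0, 0.0], [1.0, 0.0]])
M = np.stack([E.flatten() for E in (E1, E2, E3)])

# Rank 3 confirms linear independence, hence dim W = 3.
print(np.linalg.matrix_rank(M))  # 3

# An arbitrary traceless matrix [[a, b], [c, -a]] equals a*E1 + b*E2 + c*E3.
a, b, c = 2.0, -1.0, 5.0
print(np.allclose(a * E1 + b * E2 + c * E3, np.array([[a, b], [c, -a]])))  # True
```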

(d) By the Orthogonal Decomposition Theorem, M2(R) = W ⊕ W⊥. Thus,

4 = dim(M2(R)) = dim(W ⊕ W⊥) = dim(W) + dim(W⊥) = 3 + dim(W⊥). Thus, dim(W⊥) = 1.

(e) If v ∈ U⊥, then ⟨v, u⟩ = 0 for all u ∈ U. Since ei ∈ U, we have ⟨v, ei⟩ = 0 for all i. Conversely, suppose ⟨v, ei⟩ = 0 for all i, and let u ∈ U. Then, since the ei's form a basis for U, u = α1e1 + · · · + αnen for some αi ∈ F. Thus,

⟨v, u⟩ = ⟨v, α1e1 + · · · + αnen⟩ = α1⟨v, e1⟩ + · · · + αn⟨v, en⟩ = 0.

Since this is true for any u ∈ U, we conclude that v ∈ U⊥.

(f) Let

A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} ∈ W⊥.

Then A is orthogonal to every element of W and, in particular, to the basis elements. Thus,

Tr\left( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}^T \right) = Tr \begin{pmatrix} a & -b \\ c & -d \end{pmatrix} = a − d = 0.

Thus, a = d. Similarly, dotting with the other two basis elements (and remembering the transpose in the inner product), we get the following two equations:

Tr\left( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}^T \right) = Tr \begin{pmatrix} b & 0 \\ d & 0 \end{pmatrix} = b = 0;

Tr\left( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}^T \right) = Tr \begin{pmatrix} 0 & a \\ 0 & c \end{pmatrix} = c = 0.

Thus, using the fact that a = d and b = c = 0, we get that every matrix in W⊥ is of the form

\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}.

Thus, W⊥ is spanned by the identity matrix, and {I} is a basis for W⊥.
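As a sanity check (a sketch assuming NumPy), the identity matrix is indeed orthogonal to each basis element of W under ⟨A, B⟩ = Tr(AB^T):

```python
import numpy as np

def inner(A, B):
    # The inner product on M_2(R) from Question 2: <A, B> = Tr(A B^T).
    return np.trace(A @ B.T)

E1 = np.array([[1.0, 0.0], [0.0, -1.0]])
E2 = np.array([[0.0, 1.0], [0.0, 0.0]])
E3 = np.array([[0.0, 0.0], [1.0, 0.0]])
I = np.eye(2)

# The identity matrix is orthogonal to every basis element of W,
# consistent with W-perp = span{I}.
print([inner(I, E) for E in (E1, E2, E3)])  # [0.0, 0.0, 0.0]
```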

(g) First, notice that this collection of vectors is already orthogonal. Thus, we only need to normalize our basis elements. Doing so, we obtain the following orthonormal basis:

\begin{pmatrix} 1/\sqrt{2} & 0 \\ 0 & -1/\sqrt{2} \end{pmatrix}, \quad \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.
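The same orthonormal basis can be reproduced numerically. The sketch below (assuming NumPy) runs the standard Gram-Schmidt recursion with the trace inner product on the basis from part (b):

```python
import numpy as np

def inner(A, B):
    # <A, B> = Tr(A B^T), the inner product on M_2(R) from Question 2.
    return np.trace(A @ B.T)

# The basis for W from part (b), in the order given.
basis = [np.array([[1.0, 0.0], [0.0, -1.0]]),
         np.array([[0.0, 1.0], [0.0, 0.0]]),
         np.array([[0.0, 0.0], [1.0, 0.0]])]

# Gram-Schmidt: subtract projections onto the earlier vectors, then normalize.
ortho = []
for v in basis:
    for u in ortho:
        v = v - inner(v, u) * u
    ortho.append(v / np.sqrt(inner(v, v)))

for u in ortho:
    print(np.round(u, 4))
# First matrix has entries ±1/sqrt(2) ≈ ±0.7071; the other two are unchanged.
```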

(h) Using the above orthonormal basis, we compute the projection onto W to be

P_W \begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} = \begin{pmatrix} -1/2 & 0 \\ -2 & 1/2 \end{pmatrix}.

Clearly, this matrix is traceless and thus in W . The orthogonal component is given by

\begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} − \begin{pmatrix} -1/2 & 0 \\ -2 & 1/2 \end{pmatrix} = \begin{pmatrix} 3/2 & 0 \\ 0 & 3/2 \end{pmatrix}.

Thus, we have the following decomposition for our matrix:

\begin{pmatrix} 1 & 0 \\ -2 & 2 \end{pmatrix} = \begin{pmatrix} -1/2 & 0 \\ -2 & 1/2 \end{pmatrix} + \begin{pmatrix} 3/2 & 0 \\ 0 & 3/2 \end{pmatrix}.

Since M2(R) = W ⊕ W⊥, the Orthogonal Decomposition Theorem guarantees that this decomposition is unique.
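A numerical cross-check of this decomposition (a sketch assuming NumPy). Rather than projecting onto W with the orthonormal basis, this sketch projects onto W⊥ = span{I} from part (f) and subtracts; it recovers the same two components.

```python
import numpy as np

def inner(A, B):
    # <A, B> = Tr(A B^T), the inner product on M_2(R) from Question 2.
    return np.trace(A @ B.T)

M = np.array([[1.0, 0.0], [-2.0, 2.0]])
I = np.eye(2)

# Project M onto W-perp = span{I}, then take the W-component as the remainder.
A_perp = inner(M, I) / inner(I, I) * I
A = M - A_perp

print(A)        # [[-0.5, 0], [-2, 0.5]]: the W-component (traceless)
print(A_perp)   # [[1.5, 0], [0, 1.5]]:   the W-perp component
print(np.isclose(np.trace(A), 0.0))  # True, so A indeed lies in W
```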

Question 3. Consider C([0, 1]), the space of continuous functions on [0, 1]. Several norms can be placed on this vector space. Three of the most popular ones are the L1, L2, and L∞ norms, denoted by || · ||1, || · ||2, and || · ||∞, respectively. They are given by

||f||1 = \int_0^1 |f(x)| dx,

||f||2 = \sqrt{ \int_0^1 [f(x)]^2 dx },

||f||∞ = sup{|f(x)| | x ∈ [0, 1]}.

(a) Explain why we can replace the definition of || · ||∞ with

||f||∞ = max{|f(x)| | x ∈ [0, 1]}.

(b) Use the Cauchy-Schwarz inequality to prove that for any f ∈ C([0, 1]),

||f||1 ≤ ||f||2.

(c) Use properties of the integral to show that for any f ∈ C([0, 1]),

||f||1 ≤ ||f||∞.

(d) As with any normed vector space, we can obtain a metric space from the norm in a canonical way. In particular, for the L1 and L∞ norms, we get two different metrics on C([0, 1]) given respectively by

d1(f, g) = ||f − g||1

d∞(f, g) = ||f − g||∞.

Use (c) to show that if fn → f in the d∞ metric, then fn → f in the d1 metric. Provide an ε-N proof.

(e) Consider the sequence of functions given by fn(x) = x^n ∈ C([0, 1]). Compute ||fn||1 and use this to show that fn → 0 in the d1 metric.

(f) For fn(x) = x^n, compute ||fn||∞. Use this to show that fn does not converge to 0 in the d∞ metric, and hence that the converse of the statement in (d) is false.

(g) Use (b) to provide a statement (similar to the one in (d)) relating the convergence of fn → f in the metrics induced by the L1 and L2 norms.

Solution 3.

(a) Since f is a continuous function (and thus |f| is continuous) and [0, 1] is a compact set, |f| must attain a maximum on [0, 1]. This maximum will correspond to the supremum, and thus we can replace "sup" with "max".

(b) We will apply the Cauchy-Schwarz inequality to the vectors |f| and 1, the constant function. Doing so, we get

||f||1 = \int_0^1 |f(x)| · 1 dx = |⟨|f|, 1⟩| ≤ || |f| ||2 · ||1||2 = ||f||2 · 1 = ||f||2.

(c) By definition of the L∞ norm, we have that |f(x)| ≤ ||f||∞ for all x ∈ [0, 1]. Thus,

||f||1 = \int_0^1 |f(x)| dx ≤ \int_0^1 ||f||∞ dx = ||f||∞.
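A numerical illustration, not a proof, of the inequalities in (b) and (c); it assumes NumPy, approximates the integrals by Riemann sums on a fine grid, and uses the arbitrary sample function f(x) = sin(5x) + x.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
f = np.sin(5 * x) + x

# On [0, 1] with a uniform grid, the mean approximates the integral.
norm1 = np.mean(np.abs(f))          # L1 norm
norm2 = np.sqrt(np.mean(f ** 2))    # L2 norm
norm_inf = np.max(np.abs(f))        # sup norm on the grid

print(norm1 <= norm2, norm1 <= norm_inf)  # True True
```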

(d) Let ε > 0. Since fn → f in the d∞ metric, there exists an N such that for all n > N,

||fn − f||∞ < ε.

For this N, notice that for all n > N, by part (c) we have that

||fn − f||1 ≤ ||fn − f||∞ < ε.

Thus, fn → f in the d1 metric.

(e) Computing, we get that

||fn||1 = \int_0^1 |fn(x)| dx = \int_0^1 x^n dx = 1/(n + 1).

Thus, ||fn − 0||1 = 1/(n + 1) → 0, so fn → 0 in the d1 metric.

(f) Computing, we get that

||fn||∞ = max{|x^n| | x ∈ [0, 1]} = 1.

Thus, ||fn − 0||∞ = 1, which does not tend to 0, so fn does not converge to 0 in the d∞ metric. We therefore have a sequence of functions that converges in the d1 metric but not in the d∞ metric, which is a counterexample to the converse of the statement in (d).
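The computations in (e) and (f) can be spot-checked numerically; this sketch assumes NumPy and approximates the L1 norm by a Riemann sum.

```python
import numpy as np

# Check ||x^n||_1 ≈ 1/(n+1) while ||x^n||_inf = 1 for several values of n.
x = np.linspace(0.0, 1.0, 200001)
for n in (1, 5, 20, 100):
    fn = x ** n
    l1 = np.mean(fn)            # Riemann-sum approximation of the integral
    linf = np.max(np.abs(fn))
    print(n, round(l1, 5), 1 / (n + 1), linf)
# The L1 norm tracks 1/(n+1) and tends to 0, while the sup norm stays at 1.
```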

(g) If fn → f in the L2 metric, then fn → f in the L1 metric. Indeed, by (b), ||fn − f||1 ≤ ||fn − f||2, so the same ε-N argument as in (d) applies.
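Consistent with (b) and (g), here is a quick numerical comparison of ||fn||1 and ||fn||2 for fn(x) = x^n (a sketch assuming NumPy, with the norms approximated by Riemann sums):

```python
import numpy as np

# For fn(x) = x^n, the inequality ||fn||_1 <= ||fn||_2 from part (b)
# should hold for every n.
x = np.linspace(0.0, 1.0, 200001)
for n in (1, 5, 20, 100):
    fn = x ** n
    l1 = np.mean(fn)
    l2 = np.sqrt(np.mean(fn ** 2))
    print(n, round(l1, 5), round(l2, 5), l1 <= l2)  # ..., True
```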
