
MATH 110: FALL 2007/08 PROBLEM SET 9 SOLUTIONS

Date: December 9, 2007 (Version 1.0).

In the following, $V$ will denote a finite-dimensional vector space over $\mathbb{R}$ equipped with an inner product denoted by $\langle\,\cdot\,,\cdot\,\rangle$. The norm induced by $\langle\,\cdot\,,\cdot\,\rangle$ will be denoted $\|\cdot\|$.

1. Let $S$ be a subset (not necessarily a subspace) of $V$. We define the orthogonal annihilator of $S$, denoted $S^\perp$, to be the set
\[ S^\perp = \{v \in V \mid \langle v, w\rangle = 0 \text{ for all } w \in S\}. \]

(a) Show that $S^\perp$ is always a subspace of $V$.

Solution. Let $v_1, v_2 \in S^\perp$. Then $\langle v_1, w\rangle = 0$ and $\langle v_2, w\rangle = 0$ for all $w \in S$. So for any $\alpha, \beta \in \mathbb{R}$,
\[ \langle \alpha v_1 + \beta v_2, w\rangle = \alpha\langle v_1, w\rangle + \beta\langle v_2, w\rangle = 0 \]
for all $w \in S$. Hence $\alpha v_1 + \beta v_2 \in S^\perp$.

(b) Show that $S \subseteq (S^\perp)^\perp$.

Solution. Let $w \in S$. For any $v \in S^\perp$, we have $\langle v, w\rangle = 0$ by definition of $S^\perp$. Since this is true for all $v \in S^\perp$ and $\langle w, v\rangle = \langle v, w\rangle$, we see that $\langle w, v\rangle = 0$ for all $v \in S^\perp$, i.e. $w \in (S^\perp)^\perp$.

(c) Show that $\operatorname{span}(S) \subseteq (S^\perp)^\perp$.

Solution. Since $(S^\perp)^\perp$ is a subspace by (a) (it is the orthogonal annihilator of $S^\perp$) and $S \subseteq (S^\perp)^\perp$ by (b), we have $\operatorname{span}(S) \subseteq (S^\perp)^\perp$.

(d) Show that if $S_1$ and $S_2$ are subsets of $V$ and $S_1 \subseteq S_2$, then $S_2^\perp \subseteq S_1^\perp$.

Solution. Let $v \in S_2^\perp$. Then $\langle v, w\rangle = 0$ for all $w \in S_2$, and so for all $w \in S_1$ (since $S_1 \subseteq S_2$). Hence $v \in S_1^\perp$.

(e) Show that $((S^\perp)^\perp)^\perp = S^\perp$.

Solution. Applying (d) to (b) with $S_1 = S$ and $S_2 = (S^\perp)^\perp$, we get $((S^\perp)^\perp)^\perp \subseteq S^\perp$. Applying (c) to $S^\perp$, we get $\operatorname{span}(S^\perp) \subseteq ((S^\perp)^\perp)^\perp$, but by (a), $\operatorname{span}(S^\perp) = S^\perp$. Hence we get equality.

(f) Show that $S \cap S^\perp$ must be either the empty set $\emptyset$ or the zero subspace $\{0_V\}$.

Solution. If $S \cap S^\perp \neq \emptyset$, then let $v \in S \cap S^\perp$. Since $v \in S^\perp$, we have $\langle v, w\rangle = 0$ for all $w \in S$. Since $v \in S$, in particular $\|v\|^2 = \langle v, v\rangle = 0$, implying that $v = 0_V$. In other words, the only vector in $S \cap S^\perp$ is $0_V$, so $S \cap S^\perp = \{0_V\}$. On the other hand, if $0_V \notin S$, then $0_V \notin S \cap S^\perp$ and so $S \cap S^\perp = \emptyset$ (if not, the previous argument gives a contradiction).
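As an illustrative aside (not part of the original solutions), the following NumPy sketch makes (a)–(c) concrete for $V = \mathbb{R}^3$ with the standard inner product. The helper `orth_annihilator` is my own hypothetical name: it computes $S^\perp$ for a finite set $S$ as the null space of the matrix whose rows are the vectors of $S$, and we check that $S^\perp$ annihilates $S$ and that $\operatorname{span}(S) \subseteq (S^\perp)^\perp$.

```python
import numpy as np

def orth_annihilator(S, tol=1e-12):
    """Orthonormal basis (as columns) of S-perp, where the rows of the
    matrix S are the vectors of a finite set S in R^n; S-perp is the
    null space of S, read off from the SVD."""
    _, s, Vt = np.linalg.svd(np.atleast_2d(S))
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

S = np.array([[1.0, 0.0, 1.0],            # a two-element subset S of R^3
              [0.0, 1.0, 1.0]])
N = orth_annihilator(S)                   # basis of S-perp (here 1-dimensional)
print(np.allclose(S @ N, 0))              # <v, w> = 0 for all v in S-perp, w in S: True

P = orth_annihilator(N.T)                 # basis of (S-perp)-perp
print(np.allclose(P @ (P.T @ S.T), S.T))  # projecting S onto it changes nothing: True
```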

2. Let $W$ be a subspace of $V$. The subspace $W^\perp$ is called the orthogonal complement of $W$.

(a) Show that $V = W \oplus W^\perp$.

Solution. Since $W$ and $W^\perp$ are both subspaces (the latter by Problem 1(a)), $0_V \in W$ and $0_V \in W^\perp$. So by Problem 1(f), we have that $W \cap W^\perp = \{0_V\}$. It remains to show that $W + W^\perp = V$. Clearly $W + W^\perp \subseteq V$. For the reverse inclusion, let $v \in V$ and let $\{w_1, \dots, w_r\}$ be an orthonormal basis of $W$. Consider

\[ x := \langle v, w_1\rangle w_1 + \langle v, w_2\rangle w_2 + \cdots + \langle v, w_r\rangle w_r \]
and $y := v - x$. Clearly $x \in W$. We claim that $y \in W^\perp$. For any $w \in W$, we can write

\[ w = \langle w, w_1\rangle w_1 + \langle w, w_2\rangle w_2 + \cdots + \langle w, w_r\rangle w_r \]

since $\{w_1, \dots, w_r\}$ is an orthonormal basis of $W$. So
\[
\begin{aligned}
\langle y, w\rangle &= \langle v - x, w\rangle = \langle v, w\rangle - \langle x, w\rangle \\
&= \Bigl\langle v, \sum_{i=1}^{r} \langle w, w_i\rangle w_i \Bigr\rangle - \Bigl\langle \sum_{i=1}^{r} \langle v, w_i\rangle w_i, w \Bigr\rangle \\
&= \sum_{i=1}^{r} \langle v, w_i\rangle \langle w, w_i\rangle - \sum_{i=1}^{r} \langle v, w_i\rangle \langle w_i, w\rangle \\
&= 0,
\end{aligned}
\]
since $\langle w, w_i\rangle = \langle w_i, w\rangle$. Hence $v = x + y \in W + W^\perp$.

(b) Show that $W = (W^\perp)^\perp$.

Solution. By Problem 1(b), we have $W \subseteq (W^\perp)^\perp$. But by (a), we have
\[ W \oplus W^\perp = V = W^\perp \oplus (W^\perp)^\perp, \]
and so $\dim(W) + \dim(W^\perp) = \dim(W^\perp) + \dim((W^\perp)^\perp)$, i.e. $\dim(W) = \dim((W^\perp)^\perp)$. Hence $W = (W^\perp)^\perp$.
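The proof of (a) is constructive: $x$ is the orthogonal projection of $v$ onto $W$. Here is a small NumPy sketch of that decomposition (my own illustration, with an arbitrarily chosen subspace $W \subset \mathbb{R}^5$), using a QR factorization to produce the orthonormal basis $\{w_1, \dots, w_r\}$:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2))        # columns span a 2-dimensional subspace W of R^5
Q, _ = np.linalg.qr(W)                 # columns of Q: orthonormal basis {w_1, w_2} of W
v = rng.standard_normal(5)

x = Q @ (Q.T @ v)                      # x = <v,w_1>w_1 + <v,w_2>w_2, projection onto W
y = v - x                              # the claimed W-perp component

print(np.allclose(W.T @ y, 0))         # <y, w> = 0 for every w in W: True
print(np.allclose(x + y, v))           # v = x + y, so V = W + W-perp: True
```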

3. Let $A \in \mathbb{R}^{m \times n}$ and regard $A : \mathbb{R}^n \to \mathbb{R}^m$ as a linear transformation in the usual way. Show that
\[ \operatorname{nullsp}(A^\top) = (\operatorname{colsp}(A))^\perp \quad \text{and} \quad \operatorname{rowsp}(A) = (\operatorname{nullsp}(A))^\perp. \]
Hence deduce that
\[ \mathbb{R}^m = \operatorname{nullsp}(A^\top) \oplus \operatorname{colsp}(A) \quad \text{and} \quad \mathbb{R}^n = \operatorname{rowsp}(A) \oplus \operatorname{nullsp}(A). \]

Solution. Clearly $\operatorname{colsp}(A) = \{Ax \in \mathbb{R}^m \mid x \in \mathbb{R}^n\}$. So if $v \in (\operatorname{colsp}(A))^\perp$, then $\langle v, Ax\rangle = 0$ for all $x \in \mathbb{R}^n$. So
\[ \langle A^\top v, x\rangle = (A^\top v)^\top x = v^\top A x = \langle v, Ax\rangle = 0 \]
for all $x \in \mathbb{R}^n$. In particular, we may pick $x = A^\top v$ to get
\[ \|A^\top v\|^2 = \langle A^\top v, A^\top v\rangle = 0, \]
giving us $A^\top v = 0$. Hence $v \in \operatorname{nullsp}(A^\top)$. Conversely, if $v \in \operatorname{nullsp}(A^\top)$, then
\[ \langle v, Ax\rangle = v^\top A x = (A^\top v)^\top x = 0^\top x = 0 \]

for all $x \in \mathbb{R}^n$. So $v \in (\operatorname{colsp}(A))^\perp$. This proves the first equality. For the second equality, just note that $\operatorname{rowsp}(A) = \operatorname{colsp}(A^\top)$ and apply the first equality (with $A^\top$ in place of $A$) to get $(\operatorname{colsp}(A^\top))^\perp = \operatorname{nullsp}(A)$, and then apply Problem 2(b) to get
\[ \operatorname{rowsp}(A) = ((\operatorname{rowsp}(A))^\perp)^\perp = (\operatorname{nullsp}(A))^\perp. \]
The last two equalities (the direct sum decompositions) are immediate consequences of Problem 2(a).
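As a numerical aside (not part of the solution), the SVD of $A$ hands us orthonormal bases of all four fundamental subspaces at once, so both direct-sum decompositions can be checked directly; the variable names below are my own.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 4, 6, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # a rank-2 matrix, 4 x 6

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
col, lnull = U[:, :rank], U[:, rank:]    # bases of colsp(A) and nullsp(A^T) in R^m
row, null = Vt[:rank].T, Vt[rank:].T     # bases of rowsp(A) and nullsp(A) in R^n

print(np.allclose(A.T @ lnull, 0))       # nullsp(A^T) really is killed by A^T: True
print(np.allclose(col.T @ lnull, 0))     # colsp(A) is orthogonal to nullsp(A^T): True
print(col.shape[1] + lnull.shape[1] == m,
      row.shape[1] + null.shape[1] == n) # dimensions add up: True True
```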

4. Let $A, B \in \mathbb{R}^{n \times n}$. Let $\langle\,\cdot\,,\cdot\,\rangle$ denote the standard inner product on $\mathbb{R}^n$.

(a) Show that if $\langle Ax, y\rangle = 0$ for all $x, y \in \mathbb{R}^n$, then $A = O$, the zero matrix.

Solution. Let $\{e_1, \dots, e_n\}$ be the standard basis of $\mathbb{R}^n$. Observe that if $A = [a_{ij}]_{i,j=1}^n$, then
\[ a_{ij} = e_i^\top A e_j = \langle e_i, A e_j\rangle = \langle A e_j, e_i\rangle = 0 \]

by letting $x$ and $y$ be $e_j$ and $e_i$ respectively. Since this is true for all $i$ and $j$, we have that $A = O$.

(b) Is it true that if $\langle Ax, x\rangle = 0$ for all $x \in \mathbb{R}^n$, then $A = O$, the zero matrix?

Solution. Recall the notation from Homework 4, Problem 1(d). Let $A \in \wedge^2(\mathbb{R}^n)$, i.e. $A^\top = -A$. Then
\[ \langle Ax, x\rangle = x^\top A^\top x = -x^\top A x = -\langle x, Ax\rangle = -\langle Ax, x\rangle, \]
implying that $\langle Ax, x\rangle = 0$. So any non-zero skew-symmetric matrix would be a counterexample. An explicit $2 \times 2$ example is given by
\[ A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}. \]

(c) Is it true that if $\langle Ax, x\rangle = \langle Bx, x\rangle$ for all $x \in \mathbb{R}^n$, then $A = B$?

Solution. Any two distinct skew-symmetric matrices would provide a counterexample. An explicit $2 \times 2$ example is given by
\[ A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}. \]
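A quick NumPy check of the counterexamples in (b) and (c) (an illustration I am adding, not part of the solutions):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])                  # nonzero skew-symmetric: A^T = -A
B = np.zeros((2, 2))

rng = np.random.default_rng(2)
x = rng.standard_normal(2)
print(np.isclose(x @ (A @ x), 0.0))          # <Ax, x> = 0 even though A != O
print(np.isclose(x @ (A @ x), x @ (B @ x)))  # <Ax, x> = <Bx, x> even though A != B
```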

5. Let $\mathcal{B}_1 = \{x_1, \dots, x_n\}$ and $\mathcal{B}_2 = \{y_1, \dots, y_n\}$ be two bases of $\mathbb{R}^n$. $\mathcal{B}_1$ and $\mathcal{B}_2$ are not necessarily orthonormal bases.

(a) Show that there exists an orthogonal matrix $Q \in \mathbb{R}^{n \times n}$ such that
\[ Qx_i = y_i, \quad i = 1, \dots, n, \]
if and only if
\[ \langle x_i, x_j\rangle = \langle y_i, y_j\rangle, \quad i, j = 1, \dots, n. \]

Solution. By a theorem we proved in the lectures, an orthogonal matrix $Q$ preserves inner products and so satisfies
\[ \langle y_i, y_j\rangle = \langle Qx_i, Qx_j\rangle = \langle x_i, x_j\rangle. \]
It remains to show that if the latter condition is satisfied, then such an orthogonal matrix may be defined. First, observe that since $\mathcal{B}_1$ and $\mathcal{B}_2$ are bases, the equations
\[ Qx_i = y_i, \quad i = 1, \dots, n, \]
define the matrix $Q$ uniquely$^1$. We only need to verify that $Q$ is orthogonal. Let $x \in \mathbb{R}^n$ and let
\[ x = \sum_{i=1}^n \alpha_i x_i \]
be the representation of $x$ in terms of $\mathcal{B}_1$. Then
\[
\begin{aligned}
\|Qx\|^2 &= \langle Qx, Qx\rangle \\
&= \Bigl\langle \sum_{i=1}^n \alpha_i Qx_i, \sum_{j=1}^n \alpha_j Qx_j \Bigr\rangle \\
&= \Bigl\langle \sum_{i=1}^n \alpha_i y_i, \sum_{j=1}^n \alpha_j y_j \Bigr\rangle \\
&= \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j \langle y_i, y_j\rangle \\
&= \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j \langle x_i, x_j\rangle \\
&= \Bigl\langle \sum_{i=1}^n \alpha_i x_i, \sum_{j=1}^n \alpha_j x_j \Bigr\rangle \\
&= \langle x, x\rangle = \|x\|^2.
\end{aligned}
\]
Since this is true for all $x \in \mathbb{R}^n$, i.e. $Q$ preserves norms, the aforementioned theorem from the lectures implies that $Q$ is orthogonal.

(b) Show that if $\mathcal{B}_1$ is an orthonormal basis and if $Q \in \mathbb{R}^{n \times n}$ is an orthogonal matrix, then

$\{Qx_1, \dots, Qx_n\}$ is also an orthonormal basis.

Solution. Clearly $Qx_1, \dots, Qx_n$ are linearly independent, since if
\[ \alpha_1 Qx_1 + \cdots + \alpha_n Qx_n = 0, \]
then $Q(\alpha_1 x_1 + \cdots + \alpha_n x_n) = 0$, and so
\[ \alpha_1 x_1 + \cdots + \alpha_n x_n = Q^\top 0 = 0, \]
and so $\alpha_1 = \cdots = \alpha_n = 0$ by the linear independence of $x_1, \dots, x_n$. Hence $\{Qx_1, \dots, Qx_n\}$ is a basis (since the set has size $n = \dim(\mathbb{R}^n)$). The orthonormality follows from the fact that $Q$ preserves inner products and that $x_1, \dots, x_n$ form an orthonormal basis:
\[ \langle Qx_i, Qx_j\rangle = \langle x_i, x_j\rangle = \begin{cases} 0 & \text{if } i \neq j, \\ 1 & \text{if } i = j. \end{cases} \]
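As a brief numerical illustration of (b) (my addition), apply a random orthogonal $Q$, obtained here from a QR factorization, to the standard basis of $\mathbb{R}^4$ and verify that the image is again orthonormal:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthogonal matrix
X = np.eye(n)                                     # orthonormal basis x_1,...,x_n as columns

QX = Q @ X                                        # columns are Qx_1, ..., Qx_n
print(np.allclose(QX.T @ QX, np.eye(n)))          # <Qx_i, Qx_j> = delta_ij: True
```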

$^1$ In fact, $Q$ is just the matrix whose coordinate representation with respect to $\mathcal{B}_1$ and $\mathcal{B}_2$ is the identity matrix, i.e.
\[ [Q]_{\mathcal{B}_1, \mathcal{B}_2} = I. \]
Explicitly, if we let $X$ and $Y$ be the matrices whose columns are $x_1, \dots, x_n$ and $y_1, \dots, y_n$ respectively, then $Q = YX^{-1}$.
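The footnote's formula lends itself to a numerical sketch of 5(a) (again my own illustration, not part of the solutions): build $\mathcal{B}_2$ from $\mathcal{B}_1$ by a hidden orthogonal map so that the Gram condition $\langle x_i, x_j\rangle = \langle y_i, y_j\rangle$ holds, then recover $Q = YX^{-1}$ and confirm it is orthogonal.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
X = rng.standard_normal((n, n))                    # columns: basis B1 (invertible a.s.)
R, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a hidden orthogonal matrix
Y = R @ X                                          # columns: basis B2

print(np.allclose(X.T @ X, Y.T @ Y))               # Gram condition <x_i,x_j>=<y_i,y_j>: True
Q = Y @ np.linalg.inv(X)                           # the footnote's formula Q = Y X^{-1}
print(np.allclose(Q.T @ Q, np.eye(n)))             # Q is orthogonal: True
print(np.allclose(Q @ X, Y))                       # Q x_i = y_i for all i: True
```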
