
1 Orthogonal Complements

1. Definition: If u, v ∈ ℝⁿ we say that u is orthogonal to v if ⟨u, v⟩ = 0. Since ⟨u, v⟩ = ⟨v, u⟩ it is obvious that u is orthogonal to v if and only if v is orthogonal to u. We often denote the orthogonality of u and v by u ⊥ v.

2. The zero vector is orthogonal to every vector in ℝⁿ, since ⟨O, v⟩ = 0 for every v.

3. The only vector in ℝⁿ that is orthogonal to itself is the zero vector: v ⊥ v means ⟨v, v⟩ = ||v||² = 0, but this is true if and only if v = O.

4. Since we now have ⟨u, v⟩ = ||u|| ||v|| cos(θ), where θ is the (acute) angle between u and v, we see that two non-zero vectors are orthogonal to each other if and only if the cosine of the angle between them is 0, i.e. the angle is π/2.

5. Observation: If S ⊂ ℝⁿ and w is orthogonal to every element of S, then w is orthogonal to every element of span(S), since ⟨w, a1s1 + ··· + aksk⟩ = a1⟨w, s1⟩ + ··· + ak⟨w, sk⟩ = 0.
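Not part of the original notes, but the angle formula is easy to check numerically; the vectors below are arbitrary illustrations, chosen so that their inner product is 0:

```python
# Numerical check of the formula <u, v> = ||u|| ||v|| cos(theta).
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 1.0, -2.0])   # <u, v> = 2 + 2 - 4 = 0

dot = np.dot(u, v)
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))

print(dot)        # 0.0, so u and v are orthogonal
print(cos_theta)  # 0.0, so the angle between u and v is pi/2
```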

6. Definition: If V is a subspace of ℝⁿ then the orthogonal complement of V is

V⊥ = {w ∈ ℝⁿ : ⟨w, v⟩ = 0 for every v ∈ V }.

7. Observation: Since each element of V is orthogonal to each element of V⊥, the previous observation shows that every element of V is orthogonal to the linear span of V⊥. In other words span(V⊥) ⊆ V⊥, so V⊥ is itself a subspace of ℝⁿ.

8. If V is a subspace of ℝⁿ then V ∩ V⊥ = {O}: any v ∈ V ∩ V⊥ is orthogonal to itself, so ⟨v, v⟩ = 0, which holds only if v = O.

9. If A = [A1 A2 ··· An] is an m × n matrix with columns A1, ..., An, then Aᵗ is the n × m matrix whose rows are the transposed columns:

        [ A1ᵗ ]
  Aᵗ =  [ A2ᵗ ]
        [  ⋮  ]
        [ Anᵗ ]

If w ∈ Null(Aᵗ) then Aᵗw = O, i.e.

         [ A1ᵗw ]   [ ⟨A1, w⟩ ]
  Aᵗw =  [ A2ᵗw ] = [ ⟨A2, w⟩ ] = O
         [  ⋮   ]   [    ⋮    ]
         [ Anᵗw ]   [ ⟨An, w⟩ ]

That is, w ∈ Null(Aᵗ) if and only if w ⊥ Ai for each i. But we have just seen that this is equivalent to w being orthogonal to span({A1, ··· , An}) = Col(A). Thus we have seen:

Col(A)⊥ = Nullspace(Aᵗ)

Since (Aᵗ)ᵗ = A, replacing A by Aᵗ we see that the null space of A is the orthogonal complement of the column space of Aᵗ. Since the columns of Aᵗ are the transposes of the rows of A, the column space of Aᵗ is called the row space of A, and we write:

RowSpace(A)⊥ = Nullspace(A)
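These identities can be confirmed numerically. A sketch using the SVD (the matrix A below is a made-up example; the left singular vectors belonging to the zero singular values span Null(Aᵗ)):

```python
import numpy as np

# A made-up 3 x 2 matrix of rank 2.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 3.0]])

# Left singular vectors for the zero singular values span Null(A^t) = Col(A)^perp.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))   # rank of A
N = U[:, r:]                 # 3 x (3 - r); columns form a basis for Col(A)^perp

# Each column of N lies in Null(A^t), i.e. is orthogonal to every column of A.
print(np.allclose(A.T @ N, 0))   # True
```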

1.1 Dimension Formula for Orthogonal Complements

If A is an m × n matrix then we know that dim(Col(A)) = rank(A) and dim(Null(A)) = n − rank(A). Recall the remarkable fact that rank(A) = rank(Aᵗ); apply this to Aᵗ (remembering that Aᵗ has m columns) and we have

dim(Null(Aᵗ)) = m − rank(Aᵗ) = m − rank(A)

that is

dim(Col(A)⊥) = m − rank(A), so we have dim(Col(A)⊥) + dim(Col(A)) = m. Since any subspace V of ℝᵐ is the column space of a matrix A whose columns are a basis of V, this proves:
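The dimension counts above are easy to check numerically; a sketch with a randomly generated matrix (the sizes are arbitrary):

```python
import numpy as np

# Random m x n matrix; with probability 1 it has full rank min(m, n) = 3.
rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))

rank_A  = np.linalg.matrix_rank(A)
rank_At = np.linalg.matrix_rank(A.T)
dim_null_At = m - rank_At      # = dim Col(A)^perp

print(rank_A == rank_At)       # True: rank(A) = rank(A^t)
print(rank_A + dim_null_At)    # 5: dim Col(A) + dim Col(A)^perp = m
```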

10. If V is a subspace of ℝᵐ then dim(V) + dim(V⊥) = m.

11. Recall the dimension formula dim(V + W) + dim(V ∩ W) = dim(V) + dim(W). Taking W = V⊥, since V ∩ V⊥ = {O} we have dim(V + V⊥) = dim(V) + dim(V⊥) = m. There is only one m-dimensional subspace of ℝᵐ, namely ℝᵐ itself, so V + V⊥ = ℝᵐ.

12. Theorem: If V ⊂ ℝᵐ is a subspace then every u ∈ ℝᵐ can be written in exactly one way as u = v + w with v ∈ V and w ∈ V⊥. (Existence follows from V + V⊥ = ℝᵐ; uniqueness holds because the difference of two such decompositions would lie in V ∩ V⊥ = {O}.)
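When a basis of V is available as the columns of a full-column-rank matrix A, the decomposition in the theorem can be computed with the standard projector P = A(AᵗA)⁻¹Aᵗ. This formula is not derived in these notes, and the matrix and vector below are made-up examples:

```python
import numpy as np

# Made-up example: V = Col(A) with A of full column rank.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
u = np.array([1.0, 2.0, 3.0])

# Orthogonal projector onto V (standard formula; assumes A^t A is invertible).
P = A @ np.linalg.inv(A.T @ A) @ A.T
v = P @ u        # component of u in V
w = u - v        # component of u in V^perp

print(np.allclose(v + w, u))    # True: u = v + w
print(np.allclose(A.T @ w, 0))  # True: w is orthogonal to every column of A
```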

1.2 Calculating the Orthogonal Complement

If V is a subspace of ℝⁿ then we want to calculate a basis {v1, v2, ··· , vr} for V and a complementary basis {v1⊥, v2⊥, ··· , vs⊥} for V⊥ which together are a basis for ℝⁿ. According to the above, if we write v ∈ ℝⁿ as

v = a1v1 + a2v2 + ··· + arvr + b1v1⊥ + b2v2⊥ + ··· + bsvs⊥

then a1v1 + a2v2 + ··· + arvr and b1v1⊥ + b2v2⊥ + ··· + bsvs⊥ are orthogonal and are the orthogonal projections of v onto V and V⊥ respectively.

We already have a way to calculate {v1, v2, ··· , vr}: row reduce the matrix A and choose the pivot columns. On the face of it, to get {v1⊥, v2⊥, ··· , vs⊥} we would do the same thing with Aᵗ. There is, however, a better way: it turns out that the columns of Cᵗ, the transpose of the consistency matrix C, are a basis for V⊥. The calculation

CA = O translates to AᵗCᵗ = O, which shows that the columns of Cᵗ all lie in the null space of Aᵗ. They are linearly independent, since they are columns of an invertible matrix (the record of the row operations), and there are exactly n − rank(A) of them, since that is exactly the number of zero rows in any REF of A (here A has n rows, as its columns are the given spanning vectors of V ⊂ ℝⁿ). Thus the columns of Cᵗ are contained in V⊥, are linearly independent, and number exactly dim(V⊥). Thus they are a basis for V⊥.
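As a sketch of the whole procedure, the consistency matrix can be computed by row reducing A|I and reading off the right-hand blocks of the zero rows. The routine below is a hypothetical helper, not from the notes; it uses partial pivoting, so the basis it returns may differ from one computed by hand, though it spans the same V⊥:

```python
import numpy as np

def vperp_basis(A, tol=1e-10):
    """Row reduce [A | I]; the right-hand blocks of the zero rows form the
    consistency matrix C, and the columns of C^t are a basis for Col(A)^perp."""
    m, n = A.shape
    M = np.hstack([A.astype(float), np.eye(m)])
    row = 0
    for col in range(n):
        if row == m:
            break
        pivot = row + int(np.argmax(np.abs(M[row:, col])))
        if abs(M[pivot, col]) < tol:
            continue                       # no pivot in this column
        M[[row, pivot]] = M[[pivot, row]]  # move the pivot row up
        for r in range(row + 1, m):        # clear the entries below the pivot
            M[r] -= (M[r, col] / M[row, col]) * M[row]
        row += 1
    C = M[row:, n:]                        # rows opposite the zero rows of the REF
    return C.T

A = np.array([[1, 2, 3],
              [0, -1, -1],
              [1, 2, 3],
              [2, 1, 3],
              [4, 1, 5]])
Bperp = vperp_basis(A)
print(Bperp.shape)                   # (5, 3): dim V^perp = 5 - rank(A) = 3
print(np.allclose(A.T @ Bperp, 0))   # True: every column is orthogonal to Col(A)
```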

1.3 Example

 1 2 3   0 −1 −1    Let V be the column space of A =  1 2 3     2 1 3  4 1 5  1 2 3 1 0 0 0 0   0 −1 −1 0 1 0 0 0    Then A|I =  1 2 3 0 0 1 0 0     2 1 3 0 0 0 1 0  4 1 5 0 0 0 0 1 The REF of A|I is  1 2 3 1 0 0 0 0   0 −1 −1 0 1 0 0 0     0 0 0 −1 0 1 0 0     0 0 0 0 3 −2 1 0  2 −7 0 0 0 0 0 3 3 1 The pivot columns of A are the basis for V = Col(A) and the columns of the of the consistency matrix are the basis for V ⊥. We arrange them in a matrix

 1 2 −1 0 0   0 −1 0 −3 0   2  B =  1 2 1 −2 3   −7   2 1 0 1 3  4 1 0 0 1

The following calculation exhibits the orthogonality

 22 10 0 0 0   10 11 0 0 0  t  2  B B =  0 0 2 −2 3   −11   0 0 −2 14 3  2 −11 62 0 0 3 3 9
