
Math 214 – Spring, 2013 Apr 22

Orthogonality in R^n

A set of vectors {v1, v2, ..., vk} in R^n is called an orthogonal set if all the vectors in the set are pairwise orthogonal, that is, vi · vj = 0 for all i ≠ j, i, j = 1, 2, ..., k.

Example: Check whether the following vectors form an orthogonal set:

(1, 2, −1), (2, 0, 2), (1, −1, −1).
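The example can be checked numerically; the following is a sketch in Python with NumPy (the variable names are just labels, not from the notes):

```python
import numpy as np

# The three vectors from the example.
v1 = np.array([1, 2, -1])
v2 = np.array([2, 0, 2])
v3 = np.array([1, -1, -1])

# All pairwise dot products must vanish for an orthogonal set.
print(v1 @ v2, v1 @ v3, v2 @ v3)  # 0 0 0, so the set is orthogonal
```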

A basis for a subspace W of R^n that forms an orthogonal set is called an orthogonal basis. Orthogonal bases are more convenient, as it is easier to find the coefficients in the expansion of a vector in terms of the basis vectors when the basis is orthogonal.

Theorem: Let {v1, v2, ..., vk} be an orthogonal basis for a subspace W of R^n, and let w be any vector in W. Then the unique scalars in the expansion of w,

w = c1v1 + c2v2 + ··· + ckvk

are given by

ci = (w · vi)/(vi · vi), for i = 1, 2, ..., k.
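The coefficient formula can be verified numerically; here is a sketch using the orthogonal basis from the example above (the vector w is an arbitrary illustrative choice):

```python
import numpy as np

# Orthogonal basis of R^3 from the earlier example; w is a vector to expand.
basis = [np.array([1.0, 2.0, -1.0]), np.array([2.0, 0.0, 2.0]), np.array([1.0, -1.0, -1.0])]
w = np.array([3.0, 1.0, 2.0])

coeffs = [(w @ v) / (v @ v) for v in basis]        # c_i = (w . v_i) / (v_i . v_i)
recon = sum(c * v for c, v in zip(coeffs, basis))  # c1 v1 + c2 v2 + c3 v3
print(np.allclose(recon, w))                       # True: the expansion reproduces w
```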

It’s even more convenient to work with an orthogonal basis in which all the vectors are unit vectors.

A set of vectors in R^n is called an orthonormal set if it is an orthogonal set of unit vectors. An orthonormal basis for a subspace W of R^n is a basis of W which is an orthonormal set.

Theorem: Let {q1, q2, ..., qk} be an orthonormal basis for a subspace W of R^n. Then any vector w of W has the unique expansion

w = (w · q1)q1 + (w · q2)q2 + ··· + (w · qk)qk.
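Normalizing the orthogonal basis from the earlier example gives an orthonormal basis, and the coefficients become plain dot products; a sketch (the vector w is again an arbitrary choice):

```python
import numpy as np

vs = [np.array([1.0, 2.0, -1.0]), np.array([2.0, 0.0, 2.0]), np.array([1.0, -1.0, -1.0])]
qs = [v / np.linalg.norm(v) for v in vs]   # divide each vector by its length

w = np.array([3.0, 1.0, 2.0])
recon = sum((w @ q) * q for q in qs)       # w = (w.q1)q1 + (w.q2)q2 + (w.q3)q3
print(np.allclose(recon, w))               # True
```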

Orthogonal matrices

Theorem: The columns of an m × n matrix Q form an orthonormal set if and only if Q^T Q = In (the n × n identity matrix).

An n × n matrix Q whose columns form an orthonormal set is called an orthogonal matrix.
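One convenient way to produce a matrix with orthonormal columns for experimentation is a QR factorization; a sketch (the random seed and sizes are arbitrary):

```python
import numpy as np

# QR factorization of a random 4x3 matrix yields Q with orthonormal columns.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 3)))

print(Q.shape)                          # (4, 3): m x n with m != n is allowed here
print(np.allclose(Q.T @ Q, np.eye(3)))  # True: Q^T Q = I_n
```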

From the previous theorem it is clear that:

Theorem: A square matrix Q is orthogonal if and only if Q^{-1} = Q^T.

Example: The standard matrices of rotations are orthogonal (why?):

A = [ cos θ  −sin θ ]
    [ sin θ   cos θ ].
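The rotation example can be checked directly: each column is a unit vector (cos²θ + sin²θ = 1) and the columns are orthogonal, so A^T A = I and A^{-1} = A^T. A sketch with an arbitrary angle:

```python
import numpy as np

theta = 0.7  # an arbitrary angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A.T @ A, np.eye(2)))     # True: columns are orthonormal
print(np.allclose(np.linalg.inv(A), A.T))  # True: A^{-1} = A^T
```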

Theorem: Let Q be an n × n matrix. Then the following statements are equivalent:

(a) Q is orthogonal.

(b) |Qx| = |x| for every x in R^n.

(c) Qx · Qy = x · y for every x and y in R^n.
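Statements (b) and (c) say that an orthogonal matrix preserves lengths and dot products. A numerical sketch for a rotation matrix (the angle and test vectors are arbitrary choices):

```python
import numpy as np

theta = 1.1
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([3.0, -4.0])
y = np.array([1.0,  2.0])

print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True: (b) lengths preserved
print(np.isclose((Q @ x) @ (Q @ y), x @ y))                  # True: (c) dot products preserved
```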

From the definition of an orthogonal matrix, it is clear that:

Theorem: The rows of an orthogonal matrix form an orthonormal set.

Here are some of the properties of orthogonal matrices.

Theorem: Let Q be an orthogonal matrix. Then

(a) Q^{-1} is orthogonal.

(b) det Q = ±1.

(c) If λ is an eigenvalue of Q, then |λ| = 1.

(d) If Q1 and Q2 are orthogonal n × n matrices, then so is Q1Q2.
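Properties (b)–(d) can be illustrated with a rotation (det = +1) and a reflection (det = −1); a sketch, with an arbitrary angle:

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation
F = np.array([[np.cos(theta),  np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])   # reflection

print(np.isclose(np.linalg.det(R), 1.0), np.isclose(np.linalg.det(F), -1.0))  # det = ±1
print(np.allclose(np.abs(np.linalg.eigvals(R)), 1.0))  # every eigenvalue has |lambda| = 1
P = R @ F                                              # product of orthogonal matrices
print(np.allclose(P.T @ P, np.eye(2)))                 # True: still orthogonal
```

The eigenvalues of the rotation are the complex numbers e^{±iθ}, which is why the modulus, not the eigenvalue itself, equals 1.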

Orthogonal complements and projections

Let W be a subspace of R^n. A vector v in R^n is orthogonal to W if v is orthogonal to every vector in W. The set of all vectors orthogonal to W is called the orthogonal complement of W and is denoted by W⊥,

W⊥ = {v in R^n : v · w = 0 for all w in W}.

Theorem: Let W be a subspace of R^n. Then

(a) W⊥ is a subspace of R^n.

(b) (W⊥)⊥ = W.

(c) W ∩ W ⊥ = {0}.

(d) If W = span(w1, ..., wk), then v is in W⊥ if and only if v · wi = 0 for all i = 1, ..., k.

For the subspaces associated with a matrix we have the following result.

Theorem: Let A be an m × n matrix. Then the orthogonal complement of row(A) is null(A), and the orthogonal complement of col(A) is null(A^T).
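This theorem gives a computational route to W⊥: put a spanning set of W into the rows of a matrix A and compute null(A). A sketch using the SVD (the matrix below is an illustrative choice):

```python
import numpy as np

# Rows of A span a subspace W of R^3; by the theorem, W-perp = null(A).
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 0.0,  2.0]])

# Null space via SVD: right singular vectors beyond the rank span null(A).
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]          # rows form a basis of null(A) = row(A)-perp

print(null_basis.shape[0])                 # 1: dim W-perp = 3 - dim W
print(np.allclose(A @ null_basis.T, 0.0))  # True: orthogonal to every row of A
```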

Similar to projection onto the direction of a vector, we can define projections onto subspaces. Let W be a subspace of R^n and let {u1, u2, ..., uk} be an orthogonal basis for W. For any vector v in R^n, the orthogonal projection of v onto W is

projW v = ((v · u1)/(u1 · u1)) u1 + ((v · u2)/(u2 · u2)) u2 + ··· + ((v · uk)/(uk · uk)) uk.
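The projection formula translates directly into code; the helper name and the test vectors below are illustrative, not from the notes. The residual v − projW v should be orthogonal to every basis vector of W:

```python
import numpy as np

def proj_W(v, basis):
    """Orthogonal projection of v onto span(basis); basis must be orthogonal."""
    return sum((v @ u) / (u @ u) * u for u in basis)

u1 = np.array([1.0, 2.0, -1.0])  # an orthogonal (not orthonormal) basis for W
u2 = np.array([2.0, 0.0,  2.0])  # u1 . u2 = 0
v = np.array([1.0, 1.0, 1.0])

p = proj_W(v, [u1, u2])
# The residual v - p lies in W-perp:
print(np.isclose((v - p) @ u1, 0.0), np.isclose((v - p) @ u2, 0.0))  # True True
```

Note that the formula requires an orthogonal basis; for a non-orthogonal spanning set one would first orthogonalize (e.g. by Gram–Schmidt).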

Theorem (Orthogonal decomposition): Let W be a subspace of R^n and let v be any vector in R^n. Then there are unique vectors w in W and w⊥ in W⊥ such that

v = w + w⊥.

As a corollary of the previous theorem we have:

Theorem: If W is a subspace of R^n, then dim W + dim W⊥ = n.

Exercises:

1. Let W be the subspace of R^3 spanned by the vectors (2, 1, −2) and (4, 0, 1). Find a basis for W⊥.
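One way to check an answer to this exercise: in R^3 the cross product of the two spanning vectors is orthogonal to both, and since dim W⊥ = 3 − 2 = 1, it spans W⊥. A sketch:

```python
import numpy as np

w1 = np.array([2.0, 1.0, -2.0])
w2 = np.array([4.0, 0.0,  1.0])

# The cross product is orthogonal to both spanning vectors, so it is a
# basis vector for the one-dimensional W-perp.
n = np.cross(w1, w2)
print(np.isclose(n @ w1, 0.0), np.isclose(n @ w2, 0.0))  # True True
```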
