
Ch 5:

5.5 Orthonormal Sets

1. a set {v1, v2, . . . , vn} is an orthogonal set if ⟨vi, vj⟩ = 0 for all 1 ≤ i ≠ j ≤ n (note that orthogonality only makes sense in an inner product space, since you need an inner product to check ⟨vi, vj⟩ = 0). For example, {e1, e2, . . . , en} is an orthogonal set in R^n.

2. {v1, v2, . . . , vn} an orthogonal set of nonzero vectors =⇒ v1, v2, . . . , vn are linearly independent.
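A quick numerical check of items 1 and 2, using NumPy (the three vectors below are an assumed example): pairwise-orthogonal nonzero vectors have a diagonal Gram matrix and are linearly independent.

```python
import numpy as np

# An assumed orthogonal set in R^3 (pairwise dot products are zero).
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 2.0])
V = np.column_stack([v1, v2, v3])

# Pairwise orthogonality: the Gram matrix V^T V is diagonal.
gram = V.T @ V
assert np.allclose(gram, np.diag(np.diag(gram)))

# Orthogonal nonzero vectors are linearly independent: V has full rank.
assert np.linalg.matrix_rank(V) == 3
```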

3. a set {v1, v2, . . . , vn} is an orthonormal set if ⟨vi, vj⟩ = 0 for all 1 ≤ i ≠ j ≤ n and ||vi|| = 1 for all 1 ≤ i ≤ n (i.e. orthogonal and of unit length). For example, {e1, e2, . . . , en} is an orthonormal set in R^n.

4. given an orthogonal basis for a vector space V, we can always find an orthonormal basis for V by dividing each vector by its length (see Examples 2 and 3, page 256)
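Item 4 as a sketch in NumPy (the orthogonal pair below is an assumed example): dividing each vector by its length turns an orthogonal basis into an orthonormal one.

```python
import numpy as np

# An assumed orthogonal (but not orthonormal) basis of R^2.
v1 = np.array([3.0, 4.0])
v2 = np.array([-4.0, 3.0])

# Normalize: divide each vector by its length.
u1 = v1 / np.linalg.norm(v1)
u2 = v2 / np.linalg.norm(v2)

assert np.isclose(u1 @ u2, 0.0)            # still orthogonal
assert np.isclose(np.linalg.norm(u1), 1.0)  # now unit length
assert np.isclose(np.linalg.norm(u2), 1.0)
```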

5. a space with an orthonormal basis behaves like R^n with the standard inner product (it is easier to work with bases that are orthonormal, just like it is easier to work with the standard basis versus other bases for R^n)

6. if v is a vector in an inner product space V with orthonormal basis {u1, u2, . . . , un}, then we can write v = Σ_{i=1}^n ci ui = Σ_{i=1}^n ⟨v, ui⟩ ui

7. an easy way to find the inner product of two vectors: if u = Σ_{i=1}^n ai ui and v = Σ_{i=1}^n bi ui, where {u1, u2, . . . , un} is an orthonormal basis for an inner product space V, then ⟨u, v⟩ = Σ_{i=1}^n ai bi

8. Parseval's Formula: if u = Σ_{i=1}^n ai ui, where {u1, u2, . . . , un} is an orthonormal basis for an inner product space V, then ||u||^2 = ⟨u, u⟩ = Σ_{i=1}^n ai^2 = Σ_{i=1}^n ⟨u, ui⟩^2 (where ai = ⟨u, ui⟩ by Thm 5.5.2)
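Items 6–8 can be verified numerically; a minimal sketch in NumPy, with an assumed orthonormal basis of R^2 (the standard basis rotated by 45 degrees):

```python
import numpy as np

# An assumed orthonormal basis of R^2.
u1 = np.array([1.0, 1.0]) / np.sqrt(2)
u2 = np.array([-1.0, 1.0]) / np.sqrt(2)

v = np.array([2.0, 5.0])

# Coordinates relative to an orthonormal basis are just inner products.
c1, c2 = v @ u1, v @ u2
assert np.allclose(v, c1 * u1 + c2 * u2)   # v = sum of <v, ui> ui

# Parseval: ||v||^2 equals the sum of squared coordinates.
assert np.isclose(v @ v, c1**2 + c2**2)
```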

9. an n × n matrix Q is said to be an orthogonal matrix if the column vectors of Q are orthonormal. (Note that the terminology "orthogonal" does not itself suggest that the vectors have length one, but the definition requires that they do.)

10. Q is orthogonal ⇐⇒ Q^T Q = I (equivalently QQ^T = I)

11. properties of orthogonal matrices Q:

(a) the columns of Q form an orthonormal basis for R^n

(b) Q^T Q = I = QQ^T

(c) Q^{-1} = Q^T (so it is very easy to compute Q^{-1} for orthogonal matrices)

(d) ⟨Qx, Qy⟩ = ⟨x, y⟩ (multiplication by an orthogonal matrix preserves inner products, and hence the angle between two vectors)

(e) ||Qx||^2 = ||x||^2 (multiplication by an orthogonal matrix preserves the length of vectors)
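A quick sanity check of these properties with NumPy, taking a rotation matrix as the (assumed) example of an orthogonal Q:

```python
import numpy as np

# A rotation matrix is orthogonal; the angle 0.3 is an arbitrary choice.
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

assert np.allclose(Q.T @ Q, np.eye(2))       # (b): Q^T Q = I
assert np.allclose(np.linalg.inv(Q), Q.T)    # (c): Q^{-1} = Q^T

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
assert np.isclose((Q @ x) @ (Q @ y), x @ y)  # (d): inner products preserved
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))  # (e): lengths too
```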

12. so all that an orthogonal matrix does to vectors is rotate and/or reflect them, all by the same amount: lengths and angles are left unchanged (it is an ideal linear transformation)

13. how do we solve Ax = y for x if A is an orthogonal matrix?

Ax = y
A^T Ax = A^T y   (multiply both sides by A^T)
x = A^T y        (since A^T A = I)
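Item 13 as a sketch (NumPy, with an assumed orthogonal A): solving Ax = y takes only a transpose and a matrix–vector product, no elimination.

```python
import numpy as np

# An assumed orthogonal A (a 90-degree rotation).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
y = np.array([3.0, 7.0])

x = A.T @ y              # x = A^T y, using A^T A = I
assert np.allclose(A @ x, y)
```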

14. If A is not orthogonal, then A can be decomposed via a QR-factorization A = QR, where Q is orthogonal and R is an upper triangular matrix; orthogonal matrices are easy to work with, and upper triangular ones are easy to use for solving (back substitution) -- see Section 5.6. Another alternative when A is not orthogonal is the SVD decomposition (Section 6.5).
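A sketch of item 14 using NumPy's built-in QR factorization (the matrix A below is an arbitrary example); np.linalg.solve is used here as a stand-in for an explicit back-substitution routine:

```python
import numpy as np

# An arbitrary non-orthogonal matrix and right-hand side.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
y = np.array([5.0, 10.0])

Q, R = np.linalg.qr(A)   # A = QR: Q orthogonal, R upper triangular
assert np.allclose(A, Q @ R)

# Ax = y  =>  QRx = y  =>  Rx = Q^T y, solvable by back substitution.
x = np.linalg.solve(R, Q.T @ y)
assert np.allclose(A @ x, y)
```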

15. a permutation matrix is a matrix obtained from I by reordering its columns (a permutation matrix that swaps just two rows is an elementary matrix of type I; a general permutation matrix is a product of such swaps).

16. permutation matrices are orthogonal matrices (and so P^{-1} = P^T), and P^{-1} is also an orthogonal matrix

17. if I3 has columns e1, e2, e3, and P is the permutation matrix that reorders the columns of I3 to e2, e1, e3, then we write

    P = [e2, e1, e3] =
        [0 1 0]
        [1 0 0]
        [0 0 1]

18. for an n × 3 matrix A = [a1, a2, a3] (with columns a1, a2, a3) and a 3 × n matrix B with rows b1, b2, b3, we have

AP = [Ae2, Ae1, Ae3] = [a2, a1, a3]

and

    PB = [e2^T B; e1^T B; e3^T B] =
        [b2]
        [b1]
        [b3]
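Items 17 and 18 can be checked numerically (NumPy; the A and B below are arbitrary examples): right multiplication by P permutes columns, left multiplication permutes rows.

```python
import numpy as np

# P reorders the columns of I3 to e2, e1, e3 (item 17).
I3 = np.eye(3)
P = I3[:, [1, 0, 2]]

# Permutation matrices are orthogonal: P^{-1} = P^T.
assert np.allclose(P.T @ P, np.eye(3))

A = np.arange(12.0).reshape(4, 3)   # 4 x 3, columns a1, a2, a3
B = np.arange(12.0).reshape(3, 4)   # 3 x 4, rows b1, b2, b3

assert np.allclose(A @ P, A[:, [1, 0, 2]])  # AP = [a2, a1, a3]
assert np.allclose(P @ B, B[[1, 0, 2], :])  # PB has rows b2, b1, b3
```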

19. feel free to read the end of the section; I will not be covering it in class.
