
Mathematics Department, Stanford University

Math 61CM/DM -- Orthogonal transformations

We introduced the notion of an orthogonal map.

Definition 1. Suppose $V, W$ are inner product spaces, and $O \in L(V, W)$. We say that $O$ is orthogonal if (i) $O$ is invertible, and (ii) for all $x, y \in V$, $\langle Ox, Oy \rangle_W = \langle x, y \rangle_V$.

The most interesting case will be $V = W$, but this definition makes sense in general. Note that orthogonal maps preserve inner products, which is why they are important in the inner product space setting.

Note that if $O \in L(V, W)$ satisfies (ii) only, then for all $x \in V$, $\|Ox\|_W^2 = \langle Ox, Ox \rangle_W = \langle x, x \rangle_V = \|x\|_V^2$, so $Ox = 0$ implies $x = 0$, thus $O$ is injective. Thus, assuming only (ii), the content of (i) is that $O$ is surjective. Now, if $V, W$ are finite dimensional, then $\dim V = \dim N(O) + \dim \mathrm{Ran}(O) = \dim \mathrm{Ran}(O)$ (since $O$ is injective, $N(O) = \{0\}$), and surjectivity is equivalent to $\dim \mathrm{Ran}\, O = \dim W$, so for $O \in L(V, W)$ satisfying (ii), $O$ is orthogonal if and only if $\dim V = \dim W$.

A particular example is linear maps $O \in L(\mathbb{R}^n, \mathbb{R}^m)$, in which case the equality of dimensions is $m = n$, i.e. $O$ is given (in the standard basis) by an $n \times n$ matrix. Thus, in agreement with the above definition, one calls an $n \times n$ matrix orthogonal if it satisfies (ii) above, as (i) is automatic then (by the equality of the dimensions of the domain and the target space).

Notice that if $O \in L(\mathbb{R}^n, \mathbb{R}^n)$ is orthogonal with respect to the standard inner product, and $e_1, \ldots, e_n$ is the standard basis, then for all $i, j$,
\[
Oe_i \cdot Oe_j = e_i \cdot e_j = \delta_{ij} =
\begin{cases}
1 & \text{if } i = j, \\
0 & \text{if } i \neq j.
\end{cases}
\]
Here one calls $\delta_{ij}$ the `Kronecker delta'. Thus, the columns of the matrix of $O$ (which are exactly the vectors $Oe_j$) are orthonormal.

Conversely, if $O \in L(\mathbb{R}^n, \mathbb{R}^n)$ has orthonormal columns, i.e. $Oe_i \cdot Oe_j = \delta_{ij}$ for all $i, j$, where $e_1, \ldots, e_n$ is the standard basis, then for any $x = \sum_{i=1}^n x_i e_i$, $y = \sum_{j=1}^n y_j e_j$,
\[
Ox \cdot Oy = \sum_{i=1}^n x_i Oe_i \cdot \sum_{j=1}^n y_j Oe_j = \sum_{i=1}^n \sum_{j=1}^n x_i y_j \, Oe_i \cdot Oe_j = \sum_{j=1}^n x_j y_j = x \cdot y,
\]
so $O$ is orthogonal. Thus, an $n \times n$ orthogonal matrix is exactly a matrix with orthonormal columns, in agreement with the textbook's definition in Section 3.5.

Recall that the transpose $A^T$ of a linear map $A \in L(V, W)$ (which we constructed in finite dimensional vector spaces $V, W$) has the property that
\[
\langle x, Ay \rangle_W = \langle A^T x, y \rangle_V
\]
for all $x \in W$, $y \in V$. Thus, for an orthogonal linear map $O \in L(V, W)$ we have
\[
\langle x, y \rangle_V = \langle Ox, Oy \rangle_W = \langle O^T O x, y \rangle_V
\]
for all $x, y \in V$. Thus, $O^T O$ is the identity map $I \in L(V, V)$. This can be seen by noticing that its matrix in an orthonormal basis $e_1, \ldots, e_n$ has $ij$ entry $\delta_{ij}$, i.e. the same as that of the identity map, or instead by noting that
\[
\langle (O^T O - I)x, y \rangle_V = \langle O^T O x, y \rangle_V - \langle x, y \rangle_V = 0
\]
using the previous displayed equation, so substituting in $y = (O^T O - I)x$ shows that $\|(O^T O - I)x\|^2 = 0$ for all $x \in V$, i.e. $(O^T O - I)x = 0$ for all $x \in V$, i.e. $O^T O - I = 0$, i.e. $O^T O = I$ as claimed.

Notice that conversely, if $O^T O = I$, one certainly has
\[
\langle x, y \rangle_V = \langle O^T O x, y \rangle_V = \langle Ox, Oy \rangle_W,
\]
so $O$ satisfies (ii) of the definition of an orthogonal map. As (ii) suffices to conclude that $O$ is injective, if $\dim V = \dim W$ we conclude that $O$ is orthogonal, as discussed above.

Now, if $O$ is orthogonal, then since $O$ is invertible, it has a unique left inverse, which is automatically a right inverse as well; so $O^T$, being a left inverse as shown above, is also a right inverse, i.e. $O O^T = I$ as well.
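As a concrete illustration of all of this, consider the rotation of $\mathbb{R}^2$ by an angle $\theta$ (with the standard inner product), given in the standard basis by
\[
O = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
\]
Its columns $Oe_1 = (\cos\theta, \sin\theta)^T$ and $Oe_2 = (-\sin\theta, \cos\theta)^T$ satisfy $Oe_1 \cdot Oe_1 = \cos^2\theta + \sin^2\theta = 1 = Oe_2 \cdot Oe_2$ and $Oe_1 \cdot Oe_2 = -\cos\theta \sin\theta + \sin\theta \cos\theta = 0$, so the columns are orthonormal, and indeed
\[
O^T O = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = O O^T,
\]
so $O$ is an orthogonal matrix.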
Finally, consider the change of orthonormal bases. So suppose $e_1, \ldots, e_n$ and $e_1', \ldots, e_n'$ are orthonormal bases of $V$, and let $C = (c_{ij})_{i,j=1}^n$ be the change of basis matrix, so
\[
e_j' = \sum_{i=1}^n c_{ij} e_i.
\]
Then
\[
\delta_{ij} = \langle e_i', e_j' \rangle_V = \Big\langle \sum_{k=1}^n c_{ki} e_k, \sum_{\ell=1}^n c_{\ell j} e_\ell \Big\rangle_V = \sum_{k=1}^n \sum_{\ell=1}^n c_{ki} c_{\ell j} \langle e_k, e_\ell \rangle_V = \sum_{k=1}^n c_{ki} c_{kj} = \sum_{k=1}^n (C^T)_{ik} C_{kj} = (C^T C)_{ij}.
\]
But this says that $C^T C$ is exactly the identity matrix, so $C^T C$ corresponds to the identity map, and hence $C$ is an orthogonal matrix by the definition above.
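For example, in $\mathbb{R}^2$ with the standard basis $e_1, e_2$, the rotated basis $e_1' = (\cos\theta)\, e_1 + (\sin\theta)\, e_2$, $e_2' = (-\sin\theta)\, e_1 + (\cos\theta)\, e_2$ is again orthonormal, and the change of basis matrix is
\[
C = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
\]
i.e. $c_{11} = \cos\theta$, $c_{21} = \sin\theta$, $c_{12} = -\sin\theta$, $c_{22} = \cos\theta$. One checks directly that $C^T C = I$, so $C$ is orthogonal, exactly as the general computation above predicts.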