
Real Inner Product Spaces and Orthogonal Transformations
Math 422

Definition 1 A vector space $V$ equipped with a real valued form $\langle\, ,\,\rangle : V \times V \to \mathbb{R}$ is a real inner product space if and only if the following conditions hold for all $a, b, c \in V$ and $t \in \mathbb{R}$:

1. $\langle a, b\rangle = \langle b, a\rangle$

2. $\langle a + b, c\rangle = \langle a, c\rangle + \langle b, c\rangle$

3. $\langle ta, b\rangle = \langle a, tb\rangle = t\,\langle a, b\rangle$

4. $\langle a, a\rangle \geq 0$, and $\langle a, a\rangle = 0$ if and only if $a = 0$.

The function $\langle\, ,\,\rangle$ on an inner product space $V$ is called an inner product on $V$. Real valued functions on a vector space are called forms. Axiom (1) says that the form $\langle\, ,\,\rangle$ is symmetric. Taken together, axioms (1), (2) and (3) tell us that $\langle\, ,\,\rangle$ is bilinear, i.e., for each fixed element $x \in V$, the forms $\langle x, \,\cdot\,\rangle : V \to \mathbb{R}$ and $\langle \,\cdot\,, x\rangle : V \to \mathbb{R}$ are linear. Axiom (4) says that $\langle\, ,\,\rangle$ is positive definite and non-degenerate. In these terms, we say that an inner product is a symmetric, bilinear, positive definite and non-degenerate form on $V \times V$.

Definition 2 Let $V$ be an inner product space. The norm on $V$ induced by the inner product is the form $\|\cdot\| : V \to \mathbb{R}$ given by $\|a\| = \sqrt{\langle a, a\rangle}$. The distance between two vectors $a, b \in V$ relative to the given inner product is defined by $d(a, b) = \|a - b\|$.
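To see that axiom (4) is not automatic, consider the symmetric bilinear form on $\mathbb{R}^2$ given by $\langle x, y\rangle = x_1 y_1 - x_2 y_2$: axioms (1)-(3) hold, but $\langle (0,1), (0,1)\rangle = -1 < 0$, so this form is not positive definite and hence is not an inner product.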

Definition 3 The Euclidean inner product (often called the "dot product") of two vectors $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ in $\mathbb{R}^n$ is defined by $\langle x, y\rangle = x_1 y_1 + \cdots + x_n y_n$. In this case we use the notation $x \bullet y$ to denote the inner product.
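For example, if $x = (1, 2, 2)$ and $y = (3, 0, -1)$ in $\mathbb{R}^3$, then $x \bullet y = (1)(3) + (2)(0) + (2)(-1) = 1$, the induced norm gives $\|x\| = \sqrt{1 + 4 + 4} = 3$, and the induced distance is $d(x, y) = \|x - y\| = \|(-2, 2, 3)\| = \sqrt{4 + 4 + 9} = \sqrt{17}$.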

Theorem 4 (The polarization identity) Let $x, y$ be elements of an inner product space $V$. Then
$$\langle x, y\rangle = \tfrac{1}{4}\,\|x + y\|^2 - \tfrac{1}{4}\,\|x - y\|^2. \qquad (1)$$
Proof.
$$\|x + y\|^2 = \langle x + y, x + y\rangle = \langle x, x\rangle + 2\langle x, y\rangle + \langle y, y\rangle = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2$$
and
$$\|x - y\|^2 = \langle x - y, x - y\rangle = \langle x, x\rangle - 2\langle x, y\rangle + \langle y, y\rangle = \|x\|^2 - 2\langle x, y\rangle + \|y\|^2.$$
Therefore
$$\tfrac{1}{4}\,\|x + y\|^2 - \tfrac{1}{4}\,\|x - y\|^2 = \tfrac{1}{4}\left(\|x\|^2 + 2\langle x, y\rangle + \|y\|^2\right) - \tfrac{1}{4}\left(\|x\|^2 - 2\langle x, y\rangle + \|y\|^2\right) = \langle x, y\rangle.$$
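As a quick check with the vectors $x = (1, 2, 2)$ and $y = (3, 0, -1)$ from the example above: $\|x + y\|^2 = \|(4, 2, 1)\|^2 = 21$ and $\|x - y\|^2 = \|(-2, 2, 3)\|^2 = 17$, so $\tfrac{1}{4}(21) - \tfrac{1}{4}(17) = 1 = x \bullet y$, in agreement with equation (1).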

Let $x = [\,x_1 \;\cdots\; x_n\,]^T$ and $y = [\,y_1 \;\cdots\; y_n\,]^T$ be coordinate matrices in the standard basis for $x, y \in \mathbb{R}^n$. Observe that the dot product can be expressed as a matrix product as follows:

$$x \bullet y = x_1 y_1 + \cdots + x_n y_n = [\,x_1 \;\cdots\; x_n\,]\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = x^T y.$$
So if $A$ is an $n \times n$ matrix,
$$Ax \bullet y = (Ax)^T y = x^T A^T y = x \bullet A^T y. \qquad (2)$$
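For instance, with $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, $x = (1, 1)$ and $y = (1, 0)$, we have $Ax = (3, 7)$, so $Ax \bullet y = 3$, while $A^T y = (1, 2)$ and $x \bullet A^T y = 1 + 2 = 3$, as equation (2) predicts.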

Definition 5 A square matrix $A$ is orthogonal if $A^T = A^{-1}$.
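For example, the $2 \times 2$ rotation matrix $A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ is orthogonal:
$$A^T A = \begin{bmatrix} \cos^2\theta + \sin^2\theta & 0 \\ 0 & \sin^2\theta + \cos^2\theta \end{bmatrix} = I,$$
so $A^T = A^{-1}$.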

Theorem 6 Let $A$ be an $n \times n$ matrix. The following statements are all equivalent:

1. $A$ is an orthogonal matrix.

2. $\|Ax\| = \|x\|$ for all $x \in \mathbb{R}^n$, i.e., multiplication by $A$ preserves the Euclidean norm.

3. $Ax \bullet Ay = x \bullet y$ for all $x, y \in \mathbb{R}^n$, i.e., multiplication by $A$ preserves the Euclidean inner product.

Proof. (1) $\Rightarrow$ (2). Let $x \in \mathbb{R}^n$. Using the fact that $A^T A = I$ and the identity in equation (2),
$$\|Ax\|^2 = Ax \bullet Ax = x \bullet A^T A x = x \bullet x = \|x\|^2.$$
(2) $\Rightarrow$ (3). Let $x, y \in \mathbb{R}^n$. By the polarization identity (1) and assumption (2),
$$Ax \bullet Ay = \tfrac{1}{4}\,\|Ax + Ay\|^2 - \tfrac{1}{4}\,\|Ax - Ay\|^2 = \tfrac{1}{4}\,\|A(x + y)\|^2 - \tfrac{1}{4}\,\|A(x - y)\|^2 = \tfrac{1}{4}\,\|x + y\|^2 - \tfrac{1}{4}\,\|x - y\|^2 = x \bullet y.$$
(3) $\Rightarrow$ (1). Let $x, y \in \mathbb{R}^n$. By assumption (3) and equation (2),
$$0 = Ax \bullet Ay - x \bullet y = x \bullet A^T A y - x \bullet I y = x \bullet \left(A^T A y - I y\right) = x \bullet \left(A^T A - I\right) y.$$
Since this holds for all $x \in \mathbb{R}^n$, it holds in particular when $x = \left(A^T A - I\right) y$. Thus
$$0 = \left(A^T A - I\right) y \bullet \left(A^T A - I\right) y = \left\|\left(A^T A - I\right) y\right\|^2,$$
in which case
$$\left(A^T A - I\right) y = 0.$$
This is a homogeneous system of linear equations satisfied by all $y \in \mathbb{R}^n$. In particular, it holds for $y = e_i$, in which case the $i$th column of $A^T A - I$ is $0$. Since this is true for all $i = 1, \ldots, n$, the matrix $A^T A - I = 0$ and $A$ is orthogonal.
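As an illustration of property (2), the rotation matrix $A$ above satisfies, for every $x = (x_1, x_2)$,
$$\|Ax\|^2 = (x_1\cos\theta - x_2\sin\theta)^2 + (x_1\sin\theta + x_2\cos\theta)^2 = x_1^2 + x_2^2 = \|x\|^2,$$
since the cross terms cancel and $\cos^2\theta + \sin^2\theta = 1$.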

Definition 7 A linear map $L : \mathbb{R}^n \to \mathbb{R}^n$ is an orthogonal transformation if and only if its matrix in the standard basis is an orthogonal matrix.

Example 8 Rotations about the origin and reflections in lines through the origin are orthogonal transformations of the plane.
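Concretely, rotation through angle $\theta$ has standard matrix $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$, and reflection in the line through the origin making angle $\theta$ with the positive $x$-axis has standard matrix $\begin{bmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix}$. In each case the columns are pairwise orthogonal unit vectors, so $A^T A = I$ and the map is an orthogonal transformation.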

Unlike properties such as the determinant, orthogonality of a matrix is to some extent basis dependent. Since the purpose of the Euclidean inner product is to measure Euclidean length and angle, it makes sense to study orthogonality relative to coordinate matrices of vectors in an orthonormal basis, i.e., a basis of pairwise orthogonal unit vectors. Consider $\mathbb{R}^n$ with its standard basis and the Euclidean inner product. Consider an orthonormal basis $\mathcal{B} = \{u_1, \ldots, u_n\}$ and construct the associated transition matrix $P$ from $\mathcal{B}$-coordinates to standard coordinates:

$$P = [\,u_1 \;|\; \cdots \;|\; u_n\,].$$

Then by orthonormality of the $u_i$'s we have

$$P^T P = \begin{bmatrix} u_1^T \\ u_2^T \\ \vdots \\ u_n^T \end{bmatrix} [\,u_1 \;|\; \cdots \;|\; u_n\,] = \begin{bmatrix} u_1^T u_1 & u_1^T u_2 & \cdots & u_1^T u_n \\ u_2^T u_1 & u_2^T u_2 & \cdots & u_2^T u_n \\ \vdots & \vdots & \ddots & \vdots \\ u_n^T u_1 & u_n^T u_2 & \cdots & u_n^T u_n \end{bmatrix} = I,$$
so that $P^{-1} = P^T$; thus $P$ and $P^{-1}$ are orthogonal matrices. By Theorem 6, multiplication by $P$ and $P^{-1}$ preserves the Euclidean inner product. Therefore, if $L$ is an orthogonal transformation and $P^{-1}$ is the orthogonal transition matrix from the standard basis to the orthonormal basis $\mathcal{B}$, then
$$[L]_{\mathcal{B}}\,[L]_{\mathcal{B}}^T = \left(P^T [L]\, P\right)\left(P^T [L]\, P\right)^T = P^T [L]\, P P^T [L]^T P = P^T [L]\,[L]^T P = P^T P = I,$$
which proves that $[L]_{\mathcal{B}}$ is orthogonal. In summary:

Theorem 9 The matrix of an orthogonal transformation in any orthonormal basis is an orthogonal matrix.
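For a concrete instance in $\mathbb{R}^2$, take the orthonormal basis $\mathcal{B} = \{u_1, u_2\}$ with $u_1 = \tfrac{1}{\sqrt{2}}(1, 1)$ and $u_2 = \tfrac{1}{\sqrt{2}}(-1, 1)$. Then
$$P = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad P^T P = \tfrac{1}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} = \tfrac{1}{2}\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} = I,$$
so $P$ is orthogonal, as claimed.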

Corollary 10 If $L : \mathbb{R}^n \to \mathbb{R}^n$ is an orthogonal transformation, then
$$\det(L) = \pm 1.$$
Proof. Since the standard basis is an orthonormal basis, $[L]$ is an orthogonal matrix. Furthermore, $\det(A) = \det\left(A^T\right)$ and $\det(AB) = \det A \,\det B$ for all square matrices $A$ and $B$. Therefore
$$[\det(L)]^2 = \det([L])\,\det\left([L]^T\right) = \det\left([L]\,[L]^T\right) = \det(I) = 1.$$
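For the plane maps of Example 8, rotation through $\theta$ has determinant $\cos^2\theta + \sin^2\theta = 1$, while reflection in a line through the origin has determinant $-\cos^2 2\theta - \sin^2 2\theta = -1$, so both possible values $\pm 1$ occur.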
