CHAPTER 5 INNER PRODUCT SPACES

5.1 Length and Dot Product in Rⁿ
5.2 Inner Product Spaces
5.3 Orthonormal Bases: Gram-Schmidt Process
5.4 Mathematical Models and Least Squares Analysis
5.5 Applications of Inner Product Spaces

Elementary Linear Algebra, R. Larson (8th Edition)
Slides prepared by Prof. 翁慶昌, Department of Electrical Engineering, Tamkang University (淡江大學)

CH 5 Linear Algebra Applied

Electric/Magnetic Flux (p.240) Heart Rhythm Analysis (p.255)

Work (p.248)

Revenue (p.266), Torque (p.277)

5.1 Length and Dot Product in Rⁿ

Length:
The length of a vector v = (v₁, v₂, …, vₙ) in Rⁿ is given by

||v|| = √(v₁² + v₂² + … + vₙ²)

 Notes: The length of a vector is also called its norm.

Notes: (Properties of length)
(1) ||v|| ≥ 0
(2) If ||v|| = 1, then v is called a unit vector.
(3) ||v|| = 0 iff v = 0
(4) ||cv|| = |c| ||v||

Elementary Linear Algebra: Section 5.1, p.232

Ex 1:
(a) In R⁵, the length of v = (0, −2, 1, 4, −2) is given by

||v|| = √(0² + (−2)² + 1² + 4² + (−2)²) = √25 = 5

(b) In R³ the length of v = (2/√17, −2/√17, 3/√17) is given by

||v|| = √((2/√17)² + (−2/√17)² + (3/√17)²) = √(17/17) = 1

(v is a unit vector)

A standard unit vector in Rⁿ:

{e₁, e₂, …, eₙ} = {(1, 0, …, 0), (0, 1, …, 0), …, (0, 0, …, 1)}

Ex: the standard unit vectors in R²: {i, j} = {(1, 0), (0, 1)}

the standard unit vectors in R³: {i, j, k} = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}

Notes: (Two nonzero vectors are parallel) u = cv
(1) c > 0 ⇒ u and v have the same direction
(2) c < 0 ⇒ u and v have the opposite direction

Thm 5.1: (Length of a scalar multiple)
Let v be a vector in Rⁿ and c be a scalar. Then ||cv|| = |c| ||v||.
Pf:
v = (v₁, v₂, …, vₙ)
⇒ cv = (cv₁, cv₂, …, cvₙ)
||cv|| = ||(cv₁, cv₂, …, cvₙ)||
= √((cv₁)² + (cv₂)² + … + (cvₙ)²)
= √(c²(v₁² + v₂² + … + vₙ²))
= |c| √(v₁² + v₂² + … + vₙ²)
= |c| ||v||

Thm 5.2: (Unit vector in the direction of v)
If v is a nonzero vector in Rⁿ, then the vector u = v/||v|| has length 1 and has the same direction as v. This vector u is called the unit vector in the direction of v.
Pf:
v is nonzero ⇒ v ≠ 0 ⇒ ||v|| > 0 ⇒ 1/||v|| > 0
u = (1/||v||) v  (u has the same direction as v)
||u|| = ||(1/||v||) v|| = (1/||v||) ||v|| = 1  (u has length 1)

Notes:
(1) The vector v/||v|| is called the unit vector in the direction of v.

(2) The process of finding the unit vector in the direction of v is called normalizing the vector v.

Ex 2: (Finding a unit vector)
Find the unit vector in the direction of v = (3, −1, 2), and verify that this vector has length 1.
Sol:
v = (3, −1, 2) ⇒ ||v|| = √(3² + (−1)² + 2²) = √14

v/||v|| = (1/√14)(3, −1, 2) = (3/√14, −1/√14, 2/√14)

||v/||v|| || = √(9/14 + 1/14 + 4/14) = √(14/14) = 1
⇒ v/||v|| is a unit vector.

Distance between two vectors:
The distance between two vectors u and v in Rⁿ is

d(u, v) = ||u − v||

Notes: (Properties of distance)
(1) d(u, v) ≥ 0
(2) d(u, v) = 0 if and only if u = v
(3) d(u, v) = d(v, u)

Ex 3: (Finding the distance between two vectors)
The distance between u = (0, 2, 2) and v = (2, 0, 1) is

d(u, v) = ||u − v|| = ||(0 − 2, 2 − 0, 2 − 1)|| = √((−2)² + 2² + 1²) = 3

Dot product in Rⁿ:

The dot product of u = (u₁, u₂, …, uₙ) and v = (v₁, v₂, …, vₙ) is the scalar quantity

u·v = u₁v₁ + u₂v₂ + … + uₙvₙ

 Ex 4: (Finding the dot product of two vectors) The dot product of u=(1, 2, 0, -3) and v=(3, -2, 4, 2) is

u  v  (1)(3)  (2)(2)  (0)(4)  (3)(2)  7

Thm 5.3: (Properties of the dot product)
If u, v, and w are vectors in Rⁿ and c is a scalar, then the following properties are true.
(1) u·v = v·u
(2) u·(v + w) = u·v + u·w
(3) c(u·v) = (cu)·v = u·(cv)
(4) v·v = ||v||²
(5) v·v ≥ 0, and v·v = 0 if and only if v = 0

Euclidean n-space:
Rⁿ was defined to be the set of all ordered n-tuples of real numbers. When Rⁿ is combined with the standard operations of vector addition, scalar multiplication, vector length, and the dot product, the resulting vector space is called Euclidean n-space.

Ex 5: (Finding dot products)

u = (2, −2), v = (5, 8), w = (−4, 3)

(a) u·v  (b) (u·v)w  (c) u·(2v)  (d) ||w||²  (e) u·(v − 2w)
Sol:
(a) u·v = (2)(5) + (−2)(8) = −6
(b) (u·v)w = −6w = −6(−4, 3) = (24, −18)
(c) u·(2v) = 2(u·v) = 2(−6) = −12
(d) ||w||² = w·w = (−4)(−4) + (3)(3) = 25
(e) v − 2w = (5 − (−8), 8 − 6) = (13, 2)
u·(v − 2w) = (2)(13) + (−2)(2) = 26 − 4 = 22

Ex 6: (Using the properties of the dot product)
Given u·u = 39, u·v = −3, v·v = 79, find (u + 2v)·(3u + v).

Sol:
(u + 2v)·(3u + v) = u·(3u + v) + 2v·(3u + v)
= u·(3u) + u·v + (2v)·(3u) + (2v)·v
= 3(u·u) + u·v + 6(v·u) + 2(v·v)

= 3(u·u) + 7(u·v) + 2(v·v)

= 3(39) + 7(−3) + 2(79) = 254

Thm 5.4: (The Cauchy-Schwarz inequality)
If u and v are vectors in Rⁿ, then |u·v| ≤ ||u|| ||v||
(|u·v| denotes the absolute value of u·v)

Ex 7: (An example of the Cauchy-Schwarz inequality)
Verify the Cauchy-Schwarz inequality for u = (1, −1, 3) and v = (2, 0, −1).
Sol:
u·v = −1, u·u = 11, v·v = 5
|u·v| = |−1| = 1
||u|| ||v|| = √(u·u) √(v·v) = √11 √5 = √55
⇒ |u·v| ≤ ||u|| ||v||
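Ex 7 can also be verified numerically; a small sketch in plain Python (helper name is ours):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = [1, -1, 3], [2, 0, -1]
lhs = abs(dot(u, v))                                 # |u·v| = 1
rhs = math.sqrt(dot(u, u)) * math.sqrt(dot(v, v))    # sqrt(11)*sqrt(5) = sqrt(55)
print(lhs <= rhs)   # True: Cauchy-Schwarz holds
```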

The angle between two nonzero vectors in Rⁿ:

cos θ = (u·v) / (||u|| ||v||),  0 ≤ θ ≤ π

θ = π (opposite direction): u·v < 0, cos θ = −1
π/2 < θ < π: u·v < 0, −1 < cos θ < 0
θ = π/2: u·v = 0, cos θ = 0
0 < θ < π/2: u·v > 0, 0 < cos θ < 1
θ = 0 (same direction): u·v > 0, cos θ = 1

 Note: The angle between the zero vector and another vector is not defined.

Ex 8: (Finding the angle between two vectors)
u = (−4, 0, 2, −2), v = (2, 0, −1, 1)
Sol:
||u|| = √(u·u) = √((−4)² + 0² + 2² + (−2)²) = √24

||v|| = √(v·v) = √(2² + 0² + (−1)² + 1²) = √6
u·v = (−4)(2) + (0)(0) + (2)(−1) + (−2)(1) = −12
cos θ = (u·v)/(||u|| ||v||) = −12/(√24 √6) = −12/√144 = −1

⇒ θ = π ⇒ u and v have opposite directions. (In fact, u = −2v.)
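The angle formula from Ex 8 translates directly into code; a sketch in plain Python (the clamp guards against tiny rounding drift beyond ±1 before `acos`):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def angle(u, v):
    """Angle between nonzero u and v: cos θ = u·v / (||u|| ||v||)."""
    c = dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp for rounding safety

# Ex 8: u = -2v, so the angle is π (opposite directions)
theta = angle([-4, 0, 2, -2], [2, 0, -1, 1])
print(theta)   # ≈ π
```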

Orthogonal vectors:
Two vectors u and v in Rⁿ are orthogonal if u·v = 0

 Note: The vector 0 is said to be orthogonal to every vector.

Ex 10: (Finding orthogonal vectors)
Determine all vectors in R² that are orthogonal to u = (4, 2).
Sol:

u = (4, 2). Let v = (v₁, v₂)

⇒ u·v = (4, 2)·(v₁, v₂) = 4v₁ + 2v₂ = 0

⇒ v₁ = −t/2, v₂ = t
⇒ v = (−t/2, t),  t ∈ R


Thm 5.5: (The triangle inequality)
If u and v are vectors in Rⁿ, then ||u + v|| ≤ ||u|| + ||v||.
Pf:
||u + v||² = (u + v)·(u + v)
= u·(u + v) + v·(u + v)
= u·u + 2(u·v) + v·v
= ||u||² + 2(u·v) + ||v||²
≤ ||u||² + 2|u·v| + ||v||²
≤ ||u||² + 2||u|| ||v|| + ||v||²  (Cauchy-Schwarz)
= (||u|| + ||v||)²
⇒ ||u + v|| ≤ ||u|| + ||v||

 Note: Equality occurs in the triangle inequality if and only if the vectors u and v have the same direction.

Thm 5.6: (The Pythagorean theorem)
If u and v are vectors in Rⁿ, then u and v are orthogonal if and only if

||u + v||² = ||u||² + ||v||²

Dot product and matrix multiplication:

A vector u = (u₁, u₂, …, uₙ) in Rⁿ is represented as an n×1 column matrix:
u = [u₁; u₂; …; uₙ],  v = [v₁; v₂; …; vₙ]

Then
u·v = uᵀv = [u₁ u₂ … uₙ][v₁; v₂; …; vₙ] = [u₁v₁ + u₂v₂ + … + uₙvₙ]

Key Learning in Section 5.1

▪ Find the length of a vector and find a unit vector.
▪ Find the distance between two vectors.
▪ Find a dot product and the angle between two vectors, determine orthogonality, and verify the Cauchy-Schwarz inequality, the triangle inequality, and the Pythagorean theorem.
▪ Use a matrix product to represent a dot product.

Keywords in Section 5.1

 length: 長度

 norm: 範數

 unit vector: 單位向量

 standard unit vector : 標準單位向量

 normalizing: 單範化

 distance: 距離

 dot product: 點積

 Euclidean n-space: 歐基里德n維空間

 Cauchy-Schwarz inequality: 科西-舒瓦茲不等式

 angle: 夾角

 triangle inequality: 三角不等式

 Pythagorean theorem: 畢氏定理

5.2 Inner Product Spaces

Inner product:
Let u, v, and w be vectors in a vector space V, and let c be any scalar. An inner product on V is a function that associates a real number 〈u, v〉 with each pair of vectors u and v and satisfies the following axioms.

(1) 〈u, v〉 = 〈v, u〉
(2) 〈u, v + w〉 = 〈u, v〉 + 〈u, w〉
(3) c〈u, v〉 = 〈cu, v〉
(4) 〈v, v〉 ≥ 0, and 〈v, v〉 = 0 if and only if v = 0

Note:
u·v = dot product (Euclidean inner product for Rⁿ)
〈u, v〉 = general inner product for a vector space V

 Note: A vector space V with an inner product is called an inner product space.

Vector space: V , ,  Inner product space: V , , , , 

Ex 1: (The Euclidean inner product for Rⁿ)
Show that the dot product in Rⁿ satisfies the four axioms of an inner product.
Sol:

u = (u₁, u₂, …, uₙ), v = (v₁, v₂, …, vₙ)

〈u, v〉 = u·v = u₁v₁ + u₂v₂ + … + uₙvₙ

By Theorem 5.3, this dot product satisfies the required four axioms. Thus it is an inner product on Rn.

Ex 2: (A different inner product for R²)
Show that the function

〈u, v〉 = u₁v₁ + 2u₂v₂

defines an inner product on R², where u = (u₁, u₂) and v = (v₁, v₂).
Sol:

(a) 〈u, v〉 = u₁v₁ + 2u₂v₂ = v₁u₁ + 2v₂u₂ = 〈v, u〉

(b) Let w = (w₁, w₂). Then

〈u, v + w〉 = u₁(v₁ + w₁) + 2u₂(v₂ + w₂)

= u₁v₁ + u₁w₁ + 2u₂v₂ + 2u₂w₂

= (u₁v₁ + 2u₂v₂) + (u₁w₁ + 2u₂w₂) = 〈u, v〉 + 〈u, w〉

(c) c〈u, v〉 = c(u₁v₁ + 2u₂v₂) = (cu₁)v₁ + 2(cu₂)v₂ = 〈cu, v〉
(d) 〈v, v〉 = v₁² + 2v₂² ≥ 0
〈v, v〉 = 0 ⇒ v₁² + 2v₂² = 0 ⇒ v₁ = v₂ = 0 (v = 0)

n  Note: (An inner product on R )

〈u , v〉 c1u1v1  c2u2v2 cnunvn , ci  0

Ex 3: (A function that is not an inner product)
Show that the following function is not an inner product on R³:

〈u, v〉 = u₁v₁ − 2u₂v₂ + u₃v₃
Sol: Let v = (1, 2, 1). Then 〈v, v〉 = (1)(1) − 2(2)(2) + (1)(1) = −6 < 0

Axiom 4 is not satisfied. Thus this function is not an inner product on R3.

Thm 5.7: (Properties of inner products)
Let u, v, and w be vectors in an inner product space V, and let c be any real number.
(1) 〈0, v〉 = 〈v, 0〉 = 0
(2) 〈u + v, w〉 = 〈u, w〉 + 〈v, w〉
(3) 〈u, cv〉 = c〈u, v〉

Norm (length) of u:

||u|| = √〈u, u〉

Note: ||u||² = 〈u, u〉

Distance between u and v:
d(u, v) = ||u − v|| = √〈u − v, u − v〉

Angle between two nonzero vectors u and v:
cos θ = 〈u, v〉 / (||u|| ||v||),  0 ≤ θ ≤ π

Orthogonal (u ⊥ v):

u and v are orthogonal if 〈u, v〉 = 0.

Notes:
(1) If ||v|| = 1, then v is called a unit vector.
(2) If v ≠ 0 is not a unit vector, then normalizing v gives v/||v||, the unit vector in the direction of v.

Ex 6: (Finding inner products)

〈p, q〉 = a₀b₀ + a₁b₁ + … + aₙbₙ is an inner product on Pₙ.
Let p(x) = 1 − 2x², q(x) = 4 − 2x + x² be in P₂.
(a) 〈p, q〉 = ?  (b) ||q|| = ?  (c) d(p, q) = ?
Sol:
(a) 〈p, q〉 = (1)(4) + (0)(−2) + (−2)(1) = 2
(b) ||q|| = √〈q, q〉 = √(4² + (−2)² + 1²) = √21
(c) p − q = −3 + 2x − 3x²
d(p, q) = ||p − q|| = √〈p − q, p − q〉 = √((−3)² + 2² + (−3)²) = √22

Properties of norm:
(1) ||u|| ≥ 0
(2) ||u|| = 0 if and only if u = 0
(3) ||cu|| = |c| ||u||

Properties of distance:
(1) d(u, v) ≥ 0
(2) d(u, v) = 0 if and only if u = v
(3) d(u, v) = d(v, u)

Thm 5.8:
Let u and v be vectors in an inner product space V.
(1) Cauchy-Schwarz inequality: |〈u, v〉| ≤ ||u|| ||v||  (cf. Theorem 5.4)
(2) Triangle inequality: ||u + v|| ≤ ||u|| + ||v||  (cf. Theorem 5.5)
(3) Pythagorean theorem: u and v are orthogonal if and only if

||u + v||² = ||u||² + ||v||²  (cf. Theorem 5.6)

Orthogonal projections in inner product spaces:
Let u and v be two vectors in an inner product space V, such that v ≠ 0. Then the orthogonal projection of u onto v is given by

proj_v u = (〈u, v〉 / 〈v, v〉) v

Note: If v is a unit vector, then 〈v, v〉 = ||v||² = 1, and the formula for the orthogonal projection of u onto v takes the simpler form

proj_v u = 〈u, v〉 v

Ex 10: (Finding an orthogonal projection in R³)
Use the Euclidean inner product in R³ to find the orthogonal projection of u = (6, 2, 4) onto v = (1, 2, 0).

Sol:
〈u, v〉 = (6)(1) + (2)(2) + (4)(0) = 10
〈v, v〉 = 1² + 2² + 0² = 5
proj_v u = (〈u, v〉 / 〈v, v〉) v = (10/5)(1, 2, 0) = (2, 4, 0)

Note:

u − proj_v u = (6, 2, 4) − (2, 4, 0) = (4, −2, 4) is orthogonal to v = (1, 2, 0).

Thm 5.9: (Orthogonal projection and distance)
Let u and v be two vectors in an inner product space V, such that v ≠ 0. Then

d(u, proj_v u) < d(u, cv)  for c ≠ 〈u, v〉/〈v, v〉

Key Learning in Section 5.2

▪ Determine whether a function defines an inner product, and find the inner product of two vectors in Rⁿ, Mₘ,ₙ, Pₙ, and C[a, b].
▪ Find an orthogonal projection of a vector onto another vector in an inner product space.

Keywords in Section 5.2

 inner product: 內積

 inner product space: 內積空間

 norm: 範數

 distance: 距離

 angle: 夾角

 orthogonal: 正交

 unit vector: 單位向量

 normalizing: 單範化

 Cauchy – Schwarz inequality: 科西 - 舒瓦茲不等式

 triangle inequality: 三角不等式

 Pythagorean theorem: 畢氏定理

 orthogonal projection: 正交投影

5.3 Orthonormal Bases: Gram-Schmidt Process

Orthogonal:
A set S of vectors in an inner product space V is called an orthogonal set if every pair of vectors in the set is orthogonal:

S = {v₁, v₂, …, vₙ} ⊆ V,  〈vᵢ, vⱼ〉 = 0 for i ≠ j

Orthonormal:
An orthogonal set in which each vector is a unit vector is called orthonormal:

S = {v₁, v₂, …, vₙ} ⊆ V,  〈vᵢ, vⱼ〉 = 1 if i = j, and 0 if i ≠ j

Note: If S is a basis, then it is called an orthogonal basis or an orthonormal basis.

Ex 1: (A nonstandard orthonormal basis for R³)
Show that the following set is an orthonormal basis for R³.

S = {v₁, v₂, v₃} = {(1/√2, 1/√2, 0), (−√2/6, √2/6, 2√2/3), (2/3, −2/3, 1/3)}

Sol: Show that the three vectors are mutually orthogonal.

v₁·v₂ = −1/6 + 1/6 + 0 = 0
v₁·v₃ = 2/(3√2) − 2/(3√2) + 0 = 0
v₂·v₃ = −√2/9 − √2/9 + 2√2/9 = 0

Show that each vector is of length 1.

||v₁|| = √(v₁·v₁) = √(1/2 + 1/2 + 0) = 1

||v₂|| = √(v₂·v₂) = √(2/36 + 2/36 + 8/9) = 1

||v₃|| = √(v₃·v₃) = √(4/9 + 4/9 + 1/9) = 1

Thus S is an orthonormal set.

Ex 2: (An orthonormal basis for P₂)

In P₂, with the inner product

〈p, q〉 = a₀b₀ + a₁b₁ + a₂b₂

the standard basis B = {1, x, x²} is orthonormal.

Sol:
v₁ = 1 + 0x + 0x², v₂ = 0 + x + 0x², v₃ = 0 + 0x + x²
Then

〈v₁, v₂〉 = (1)(0) + (0)(1) + (0)(0) = 0

〈v₁, v₃〉 = (1)(0) + (0)(0) + (0)(1) = 0

〈v₂, v₃〉 = (0)(0) + (1)(0) + (0)(1) = 0

||v₁|| = √〈v₁, v₁〉 = √((1)(1) + (0)(0) + (0)(0)) = 1

||v₂|| = √〈v₂, v₂〉 = √((0)(0) + (1)(1) + (0)(0)) = 1

||v₃|| = √〈v₃, v₃〉 = √((0)(0) + (0)(0) + (1)(1)) = 1

Thm 5.10: (Orthogonal sets are linearly independent)

If S = {v₁, v₂, …, vₙ} is an orthogonal set of nonzero vectors in an inner product space V, then S is linearly independent.
Pf: S is an orthogonal set of nonzero vectors,

i.e. 〈vᵢ, vⱼ〉 = 0 for i ≠ j, and 〈vᵢ, vᵢ〉 ≠ 0

Let c₁v₁ + c₂v₂ + … + cₙvₙ = 0. Then for each i,

〈c₁v₁ + c₂v₂ + … + cₙvₙ, vᵢ〉 = 〈0, vᵢ〉 = 0

⇒ c₁〈v₁, vᵢ〉 + c₂〈v₂, vᵢ〉 + … + cᵢ〈vᵢ, vᵢ〉 + … + cₙ〈vₙ, vᵢ〉

= cᵢ〈vᵢ, vᵢ〉 = 0

Since 〈vᵢ, vᵢ〉 ≠ 0, cᵢ = 0 for all i, so S is linearly independent.

Corollary to Thm 5.10:
If V is an inner product space of dimension n, then any orthogonal set of n nonzero vectors is a basis for V.

Ex 4: (Using orthogonality to test for a basis)
Show that the following set is a basis for R⁴:

S = {v₁, v₂, v₃, v₄} = {(2, 3, 2, −2), (1, 0, 0, 1), (−1, 0, 2, 1), (−1, 2, −1, 1)}
Sol:

v₁, v₂, v₃, v₄ are nonzero vectors.

v₁·v₂ = 2 + 0 + 0 − 2 = 0    v₂·v₃ = −1 + 0 + 0 + 1 = 0

v₁·v₃ = −2 + 0 + 4 − 2 = 0   v₂·v₄ = −1 + 0 + 0 + 1 = 0

v₁·v₄ = −2 + 6 − 2 − 2 = 0   v₃·v₄ = 1 + 0 − 2 + 1 = 0
⇒ S is orthogonal.
⇒ S is a basis for R⁴ (by the Corollary to Theorem 5.10)

Thm 5.11: (Coordinates relative to an orthonormal basis)

If B = {v₁, v₂, …, vₙ} is an orthonormal basis for an inner product space V, then the coordinate representation of a vector w with respect to B is

w = 〈w, v₁〉v₁ + 〈w, v₂〉v₂ + … + 〈w, vₙ〉vₙ
Pf: B is a basis for V, so for w ∈ V there is a unique representation

w = k₁v₁ + k₂v₂ + … + kₙvₙ

B = {v₁, v₂, …, vₙ} is orthonormal, so 〈vᵢ, vⱼ〉 = 1 if i = j and 0 if i ≠ j, and

〈w, vᵢ〉 = 〈k₁v₁ + k₂v₂ + … + kₙvₙ, vᵢ〉

= k₁〈v₁, vᵢ〉 + … + kᵢ〈vᵢ, vᵢ〉 + … + kₙ〈vₙ, vᵢ〉

= kᵢ for each i

⇒ w = 〈w, v₁〉v₁ + 〈w, v₂〉v₂ + … + 〈w, vₙ〉vₙ

Note:

If B = {v₁, v₂, …, vₙ} is an orthonormal basis for V and w ∈ V, then the corresponding coordinate matrix of w relative to B is

[w]_B = [〈w, v₁〉; 〈w, v₂〉; …; 〈w, vₙ〉]

Ex 5: (Representing vectors relative to an orthonormal basis)
Find the coordinates of w = (5, −5, 2) relative to the following orthonormal basis for R³:

B = {(3/5, 4/5, 0), (−4/5, 3/5, 0), (0, 0, 1)}
Sol:

〈w, v₁〉 = w·v₁ = (5, −5, 2)·(3/5, 4/5, 0) = −1
〈w, v₂〉 = w·v₂ = (5, −5, 2)·(−4/5, 3/5, 0) = −7

〈w, v₃〉 = w·v₃ = (5, −5, 2)·(0, 0, 1) = 2

[w]_B = [−1; −7; 2]

Gram-Schmidt orthonormalization process:

Let B = {u₁, u₂, …, uₙ} be a basis for an inner product space V.

v₁ = u₁,  W₁ = span({v₁})
v₂ = u₂ − proj_{W₁} u₂ = u₂ − (〈u₂, v₁〉/〈v₁, v₁〉) v₁,  W₂ = span({v₁, v₂})
v₃ = u₃ − proj_{W₂} u₃ = u₃ − (〈u₃, v₁〉/〈v₁, v₁〉) v₁ − (〈u₃, v₂〉/〈v₂, v₂〉) v₂
⋮
vₙ = uₙ − proj_{Wₙ₋₁} uₙ = uₙ − Σᵢ₌₁ⁿ⁻¹ (〈uₙ, vᵢ〉/〈vᵢ, vᵢ〉) vᵢ

⇒ B' = {v₁, v₂, …, vₙ} is an orthogonal basis.
⇒ B'' = {v₁/||v₁||, v₂/||v₂||, …, vₙ/||vₙ||} is an orthonormal basis.

Ex 7: (Applying the Gram-Schmidt orthonormalization process)
Apply the Gram-Schmidt process to the following basis.

B = {u₁, u₂, u₃} = {(1, 1, 0), (1, 2, 0), (0, 1, 2)}

Sol: v₁ = u₁ = (1, 1, 0)

v₂ = u₂ − (u₂·v₁ / v₁·v₁) v₁ = (1, 2, 0) − (3/2)(1, 1, 0) = (−1/2, 1/2, 0)

v₃ = u₃ − (u₃·v₁ / v₁·v₁) v₁ − (u₃·v₂ / v₂·v₂) v₂
= (0, 1, 2) − (1/2)(1, 1, 0) − ((1/2)/(1/2))(−1/2, 1/2, 0) = (0, 0, 2)

Orthogonal basis:
B' = {v₁, v₂, v₃} = {(1, 1, 0), (−1/2, 1/2, 0), (0, 0, 2)}

Orthonormal basis:
B'' = {v₁/||v₁||, v₂/||v₂||, v₃/||v₃||} = {(1/√2, 1/√2, 0), (−1/√2, 1/√2, 0), (0, 0, 1)}
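The process in Ex 7 can be sketched as a short routine (plain Python, Euclidean inner product; the function name is ours, not from the text):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(basis):
    """Orthogonalize a list of basis vectors, then normalize each one."""
    ortho = []
    for u in basis:
        w = list(u)
        for v in ortho:                      # subtract projections of u onto earlier v's
            c = dot(u, v) / dot(v, v)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        ortho.append(w)
    return [[x / math.sqrt(dot(w, w)) for x in w] for w in ortho]

# Ex 7: B = {(1,1,0), (1,2,0), (0,1,2)}
B2 = gram_schmidt([[1, 1, 0], [1, 2, 0], [0, 1, 2]])
print(B2[2])   # [0.0, 0.0, 1.0], matching v3/||v3||
```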

Ex 10: (Alternative form of the Gram-Schmidt orthonormalization process)
Find an orthonormal basis for the solution space of the homogeneous system of linear equations

x₁ + x₂ + 7x₄ = 0

2x₁ + x₂ + 2x₃ + 6x₄ = 0
Sol:

[1 1 0 7 | 0]  Gauss-Jordan  [1 0 2 −1 | 0]
[2 1 2 6 | 0]  —————————→    [0 1 −2 8 | 0]

⇒ x₁ = −2s + t, x₂ = 2s − 8t, x₃ = s, x₄ = t
(x₁, x₂, x₃, x₄) = s(−2, 2, 1, 0) + t(1, −8, 0, 1)

Thus one basis for the solution space is

B  {v1 , v2}  {(2 , 2 ,1, 0) , (1, 8 , 0 ,1)}

v1  u1   2, 2, 1, 0

u2 , v1 18 v2  u2  v1  1, 8, 0, 1  2, 2, 1, 0 v1, v1 9   3,  4, 2, 1  B'  2,2,1,0  3,4,2,1 (orthogonal basis)

  2 2 1    3  4 2 1   B''  , , ,0 ,  , , ,   3 3 3   30 30 30 30  (orthonormal basis)

Key Learning in Section 5.3

▪ Show that a set of vectors is orthogonal and forms an orthonormal basis, and represent a vector relative to an orthonormal basis. ▪ Apply the Gram-Schmidt orthonormalization process.

Keywords in Section 5.3

 orthogonal set: 正交集合

 orthonormal set: 單範正交集合

 orthogonal basis: 正交基底

 orthonormal basis: 單範正交基底

 linearly independent: 線性獨立

 Gram-Schmidt Process: Gram-Schmidt過程

5.4 Mathematical Models and Least Squares Analysis

 Orthogonal subspaces:

The subspaces W1 and W2 of an inner product space V are orthogonal

if v1 , v2   0 for all v1 in W1 and all v2 in W2 .

Ex 2: (Orthogonal subspaces)

The subspaces W₁ = span({(1, 0, 1), (1, 1, 0)}) and W₂ = span({(−1, 1, 1)})

are orthogonal because 〈v₁, v₂〉 = 0 for any vector v₁ in W₁ and any vector v₂ in W₂.

Orthogonal complement of W:
Let W be a subspace of an inner product space V.
(a) A vector u in V is said to be orthogonal to W if u is orthogonal to every vector in W.
(b) The set of all vectors in V that are orthogonal to every vector in W is called the orthogonal complement of W:

W⊥ = {v ∈ V | 〈v, w〉 = 0 for all w ∈ W}

(W⊥ is read "W perp".)

Notes:
(1) {0}⊥ = V  (2) V⊥ = {0}

Elementary Linear Algebra: Section 5.4, p.266 63/101  Notes: W is a subspace of V (1) W  is a subspace of V (2) W W   0 (3) (W  )  W

. Ex: If V  R2 , W  x  axis Then (1) W   y - axis is a subspace of R2 (2) W W   (0,0) (3) (W  )  W

Direct sum:
Let W₁ and W₂ be two subspaces of Rⁿ. If each vector x ∈ Rⁿ can be uniquely written as a sum of a vector w₁ from W₁ and a vector w₂ from W₂, x = w₁ + w₂, then Rⁿ is the direct sum of W₁ and W₂, and you can write Rⁿ = W₁ ⊕ W₂.

Thm 5.13: (Properties of orthogonal subspaces)
Let W be a subspace of Rⁿ. Then the following properties are true.
(1) dim(W) + dim(W⊥) = n
(2) Rⁿ = W ⊕ W⊥
(3) (W⊥)⊥ = W

Thm 5.14: (Projection onto a subspace)
If {u₁, u₂, …, u_t} is an orthonormal basis for the subspace W of V, and v ∈ V, then

proj_W v = 〈v, u₁〉u₁ + 〈v, u₂〉u₂ + … + 〈v, u_t〉u_t
Pf:

proj_W v ∈ W and {u₁, u₂, …, u_t} is an orthonormal basis for W, so

proj_W v = 〈proj_W v, u₁〉u₁ + … + 〈proj_W v, u_t〉u_t

= 〈v − (v − proj_W v), u₁〉u₁ + … + 〈v − (v − proj_W v), u_t〉u_t

= 〈v, u₁〉u₁ + … + 〈v, u_t〉u_t   (since v − proj_W v ∈ W⊥, 〈v − proj_W v, uᵢ〉 = 0 for all i)

Ex 5: (Projection onto a subspace)

w₁ = (0, 3, 1), w₂ = (2, 0, 0), v = (1, 1, 3)
Find the projection of the vector v onto the subspace W = span({w₁, w₂}).
Sol:

{w₁, w₂} is an orthogonal basis for W.
{u₁, u₂} = {w₁/||w₁||, w₂/||w₂||} = {(0, 3/√10, 1/√10), (1, 0, 0)} is an orthonormal basis for W.

proj_W v = 〈v, u₁〉u₁ + 〈v, u₂〉u₂ = (6/√10)(0, 3/√10, 1/√10) + (1)(1, 0, 0) = (1, 9/5, 3/5)

Alternative method:

Let A = [w₁ w₂] (columns) and b = v. Then the normal equations give
x̂ = (AᵀA)⁻¹Aᵀb
⇒ proj_{CS(A)} b = Ax̂ = A(AᵀA)⁻¹Aᵀb

Thm 5.15: (Orthogonal projection and distance)
Let W be a subspace of an inner product space V, and v ∈ V. Then for all w ∈ W, w ≠ proj_W v,

||v − proj_W v|| < ||v − w||

or equivalently ||v − proj_W v|| = min_{w∈W} ||v − w||

(proj_W v is the best approximation to v from W)

Elementary Linear Algebra: Section 5.4, p.269 69/101 Pf:

v w  (v projW v) (projW v  w)

(v  projW v)  (projW v  w)

By the Pythagorean theorem

2 2 2 || v  w||  || v  projW v||  ||projW v  w||

w  projW v  projW v  w  0

2 2 || v  w||  || v  projW v||

||v  projW v||||v  w||

Notes:
(1) Among all the scalar multiples of a vector u, the orthogonal projection of v onto u is the one that is closest to v (Thm 5.9).
(2) Among all the vectors in the subspace W, the vector proj_W v is the closest vector to v.

Thm 5.16: (Fundamental subspaces of a matrix)
If A is an m × n matrix, then

(1) (CS(A))⊥ = NS(Aᵀ) and (NS(Aᵀ))⊥ = CS(A)
(2) (CS(Aᵀ))⊥ = NS(A) and (NS(A))⊥ = CS(Aᵀ)
(3) CS(A) ⊕ NS(Aᵀ) = Rᵐ
(4) CS(Aᵀ) ⊕ NS(A) = Rⁿ

Ex 6: (Fundamental subspaces)
Find the four fundamental subspaces of the matrix

A = [1 2 0; 0 0 1; 0 0 0; 0 0 0]  (reduced row-echelon form)
Sol:
CS(A) = span({(1, 0, 0, 0), (0, 1, 0, 0)}) is a subspace of R⁴

CS(Aᵀ) = RS(A) = span({(1, 2, 0), (0, 0, 1)}) is a subspace of R³

NS(A) = span({(−2, 1, 0)}) is a subspace of R³

Elementary Linear Algebra: Section 5.4, p.270 73/101 1 0 0 0 1 0 0 0      A  2 0 0 0 ~ R  0 1 0 0 0 1 0 0 0 0 0 0 s t NS(A )  span0,0,1,0 0,0,0,1 is a subspace of R4

 Check: (CS(A))   NS(A ) (CS(A ))   NS(A) CS(A) NS(AT )  R4 CS(AT ) NS(A)  R3

Ex 3 & Ex 4:

Let W = span({w₁, w₂}) be a subspace of R⁴, with w₁ = (1, 2, 1, 0) and w₂ = (0, 0, 0, 1).
(a) Find a basis for W.
(b) Find a basis for the orthogonal complement of W.
Sol:

A = [w₁ w₂] = [1 0; 2 0; 1 0; 0 1] ~ R = [1 0; 0 1; 0 0; 0 0]  (reduced row-echelon form)

(a) W = CS(A), so {(1, 2, 1, 0), (0, 0, 0, 1)} is a basis for W.
(b) W⊥ = (CS(A))⊥ = NS(Aᵀ), with Aᵀ = [1 2 1 0; 0 0 0 1]

Aᵀx = 0 ⇒ x = (−2s − t, s, t, 0) = s(−2, 1, 0, 0) + t(−1, 0, 1, 0)
⇒ {(−2, 1, 0, 0), (−1, 0, 1, 0)} is a basis for W⊥

Notes:
(1) dim(W) + dim(W⊥) = dim(R⁴)
(2) W ⊕ W⊥ = R⁴

Least squares problem:
Ax = b  (A: m×n, x: n×1, b: m×1 — a system of m linear equations in n unknowns)
(1) When the system is consistent, we can use Gaussian elimination with back-substitution to solve for x.

(2) When the system is inconsistent, we look for the "best possible" solution of the system, that is, the value of x for which the difference between Ax and b is as small as possible.

Least squares solution:
Given a system Ax = b of m linear equations in n unknowns, the least squares problem is to find a vector x in Rⁿ that minimizes ||Ax − b|| with respect to the Euclidean inner product on Rⁿ. Such a vector is called a least squares solution of Ax = b.

Notes: The least squares problem is to find a vector x̂ in Rⁿ such that

Ax̂ = proj_{CS(A)} b (i.e., Ax̂ ∈ CS(A)) is as close as possible to b. That is,

||b − proj_{CS(A)} b|| = ||b − Ax̂|| = min_{x∈Rⁿ} ||b − Ax||

A ∈ M_{m×n}, x ∈ Rⁿ ⇒ Ax ∈ CS(A)  (CS(A) is a subspace of Rᵐ)
b ∉ CS(A)  (Ax = b is an inconsistent system)

Let Ax̂ = proj_{CS(A)} b
⇒ (b − Ax̂) ⊥ CS(A)
⇒ b − Ax̂ ∈ (CS(A))⊥ = NS(Aᵀ)
⇒ Aᵀ(b − Ax̂) = 0
i.e. AᵀAx̂ = Aᵀb  (the normal system associated with Ax = b)

Note: (Ax = b is an inconsistent system)
The problem of finding the least squares solution of Ax = b is equivalent to the problem of finding an exact solution of the associated normal system AᵀAx̂ = Aᵀb.

Ex 7: (Solving the normal equations)
Find the least squares solution of the following system

Ax = b:
[1 1; 1 2; 1 3][c₀; c₁] = [0; 1; 3]  (this system is inconsistent)

and find the orthogonal projection of b on the column space of A.

Sol:
AᵀA = [1 1 1; 1 2 3][1 1; 1 2; 1 3] = [3 6; 6 14]
Aᵀb = [1 1 1; 1 2 3][0; 1; 3] = [4; 11]

The associated normal system AᵀAx̂ = Aᵀb:

[3 6; 6 14][c₀; c₁] = [4; 11]

The least squares solution of Ax = b:

x̂ = [−5/3; 3/2]

The orthogonal projection of b on the column space of A:

proj_{CS(A)} b = Ax̂ = [1 1; 1 2; 1 3][−5/3; 3/2] = [−1/6; 8/6; 17/6]

Key Learning in Section 5.4

▪ Define the least squares problem.
▪ Find the orthogonal complement of a subspace and the projection of a vector onto a subspace.
▪ Find the four fundamental subspaces of a matrix.
▪ Solve a least squares problem.
▪ Use least squares for mathematical modeling.

Keywords in Section 5.4

 orthogonal to W: 正交於W

 orthogonal complement: 正交補集

 direct sum: 直和

 projection onto a subspace: 在子空間的投影

 fundamental subspaces: 基本子空間

 least squares problem: 最小平方問題

 normal equations: 一般方程式

5.5 Applications of Inner Product Spaces

The cross product of two vectors in R³:

A vector product that yields a vector in R³ orthogonal to two given vectors is called the cross product. It is most conveniently defined and calculated with vectors written in standard unit vector form:

v = (v₁, v₂, v₃) = v₁i + v₂j + v₃k,  where i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)

Cross product of two vectors in R³:
Let u = u₁i + u₂j + u₃k and v = v₁i + v₂j + v₃k be vectors in R³. The cross product of u and v is the vector

u × v = (u₂v₃ − u₃v₂)i − (u₁v₃ − u₃v₁)j + (u₁v₂ − u₂v₁)k

A convenient way to remember this is the symbolic determinant (row 2: components of u, row 3: components of v):

u × v = det[ i j k ; u₁ u₂ u₃ ; v₁ v₂ v₃ ]

= (u₂v₃ − u₃v₂)i − (u₁v₃ − u₃v₁)j + (u₁v₂ − u₂v₁)k

Notes:
(1) The cross product is defined only for vectors in R³.
(2) The cross product of two vectors in R³ is orthogonal to both vectors.
(3) The cross product of two vectors in Rⁿ, n ≠ 3, is not defined here.

Ex 1: (Finding the Cross Product of Two Vectors)

u  i  2j k v  3i  j 2k Sol: i j k  2 1 1 1 1  2 u v  1  2 1  i  j k  3i  5j 7k 1  2 3  2 3 1 3 1  2 i j k 1  2 3  2 3 1 vu  3 1  2  i  j k  3i  5j 7k  2 1 1 1 1  2 1  2 1 i j k 1  2 3  2 3 1 v v  3 1  2  i  j k  0i  0j 0k  0 1  2 3  2 3 1 2 1  2 Elementary Linear Algebra: Section 5.5, p.278 88/101  Thm 5.17: (Algebraic Properties of the Cross Product) If u, v, and w are vectors in R3 and c is a scalar, then the following properties are true. 1. u v  (v u) 2. u(v  w)  (u v)  (u w) 3. c(u v)  cu v  u cv 4. u 0  0u  0 5. uu  0 6. u (v  w)  (u v)w

Elementary Linear Algebra: Section 5.5, p.278 89/101 Pf:

u  u1i  u2 j u3k v  v1i  v2 j  v3k i j k

u v  u1 u2 u3  (u2v3  u3v2 )i  (u1v3  u3v1)j (u1v2  u2v1)k

v1 v2 v3 i j k

v u  v1 v2 v3  (v2u3  v3u2 )i  (v1u3  v3u1)j (v1u2  v2u1)k

u1 u2 u3

 (u2v3  u3v2 )i  (u1v3  u3v1)j (u1v2  u2v1)k  (v u)

Note: The vectors u × v and v × u have equal lengths but opposite directions.

Thm 5.18: (Geometric Properties of the Cross Product)

If u and v are nonzero vectors in R3, then the following properties are true.

1. u × v is orthogonal to both u and v.

2. The angle θ between u and v satisfies ||u × v|| = ||u|| ||v|| sin θ.

3. u and v are parallel if and only if u × v = 0.

4. The parallelogram having u and v as adjacent sides has an area of ||u × v||.
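These geometric properties can be verified numerically. A sketch (the helper names `cross`, `dot`, and `norm` are ours) using the vectors of Ex 2 below, computing sin θ from cos θ = u ∙ v / (||u|| ||v||):

```python
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u = (1, -4, 1)
v = (2, 3, 0)
uxv = cross(u, v)

# Property 1: u x v is orthogonal to both u and v.
assert dot(uxv, u) == 0 and dot(uxv, v) == 0

# Property 2: ||u x v|| = ||u|| ||v|| sin(theta).
cos_t = dot(u, v) / (norm(u) * norm(v))
sin_t = math.sqrt(1 - cos_t ** 2)
assert math.isclose(norm(uxv), norm(u) * norm(v) * sin_t)

# Property 4: the parallelogram with adjacent sides u and v has area ||u x v||.
print(norm(uxv))   # sqrt(134), about 11.58
```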

Elementary Linear Algebra: Section 5.5, p.279 91/101

Pf (of Property 4): Taking ||u|| as the base of the parallelogram, its height is ||v|| sin θ, so

Area = (base)(height) = ||u|| ||v|| sin θ = ||u × v||

 Notes:
(1) The three vectors u, v, and u × v form a right-handed system.
(2) The three vectors u, v, and v × u form a left-handed system.

Elementary Linear Algebra: Section 5.5, p.279 92/101

 Ex 2: (Finding a Vector Orthogonal to Two Given Vectors)
u = i - 4j + k,  v = 2i + 3j
Sol:
        | i   j   k |
u × v = | 1  -4   1 | = -3i + 2j + 11k
        | 2   3   0 |

length:
||u × v|| = √((-3)^2 + 2^2 + 11^2) = √134

unit vector:
(u × v)/||u × v|| = (-3/√134)i + (2/√134)j + (11/√134)k

Check orthogonality:
(-3/√134, 2/√134, 11/√134) ∙ (1, -4, 1) = 0
(-3/√134, 2/√134, 11/√134) ∙ (2, 3, 0) = 0

Elementary Linear Algebra: Section 5.5, p.280 93/101

 Ex 3: (Finding the Area of a Parallelogram)

u  3i  4j k v  2j 6k Sol: i j k u v   3 4 1  26i 18j 6k 0  2 6

area

u v  262 182  62  1036  32.19

Elementary Linear Algebra: Section 5.5, p.280 94/101 Key Learning in Section 5.5

 Find the cross product of two vectors in R3.

 Find the linear or quadratic least squares approximation of a function.

 Find the nth-order Fourier approximation of a function.

95/101 Keywords in Section 5.5

 cross product: 外積

 parallelogram: 平行四邊形

96/101 5.1 Linear Algebra Applied

 Electric/Magnetic Flux

Electrical engineers can use the dot product to calculate electric or magnetic flux, which is a measure of the strength of the electric or magnetic field penetrating a surface. Consider an arbitrarily shaped surface with an element of area dA, normal (perpendicular) vector dA, electric field vector E, and magnetic field vector B. The electric flux Φe can be found using the surface integral Φe = ∫ E‧dA, and the magnetic flux Φm can be found using the surface integral Φm = ∫ B‧dA. It is interesting to note that for a given closed surface that surrounds an electrical charge, the net electric flux is proportional to the charge, but the net magnetic flux is zero. This is because electric fields initiate at positive charges and terminate at negative charges, but magnetic fields form closed loops, so they do not initiate or terminate at any point. This means that the magnetic flux entering a closed surface must equal the magnetic flux leaving it.
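In the special case of a uniform field and a flat surface, the surface integral collapses to a single dot product, Φ = E ∙ (A n). A minimal sketch, with hypothetical field values (the numbers below are illustrative, not from the text):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical uniform electric field E and a flat surface of area A
# with unit normal n; the integral of E . dA reduces to E . (A n).
E = (0.0, 0.0, 5.0)    # field along z
n = (0.0, 0.0, 1.0)    # surface normal along z
A = 2.0

flux = dot(E, tuple(A * c for c in n))
print(flux)   # 10.0
```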

Elementary Linear Algebra: Section 5.1, p.240 97/101 5.2 Linear Algebra Applied

 Work

The concept of work is important to scientists and engineers for determining the energy needed to perform various jobs. If a constant force F acts at an angle θ with the line of motion of an object to move the object from point A to point B (see figure below), then the work W done by the force is given by

W = (cos θ) ||F|| ||AB|| = F ∙ AB

where AB represents the directed line segment from A to B. The quantity (cos θ) ||F|| is the length of the orthogonal projection of F onto AB. Orthogonal projections are discussed on the next page.

Elementary Linear Algebra: Section 5.2, p.248 98/101 5.3 Linear Algebra Applied

 Heart Rhythm Analysis

Time-frequency analysis of irregular physiological signals, such as beat-to-beat cardiac rhythm variations (also known as heart rate variability or HRV), can be difficult. This is because the structure of a signal can include multiple periodic, nonperiodic, and pseudoperiodic components. Researchers have proposed and validated a simplified HRV analysis method called orthonormal-basis partitioning and time-frequency representation (OPTR). This method can detect both abrupt and slow changes in the HRV signal’s structure, divide a nonstationary HRV signal into segments that are “less nonstationary,” and determine patterns in the HRV. The researchers found that although it had poor time resolution with signals that changed gradually, the OPTR method accurately represented multicomponent and abrupt changes in both real-life and simulated HRV signals.

Elementary Linear Algebra: Section 5.3, p.255 99/101 5.4 Linear Algebra Applied

 Revenues

The least squares problem has a wide variety of real-life applications. To illustrate, Examples 9 and 10 and Exercises 39, 40, and 41 are all least squares analysis problems, and they involve such diverse subject matter as world population, astronomy, master’s degrees awarded, company revenues, and galloping speeds of animals. In each of these applications, you are given a set of data and asked to come up with one or more mathematical models for the data. For example, in Exercise 40, you are given the annual revenues from 2008 through 2013 for General Dynamics Corporation. You are asked to find the least squares regression quadratic and cubic polynomials for the data, to predict the revenue for the year 2018, and to decide which of the models appears to be more accurate for predicting future revenues.
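The regression polynomials in these exercises come from solving the least squares normal equations; the simplest case is the regression line y = a + bx. A sketch using the closed-form solution, with made-up data (the points below are illustrative, not the General Dynamics figures):

```python
# Least squares regression line y = a + b*x via the closed-form
# solution of the normal equations.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]     # hypothetical data
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
a = (sy - b * sx) / n                           # intercept
print(round(a, 2), round(b, 2))   # 0.05 1.99
```

The same normal-equation approach extends to quadratic and cubic fits by solving a 3x3 or 4x4 linear system instead.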

Elementary Linear Algebra: Section 5.4, p.266 100/101 5.5 Linear Algebra Applied

 Torque

In physics, the cross product can be used to measure torque, the moment M of a force F about a point A, as shown in the figure below. When the point of application of the force is B, the moment of F about A is given by

M = AB × F

where AB represents the vector whose initial point is A and whose terminal point is B. The magnitude of the moment M measures the tendency of the object to rotate counterclockwise about an axis directed along the vector M.

Elementary Linear Algebra: Section 5.5, p.277 101/101