scalar.mcd
Fundamental concepts in vector analysis: scalar product, magnitude, angle, orthogonality, projection.
Instructor: Nam Sun Wang

In a linear vector space (LVS), there are rules on 1) addition of two vectors and 2) multiplication by a scalar. The resulting vector must also lie in the same LVS. We now impose another rule: multiplication of two vectors to yield a scalar.

Real Euclidean Space. A real Euclidean space is a linear vector space on which there is a real-valued function (called the scalar product), defined on pairs of vectors x and y, with the following four properties.
1. Commutative: (x, y) = (y, x)
2. Associative: (\alpha x, y) = \alpha (x, y)
3. Distributive: (x + y, z) = (x, z) + (y, z)
4. Positive magnitude: (x, x) > 0 unless x = 0
   (math jargon: (x, x) \ge 0, and (x, x) = 0 iff x = 0)

Note that it is up to us to come up with a definition of the scalar product. Any definition that satisfies the above four properties is a valid one. Not only do we define the scalar product accordingly for different types of vectors, we also conveniently adopt different definitions of the scalar product for different applications, even for a single type of vector. A scalar product is also known as an inner product or a dot product.

We have vectors in a linear vector space if we come up with rules of addition and multiplication by a scalar; furthermore, these vectors are in a real Euclidean space if we come up with a rule on multiplication of two vectors.

There are two metrics of a vector: magnitude and direction. Both are defined in terms of the scalar product.

Magnitude (length, or Euclidean norm) of x:   \|x\| = \sqrt{(x, x)}
Direction, or angle between two vectors x and y:   \cos\theta = \frac{(x, y)}{\|x\| \, \|y\|}
Orthogonal means (is defined as) \theta = \pi/2 = 90^\circ, or equivalently (x, y) = 0.

Schwarz Inequality.   |(x, y)| \le \|x\| \, \|y\|
The Schwarz inequality follows from the four properties of the scalar product alone. It does not depend on the type of vectors involved or on how the scalar product is defined.
Special case. For linearly dependent vectors,   |(x, y)| = \|x\| \, \|y\|

Triangle Inequality.   \|x + y\| \le \|x\| + \|y\|
Proof.
   \|x + y\|^2 = (x + y, x + y) = (x, x) + 2 (x, y) + (y, y)
               = \|x\|^2 + 2 (x, y) + \|y\|^2
               \le \|x\|^2 + 2 \|x\| \, \|y\| + \|y\|^2        (by the Schwarz inequality)
               = ( \|x\| + \|y\| )^2
Thus, \|x + y\| \le \|x\| + \|y\|.
Special case. For orthogonal vectors x and y the scalar product is 0, i.e., (x, y) = 0. Thus,
   \|x + y\|^2 = \|x\|^2 + \|y\|^2   ... the Pythagorean theorem
(true not just for conventional distances, but for vectors in general).

Example -- A column of real numbers.
Define   (x, y) = \sum_{i=1}^{n} x_i y_i
Show that this definition is valid by demonstrating that it satisfies the four properties.
Property #1. (x, y) = (y, x):   (x, y) = \sum_{i=1}^{n} x_i y_i = \sum_{i=1}^{n} y_i x_i = (y, x)   ... ok
Property #2. (\alpha x, y) = \alpha (x, y):   (\alpha x, y) = \sum_{i=1}^{n} (\alpha x_i) y_i = \alpha \sum_{i=1}^{n} x_i y_i = \alpha (x, y)   ... ok
Property #3. (x + y, z) = (x, z) + (y, z):   (x + y, z) = \sum_{i=1}^{n} (x_i + y_i) z_i = \sum_{i=1}^{n} x_i z_i + \sum_{i=1}^{n} y_i z_i = (x, z) + (y, z)   ... ok
Property #4. (x, x) > 0 unless x = 0:   (x, x) = \sum_{i=1}^{n} x_i x_i > 0 unless all elements are 0   ... ok
Thus, the definition is a valid one. Being vectors in a real Euclidean space, they automatically inherit all sorts of properties, including the Schwarz inequality, the triangle inequality, the Pythagorean rule, etc.
Schwarz inequality:   \left| \sum_{i=1}^{n} x_i y_i \right| \le \sqrt{ \sum_{i=1}^{n} x_i x_i } \; \sqrt{ \sum_{i=1}^{n} y_i y_i }
Is the following definition of a scalar product a valid one?   (x, y) = \sum_{i=1}^{n} w_i x_i y_i
What about the following?   (x, y) = x_1 y_1 + x_2 y_2
What about the following?   (x, y) = x_1 y_2 + x_2 y_1
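As a supplementary illustration (not part of the original Mathcad worksheet), the following short Python sketch implements the column-vector scalar product defined above and checks the magnitude, angle, Schwarz inequality, and triangle inequality. The helper names dot, norm, and angle and the sample vectors are arbitrary choices made for this example.

```python
import math

def dot(x, y):
    """Scalar product of two columns of real numbers: (x, y) = sum_i x_i * y_i."""
    return sum(xi * yi for xi, yi in zip(x, y))

def norm(x):
    """Magnitude (Euclidean norm): ||x|| = sqrt((x, x))."""
    return math.sqrt(dot(x, x))

def angle(x, y):
    """Angle between x and y: cos(theta) = (x, y) / (||x|| * ||y||)."""
    return math.acos(dot(x, y) / (norm(x) * norm(y)))

x = [1.0, 2.0, 2.0]
y = [3.0, 0.0, 4.0]
s = [xi + yi for xi, yi in zip(x, y)]        # the vector x + y

print(abs(dot(x, y)) <= norm(x) * norm(y))   # Schwarz inequality: True
print(norm(s) <= norm(x) + norm(y))          # triangle inequality: True
print(math.degrees(angle(x, y)))             # angle between x and y, about 42.8 degrees
```

Any other definition of the scalar product that satisfies the four properties (for example, the weighted sum with positive weights w_i asked about above) could be substituted for dot without changing the rest of the sketch.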
Example -- Continuous functions f(t) on 0 \le t \le 1.
Define   (f, g) = \int_0^1 f(t) \, g(t) \, dt
Show that this definition is valid by demonstrating that it satisfies the four properties.
Property #1. (f, g) = (g, f):   (f, g) = \int_0^1 f(t) g(t) \, dt = \int_0^1 g(t) f(t) \, dt = (g, f)   ... ok
Property #2. (\alpha f, g) = \alpha (f, g):   (\alpha f, g) = \int_0^1 (\alpha f(t)) g(t) \, dt = \alpha \int_0^1 f(t) g(t) \, dt = \alpha (f, g)   ... ok
Property #3. (f + g, h) = (f, h) + (g, h):
   (f + g, h) = \int_0^1 (f(t) + g(t)) h(t) \, dt = \int_0^1 (f(t) h(t) + g(t) h(t)) \, dt
              = \int_0^1 f(t) h(t) \, dt + \int_0^1 g(t) h(t) \, dt = (f, h) + (g, h)   ... ok
Property #4. (f, f) > 0 unless f = 0:   (f, f) = \int_0^1 f(t) f(t) \, dt > 0   ... ok
Thus, the definition is a valid one.
Schwarz inequality:   \left| \int_0^1 f(t) g(t) \, dt \right| \le \sqrt{ \int_0^1 f(t) f(t) \, dt } \; \sqrt{ \int_0^1 g(t) g(t) \, dt }
Is the following definition of a scalar product a valid one?   (f, g) = \int_0^1 w(t) \, f(t) \, g(t) \, dt
What about the following, where a and b lie within [0, 1], say [a, b] = [0.5, 0.6]?   (f, g) = \int_a^b f(t) \, g(t) \, dt

Gram-Schmidt Process. Given a set of n linearly independent vectors in a real Euclidean space, f_1, f_2, ..., f_n, we can construct a set of n vectors g_1, g_2, ..., g_n that are pair-wise orthogonal:
   (g_i, g_j) = 0   for i \ne j
First, we choose any one of the given vectors f_i to be the first vector. Here we simply choose f_1:
   g_1 = f_1
To construct a second vector that is orthogonal to the first vector, we take any one of the remaining given vectors (typically f_2) and subtract from it the component aligned with the first vector g_1. Whatever is left behind has nothing in common with the first vector and, therefore, is orthogonal to it.
   g_2 = f_2 - \frac{(f_2, g_1)}{(g_1, g_1)} g_1
To construct a third vector that is orthogonal to the first two vectors already constructed, we take any one of the remaining given vectors (typically f_3) and subtract from it both the component aligned with the first vector g_1 and the component aligned with the second vector g_2. Whatever is left behind has nothing in common with the first two vectors and, therefore, is orthogonal to both.
   g_3 = f_3 - \frac{(f_3, g_1)}{(g_1, g_1)} g_1 - \frac{(f_3, g_2)}{(g_2, g_2)} g_2
   g_4 = f_4 - \frac{(f_4, g_1)}{(g_1, g_1)} g_1 - \frac{(f_4, g_2)}{(g_2, g_2)} g_2 - \frac{(f_4, g_3)}{(g_3, g_3)} g_3
We repeat the process:
   g_i = f_i - \frac{(f_i, g_1)}{(g_1, g_1)} g_1 - \frac{(f_i, g_2)}{(g_2, g_2)} g_2 - \cdots - \frac{(f_i, g_{i-1})}{(g_{i-1}, g_{i-1})} g_{i-1} = f_i - \sum_{j=1}^{i-1} \frac{(f_i, g_j)}{(g_j, g_j)} g_j
Finally, the last orthogonal vector is
   g_n = f_n - \frac{(f_n, g_1)}{(g_1, g_1)} g_1 - \frac{(f_n, g_2)}{(g_2, g_2)} g_2 - \cdots - \frac{(f_n, g_{n-1})}{(g_{n-1}, g_{n-1})} g_{n-1} = f_n - \sum_{j=1}^{n-1} \frac{(f_n, g_j)}{(g_j, g_j)} g_j
In general, we generate an orthogonal basis by the Gram-Schmidt orthogonalization process:
   g_i = f_i - \sum_{j=1}^{i-1} \frac{(g_j, f_i)}{(g_j, g_j)} g_j   for i = 1, 2, ..., n
We have an orthonormal basis if the basis vectors are normalized to have a length of unity:
   g_i \leftarrow \frac{g_i}{\sqrt{(g_i, g_i)}}   so that   (g_i, g_j) = 0 for i \ne j   and   (g_i, g_j) = 1 for i = j

Example. All polynomials of degree n or less on -1 \le t \le 1.
Define the scalar product   (p, q) = \int_{-1}^{1} p(t) \, q(t) \, dt
Given
   f_1(t) = 1,   f_2(t) = t,   f_3(t) = t^2,   ...,   f_{n+1}(t) = t^n
the result of the Gram-Schmidt orthogonalization process is a set of polynomials known as the Legendre polynomials, L_1, L_2, ..., L_{n+1}, which form an orthogonal basis for the set of all polynomials of degree n or less. Thus, we have the choice of expressing a continuous function on -1 \le t \le 1 as a linear combination of the power series (which is not orthogonal) or as a linear combination of the Legendre polynomials (which are orthogonal). Because the basis is defined only for finite dimensions, we can only approximate, not represent exactly, a general continuous function on -1 \le t \le 1.
   g_1 = 1
   g_2 = t
   g_3 = t^2 - \frac{1}{3}
   g_4 = t^3 - \frac{3}{5} t
   \vdots
   g_{i+1}(t) = \frac{1}{2^i \, i!} \frac{d^i}{dt^i} (t^2 - 1)^i   (equal to the g_i above up to constant factors)
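As another supplementary illustration (again not part of the original Mathcad worksheet), the following Python sketch runs the Gram-Schmidt process on the power basis 1, t, t^2, t^3 with the scalar product (p, q) = \int_{-1}^{1} p(t) q(t) dt, using exact rational arithmetic. Representing polynomials as coefficient lists and the helper names poly_mul, inner, and gram_schmidt are choices made for this example; the output should match g_1 through g_4 listed above.

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...], c_k being the t^k coefficient."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(p, q):
    """(p, q) = integral of p(t) q(t) over [-1, 1]; the integral of t^k is 2/(k+1) for even k, 0 for odd k."""
    return sum(c * Fraction(2, k + 1) for k, c in enumerate(poly_mul(p, q)) if k % 2 == 0)

def gram_schmidt(fs):
    """g_i = f_i - sum_{j<i} ((f_i, g_j) / (g_j, g_j)) g_j, as in the worksheet's formula."""
    gs = []
    for f in fs:
        g = list(f)
        for gj in gs:
            c = inner(f, gj) / inner(gj, gj)
            gj_padded = gj + [Fraction(0)] * (len(g) - len(gj))
            g = [a - c * b for a, b in zip(g, gj_padded)]
        gs.append(g)
    return gs

# Power basis f_1 = 1, f_2 = t, f_3 = t^2, f_4 = t^3
fs = [[1], [0, 1], [0, 0, 1], [0, 0, 0, 1]]
for g in gram_schmidt(fs):
    print([str(c) for c in g])   # expect 1;  t;  t^2 - 1/3;  t^3 - (3/5) t
```

Normalizing each g_i by \sqrt{(g_i, g_i)} would turn this orthogonal set into an orthonormal one, as described above.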
Example. All functions f(t) on 0 \le t \le \pi of the form
   f(t) = \sum_{j=0}^{n} a_j \cos(j t) + \sum_{j=1}^{n} b_j \sin(j t)
The following set of 2n + 1 vectors provides a basis (not an orthogonal one on [0, \pi]):
   f_1(t) = 1,   f_2(t) = \sin(t),   f_3(t) = \cos(t),   ...
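As a final supplementary sketch (not part of the original worksheet, and assuming the interval lost in extraction is 0 \le t \le \pi, consistent with the remark that the set is not orthogonal there), the following Python code approximates the pairwise scalar products (f_i, f_j) = \int_0^{\pi} f_i(t) f_j(t) dt of the first few basis functions with a midpoint rule. Several off-diagonal products are nonzero, confirming that the set is a basis but not an orthogonal one on this interval.

```python
import math

def inner(f, g, a=0.0, b=math.pi, n=2000):
    """(f, g) = integral of f(t) g(t) from a to b, approximated with the midpoint rule."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n))

# First few basis functions: f_1 = 1, f_2 = sin(t), f_3 = cos(t), f_4 = sin(2t), f_5 = cos(2t)
basis = [lambda t: 1.0,
         math.sin,
         math.cos,
         lambda t: math.sin(2 * t),
         lambda t: math.cos(2 * t)]

# Print the off-diagonal scalar products; e.g. (f_1, f_2) = integral of sin(t) over [0, pi] = 2,
# so the basis functions are not pairwise orthogonal on [0, pi].
for i, fi in enumerate(basis, start=1):
    for j, fj in enumerate(basis, start=1):
        if j > i:
            print(f"(f_{i}, f_{j}) = {inner(fi, fj):+.4f}")
```

Repeating the same check on 0 \le t \le 2\pi would show all of these off-diagonal products vanishing, which illustrates that orthogonality depends on the choice of scalar product (here, the integration interval) and not on the functions alone.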