Notes for Chapter 5 – Orthogonality


5.1 The Scalar Product in $\mathbb{R}^n$

The vector space $\mathbb{R}^n = \{ x = (x_1, x_2, \dots, x_n) : x_j \in \mathbb{R} \text{ for } j = 1, \dots, n \}$. Here we define vector addition and scalar multiplication as follows.

1. Given vectors $x = (x_1, x_2, \dots, x_n)$ and $y = (y_1, y_2, \dots, y_n)$, we define $x + y$ by
$$x + y = (x_1 + y_1, \; x_2 + y_2, \; \dots, \; x_n + y_n).$$

2. Given a scalar $\alpha$ and a vector $x = (x_1, x_2, \dots, x_n)$, we define $\alpha x$ by
$$\alpha x = (\alpha x_1, \alpha x_2, \dots, \alpha x_n).$$

The scalar product (or dot product) of $x, y \in \mathbb{R}^n$ is defined by
$$x \cdot y = x^T y = \sum_{j=1}^{n} x_j y_j. \tag{1}$$

We use the scalar product to define a distance function called the norm.

Definition 1.

1. The length of a vector $x \in \mathbb{R}^n$ is defined by
$$\|x\| = (x^T x)^{1/2} = \left( \sum_{j=1}^{n} x_j^2 \right)^{1/2}. \tag{2}$$

2. The distance between two vectors $x, y \in \mathbb{R}^n$ is
$$\|x - y\| = \left( \sum_{j=1}^{n} (x_j - y_j)^2 \right)^{1/2}. \tag{3}$$

3. If $\theta$ is the angle between two vectors $x, y \in \mathbb{R}^n$, then
$$\cos(\theta) = \frac{x^T y}{\|x\| \, \|y\|}. \tag{4}$$

4. A vector $u$ is called a unit vector if $\|u\| = 1$. For any nonzero vector $x \in \mathbb{R}^n$, the direction vector $u$ pointing in the direction of $x$ is the unit vector $u = x / \|x\|$.

5. Two vectors $x, y \in \mathbb{R}^n$ are said to be orthogonal if $x^T y = 0$, i.e., if the angle between them is $90^\circ$.

6. The scalar projection of a vector $x$ onto a vector $y$ is $\dfrac{x^T y}{\|y\|}$.

7. The vector projection of a vector $x$ onto a vector $y$ is $P_y(x) = \dfrac{x^T y}{\|y\|^2} \, y$.

8. If the vectors $x$ and $y$ are orthogonal, then $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.

9. In general, for all vectors $x$ and $y$ we have $\|x + y\| \le \|x\| + \|y\|$, which is often called the triangle inequality.

10. (Cauchy-Schwarz inequality) For all vectors $x$ and $y$ we have
$$|x^T y| \le \|x\| \, \|y\|. \tag{5}$$

If $P_1$ and $P_2$ are points in $\mathbb{R}^n$, then the vector from $P_1 = (x_1, x_2, \dots, x_n)$ to $P_2 = (y_1, y_2, \dots, y_n)$ is denoted by $\overrightarrow{P_1 P_2}$. This vector can be written as
$$\overrightarrow{P_1 P_2} = \langle \, y_1 - x_1, \; y_2 - x_2, \; \dots, \; y_n - x_n \, \rangle.$$

If $N$ is a nonzero vector and $P_0$ is a point in $\mathbb{R}^n$, then the set of points $P = (x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ satisfying $N^T \overrightarrow{P_0 P} = 0$ forms a hyperplane in $\mathbb{R}^n$ that passes through the point $P_0$ with normal vector $N$. For example, if $N = \langle a_1, a_2, \dots, a_n \rangle$ and $P_0 = (x_1^0, x_2^0, \dots, x_n^0)$, then the equation of the hyperplane is
$$N^T \overrightarrow{P_0 P} = a_1(x_1 - x_1^0) + a_2(x_2 - x_2^0) + \cdots + a_n(x_n - x_n^0) = 0.$$

In $\mathbb{R}^3$ a hyperplane is called a plane, and the above formula is the one you learned in calculus III for the equation of a plane. In $\mathbb{R}^2$ a hyperplane is called a line, and the formula reduces to the familiar point-normal equation of a line.

We can use this information to answer some simple basic problems. For example, to find the distance $d$ from a point $P_1 = (x_1, y_1, z_1)$ in $\mathbb{R}^3$ to the plane $N^T \overrightarrow{P_0 P} = 0$ through $P_0 = (x_0, y_0, z_0)$ with normal $N = \langle a_1, a_2, a_3 \rangle$, we can use the scalar projection to find
$$d = \frac{\big| N^T \overrightarrow{P_0 P_1} \big|}{\|N\|} = \frac{|a_1(x_1 - x_0) + a_2(y_1 - y_0) + a_3(z_1 - z_0)|}{\sqrt{a_1^2 + a_2^2 + a_3^2}}.$$

The analog of this in $\mathbb{R}^2$ is to find the distance $d$ from a point to a line. In this case the line passing through a point $P_0 = (x_0, y_0)$ and orthogonal to a vector $N = \langle a_1, a_2 \rangle$ is given by
$$N^T \overrightarrow{P_0 P} = a_1(x - x_0) + a_2(y - y_0) = 0.$$
The distance from a point $P_1 = (x_1, y_1)$ in $\mathbb{R}^2$ to this line is
$$d = \frac{\big| N^T \overrightarrow{P_0 P_1} \big|}{\|N\|} = \frac{|a_1(x_1 - x_0) + a_2(y_1 - y_0)|}{\sqrt{a_1^2 + a_2^2}}.$$
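Definition 1 and the distance formula translate directly into a few lines of numerical code. The following sketch (using NumPy; the vectors, plane normal, and points are made-up illustrations, not data from the notes) computes the angle from (4), the scalar and vector projections from items 6 and 7, checks the Pythagorean identity from item 8, and evaluates the point-to-plane distance formula above.

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

# Scalar product (1) and norms (2)
dot = x @ y                                  # x^T y
norm_x, norm_y = np.linalg.norm(x), np.linalg.norm(y)

# Angle between x and y, from (4)
theta = np.arccos(dot / (norm_x * norm_y))

# Scalar and vector projections of x onto y (items 6 and 7)
scalar_proj = dot / norm_y
vector_proj = (dot / norm_y**2) * y

# The residual x - P_y(x) is orthogonal to y, so the Pythagorean
# identity (item 8) holds for the pair (P_y(x), x - P_y(x)).
r = x - vector_proj
assert abs(r @ y) < 1e-12
assert abs(norm_x**2
           - (np.linalg.norm(vector_proj)**2 + np.linalg.norm(r)**2)) < 1e-12

# Distance from P1 to the plane through P0 with normal N
N  = np.array([1.0, -2.0, 2.0])
P0 = np.array([0.0, 0.0, 1.0])
P1 = np.array([3.0, 1.0, 2.0])
d = abs(N @ (P1 - P0)) / np.linalg.norm(N)

print(np.degrees(theta), scalar_proj, d)
```

For these particular vectors the angle is $\arccos(11/15) \approx 42.8^\circ$, the scalar projection is $11/5$, and the distance works out to exactly $1$.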
5.2 Orthogonal Subspaces

Definition 2.

1. Two subspaces $X$ and $Y$ of $\mathbb{R}^n$ are called orthogonal if $x^T y = 0$ for every $x \in X$ and every $y \in Y$. In this case we write $X \perp Y$.

2. If $Y$ is a subspace of $\mathbb{R}^n$, then the orthogonal complement of $Y$, denoted $Y^\perp$, is defined by
$$Y^\perp = \{ x \in \mathbb{R}^n : x^T y = 0 \;\; \forall \, y \in Y \}. \tag{6}$$

Example 1. Let $A$ be an $m \times n$ matrix. Then $A$ generates a linear mapping from $\mathbb{R}^n$ to $\mathbb{R}^m$ by $T(x) = Ax$. The column space of $A$ is the same as the image of $T$. We denote the range space by $R(A)$:
$$R(A) = \{ b \in \mathbb{R}^m : b = Ax \text{ for some } x \in \mathbb{R}^n \}. \tag{7}$$
The column space of $A^T$, i.e., $R(A^T) \subset \mathbb{R}^n$, is
$$R(A^T) = \{ y \in \mathbb{R}^n : y = A^T x \text{ for some } x \in \mathbb{R}^m \}. \tag{8}$$
If $y \in R(A^T) \subset \mathbb{R}^n$ and $x \in N(A)$, then we claim $x$ and $y$ are orthogonal, i.e., $R(A^T) \perp N(A)$. Recall that $y \in R(A^T)$ means there is a vector $x_0 \in \mathbb{R}^m$ such that $y = A^T x_0$. So we have
$$x^T y = x^T A^T x_0 = (Ax)^T x_0 = 0$$
since $x \in N(A)$. This gives us the so-called Fundamental Subspace Theorem:

Theorem 1. Let $A$ be an $m \times n$ matrix. Then we have
$$N(A) = R(A^T)^\perp \quad \text{and} \quad N(A^T) = R(A)^\perp. \tag{9}$$

Definition 3. Suppose $U$ and $V$ are subspaces of a vector space $W$ and each $w \in W$ can be written uniquely in the form $w = u + v$ with $u \in U$ and $v \in V$. Notice that this is really two statements: (1) every $w$ can be written in this form, and (2) the representation is unique. When this is the case we say that $W$ is the direct sum of $U$ and $V$, and we write $W = U \oplus V$.

We also have the following general results for subspaces of the vector space $\mathbb{R}^n$:

Theorem 2.

1. Let $S$ be a subspace of $\mathbb{R}^n$. Then we have $\dim(S) + \dim(S^\perp) = n$. Further, if $\{x_j\}_{j=1}^{r}$ is a basis for $S$ and $\{x_j\}_{j=r+1}^{n}$ is a basis for $S^\perp$, then $\{x_j\}_{j=1}^{n}$ is a basis for $\mathbb{R}^n$.

2. If $S$ is a subspace of $\mathbb{R}^n$, then $S \oplus S^\perp = \mathbb{R}^n$.

3. If $S$ is a subspace of $\mathbb{R}^n$, then $(S^\perp)^\perp = S$.

4. If $A$ is an $m \times n$ matrix and $b \in \mathbb{R}^m$, then exactly one of the following alternatives must hold:
   (a) there is an $x \in \mathbb{R}^n$ such that $Ax = b$, or
   (b) there is a vector $y \in \mathbb{R}^m$ such that $A^T y = 0$ (i.e., $y \in N(A^T)$) and $y^T b \ne 0$.

5. In other words, $b \in R(A) \iff b \perp N(A^T)$. N.B. This is a very important result.
   (a) When we want to solve $Ax = b$ it can be very important to have a test to decide whether the problem is solvable, that is, whether $b \in R(A)$. This result tells us that if we find a basis for $N(A^T)$, we can check whether $b \in R(A)$ by simply checking whether $b \perp y$ for every $y$ in that basis.
   (b) You will also find that this is the basis for the method of least squares in the next section.

6. We can write
$$\mathbb{R}^n = R(A^T) \oplus N(A).$$

Remark 1.

1. To find a basis for $R(A)$, we compute the row echelon form of $A$ to determine the pivot columns, and then take exactly those columns of $A$.

2. We can also proceed as in the following example. Given
$$A = \begin{pmatrix} 1 & 2 & 3 & 1 \\ 1 & 3 & 5 & -2 \\ 3 & 8 & 13 & -3 \end{pmatrix},$$
we note that a basis for $R(A)$ is the same thing as a basis for the column space of $A$. Now we also know that the column space of $A$ is the same as the row space of $A^T$, so we find the row echelon form $U$ of $A^T$ and take the transposes of the pivot rows. Here
$$A^T = \begin{pmatrix} 1 & 1 & 3 \\ 2 & 3 & 8 \\ 3 & 5 & 13 \\ 1 & -2 & -3 \end{pmatrix} \;\implies\; U = \begin{pmatrix} 1 & 1 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$
Therefore $\langle 1, 1, 3 \rangle$ and $\langle 0, 1, 2 \rangle$ form a basis for $R(A)$.

5.3 Least Squares

When we have an overdetermined linear system, it is unlikely there will be a solution. Nevertheless, this is exactly the type of problem very often considered in applications. What happens in these applications is that one seeks a so-called least squares solution: something that is close to being a solution in a certain sense. In particular, we have an overdetermined system $Ax = b$ where $A$ is $m \times n$ with $m > n$ (usually much larger).

Definition 4 (Least Squares Problem). Given a system $Ax = b$ where $A$ is $m \times n$ with $m > n$ and $b \in \mathbb{R}^m$, for each $x \in \mathbb{R}^n$ we can form the residual
$$r(x) = b - Ax.$$
The distance from $b$ to $Ax$ is
$$\|b - Ax\| = \|r(x)\|.$$

1. The least squares problem is to find a vector $x \in \mathbb{R}^n$ for which $\|r(x)\|$ is minimal.
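The Fundamental Subspace Theorem and the solvability test in Theorem 2, item 5, are easy to check numerically. Below is a minimal sketch (assuming NumPy and SciPy are available; `scipy.linalg.null_space` returns an orthonormal basis of the null space) applied to the matrix $A$ from Remark 1. The right-hand sides `b_good` and `b_bad` are made-up test vectors, not taken from the notes.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 2,  3,  1],
              [1, 3,  5, -2],
              [3, 8, 13, -3]], dtype=float)

# Orthonormal bases for N(A) and N(A^T)
NA  = null_space(A)      # columns span N(A),   a subspace of R^4
NAT = null_space(A.T)    # columns span N(A^T), a subspace of R^3

# Theorem 1: R(A^T) is orthogonal to N(A). The rows of A span
# R(A^T), so every row dotted with every null-space vector is 0.
print(np.allclose(A @ NA, 0))            # True

# Theorem 2(5): b is in R(A) if and only if b is orthogonal to N(A^T).
b_good = A[:, 0]                          # a column of A, so b_good is in R(A)
b_bad  = np.array([1.0, 0.0, 0.0])        # a made-up right-hand side
print(np.allclose(NAT.T @ b_good, 0))     # True  -> Ax = b_good is solvable
print(np.allclose(NAT.T @ b_bad, 0))      # False -> Ax = b_bad has no solution
```

Here $N(A^T)$ is one-dimensional, spanned by a multiple of $\langle 1, 2, -1 \rangle$, so the solvability test reduces to a single dot product with $b$.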
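Finally, for the least squares problem of Definition 4, the sketch below solves a made-up line-fitting example two equivalent ways. The normal equations $A^T A x = A^T b$ used here are the standard route and an assumption about where the notes are headed, since the method itself is developed in the next section. Note how the minimizing residual satisfies $A^T r = 0$, i.e., $r \in N(A^T) = R(A)^\perp$, tying back to Theorem 1.

```python
import numpy as np

# Overdetermined system: fit a line c0 + c1*t through m > 2 points.
# (Made-up data for illustration.)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
A = np.column_stack([np.ones_like(t), t])    # 5 x 2 matrix, so m > n

# Least squares solution: minimize ||r(x)|| = ||b - Ax||.
x, residual_ss, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# Equivalent route via the normal equations A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x, x_normal)

# The minimizing residual is orthogonal to R(A): A^T r = 0,
# so r lies in N(A^T) = R(A)^perp, consistent with Theorem 1.
r = b - A @ x
print(x, np.allclose(A.T @ r, 0))
```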