Vector Calculus


Syafiq Johar
[email protected]

Contents

1  Introduction
2  Vector Spaces
   2.1  Vectors in R^n
   2.2  Geometry of Vectors
   2.3  Lines and Planes in R^n
3  Quadric Surfaces
   3.1  Curvilinear Coordinates
4  Vector-Valued Functions
   4.1  Derivatives of Vectors
   4.2  Length of Curves
   4.3  Parametrisation of Curves
   4.4  Integration of Vectors
5  Multivariable Functions
   5.1  Level Sets
   5.2  Partial Derivatives
   5.3  Gradient
   5.4  Critical and Extrema Points
   5.5  Lagrange Multipliers
   5.6  Integration over Multiple Variables
6  Vector Fields
   6.1  Divergence and Curl
7  Line and Surface Integrals
   7.1  Line Integral
   7.2  Green's Theorem
   7.3  Surface Integral
   7.4  Stokes' Theorem

1 Introduction

In previous calculus courses, you have seen the calculus of a single variable. This variable, usually denoted x, is fed through a function f to give a real value f(x), which can then be represented as a graph (x, f(x)) in R^2. From this graph, as long as f is sufficiently nice, we can deduce many properties and compute extra information using differentiation and integration. Differentiation here measures the rate of change of the function as we vary x; that is, df/dx is the infinitesimal rate of change of the quantity f.

In this course, we are going to extend the notion of one-dimensional calculus to higher dimensions. We are going to work over vector spaces over the real numbers R. In higher-dimensional spaces, we distinguish between two kinds of objects, called scalars and vectors. Scalars are quantities that only have a magnitude or size, which can be described by a real number. Vectors, on the other hand, are quantities that have both a magnitude and a direction. Vectors can be described by an array of numbers, as we shall see later. This array of numbers is reminiscent of the pair (x, f(x)) we have seen above, but of course, in higher dimensions, we would have even more numbers.

In higher dimensions, it can be difficult to have an explicit graphical representation of vectors, so we will restrict our attention to 2 and 3 dimensions most of the time. Even so, there is a rich amount of mathematics available in these low dimensions. There are also various applications of vector calculus in physics, engineering, economics, geophysical sciences, meteorology, astronomy, and optimisation. It is also used as a tool, and generalised, in further studies of pure mathematics such as topology and differential geometry.

2 Vector Spaces

In order to study objects in higher dimensions, we first define vector spaces. Abstractly, a real vector space is a collection of points (which can also be seen as arrows) with two operations: addition and scalar multiplication. More concretely:

Definition 2.1 (Vector spaces). A vector space V over the field F is a non-empty set V together with an addition map V × V → V, (u, v) ↦ u + v, and a scalar multiplication map F × V → V, (λ, v) ↦ λv, satisfying the vector space axioms:

1. addition is commutative: u + v = v + u,
2. addition is associative: (u + v) + w = u + (v + w),
3. existence of additive identity: there exists an element 0 ∈ V such that 0 + v = v + 0 = v,
4. existence of additive inverse: for every v ∈ V, there exists a u ∈ V such that v + u = u + v = 0,
5. distributivity of scalar multiplication over vector addition: λ(u + v) = λu + λv,
6. distributivity of scalar multiplication over field addition: (λ + µ)u = λu + µu,
7. compatibility of scalar multiplication with field multiplication: (λµ)u = λ(µu),
8. existence of a scalar multiplication identity: there exists an element 1 ∈ F such that 1v = v.
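These axioms are abstract, but they are easy to test numerically in a concrete setting. The short sketch below is a sanity check rather than a proof: assuming Python with numpy (not part of the original notes, and with illustrative variable names), it verifies each axiom for a few specific vectors in R^3 under componentwise operations, anticipating the coordinate description of Section 2.1.

```python
# Sanity check of the eight vector space axioms for a few specific vectors in R^3,
# using numpy arrays with componentwise addition and scalar multiplication.
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([0.5, 4.0, -1.0])
w = np.array([2.0, 0.0, 7.0])
lam, mu = 3.0, -0.5
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                      # 1. commutativity
assert np.allclose((u + v) + w, u + (v + w))          # 2. associativity
assert np.allclose(zero + v, v)                       # 3. additive identity
assert np.allclose(v + (-v), zero)                    # 4. additive inverse
assert np.allclose(lam * (u + v), lam * u + lam * v)  # 5. distributivity over vector addition
assert np.allclose((lam + mu) * u, lam * u + mu * u)  # 6. distributivity over field addition
assert np.allclose((lam * mu) * u, lam * (mu * u))    # 7. compatibility of multiplications
assert np.allclose(1.0 * u, u)                        # 8. scalar multiplication identity
print("all eight axioms hold for these sample vectors")
```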
2.1 Vectors in R^n

However, for concreteness and practical applications, we are mostly interested in the vector space R^n over the field R. In this vector space, every element v ∈ R^n can be written as the list (v_1, v_2, ..., v_n), where v_i ∈ R for all i = 1, 2, ..., n. Sometimes these numbers are arranged in a column; in matrix notation, this is written as the transpose (v_1, v_2, ..., v_n)^T. We can also write the vectors in terms of the standard basis of R^n:

Definition 2.2 (Standard basis of R^n). The standard basis of R^n is given by the collection of vectors {e_1, e_2, ..., e_n} such that e_i = (0, ..., 0, 1, 0, ..., 0), where the number 1 appears in the i-th position and 0 appears everywhere else.

This basis is also called the Cartesian coordinate system. In this system, we can express a vector v = (v_1, v_2, ..., v_n) as the sum v = v_1 e_1 + v_2 e_2 + ... + v_n e_n. Note that this is simply a generalisation of the Cartesian plane R^2 we have seen in high school, where the standard basis consists of the unit vectors along the x and y axes. Thus, vectors in R^2 are written as the pair (x, y) = x e_1 + y e_2. The difference in higher dimensions is just that we have more components to specify a point.

Figure 1: A vector v and its coordinates v = v_1 e_1 + v_2 e_2 + v_3 e_3 = (v_1, v_2, v_3).

With this concrete expression of vectors, we can define the addition and scalar multiplication explicitly. Suppose that u = (u_1, u_2, ..., u_n) and v = (v_1, v_2, ..., v_n). Then:

    u + v = (u_1, u_2, ..., u_n) + (v_1, v_2, ..., v_n) = (u_1 + v_1, u_2 + v_2, ..., u_n + v_n),
    λu = λ(u_1, u_2, ..., u_n) = (λu_1, λu_2, ..., λu_n).

Of course, the zero vector 0 in this expression is given by (0, 0, ..., 0). With these defined, one can easily check that all the vector space axioms are satisfied.

Remark 2.3. In engineering or physics, where one usually works in low-dimensional vector spaces like R^2 or R^3, the standard bases of R^2 and R^3 are written as {i = (1, 0), j = (0, 1)} and {i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1)} respectively.

2.2 Geometry of Vectors

Geometrically, we can view a vector as the position of a point relative to the vector 0 (called the origin). This type of vector is called a position vector. The scalars v_i are called the coordinates (or components) of the vector (or point) v with respect to the standard basis. Two position vectors are called parallel if they are scalar multiples of each other.

Another interpretation is that a vector represents a movement or direction from a specific point. For example, starting from a point w = (w_1, w_2, ..., w_n), if we move in the direction v = (v_1, v_2, ..., v_n), we end up at the point (w_1 + v_1, w_2 + v_2, ..., w_n + v_n) = w + v. The vector v is called the translation vector. Therefore, if we start at a point u and wish to end up at the point w, we have to move in the direction w − u.

Figure 2: Vector addition and subtraction. (a) Vector addition. (b) Vector subtraction.
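The coordinate expression and the translation picture above are easy to try out numerically. Here is a minimal sketch, again assuming numpy (the particular vectors are arbitrary examples): it decomposes a vector in the standard basis of R^3, computes the point reached by translating a point w in the direction v, and computes the direction needed to travel from w to another point.

```python
# Decomposing a vector in the standard basis of R^3 and translating a point,
# following Sections 2.1 and 2.2.
import numpy as np

e1, e2, e3 = np.eye(3)          # the standard basis vectors (rows of the identity matrix)

v = np.array([2.0, -1.0, 4.0])
assert np.allclose(v, v[0] * e1 + v[1] * e2 + v[2] * e3)   # v = v1*e1 + v2*e2 + v3*e3

w = np.array([1.0, 1.0, 0.0])   # a starting point
print(w + v)                    # the point reached by translating w in the direction v
print(v - w)                    # the direction to move in order to get from w to the point v
```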
Since vectors represent geometrical objects, we can deduce some geometrical properties from them, such as lengths and angles.

Definition 2.4 (Length). The length or magnitude of a vector v = (v_1, v_2, ..., v_n) ∈ R^n is the non-negative real number |v| defined by

    |v| = √(v_1^2 + v_2^2 + ... + v_n^2),

which measures the distance of the point v from the origin 0.

From the above definition, the distance between any two points u and v in R^n is given by the magnitude of the vector v − u, that is:

    dist(u, v) = |v − u| = √((v_1 − u_1)^2 + (v_2 − u_2)^2 + ... + (v_n − u_n)^2).

Associated to any vector v is the unit vector v̂, which is defined as the vector parallel to v with unit length. This definition is useful for denoting the direction in which a vector is pointing, without regard to its magnitude. More concretely, it is given by the following:

Definition 2.5 (Unit vector). Let v ∈ R^n be a non-zero vector. Then the unit vector in the direction of v is given by:

    v̂ = (1/|v|) v.

Dividing a non-zero vector by its magnitude is an operation called normalising.

Proposition 2.6 (Properties of magnitude). Suppose that u, v ∈ R^n and λ ∈ R. Then:

1. |u| ≥ 0, with equality if and only if u = 0,
2. |λu| = |λ| |u|,
3. triangle inequality: |u + v| ≤ |u| + |v|, with equality if and only if u = λv for some λ ≥ 0,
4. reverse triangle inequality: |u − v| ≥ ||u| − |v||.

Proof. The first two assertions are clear. To prove the third assertion, suppose that u = (u_1, u_2, ..., u_n) and v = (v_1, v_2, ..., v_n). The inequality is trivial if v = 0, so assume that v ≠ 0. Then for any real number x ∈ R, we have

    0 ≤ |u + xv|^2 = Σ_{i=1}^{n} (u_i + x v_i)^2 = |u|^2 + x^2 |v|^2 + 2x Σ_{i=1}^{n} u_i v_i,    (1)

which is a quadratic expression in x. Since this quadratic expression is non-negative, its discriminant must be non-positive:

    (2 Σ_{i=1}^{n} u_i v_i)^2 − 4 |u|^2 |v|^2 ≤ 0   ⟹   |Σ_{i=1}^{n} u_i v_i| ≤ |u| |v|.

Hence, we compute:

    |u + v|^2 = |u|^2 + |v|^2 + 2 Σ_{i=1}^{n} u_i v_i ≤ |u|^2 + |v|^2 + 2 |Σ_{i=1}^{n} u_i v_i| ≤ |u|^2 + |v|^2 + 2 |u| |v| = (|u| + |v|)^2,    (2)

which, upon taking the square root of both sides, implies the desired inequality.
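As a quick numerical illustration of Definitions 2.4 and 2.5 and of Proposition 2.6, the sketch below (assuming numpy; the two vectors are arbitrary examples) computes a magnitude, a distance and a unit vector, and spot-checks the Cauchy-Schwarz bound used in the proof together with the triangle inequality. As before, this verifies particular instances rather than proving the general statements.

```python
# Magnitude, distance, normalising, and a spot check of the inequalities
# from Proposition 2.6 and its proof.
import numpy as np

u = np.array([3.0, 0.0, 4.0])
v = np.array([1.0, 2.0, 2.0])

mag_u = np.linalg.norm(u)        # |u| = sqrt(3^2 + 0^2 + 4^2) = 5.0
mag_v = np.linalg.norm(v)        # |v| = 3.0
dist_uv = np.linalg.norm(v - u)  # distance between the points u and v
u_hat = u / mag_u                # normalising: the unit vector in the direction of u

assert np.isclose(np.linalg.norm(u_hat), 1.0)          # the unit vector has length 1
assert abs(np.dot(u, v)) <= mag_u * mag_v + 1e-12      # |sum u_i v_i| <= |u| |v|  (Cauchy-Schwarz)
assert np.linalg.norm(u + v) <= mag_u + mag_v + 1e-12  # |u + v| <= |u| + |v|  (triangle inequality)
print(mag_u, mag_v, dist_uv, u_hat)
```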