18.024 SPRING OF 2008

DERIVATIVES OF VECTOR FIELDS

Derivatives of vector fields. Derivative theory for vector fields is a straightforward extension of that for scalar fields. Given a vector field $f : D \subset \mathbb{R}^n \to \mathbb{R}^m$, in components $f$ consists of $m$ scalar fields of $n$ variables. That is,
$$f(x) = (f_1(x), \dots, f_m(x)),$$
and each $f_i : D \subset \mathbb{R}^n \to \mathbb{R}$ is a scalar field, where $i = 1, \dots, m$. It seems natural, then, to define the differentiability of $f$ componentwise.

We recall that if $f : D \subset \mathbb{R}^n \to \mathbb{R}$ is a scalar field and $a \in \operatorname{int}(D)$, then the derivative of $f$ at $a$ (if it exists) is given by the gradient
$$\nabla f(a) = (D_1 f(a), D_2 f(a), \dots, D_n f(a)).$$
For our purposes, it is convenient to understand the derivative of $f$ as a row matrix rather than as a vector. We will write
$$Df(a) = [D_1 f(a) \;\; D_2 f(a) \;\; \cdots \;\; D_n f(a)].$$
Then $f$ is differentiable at $a$ (with the total derivative $Df(a)$) if

(*) $\quad f(a+h) - f(a) = Df(a)\,h + \|h\|\,E(h, a),$

and $|E(h, a)| \to 0$ as $\|h\| \to 0$. Here the first term on the right side is the matrix product of $Df(a)$ and the column matrix
$$h = \begin{bmatrix} h_1 \\ \vdots \\ h_n \end{bmatrix}.$$

Definition 1. Let $f : D \to \mathbb{R}^m$, where $D \subset \mathbb{R}^n$ is open. In scalar form,
$$f(x) = (f_1(x_1, \dots, x_n), \dots, f_m(x_1, \dots, x_n)).$$
We say that $f$ is differentiable at $a \in D$ if each of the component functions $f_1, \dots, f_m$ is differentiable at $a$ (in the sense discussed in [Apo, Section 8.11] or above). Furthermore, by the derivative of $f$ at $a$ we mean the matrix
$$Df(a) = \begin{bmatrix} D_1 f_1(a) & D_2 f_1(a) & \cdots & D_n f_1(a) \\ D_1 f_2(a) & D_2 f_2(a) & \cdots & D_n f_2(a) \\ \vdots & & & \vdots \\ D_1 f_m(a) & D_2 f_m(a) & \cdots & D_n f_m(a) \end{bmatrix}.$$
In other words, $Df(a)$ is the matrix whose $i$th row is the derivative $Df_i(a)$ of the $i$th coordinate function of $f$. It is often called the Jacobian matrix* of $f$. Another notation for this matrix is
$$\frac{\partial(f_1, \cdots, f_m)}{\partial(x_1, \cdots, x_n)}.$$

*It is named after the Prussian mathematician Carl Gustav Jacob Jacobi (1804-1851).
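To make the definition concrete, here is a minimal numerical sketch (not part of the lecture notes; the helper name `jacobian_fd` and the sample field are our own choices) that approximates the Jacobian matrix by central differences and compares it with the analytic partial derivatives, entry $(i, j)$ being $D_j f_i(a)$ as above:

```python
import numpy as np

def jacobian_fd(f, a, h=1e-6):
    """Approximate the Jacobian Df(a) by central differences.

    Row i, column j holds D_j f_i(a), matching the matrix in Definition 1."""
    a = np.asarray(a, dtype=float)
    cols = []
    for j in range(a.size):
        e = np.zeros(a.size)
        e[j] = h
        cols.append((np.asarray(f(a + e)) - np.asarray(f(a - e))) / (2 * h))
    return np.column_stack(cols)

# A sample vector field f : R^2 -> R^3 with an easily checked Jacobian.
f = lambda x: np.array([x[0]**2 * x[1], x[0] + x[1], np.sin(x[0] * x[1])])
a = np.array([1.0, 2.0])
J_analytic = np.array([
    [2 * a[0] * a[1], a[0]**2],                              # D(x^2 y)
    [1.0, 1.0],                                              # D(x + y)
    [a[1] * np.cos(a[0] * a[1]), a[0] * np.cos(a[0] * a[1])] # D(sin(xy))
])
print(np.allclose(jacobian_fd(f, a), J_analytic, atol=1e-6))  # True
```

The finite-difference matrix agrees with the analytic one to roughly $O(h^2)$, which is why a loose tolerance suffices.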
In [Apo, Section 8.18], the differentiability of a vector field $f$ is given, alternatively, by directly extending the Taylor formula (*) to the vector-field setting. Our definition is then equivalent to that given in [Apo], which is stated below.

Theorem 2. The vector field $f$ is differentiable at $a$ if and only if
$$f(a + h) - f(a) = Df(a)\,h + \|h\|\,E(h, a)$$
and $\|E(h, a)\| \to 0$ as $\|h\| \to 0$. Here $f$, $h$ and $E$ are written as column matrices.

Proof. We observe that both sides of the equation represent column matrices. Considering the $i$th entries of these matrices, we obtain the equation
$$f_i(a + h) - f_i(a) = Df_i(a) \cdot h + \|h\|\,E_i(h, a).$$
Now, $f$ is differentiable at $a$ if and only if each $f_i$ is, and $f_i$ is differentiable at $a$ if and only if $|E_i(h, a)| \to 0$ as $\|h\| \to 0$. But $|E_i(h)| \to 0$ as $\|h\| \to 0$ for each $i$ if and only if $\|E(h)\| \to 0$ as $\|h\| \to 0$, and we are done.

The following result is immediate from our definition of differentiability.

Theorem 3. If $f$ is differentiable at $a$, then $f$ is continuous at $a$.

The chain rule for derivatives of vector fields. Before considering the chain rule for vector fields, let us take the chain rule for scalar fields, which we have already proved [Apo, Section 8.15], and reformulate it in terms of matrices.

Let $f : D \subset \mathbb{R}^n \to \mathbb{R}$ be a scalar field defined in an open ball about $a$, and let $x : I \subset \mathbb{R} \to \mathbb{R}^n$ be a vector-valued function defined in an open interval about $t_0$. Let $x(t_0) = a$ and $x(I) \subset D$. If $f$ is differentiable at $a$ and $x$ is differentiable at $t_0$, then we have shown in Theorem 8.8 of [Apo] that $f \circ x$ is differentiable at $t_0$ and its derivative is given by
$$\frac{d}{dt} f(x(t_0)) = \nabla f(x(t_0)) \cdot x'(t_0).$$
We now write this formula in scalar form as
$$\frac{d}{dt}(f \circ x) = \frac{\partial f}{\partial x_1}\frac{dx_1}{dt} + \cdots + \frac{\partial f}{\partial x_n}\frac{dx_n}{dt},$$
or in matrix form as
$$\frac{d}{dt}(f \circ x) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1} & \cdots & \dfrac{\partial f}{\partial x_n} \end{bmatrix} \begin{bmatrix} \dfrac{dx_1}{dt} \\ \vdots \\ \dfrac{dx_n}{dt} \end{bmatrix}.$$
Recalling the definition of the Jacobian matrix $Df$, the latter formula is recognized as
$$\frac{d}{dt} f(x(t)) = Df(x(t))\,Dx(t).$$
(Note that $Df$ is a row matrix, while $Dx$ is by definition a column matrix.) This is the form of the chain rule which is useful to extend to vector fields.

Let $f : \mathbb{R}^n \to \mathbb{R}^p$ and $g : \mathbb{R}^p \to \mathbb{R}^m$ be vector fields such that the composite field $h = g \circ f$ is defined in an open ball about $a \in \mathbb{R}^n$. Let $f(a) = b \in \mathbb{R}^p$. We write these fields as
$$f(x) = f(x_1, \dots, x_n), \qquad g(y) = g(y_1, \dots, y_p).$$
If $f$ and $g$ are differentiable at $a$ and $b$, respectively, then the partial derivatives of $h$ exist at $a$; applying the chain rule to each $h_i(x) = g_i(f(x))$ gives
$$\frac{\partial h_i}{\partial x_j} = \frac{\partial g_i}{\partial y_1}\frac{\partial f_1}{\partial x_j} + \cdots + \frac{\partial g_i}{\partial y_p}\frac{\partial f_p}{\partial x_j}.$$
This is recognized as the matrix multiplication
$$\begin{bmatrix} D_1 g_i & D_2 g_i & \cdots & D_p g_i \end{bmatrix} \begin{bmatrix} D_j f_1 \\ \vdots \\ D_j f_p \end{bmatrix};$$
in other words, it is the product of the $i$th row of $Dg$ and the $j$th column of $Df$. Thus the Jacobian matrix of $h$ is expected to satisfy the matrix equation
$$Dh(a) = Dg(b)\,Df(a).$$
Not exactly. If $f$ and $g$ are differentiable, then we know that the partial derivatives of the composite function $h$ exist. But the mere existence of the partial derivatives of the function $h_i$ does not imply the differentiability of $h_i$, and thus one needs a separate argument that $h$ is differentiable.

One may avoid giving a separate proof of the differentiability of $h$ by assuming a stronger hypothesis, namely, that both $f$ and $g$ are continuously differentiable. Then the formula
$$D_j h_i(x) = \sum_{l=1}^{p} D_l g_i(f(x))\,D_j f_l(x),$$
which we have proved, shows that $D_j h_i$ is continuous; that is, $h$ is continuously differentiable. Therefore $h$ is differentiable. We summarize these observations.

Theorem 4. Let $f : \mathbb{R}^n \to \mathbb{R}^p$ and $g : \mathbb{R}^p \to \mathbb{R}^m$ be vector fields defined in open balls about $a \in \mathbb{R}^n$ and $b \in \mathbb{R}^p$, respectively. Let $f(a) = b$.
If $f$ and $g$ are continuously differentiable on their respective domains, then $h(x) = g \circ f(x)$ is continuously differentiable on its domain, and
$$Dh(x) = Dg(f(x)) \cdot Df(x).$$

Most applications of the chain rule we will use in the future are in the $C^1$-class setting, and thus this theorem will suffice. But one may remove the $C^1$ condition; please read the statement and proof of [Apo, Theorem 8.11].

Example 5 (Polar coordinates). Given a function $f(x, y)$ defined on the $(x, y)$ plane, we introduce polar coordinates $x = r\cos\theta$ and $y = r\sin\theta$, and $f$ becomes a function of $r$ and $\theta$ as $\varphi(r, \theta) = f(r\cos\theta, r\sin\theta)$. Then
$$\frac{\partial \varphi}{\partial r} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial r} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial r} = \frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta$$
and
$$\frac{\partial \varphi}{\partial \theta} = -r\sin\theta\,\frac{\partial f}{\partial x} + r\cos\theta\,\frac{\partial f}{\partial y}.$$
The second derivative is computed as
$$\frac{\partial^2 \varphi}{\partial r^2} = \frac{\partial}{\partial r}\left(\frac{\partial \varphi}{\partial r}\right) = \frac{\partial}{\partial r}\left(\frac{\partial f}{\partial x}\cos\theta + \frac{\partial f}{\partial y}\sin\theta\right)$$
$$= \frac{\partial}{\partial r}\left(\frac{\partial f}{\partial x}\right)\cos\theta + \frac{\partial f}{\partial x}\frac{\partial(\cos\theta)}{\partial r} + \frac{\partial}{\partial r}\left(\frac{\partial f}{\partial y}\right)\sin\theta + \frac{\partial f}{\partial y}\frac{\partial(\sin\theta)}{\partial r}$$
$$= \left(\frac{\partial}{\partial x}\frac{\partial f}{\partial x}\cos\theta + \frac{\partial}{\partial y}\frac{\partial f}{\partial x}\sin\theta\right)\cos\theta + \left(\frac{\partial}{\partial x}\frac{\partial f}{\partial y}\cos\theta + \frac{\partial}{\partial y}\frac{\partial f}{\partial y}\sin\theta\right)\sin\theta$$
$$= \frac{\partial^2 f}{\partial x^2}\cos^2\theta + 2\,\frac{\partial^2 f}{\partial x\,\partial y}\sin\theta\cos\theta + \frac{\partial^2 f}{\partial y^2}\sin^2\theta,$$
where the terms involving $\partial(\cos\theta)/\partial r$ and $\partial(\sin\theta)/\partial r$ vanish since $\theta$ is independent of $r$. An analogous formula for $\partial^2\varphi/\partial\theta^2$ is worked out in [Apo, Example 3], and $\partial^2\varphi/\partial r\,\partial\theta$ is in [Apo, Section 8.22, Exercise 5].

Derivatives of inverses. Recall that if $f(x)$ is a differentiable real-valued function of a single variable $x$, and if $f'(x) > 0$ for $x \in [a, b]$ (or $f'(x) < 0$), then $f$ is strictly increasing (or decreasing), and it has an inverse, say $g$. Furthermore, $g$ is differentiable and its derivative satisfies
$$g'(f(x)) = \frac{1}{f'(x)}.$$
Part of this theory extends to vector fields.

Theorem 6. Let $f : \mathbb{R}^n \to \mathbb{R}^n$ be defined in an open ball about $a$ and $f(a) = b$. Suppose that $f$ has an inverse, say $g$. If $f$ is differentiable at $a$ and if $g$ is differentiable at $b$, then
$$Dg(b) = (Df(a))^{-1}.$$
In interpretation, the Jacobian matrix of the inverse function is just the inverse of the Jacobian matrix.

Proof. We observe that $g \circ f(x) = x$.
Since $f$ and $g$ are differentiable, so is the composite function $g \circ f$, and by the chain rule it follows that
$$Dg(b)\,Df(a) = I_n,$$
which implies the assertion.

The theorem shows that in order for the differentiable function $f$ to have a differentiable inverse, it is necessary that the Jacobian matrix $Df$ have rank $n$. Roughly speaking, this is also sufficient for $f$ to have an inverse. More precisely, one has the following Inverse Function Theorem, which is one of the fundamental results in mathematical analysis.
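Theorem 6 can be checked numerically on the polar-coordinate map of Example 5. The sketch below (our own illustration, not from the notes; `jacobian_fd` is an assumed finite-difference helper) takes $f(r, \theta) = (r\cos\theta, r\sin\theta)$ with inverse $g(x, y) = (\sqrt{x^2 + y^2}, \operatorname{atan2}(y, x))$ and verifies that $Dg(b) = (Df(a))^{-1}$ at a point with $r \neq 0$, where $Df(a)$ is invertible:

```python
import numpy as np

def f(p):
    """Polar-to-Cartesian map f(r, θ) = (r cos θ, r sin θ)."""
    r, th = p
    return np.array([r * np.cos(th), r * np.sin(th)])

def g(q):
    """Inverse map g(x, y) = (sqrt(x² + y²), atan2(y, x))."""
    x, y = q
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def jacobian_fd(F, p, h=1e-6):
    """Central-difference approximation of the Jacobian DF(p)."""
    p = np.asarray(p, dtype=float)
    cols = []
    for j in range(p.size):
        e = np.zeros(p.size)
        e[j] = h
        cols.append((F(p + e) - F(p - e)) / (2 * h))
    return np.column_stack(cols)

a = np.array([2.0, 0.7])  # (r, θ) with r ≠ 0, so Df(a) is invertible
b = f(a)
# Theorem 6: the Jacobian of the inverse is the inverse of the Jacobian.
print(np.allclose(jacobian_fd(g, b), np.linalg.inv(jacobian_fd(f, a)), atol=1e-5))  # True
```

Here $\det Df(a) = r = 2 \neq 0$, consistent with the rank condition discussed above; near $r = 0$ the matrix degenerates and the inverse map fails to be differentiable.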