Topics from Tensoral Calculus∗


Jay R. Walton
Fall 2013

∗ Copyright © 2011 by J. R. Walton. All rights reserved.

1 Preliminaries

These notes are concerned with topics from tensoral calculus, i.e. generalizations of calculus to functions defined between two tensor spaces. To make the discussion more concrete, the tensor spaces are defined over ordinary Euclidean space, R^N, with its usual inner product structure. Thus, the tensor spaces inherit a natural inner product, the tensor dot-product, from the underlying vector space R^N.

2 The Derivative of Tensor Valued Functions

Let

    F : D ⊂ T^r → T^s                                        (1)

be a function with domain D a subset of T^r taking values in the tensor space T^s, where D is assumed to be an open subset of T^r. (A subset D ⊂ T^r is said to be open provided for every element A ∈ D there exists an open ball centered at A that is wholly contained in D.)

Continuity. The function F is said to be continuous at A ∈ D provided for every ε > 0 there exists a δ > 0 such that F(B) ∈ B(F(A), ε) whenever B ∈ B(A, δ), i.e. F maps the ball of radius δ centered at A into the ball of radius ε centered at F(A). The function F is said to be continuous on all of D provided it is continuous at each A ∈ D.

There are two useful alternative characterizations of continuity. The first is that F is continuous on D provided it maps convergent sequences onto convergent sequences: if A_n ∈ D is a sequence converging to A ∈ D (lim_{n→∞} A_n = A), then lim_{n→∞} F(A_n) = F(A). The second is that F is continuous on D provided the preimage F^{-1}(U) of every open subset U of T^s is an open subset of D.

Derivative. The function F is said to be differentiable at A ∈ D provided there exists a tensor L ∈ T^{r+s} such that

    F(A + H) = F(A) + L[H] + o(H)   as |H| → 0,               (2)

where |H| denotes the norm of the tensor H ∈ T^r. If such a tensor L satisfying (2) exists, it is called the derivative of F at A and is denoted DF(A). Thus, (2) can be rewritten

    F(A + H) = F(A) + DF(A)[H] + o(H).                        (3)

Recall that o(H) is the Landau "little oh" symbol, used to denote a function depending upon H that tends to zero faster than |H|, i.e.

    lim_{|H|→0} o(H)/|H| = 0.

If the derivative DF(A) exists at each point of D, then it defines a function

    DF(·) : D ⊂ T^r → T^{r+s}.

Moreover, if the function DF(·) is differentiable at A ∈ D, then its derivative is a tensor in T^{2r+s}, denoted D²F(A) and called the second derivative of F at A. Continuing in this manner, derivatives of F of all orders can be defined.

Example. Let

    φ(·) : T^1 ≅ R^N → T^0 ≅ R.                               (4)

Thus, φ(·) is a real-valued function of N real variables and its graph is a surface in R^{N+1}. In the definition of the derivative (2), it is more customary in this context to take H = h u, where h is a scalar and u is a unit vector in R^N. The defining equation (2) then becomes

    φ(a + h u) = φ(a) + Dφ(a)[h u] + o(h u).                  (5)

From the linearity of the tensor Dφ(a)[·] in its argument, one concludes that

    Dφ(a)[u] = lim_{h→0} ( φ(a + h u) − φ(a) ) / h,           (6)

which is the familiar directional derivative of φ(·) at the point a in the direction u. Thus, being differentiable at a point implies the existence of directional derivatives, and hence of partial derivatives, in all directions. However, the converse is not true: there exist functions with directional derivatives existing at a point in all possible directions which nevertheless fail to be differentiable at that point.
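To make the difference-quotient definition (6) concrete, the short sketch below (an addition, not part of the original notes) evaluates the quotient for the smooth function φ(x) = x · x on R³, whose directional derivative is Dφ(a)[u] = 2 a · u; the particular function, point, and direction are illustrative assumptions only.

```python
import numpy as np

# phi(x) = x . x  (squared Euclidean norm); an illustrative smooth scalar field
def phi(x):
    return float(np.dot(x, x))

# Exact directional derivative D phi(a)[u] = 2 a . u  (linear in u, as (6) requires)
def dphi(a, u):
    return 2.0 * float(np.dot(a, u))

a = np.array([1.0, -2.0, 0.5])
u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)    # a unit direction

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    quotient = (phi(a + h * u) - phi(a)) / h    # difference quotient in (6)
    print(f"h = {h:.0e}:  quotient = {quotient:.8f},  exact = {dphi(a, u):.8f}")
```

As h shrinks, the printed quotient approaches the exact value 2 a · u, as the definition predicts.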
For an example of such a function, consider φ(·) : R² → R given by

    φ(x, y) = (x³ − y³)/(x² + y²)   when (x, y) ≠ (0, 0),
    φ(x, y) = 0                     when (x, y) = (0, 0).     (7)

One then shows easily that if u = (cos θ, sin θ)ᵀ, the directional derivative of φ(·) at the origin (0, 0) in the direction u equals cos³θ − sin³θ. However, φ(·) is not differentiable at the origin in the sense of (2). (Why? Observe that cos³θ − sin³θ is not linear in u, whereas (2) would force Dφ(0, 0)[·] to be linear.)

Consider further the function φ(·) in (4). If φ(·) is differentiable in the sense of (2), then its derivative Dφ(a)[·] ∈ T^1 is a linear transformation from R^N to R, and as such is representable by the dot-product with a vector in R^N. Specifically, there exists a unique vector, denoted ∇φ(a) ∈ R^N, such that

    Dφ(a)[u] = ∇φ(a) · u   for all u ∈ R^N.                   (8)

The vector ∇φ(a) is called the gradient of φ(·) at a.

The component forms for the derivative Dφ and the gradient ∇φ are easily constructed. In particular, let B = {e₁, …, e_N} be the natural orthonormal basis for R^N. Then the 1×N matrix representation of Dφ and the N-tuple (column vector) representation of the gradient ∇φ are given by

    [Dφ(a)]_B = [∂_{x₁}φ(a), …, ∂_{x_N}φ(a)]   and   [∇φ(a)]_B = (∂_{x₁}φ(a), …, ∂_{x_N}φ(a))ᵀ,   (9)

where ∂_{x_i}φ(a) denotes the partial derivative of φ with respect to x_i at the point a,

    ∂_{x_i}φ(a) = lim_{h→0} ( φ(a + h e_i) − φ(a) ) / h.

Example. Another important example is provided by functions F(·) : T^0 → T^s, i.e. tensor valued functions of order s of a single scalar variable. Since in continuum mechanics the scalar independent variable is usually time, that variable is given the special symbol t, and the derivative of such a function is represented by

    DF(t)[τ] = Ḟ(t) τ ∈ T^s,                                  (10)

where

    Ḟ(t) = lim_{h→0} ( F(t + h) − F(t) ) / h.

In component form, if the tensor valued function F(·) has the component representation [F_{i₁⋯i_s}] with respect to the natural basis for T^s, then the component representation of Ḟ(t) is

    [Ḟ(t)] = [Ḟ_{i₁⋯i_s}].                                    (11)

Example. A vector field is a function a(·) : D ⊂ T^1 ≅ R^N → T^1. Its derivative defines a second order tensor Da(x)[·] ∈ T^2. Its component form, with respect to the natural basis on R^N for example, is

    [Da(x)] = [∂_{x_j} a_i(x)],   i, j = 1, …, N,             (12)

where [a_i(x)], i = 1, …, N, gives the component representation of a(x). The right hand side of (12) is the familiar Jacobian matrix.

Product Rule. Various types of "products" of tensor functions occur naturally in tensor calculus. Rather than proving a separate product rule formula for every product that arises, it is much more expedient and much cleaner to prove one product rule formula for a general, abstract notion of product. To that end, the appropriate general notion of product is provided by a bi-linear form. More specifically, suppose that F(·) : D ⊂ T^r → T^p and G(·) : D ⊂ T^r → T^q are two differentiable functions with the same domain D in T^r but different range spaces. Let π̂(·, ·) : T^p × T^q → T^s denote a bi-linear function (i.e. π̂(·, ·) is linear in each of its arguments separately) with values in T^s. One then defines the product function E(·) : D ⊂ T^r → T^s by

    E(A) := π̂(F(A), G(A))   for A ∈ D.

Since F and G are assumed to be differentiable at A ∈ D, it is not difficult to show that E is also differentiable at A, with

    DE(A)[H] = π̂(DF(A)[H], G(A)) + π̂(F(A), DG(A)[H])   for all H ∈ T^r.   (13)

Notice that (13) has the familiar form (fg)′ = f′g + fg′ from single variable calculus.
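As a numerical illustration of the abstract product rule (13), the sketch below (an addition to the notes) takes the bilinear product to be the tensor (outer) product π̂(a, b) = a ⊗ b of two vector-valued functions of t, which is the special case worked out in the example that follows, and compares a centered finite-difference derivative of E(t) = A(t) ⊗ B(t) with Ȧ(t) ⊗ B(t) + A(t) ⊗ Ḃ(t). The particular functions A and B are assumptions chosen only for the demonstration.

```python
import numpy as np

# Illustrative vector-valued functions of t and their exact derivatives.
def A(t):  return np.array([np.cos(t), np.sin(t), t])
def dA(t): return np.array([-np.sin(t), np.cos(t), 1.0])
def B(t):  return np.array([t**2, 1.0, np.exp(-t)])
def dB(t): return np.array([2 * t, 0.0, -np.exp(-t)])

def E(t):
    return np.outer(A(t), B(t))             # E(t) = A(t) ⊗ B(t)

t, h = 0.7, 1e-6
finite_diff  = (E(t + h) - E(t - h)) / (2 * h)                   # numerical dE/dt
product_rule = np.outer(dA(t), B(t)) + np.outer(A(t), dB(t))     # right side of (13)

# Maximum entrywise discrepancy; a roundoff-level number confirms (13).
print(np.max(np.abs(finite_diff - product_rule)))
```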
Example. Let A(·) : T^0 ≅ R → T^r and B(·) : T^0 → T^s be differentiable tensor valued functions of the single scalar variable t. Then their tensor product E(t) := A(t) ⊗ B(t) is differentiable with

    Ė(t) = Ȧ(t) ⊗ B(t) + A(t) ⊗ Ḃ(t).

Chain Rule. The familiar chain rule from single variable calculus has a straightforward generalization to the tensor setting. Specifically, suppose F(·) : D ⊂ T^r → T^q is differentiable at A ∈ D and G(·) : 𝒢 ⊂ T^q → T^s (with 𝒢 an open set on which G(·) is defined) is differentiable at F(A) ∈ 𝒢 ∩ F(D). Then the composite function E(·) := G ∘ F(·) is also differentiable at A ∈ D, with

    DE(A)[H] = DG(F(A))[DF(A)[H]]   for all H ∈ T^r.          (14)

The right hand side of (14) is the composition of the tensor DG(F(A))[·] ∈ T^{q+s} with the tensor DF(A)[·] ∈ T^{r+q}, producing the tensor DG(F(A)) ∘ DF(A)[·] ∈ T^{r+s}. This generalizes the familiar chain rule formula (g(f(x)))′ = g′(f(x)) f′(x) from single variable calculus.

Example. An important application of the chain rule is to composite functions of the form E(t) = G ∘ F(t) = G(F(t)), i.e. functions for which the inner function depends on the single scalar variable t. The chain rule then yields the result

    Ė(t) = DG(F(t))[Ḟ(t)].

For example, let A(t) be a differentiable function taking values in T^2, i.e.
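As a hedged numerical check of the scalar-variable chain rule Ė(t) = DG(F(t))[Ḟ(t)], the sketch below takes a T²-valued (matrix-valued) inner function F(t) and the scalar-valued outer function G(F) = tr(FᵀF), for which DG(F)[H] = 2 tr(FᵀH); both choices, and the sample point, are assumptions made only for this illustration.

```python
import numpy as np

# Illustrative T^2-valued (matrix-valued) function of t and its exact derivative.
def F(t):
    return np.array([[np.cos(t), t],
                     [t**2,      np.sin(t)]])

def dF(t):
    return np.array([[-np.sin(t), 1.0],
                     [2 * t,      np.cos(t)]])

# Scalar-valued outer function G(F) = tr(F^T F), with DG(F)[H] = 2 tr(F^T H).
def G(M):
    return np.trace(M.T @ M)

t, h = 0.3, 1e-6
lhs = (G(F(t + h)) - G(F(t - h))) / (2 * h)    # numerical d/dt G(F(t))
rhs = 2.0 * np.trace(F(t).T @ dF(t))           # chain rule: DG(F(t))[F'(t)]

print(lhs, rhs)   # the two values agree to within discretization error
```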