Review of Differentiation and Integration Rules from Calculus I and II for Ordinary Differential Equations, 3301 General Notation


General Notation:
$a, b, m, n, C$ are non-specific constants, independent of variables.
$e, \pi$ are special constants: $e = 2.71828\dots$, $\pi = 3.14159\dots$
$f, g, u, v, F$ are functions.
$f^n(x)$ usually means $[f(x)]^n$, but $f^{-1}(x)$ usually means the inverse function of $f$.
$a(x + y)$ means $a$ times $x + y$, but $f(x + y)$ means $f$ evaluated at $x + y$.
$fg$ means function $f$ times function $g$, but $f(g)$ means the output of $g$ is the input of $f$.
$t, x, y$ are variables; typically $t$ is used for time and $x$ for position, $y$ is position or output.
$'$ and $''$ are Newton notations for the first and second derivatives. The Leibniz notations for the first and second derivatives are $\frac{d}{dx}$ and $\frac{d^2}{dx^2}$, or $\frac{d}{dt}$ and $\frac{d^2}{dt^2}$.
The differential of $x$ is shown by $dx$, $\Delta x$, or $h$.
$f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$; the derivative of $f$ gives the slope of the tangent line (rise over run) for the function $y = f(x)$ at $x$.

General differentiation rules:
1a- Derivative of a variable with respect to itself is 1: $\frac{dt}{dt} = 1$ or $\frac{dx}{dx} = 1$.
1b- Derivative of a constant is zero.
2- Linearity rule: $(af + bg)' = af' + bg'$
3- Product rule: $(fg)' = f'g + fg'$
4- Quotient rule: $\left(\frac{f}{g}\right)' = \frac{f'g - fg'}{g^2}$
5- Power rule: $(f^n)' = nf^{n-1}f'$
6- Chain rule: $(f(g(u)))' = f'(g(u))\,g'(u)\,u'$
7- Logarithmic rule: $(f)' = [e^{\ln f}]'$
8- PPQ rule: $(f^n g^m)' = f^{n-1}g^{m-1}(nf'g + mfg')$; combines the power, product and quotient rules.
9- PC rule: $(f^n(g))' = nf^{n-1}(g)\,f'(g)\,g'$; combines the power and chain rules.
10- Golden rule: the last algebra action specifies the first differentiation rule to be used.

Function-specific differentiation rules:
$(u^n)' = nu^{n-1}u'$
$(u^v)' = u^v v' \ln u + v u^{v-1} u'$
$(e^u)' = e^u u'$
$(a^u)' = a^u u' \ln a$
$(\ln u)' = \frac{u'}{u}$
$(\log_a u)' = \frac{u'}{u \ln a}$
$(\sin u)' = \cos(u)\,u'$
$(\cos u)' = -\sin(u)\,u'$
$(\tan u)' = \sec^2(u)\,u'$
$(\cot u)' = -\csc^2(u)\,u'$
$(\sec u)' = \sec(u)\tan(u)\,u'$
$(\csc u)' = -\csc(u)\cot(u)\,u'$
$(\sin^{-1} u)' = \frac{u'}{\sqrt{1 - u^2}}$
$(\cos^{-1} u)' = \frac{-u'}{\sqrt{1 - u^2}}$
$(\tan^{-1} u)' = \frac{u'}{1 + u^2}$
$(\cot^{-1} u)' = \frac{-u'}{1 + u^2}$
$(\sec^{-1} u)' = \frac{u'}{u\sqrt{u^2 - 1}}$
$(\csc^{-1} u)' = \frac{-u'}{u\sqrt{u^2 - 1}}$

General integration definitions and methods:
1- Indefinite integral: $\int f(x)\,dx = F(x) + C$ means $F'(x) = f(x)$; $F$ is an antiderivative of $f$.
2- Definite integral: $\int_a^b f(x)\,dx = F(b) - F(a)$ is the area under $y = f(x)$ from $x = a$ to $x = b$.
3- Linearity: $\int (af + bg)\,dx = a\int f\,dx + b\int g\,dx$
4a- Integration by parts: $\int fg'\,dx = fg - \int f'g\,dx$
4b- Integration by parts: $\int u\,dv = uv - \int v\,du$
5a- Indefinite integration by substitution: $\int f(g(x))g'(x)\,dx = \int f(u)\,du$ when $u = g(x)$
5b- Definite integration by substitution: $\int_a^b f(g(x))g'(x)\,dx = \int_{g(a)}^{g(b)} f(u)\,du$ when $u = g(x)$
6- Integration by partial fraction decomposition.
7- Integration by trigonometric substitution, reduction, circulation, etc.
8- Study Chapter 7 of the calculus text (Stewart's) for more detail.

Some basic integration formulas:
$\int u^n\,du = \frac{u^{n+1}}{n+1} + C$, $n \neq -1$
$\int \frac{du}{u} = \ln|u| + C$
$\int e^u\,du = e^u + C$
$\int a^u\,du = \frac{a^u}{\ln a} + C$
$\int \cos(u)\,du = \sin(u) + C$
$\int \sin(u)\,du = -\cos(u) + C$
$\int \sec^2(u)\,du = \tan(u) + C$
$\int \csc^2(u)\,du = -\cot(u) + C$
$\int \sec(u)\tan(u)\,du = \sec(u) + C$
$\int \csc(u)\cot(u)\,du = -\csc(u) + C$
$\int \tan(u)\,du = \ln|\sec(u)| + C$
$\int \cot(u)\,du = -\ln|\csc(u)| + C$
$\int \frac{du}{u^2 + a^2} = \frac{1}{a}\tan^{-1}\left(\frac{u}{a}\right) + C$
$\int \frac{du}{\sqrt{a^2 - u^2}} = \sin^{-1}\left(\frac{u}{a}\right) + C$
$\int \frac{du}{u^2 - a^2} = \frac{1}{2a}\ln\left|\frac{u - a}{u + a}\right| + C$
$\int \frac{du}{\sqrt{u^2 \pm a^2}} = \ln\left|u + \sqrt{u^2 \pm a^2}\right| + C$
$\int \sec(u)\,du = \ln|\sec(u) + \tan(u)| + C$
$\int \csc(u)\,du = \ln|\csc(u) - \cot(u)| + C$
$\int \sec^3(u)\,du = \frac{1}{2}\left(\sec(u)\tan(u) + \ln|\sec(u) + \tan(u)|\right) + C$
$\int \csc^3(u)\,du = \frac{1}{2}\left(-\csc(u)\cot(u) + \ln|\csc(u) - \cot(u)|\right) + C$
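The rules above can be spot-checked symbolically. The following is a minimal sketch, assuming SymPy is available and using arbitrarily chosen example functions ($f = \sin x$, $g = e^x$) and exponents ($n = 3$, $m = 2$); it verifies the PPQ rule and one of the basic integration formulas by differentiation.

```python
import sympy as sp

x = sp.Symbol('x')
f = sp.sin(x)   # hypothetical example functions, chosen only for illustration
g = sp.exp(x)
n, m = 3, 2

# PPQ rule: (f^n g^m)' = f^(n-1) g^(m-1) (n f' g + m f g')
lhs = sp.diff(f**n * g**m, x)
rhs = f**(n - 1) * g**(m - 1) * (n * sp.diff(f, x) * g + m * f * sp.diff(g, x))
print(sp.simplify(lhs - rhs))  # 0

# Basic integration formula: d/du [ (1/a) arctan(u/a) ] = 1/(u^2 + a^2)
u, a = sp.symbols('u a', positive=True)
antiderivative = sp.atan(u / a) / a
print(sp.simplify(sp.diff(antiderivative, u) - 1 / (u**2 + a**2)))  # 0
```

Any other table entry can be checked the same way: differentiate the claimed antiderivative and compare it with the integrand.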
Recommended publications
  • Notes on Calculus II Integral Calculus Miguel A. Lerma
    Notes on Calculus II: Integral Calculus. Miguel A. Lerma, November 22, 2002. Contents: Introduction; Chapter 1. Integrals (1.1 Areas and Distances. The Definite Integral; 1.2 The Evaluation Theorem; 1.3 The Fundamental Theorem of Calculus; 1.4 The Substitution Rule; 1.5 Integration by Parts; 1.6 Trigonometric Integrals and Trigonometric Substitutions; 1.7 Partial Fractions; 1.8 Integration using Tables and CAS; 1.9 Numerical Integration; 1.10 Improper Integrals); Chapter 2. Applications of Integration (2.1 More about Areas; 2.2 Volumes; 2.3 Arc Length, Parametric Curves; 2.4 Average Value of a Function (Mean Value Theorem); 2.5 Applications to Physics and Engineering; 2.6 Probability); Chapter 3. Differential Equations (3.1 Differential Equations and Separable Equations; 3.2 Directional Fields and Euler's Method; 3.3 Exponential Growth and Decay); Chapter 4. Infinite Sequences and Series (4.1 Sequences; 4.2 Series; 4.3 The Integral and Comparison Tests; 4.4 Other Convergence Tests; 4.5 Power Series; 4.6 Representation of Functions as Power Series; 4.7 Taylor and Maclaurin Series; 4.8 Applications of Taylor Polynomials); Appendix A. Hyperbolic Functions; Appendix B. Various Formulas (Summation Formulas); Appendix C. Table of Integrals. Introduction: These notes are intended to be a summary of the main ideas in course MATH 214-2: Integral Calculus.
  • Lecture 9: Partial Derivatives
    Math S21a: Multivariable calculus. Oliver Knill, Summer 2016. Lecture 9: Partial derivatives. If $f(x,y)$ is a function of two variables, then $\frac{\partial}{\partial x} f(x,y)$ is defined as the derivative of the function $g(x) = f(x,y)$ with respect to $x$, where $y$ is considered a constant. It is called the partial derivative of $f$ with respect to $x$. The partial derivative with respect to $y$ is defined in the same way. We use the shorthand notation $f_x(x,y) = \frac{\partial}{\partial x} f(x,y)$. For iterated derivatives, the notation is similar: for example, $f_{xy} = \frac{\partial}{\partial x}\frac{\partial}{\partial y} f$. The meaning of $f_x(x_0,y_0)$ is the slope of the graph sliced at $(x_0,y_0)$ in the $x$ direction. The second derivative $f_{xx}$ is a measure of concavity in that direction. The meaning of $f_{xy}$ is the rate of change of the slope if you change the slicing. The notation for partial derivatives $\partial_x f, \partial_y f$ was introduced by Carl Gustav Jacobi. Before that, Josef Lagrange had used the term "partial differences". Partial derivatives $f_x$ and $f_y$ measure the rate of change of the function in the $x$ or $y$ directions. For functions of more variables, the partial derivatives are defined in a similar way. For $f(x,y) = x^4 - 6x^2y^2 + y^4$, we have $f_x(x,y) = 4x^3 - 12xy^2$, $f_{xx} = 12x^2 - 12y^2$, $f_y(x,y) = -12x^2y + 4y^3$, $f_{yy} = -12x^2 + 12y^2$, and see that $f_{xx} + f_{yy} = 0$. A function which satisfies this equation is also called harmonic. The equation $f_{xx} + f_{yy} = 0$ is an example of a partial differential equation: it is an equation for an unknown function $f(x,y)$ which involves partial derivatives with respect to more than one variable.
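A short symbolic check of the harmonic-function example above, assuming SymPy is available; the function and the expected partial derivatives come from the excerpt itself.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**4 - 6*x**2*y**2 + y**4

fx = sp.diff(f, x)       # 4*x**3 - 12*x*y**2
fy = sp.diff(f, y)       # -12*x**2*y + 4*y**3
fxx = sp.diff(f, x, 2)   # 12*x**2 - 12*y**2
fyy = sp.diff(f, y, 2)   # -12*x**2 + 12*y**2

print(fx, fy)
print(sp.simplify(fxx + fyy))  # 0, so f is harmonic
```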
  • Differentiation Rules (Differential Calculus)
    Differentiation Rules (Differential Calculus). 1. Notation. The derivative of a function $f$ with respect to one independent variable (usually $x$ or $t$) is a function that will be denoted by $Df$. Note that $f(x)$ and $(Df)(x)$ are the values of these functions at $x$. 2. Alternate Notations for $(Df)(x)$. (1) For functions $f$ in one variable, $x$, alternate notations are: $D_x f(x)$, $\frac{d}{dx}f(x)$, $\frac{df(x)}{dx}$, $\frac{df}{dx}(x)$, $f'(x)$. The "(x)" part might be dropped, although technically this changes the meaning: $f$ is the name of a function, whereas $f(x)$ is the value of it at $x$. If $y = f(x)$, then $D_x y$, $\frac{dy}{dx}$, $y'$, etc. can be used. If the variable $t$ represents time, then $D_t f$ can be written $\dot{f}$. The differential, "$df$", and the change in $f$, "$\Delta f$", are related to the derivative but have special meanings and are never used to indicate ordinary differentiation. Historical note: Newton used $\dot{y}$, while Leibniz used $\frac{dy}{dx}$. About a century later Lagrange introduced $y'$ and Arbogast introduced the operator notation $D$. 3. Domains. The domain of $Df$ is always a subset of the domain of $f$. The conventional domain of $f$, if $f(x)$ is given by an algebraic expression, is all values of $x$ for which the expression is defined and results in a real number. If $f$ has the conventional domain, then $Df$ usually, but not always, has the conventional domain. Exceptions are noted below.
  • A Quotient Rule Integration by Parts Formula Jennifer Switkes ([email protected]), California State Polytechnic University, Pomona, CA 91768
    A Quotient Rule Integration by Parts Formula. Jennifer Switkes ([email protected]), California State Polytechnic University, Pomona, CA 91768. In a recent calculus course, I introduced the technique of Integration by Parts as an integration rule corresponding to the Product Rule for differentiation. I showed my students the standard derivation of the Integration by Parts formula as presented in [1]: By the Product Rule, if $f(x)$ and $g(x)$ are differentiable functions, then $\frac{d}{dx}[f(x)g(x)] = f(x)g'(x) + g(x)f'(x)$. Integrating on both sides of this equation, $\int [f(x)g'(x) + g(x)f'(x)]\,dx = f(x)g(x)$, which may be rearranged to obtain $\int f(x)g'(x)\,dx = f(x)g(x) - \int g(x)f'(x)\,dx$. Letting $U = f(x)$ and $V = g(x)$ and observing that $dU = f'(x)\,dx$ and $dV = g'(x)\,dx$, we obtain the familiar Integration by Parts formula $\int U\,dV = UV - \int V\,dU$. (1) My student Victor asked if we could do a similar thing with the Quotient Rule. While the other students thought this was a crazy idea, I was intrigued. Below, I derive a Quotient Rule Integration by Parts formula, apply the resulting integration formula to an example, and discuss reasons why this formula does not appear in calculus texts. By the Quotient Rule, if $f(x)$ and $g(x)$ are differentiable functions, then $\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{g(x)f'(x) - f(x)g'(x)}{[g(x)]^2}$. Integrating both sides of this equation, we get $\frac{f(x)}{g(x)} = \int \frac{g(x)f'(x) - f(x)g'(x)}{[g(x)]^2}\,dx$.
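The identity the excerpt ends with can be sanity-checked symbolically. This is a minimal sketch, assuming SymPy and hypothetical example functions $f = \sin x$ and $g = 1 + x^2$ (not the example from the article); it verifies the quotient-rule identity and the rearranged formula $\int f'/g\,dx = f/g + \int fg'/g^2\,dx$ at the level of derivatives.

```python
import sympy as sp

x = sp.Symbol('x')
f = sp.sin(x)   # hypothetical example functions, not from the article
g = 1 + x**2

# Quotient rule: d/dx (f/g) = (g f' - f g') / g^2
gap = sp.diff(f / g, x) - (g * sp.diff(f, x) - f * sp.diff(g, x)) / g**2
print(sp.simplify(gap))  # 0

# Both sides of the rearranged formula  ∫ f'/g dx = f/g + ∫ f g'/g^2 dx
# must therefore have the same derivative:
lhs = sp.diff(f, x) / g                              # integrand on the left
rhs = sp.diff(f / g, x) + f * sp.diff(g, x) / g**2   # derivative of the right side
print(sp.simplify(lhs - rhs))  # 0
```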
  • Laplace Transform
    Chapter 7: Laplace Transform. The Laplace transform can be used to solve differential equations. Besides being a different and efficient alternative to variation of parameters and undetermined coefficients, the Laplace method is particularly advantageous for input terms that are piecewise-defined, periodic or impulsive. The direct Laplace transform or the Laplace integral of a function $f(t)$ defined for $0 \le t < \infty$ is the ordinary calculus integration problem $\int_0^\infty f(t)e^{-st}\,dt$, succinctly denoted $\mathcal{L}(f(t))$ in science and engineering literature. The $\mathcal{L}$-notation recognizes that integration always proceeds over $t = 0$ to $t = \infty$ and that the integral involves an integrator $e^{-st}\,dt$ instead of the usual $dt$. These minor differences distinguish Laplace integrals from the ordinary integrals found on the inside covers of calculus texts. 7.1 Introduction to the Laplace Method. The foundation of Laplace theory is Lerch's cancellation law: $\int_0^\infty y(t)e^{-st}\,dt = \int_0^\infty f(t)e^{-st}\,dt$ implies $y(t) = f(t)$, (1) or $\mathcal{L}(y(t)) = \mathcal{L}(f(t))$ implies $y(t) = f(t)$. In differential equation applications, $y(t)$ is the sought-after unknown while $f(t)$ is an explicit expression taken from integral tables. Below, we illustrate Laplace's method by solving the initial value problem $y' = -1$, $y(0) = 0$. The method obtains a relation $\mathcal{L}(y(t)) = \mathcal{L}(-t)$, whence Lerch's cancellation law implies the solution is $y(t) = -t$. The Laplace method is advertised as a table lookup method, in which the solution $y(t)$ to a differential equation is found by looking up the answer in a special integral table.
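A small sketch of the table-lookup idea for the initial value problem $y' = -1$, $y(0) = 0$, assuming SymPy's `laplace_transform` and `inverse_laplace_transform`; it mirrors, but is not taken from, the chapter's worked example.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.Symbol('Y')  # stands for L(y(t))

# Transforming y' = -1 with y(0) = 0 gives  s*Y - y(0) = L(-1).
rhs = sp.laplace_transform(-sp.Integer(1), t, s, noconds=True)  # -1/s
Ysol = sp.solve(sp.Eq(s * Y, rhs), Y)[0]                        # -1/s**2

# Lerch's cancellation law: Y matches L(-t), so y(t) = -t.
print(Ysol)                                          # -1/s**2
print(sp.laplace_transform(-t, t, s, noconds=True))  # -1/s**2
print(sp.inverse_laplace_transform(Ysol, s, t))      # -t
```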
  • Calculus Terminology
    AP Calculus BC Calculus Terminology Absolute Convergence Asymptote Continued Sum Absolute Maximum Average Rate of Change Continuous Function Absolute Minimum Average Value of a Function Continuously Differentiable Function Absolutely Convergent Axis of Rotation Converge Acceleration Boundary Value Problem Converge Absolutely Alternating Series Bounded Function Converge Conditionally Alternating Series Remainder Bounded Sequence Convergence Tests Alternating Series Test Bounds of Integration Convergent Sequence Analytic Methods Calculus Convergent Series Annulus Cartesian Form Critical Number Antiderivative of a Function Cavalieri’s Principle Critical Point Approximation by Differentials Center of Mass Formula Critical Value Arc Length of a Curve Centroid Curly d Area below a Curve Chain Rule Curve Area between Curves Comparison Test Curve Sketching Area of an Ellipse Concave Cusp Area of a Parabolic Segment Concave Down Cylindrical Shell Method Area under a Curve Concave Up Decreasing Function Area Using Parametric Equations Conditional Convergence Definite Integral Area Using Polar Coordinates Constant Term Definite Integral Rules Degenerate Divergent Series Function Operations Del Operator e Fundamental Theorem of Calculus Deleted Neighborhood Ellipsoid GLB Derivative End Behavior Global Maximum Derivative of a Power Series Essential Discontinuity Global Minimum Derivative Rules Explicit Differentiation Golden Spiral Difference Quotient Explicit Function Graphic Methods Differentiable Exponential Decay Greatest Lower Bound Differential
  • CHAPTER 3: Derivatives
    CHAPTER 3: Derivatives. 3.1: Derivatives, Tangent Lines, and Rates of Change. 3.2: Derivative Functions and Differentiability. 3.3: Techniques of Differentiation. 3.4: Derivatives of Trigonometric Functions. 3.5: Differentials and Linearization of Functions. 3.6: Chain Rule. 3.7: Implicit Differentiation. 3.8: Related Rates. • Derivatives represent slopes of tangent lines and rates of change (such as velocity). • In this chapter, we will define derivatives and derivative functions using limits. • We will develop shortcut techniques for finding derivatives. • Tangent lines correspond to local linear approximations of functions. • Implicit differentiation is a technique used in applied related rates problems. SECTION 3.1: DERIVATIVES, TANGENT LINES, AND RATES OF CHANGE. LEARNING OBJECTIVES: • Relate difference quotients to slopes of secant lines and average rates of change. • Know, understand, and apply the Limit Definition of the Derivative at a Point. • Relate derivatives to slopes of tangent lines and instantaneous rates of change. • Relate opposite reciprocals of derivatives to slopes of normal lines. PART A: SECANT LINES. • For now, assume that $f$ is a polynomial function of $x$. (We will relax this assumption in Part B.) Assume that $a$ is a constant. • Temporarily fix an arbitrary real value of $x$. (By "arbitrary," we mean that any real value will do). Later, instead of thinking of $x$ as a fixed (or single) value, we will think of it as a "moving" or "varying" variable that can take on different values. The secant line to the graph of $f$ on the interval $[a, x]$, where $a < x$, is the line that passes through the points $(a, f(a))$ and $(x, f(x))$.
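A tiny numerical illustration of secant slopes approaching a tangent slope, using the hypothetical choices $f(x) = x^2$ and $a = 1$ (not an example from the chapter):

```python
def f(x):
    return x**2  # hypothetical function, chosen only for illustration

a = 1.0  # fixed endpoint of the interval [a, x]
for h in (0.1, 0.01, 0.001, 0.0001):
    x = a + h
    secant_slope = (f(x) - f(a)) / (x - a)  # difference quotient over [a, x]
    print(h, secant_slope)  # approaches the tangent slope f'(1) = 2
```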
  • MATH 162: Calculus II Framework for Thurs., Mar. 29 – Fri., Mar. 30
    MATH 162: Calculus II. Framework for Thurs., Mar. 29 – Fri., Mar. 30: The Gradient Vector. Today's Goal: To learn about the gradient vector $\vec{\nabla} f$ and its uses, where $f$ is a function of two or three variables. The Gradient Vector. Suppose $f$ is a differentiable function of two variables $x$ and $y$ with domain $R$, an open region of the $xy$-plane. Suppose also that $\mathbf{r}(t) = x(t)\mathbf{i} + y(t)\mathbf{j}$, $t \in I$ (where $I$ is some interval), is a differentiable vector function (parametrized curve) with $(x(t), y(t))$ being a point in $R$ for each $t \in I$. Then by the chain rule, $\frac{d}{dt} f(x(t), y(t)) = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt} = [f_x\mathbf{i} + f_y\mathbf{j}] \cdot [x'(t)\mathbf{i} + y'(t)\mathbf{j}] = [f_x\mathbf{i} + f_y\mathbf{j}] \cdot \frac{d\mathbf{r}}{dt}$. (1) Definition: For a differentiable function $f(x_1, \dots, x_n)$ of $n$ variables, we define the gradient vector of $f$ to be $\vec{\nabla} f := \left\langle \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \dots, \frac{\partial f}{\partial x_n} \right\rangle$. Remarks: • Using this definition, the total derivative $df/dt$ calculated in (1) above may be written as $\frac{df}{dt} = \vec{\nabla} f \cdot \mathbf{r}'$. In particular, if $\mathbf{r}(t) = x(t)\mathbf{i} + y(t)\mathbf{j} + z(t)\mathbf{k}$, $t \in (a, b)$, is a differentiable vector function, and if $f$ is a function of 3 variables which is differentiable at the point $(x_0, y_0, z_0)$, where $x_0 = x(t_0)$, $y_0 = y(t_0)$, and $z_0 = z(t_0)$ for some $t_0 \in (a, b)$, then $\left.\frac{df}{dt}\right|_{t = t_0} = \vec{\nabla} f(x_0, y_0, z_0) \cdot \mathbf{r}'(t_0)$. • If $f$ is a function of 2 variables, then $\vec{\nabla} f$ has 2 components.
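Equation (1) can be checked symbolically for a concrete curve. A minimal sketch, assuming SymPy and the hypothetical choices $f(x, y) = x^2 y + \sin y$ and $\mathbf{r}(t) = (\cos t, t^2)$:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
f = x**2 * y + sp.sin(y)    # hypothetical f(x, y)
xt, yt = sp.cos(t), t**2    # hypothetical curve r(t) = (x(t), y(t))

grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
r_prime = sp.Matrix([sp.diff(xt, t), sp.diff(yt, t)])

lhs = sp.diff(f.subs({x: xt, y: yt}), t)           # d/dt f(x(t), y(t))
rhs = grad_f.subs({x: xt, y: yt}).dot(r_prime)     # grad(f) . r'(t)
print(sp.simplify(lhs - rhs))  # 0
```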
  • Multivariable Analysis
    Multivariable analysis. Roland van der Veen, Groningen, 20-1-2020. Contents: 1 Introduction (1.1 Basic notions and notation); 2 How to solve equations (2.1 Linear algebra; 2.2 Derivative; 2.3 Elementary Riemann integration; 2.4 Mean value theorem and Banach contraction; 2.5 Inverse and Implicit function theorems; 2.6 Picard's theorem on existence of solutions to ODE); 3 Multivariable fundamental theorem of calculus (3.1 Exterior algebra; 3.2 Differential forms; 3.3 Integration; 3.4 More on cubes and their boundary; 3.5 Exterior derivative; 3.6 The fundamental theorem of calculus (Stokes Theorem); 3.7 Fundamental theorem of calculus: Poincaré lemma). Chapter 1: Introduction. The goal of these notes is to explore the notions of differentiation and integration in arbitrarily many variables. The material is focused on answering two basic questions: 1. How to solve an equation? How many solutions can one expect? 2. Is there a higher-dimensional analogue for the fundamental theorem of calculus? Can one find a primitive? The equations we will address are systems of non-linear equations in finitely many variables and also ordinary differential equations. The approach will be mostly theoretical, sketching a framework in which one can predict how many solutions there will be without necessarily solving the equation. The key assumption is that everything we do can locally be approximated by linear functions. In other words, everything will be differentiable. One of the main results is that the linearization of the equation predicts the number of solutions and approximates them well locally.
  • Vector Calculus and Differential Forms with Applications to Electromagnetism
    Vector Calculus and Differential Forms with Applications to Electromagnetism. Sean Roberson, May 7, 2015. PREFACE: This paper is written as a final project for a course in vector analysis, taught at Texas A&M University - San Antonio in the spring of 2015 as an independent study course. Students in mathematics, physics, engineering, and the sciences usually go through a sequence of three calculus courses before going on to differential equations, real analysis, and linear algebra. In the third course, traditionally reserved for multivariable calculus, students usually learn how to differentiate functions of several variables and integrate over general domains in space. Very rarely, as was my case, will professors have time to cover the important integral theorems using vector functions: Green's Theorem, Stokes' Theorem, etc. In some universities, such as UCSD and Cornell, honors students are able to take an accelerated calculus sequence using the text Vector Calculus, Linear Algebra, and Differential Forms by John Hamal Hubbard and Barbara Burke Hubbard. Here, students learn multivariable calculus using linear algebra and real analysis, and then they generalize familiar integral theorems using the language of differential forms. This paper was written over the course of one semester, where the majority of the book was covered. Some details, such as orientation of manifolds, topology, and the foundation of the integral were skipped to save length. The paper should still be readable by a student with at least three semesters of calculus, one course in linear algebra, and one course in real analysis - all at the undergraduate level.
  • Antiderivatives
    4.8 Antiderivatives. We have studied how to find the derivative of a function. However, many problems require that we recover a function from its known derivative (from its known rate of change). For instance, we may know the velocity function of an object falling from an initial height and need to know its height at any time over some period. More generally, we want to find a function $F$ from its derivative $f$. If such a function $F$ exists, it is called an antiderivative of $f$. Finding Antiderivatives. DEFINITION (Antiderivative): A function $F$ is an antiderivative of $f$ on an interval $I$ if $F'(x) = f(x)$ for all $x$ in $I$. The process of recovering a function $F(x)$ from its derivative $f(x)$ is called antidifferentiation. We use capital letters such as $F$ to represent an antiderivative of a function $f$, $G$ to represent an antiderivative of $g$, and so forth. EXAMPLE 1 (Finding Antiderivatives): Find an antiderivative for each of the following functions. (a) $f(x) = 2x$ (b) $g(x) = \cos x$ (c) $h(x) = 2x + \cos x$. Solution: (a) $F(x) = x^2$ (b) $G(x) = \sin x$ (c) $H(x) = x^2 + \sin x$. Each answer can be checked by differentiating. The derivative of $F(x) = x^2$ is $2x$. The derivative of $G(x) = \sin x$ is $\cos x$, and the derivative of $H(x) = x^2 + \sin x$ is $2x + \cos x$. The function $F(x) = x^2$ is not the only function whose derivative is $2x$. The function $x^2 + 1$ has the same derivative.
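Example 1 can be reproduced with a computer algebra system. A quick sketch, assuming SymPy, that finds one antiderivative of each function and checks it by differentiating:

```python
import sympy as sp

x = sp.Symbol('x')
for func in (2*x, sp.cos(x), 2*x + sp.cos(x)):
    F = sp.integrate(func, x)        # one antiderivative (the C = 0 choice)
    print(F, sp.diff(F, x) == func)  # differentiating recovers the original
```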
  • Numerical Differentiation
    Jim Lambers, MAT 460/560, Fall Semester 2009-10, Lecture 23 Notes. These notes correspond to Section 4.1 in the text. Numerical Differentiation. We now discuss the other fundamental problem from calculus that frequently arises in scientific applications, the problem of computing the derivative of a given function $f(x)$. Finite Difference Approximations. Recall that the derivative of $f(x)$ at a point $x_0$, denoted $f'(x_0)$, is defined by $f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}$. This definition suggests a method for approximating $f'(x_0)$. If we choose $h$ to be a small positive constant, then $f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}$. This approximation is called the forward difference formula. To estimate the accuracy of this approximation, we note that if $f''(x)$ exists on $[x_0, x_0 + h]$, then, by Taylor's Theorem, $f(x_0 + h) = f(x_0) + f'(x_0)h + f''(\xi)h^2/2$, where $\xi \in [x_0, x_0 + h]$. Solving for $f'(x_0)$, we obtain $f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{f''(\xi)}{2}h$, so the error in the forward difference formula is $O(h)$. We say that this formula is first-order accurate. The forward-difference formula is called a finite difference approximation to $f'(x_0)$, because it approximates $f'(x_0)$ using values of $f(x)$ at points that have a small, but finite, distance between them, as opposed to the definition of the derivative, which takes a limit and therefore computes the derivative using an "infinitely small" value of $h$.
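A small numerical sketch of the forward difference formula and its first-order accuracy, using the hypothetical test case $f(x) = e^x$ at $x_0 = 0$ (so the exact derivative is 1); the error should shrink roughly proportionally to $h$.

```python
import math

def forward_difference(f, x0, h):
    """Forward difference approximation to f'(x0)."""
    return (f(x0 + h) - f(x0)) / h

x0 = 0.0  # exact derivative of exp at 0 is 1
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    approx = forward_difference(math.exp, x0, h)
    error = abs(approx - 1.0)
    print(f"h={h:.0e}  approx={approx:.6f}  error={error:.2e}")  # error ~ O(h)
```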