LECTURE 4: STOCHASTIC DIFFERENTIAL EQUATIONS AND SOLUTIONS

Let us consider the following simple stochastic ordinary differential equation:
$$dX(t) = -\lambda X(t)\,dt + dW(t), \qquad \lambda > 0. \tag{0.1}$$
It can be readily verified by Itô's formula that the process
$$X(t) = e^{-\lambda t} x_0 + \int_0^t e^{-\lambda(t-s)}\,dW(s) \tag{0.2}$$
satisfies Equation (0.1). By the Kolmogorov continuity theorem, the solution is Hölder continuous of order less than $1/2$ in time, since
$$\mathbb{E}\big[|X(t) - X(s)|^2\big] \le (t-s)^2\Big(\frac{2}{\lambda} + x_0^2\Big) + |t-s|. \tag{0.3}$$
This simple model shows that the solution to a stochastic differential equation is Hölder continuous of order less than $1/2$ and thus does not have derivatives in time. This low regularity of solutions leads to concerns for SODEs (and their numerical methods) that differ from those for ODEs.

1. Existence and uniqueness of strong solutions

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $(W(t), \mathcal{F}_t^W) = ((W_1(t), \ldots, W_m(t))^\top, \mathcal{F}_t^W)$ be an $m$-dimensional standard Wiener process, where $\mathcal{F}_t^W$, $0 \le t \le T$, is an increasing family of $\sigma$-subalgebras of $\mathcal{F}$ induced by $W(t)$. Consider the system of Itô SODEs
$$dX = a(t,X)\,dt + \sum_{r=1}^m \sigma_r(t,X)\,dW_r(t), \quad t \in (t_0, T], \qquad X(t_0) = x_0, \tag{1.1}$$
where $X$, $a$, $\sigma_r$ are $m$-dimensional column vectors and $x_0$ is independent of $W$. We assume that $a(t,x)$ and $\sigma(t,x)$ are sufficiently smooth and globally Lipschitz.

Remark 1.1. Under mild conditions, the Itô SODEs (1.1) can be rewritten in the Stratonovich sense: equation (1.1) becomes
$$dX = [a(t,X) - c(t,X)]\,dt + \sum_{r=1}^m \sigma_r(t,X) \circ dW_r(t), \quad t \in (t_0,T], \qquad X(t_0) = x_0, \tag{1.2}$$
where
$$c(t,X) = \frac{1}{2}\sum_{r=1}^m \frac{\partial \sigma_r(t,X)}{\partial x}\,\sigma_r(t,X),$$
and $\partial\sigma_r/\partial x$ is the Jacobian matrix of the column vector $\sigma_r$:
$$\frac{\partial \sigma_r}{\partial x} = \left[\frac{\partial \sigma_r}{\partial x_1} \;\cdots\; \frac{\partial \sigma_r}{\partial x_m}\right] = \begin{bmatrix} \frac{\partial \sigma_{1,r}}{\partial x_1} & \cdots & \frac{\partial \sigma_{1,r}}{\partial x_m} \\ \vdots & \ddots & \vdots \\ \frac{\partial \sigma_{m,r}}{\partial x_1} & \cdots & \frac{\partial \sigma_{m,r}}{\partial x_m} \end{bmatrix}.$$

We write $f \in L_{\mathrm{ad}}(\Omega; L^2([a,b]))$ if $f(t)$ is adapted to $\mathcal{F}_t$ and $f(t,\omega) \in L^2([a,b])$, i.e.,
$$L_{\mathrm{ad}}(\Omega; L^2([a,b])) = \Big\{ f(t,\omega) : f(t,\omega) \text{ is } \mathcal{F}_t\text{-measurable and } \mathbb{P}\Big(\int_a^b f_s^2\,ds < \infty\Big) = 1 \Big\}.$$
Here $\{\mathcal{F}_t,\ a \le t \le b\}$ is a filtration such that
• for each $t$, $f(t)$ and $W(t)$ are $\mathcal{F}_t$-measurable, i.e., $f(t)$ and $W(t)$ are adapted to the filtration $\mathcal{F}_t$;
• for any $s \le t$, $W(t) - W(s)$ is independent of the $\sigma$-field $\mathcal{F}_s$.

Definition 1.2 (A strong solution to an SODE). We say that $X(t)$ is a (strong) solution to the SDE (1.1) if
• $a(t, X(t)) \in L_{\mathrm{ad}}(\Omega; L^1([c,d]))$,
• $\sigma(t, X(t)) \in L_{\mathrm{ad}}(\Omega; L^2([c,d]))$,
• and $X(t)$ satisfies the following integral equation a.s.:
$$X(t) = x_0 + \int_0^t a(s, X(s))\,ds + \int_0^t \sigma(s, X(s))\,dW(s). \tag{1.3}$$

In general, it is difficult to give a necessary and sufficient condition for the existence and uniqueness of strong solutions; usually we can only give sufficient conditions.

Theorem 1.3 (Existence and uniqueness). Suppose $X_0$ is $\mathcal{F}_0$-measurable with $\mathbb{E}[X_0^2] < \infty$, and the coefficients $a$, $\sigma$ satisfy the following conditions:
• (Lipschitz condition) $a$ and $\sigma$ are Lipschitz continuous, i.e., there is a constant $K > 0$ such that
$$|a(x) - a(y)| + \sum_{r=1}^m |\sigma_r(x) - \sigma_r(y)| \le K|x - y|;$$
• (Linear growth) $a$ and $\sigma$ grow at most linearly, i.e., there is a $C > 0$ such that
$$|a(x)| + |\sigma(x)| \le C(1 + |x|).$$
Then the SDE above has a unique strong solution, and the solution has the following properties:
• $X(t)$ is adapted to the filtration generated by $X_0$ and $W(s)$, $s \le t$;
• $\mathbb{E}\big[\int_0^t X^2(s)\,ds\big] < \infty$.

See [Øksendal, 2003, Chapter 5] for a proof.
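A strong solution is pathwise, so it can be probed numerically: fix one Brownian path and compare a discretization of the equation against the closed-form solution driven by the same increments. The following sketch is an illustration only, not part of the lecture; it assumes NumPy, and the choices $\lambda = 1$, $x_0 = 1$, $T = 1$, the grid size, and the left-point discretization of the Itô integral in (0.2) are all arbitrary. It applies the Euler–Maruyama scheme to the simple equation (0.1):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, x0, T, n = 1.0, 1.0, 1.0, 1000   # lambda, X(0), horizon, grid size (arbitrary)
h = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(h), size=n)   # increments W(t_{k+1}) - W(t_k)

# Euler-Maruyama for dX = -lam * X dt + dW, driven by the sampled path.
x_em = np.empty(n + 1)
x_em[0] = x0
for k in range(n):
    x_em[k + 1] = x_em[k] - lam * x_em[k] * h + dW[k]

# Closed form (0.2), X(t) = e^{-lam t} (x0 + int_0^t e^{lam s} dW(s)),
# with the Ito integral replaced by its left-point sum on the same grid.
ito_sum = np.concatenate(([0.0], np.cumsum(np.exp(lam * t[:-1]) * dW)))
x_exact = np.exp(-lam * t) * (x0 + ito_sum)

print("max pathwise error:", np.abs(x_em - x_exact).max())
```

Refining the grid shrinks the printed error; in general Euler–Maruyama converges strongly with order $1/2$, mirroring the Hölder-$1/2$ regularity noted above, and typically better for additive noise as here.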
Here are some examples where the conditions in the theorem are satisfied.
• (Geometric Brownian motion) For $\mu, \sigma \in \mathbb{R}$,
$$dX(t) = \mu X(t)\,dt + \sigma X(t)\,dW(t), \qquad X_0 = x.$$
• (Sine process) For $\sigma \in \mathbb{R}$,
$$dX(t) = \sin(X(t))\,dt + \sigma\,dW(t), \qquad X_0 = x.$$
• (Modified Cox–Ingersoll–Ross process) For $\theta_1, \theta_2 \in \mathbb{R}$ with $\theta_1 + \theta_2^2/2 > 0$,
$$dX(t) = -\theta_1 X(t)\,dt + \theta_2\sqrt{1 + X(t)^2}\,dW(t), \qquad X_0 = x.$$

Remark 1.4. The condition in the theorem is also known as the global Lipschitz condition. A straightforward generalization is the one-sided Lipschitz condition (global monotone condition)
$$(x - y)^\top\big(a(x) - a(y)\big) + p_0 \sum_{r=1}^m |\sigma_r(x) - \sigma_r(y)|^2 \le K|x - y|^2, \qquad p_0 > 0,$$
and the growth condition can also be generalized as
$$x^\top a(x) + p_1 \sum_{r=1}^m |\sigma_r(x)|^2 \le C(1 + |x|^2).$$

Theorem 1.5 (Regularity of the solution). Under the conditions of Theorem 1.3, the solution is continuous and there exists a constant $C > 0$ depending only on $t$ such that
$$\mathbb{E}\big[|X(t) - X(s)|^{2p}\big] \le C|t - s|^p, \qquad p \ge 1.$$

The proof of this theorem relies on the Burkholder–Davis–Gundy inequality. Then, by the Kolmogorov continuity theorem, we can conclude that the solution is only Hölder continuous with exponent less than $1/2$, the same as Brownian motion.

2. Solution methods

The process (0.2) is a special case of the Ornstein–Uhlenbeck process, which satisfies the equation
$$dX(t) = \kappa(\theta - X(t))\,dt + \sigma\,dW(t), \tag{2.1}$$
where $\kappa, \sigma > 0$ and $\theta \in \mathbb{R}$. The solution to (2.1) can be obtained by the change of variable $Y(t) = \theta - X(t)$. By Itô's formula,
$$dY(t) = -\kappa Y(t)\,dt + \sigma\,d(-W(t)).$$
Similar to (0.2), the solution is
$$Y(t) = e^{-\kappa t} Y_0 + \sigma \int_0^t e^{-\kappa(t-s)}\,d(-W(s)). \tag{2.2}$$
Then, by $Y(t) = \theta - X(t)$, we have
$$X(t) = X_0 e^{-\kappa t} + \theta(1 - e^{-\kappa t}) + \sigma \int_0^t e^{-\kappa(t-s)}\,dW(s).$$
In more general cases, we can use similar ideas to find explicit solutions of SODEs.

2.1. The integrating factor method. We apply the integrating factor method to solve nonlinear SDEs of the form
$$dX(t) = f(t, X(t))\,dt + \sigma(t)X(t)\,dW(t), \qquad X_0 = x, \tag{2.3}$$
where $f$ is a continuous deterministic function from $\mathbb{R}_+ \times \mathbb{R}$ to $\mathbb{R}$.
• Step 1. Solve the equation $dG(t) = \sigma(t)G(t)\,dW(t)$. Its solution is
$$G(t) = \exp\Big(\int_0^t \sigma(s)\,dW(s) - \frac{1}{2}\int_0^t \sigma^2(s)\,ds\Big).$$
The integrating factor is defined by $F(t) = G^{-1}(t)$. It can be readily verified that $F(t)$ satisfies
$$dF(t) = -\sigma(t)F(t)\,dW(t) + \sigma^2(t)F(t)\,dt.$$
• Step 2. Let $X(t) = G(t)C(t)$, so that $C(t) = F(t)X(t)$. By the product rule, (2.3) can be written as
$$d\big(F(t)X(t)\big) = F(t)f(t, X(t))\,dt.$$
Hence $C(t)$ satisfies the following "deterministic" (pathwise) ODE:
$$dC(t) = F(t)f(t, G(t)C(t))\,dt. \tag{2.4}$$
• Step 3. Once we obtain $C(t)$, we recover $X(t)$ from $X(t) = G(t)C(t)$.

Remark 2.1. When (2.4) cannot be solved explicitly, we may use numerical methods to obtain $C(t)$.

Example 2.2. Use the integrating factor method to solve the SDE
$$dX(t) = (X(t))^{-1}\,dt + \alpha X(t)\,dW(t), \qquad X_0 = x > 0,$$
where $\alpha$ is a constant.

Solution. Here $f(t,x) = x^{-1}$ and $F(t) = \exp(-\alpha W(t) + \frac{\alpha^2}{2} t)$. We only need to solve
$$dC(t) = F(t)\,[G(t)C(t)]^{-1}\,dt = \frac{F^2(t)}{C(t)}\,dt.$$
This gives $d(C(t))^2 = 2F^2(t)\,dt$ and thus
$$(C(t))^2 = 2\int_0^t \exp(-2\alpha W(s) + \alpha^2 s)\,ds + x^2.$$
Since the initial condition is $x > 0$, we take $C(t) > 0$, so that
$$X(t) = G(t)C(t) = \exp\Big(\alpha W(t) - \frac{\alpha^2}{2} t\Big)\sqrt{2\int_0^t \exp(-2\alpha W(s) + \alpha^2 s)\,ds + x^2} > 0.$$
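Remark 2.1 suggests a numerical route even when, as in Example 2.2, the formula is explicit. The following sketch is illustrative only and not part of the notes; it assumes NumPy, and the values $\alpha = 0.5$, $x = 1$, the grid size, and the left-point quadratures are arbitrary choices. It evaluates the closed-form solution of Example 2.2 along a sampled Brownian path and cross-checks it against a direct Euler–Maruyama discretization of the SDE driven by the same increments:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, x0, T, n = 0.5, 1.0, 1.0, 10_000   # alpha, X(0) > 0, horizon, grid (arbitrary)
h = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(h), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))   # Brownian path on the grid

# Closed form of Example 2.2: X = G(t) * sqrt(2 int_0^t F(s)^2 ds + x0^2),
# with G(t) = exp(alpha W(t) - alpha^2 t / 2) and F = 1/G.  The ds-integral
# is an ordinary pathwise integral, approximated by a left Riemann sum.
F2 = np.exp(-2.0 * alpha * W + alpha**2 * t)
int_F2 = np.concatenate(([0.0], np.cumsum(F2[:-1] * h)))
G = np.exp(alpha * W - 0.5 * alpha**2 * t)
x_formula = G * np.sqrt(2.0 * int_F2 + x0**2)

# Direct Euler-Maruyama for dX = X^{-1} dt + alpha X dW on the same path.
# (On coarse grids the iterate can approach 0, where the drift 1/X blows up.)
x_em = np.empty(n + 1)
x_em[0] = x0
for k in range(n):
    x_em[k + 1] = x_em[k] + h / x_em[k] + alpha * x_em[k] * dW[k]

print("max |formula - EM|:", np.abs(x_formula - x_em).max())
```

Note that the $ds$-integral in $(C(t))^2$ is a Riemann integral along the path, so any quadrature rule would do; only $dW$-integrals require the left-point (Itô) evaluation.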
2.2. Moment equations of solutions. For a more complicated SODE, we cannot obtain a solution that can be written explicitly in terms of $W(t)$. For example, the Cox–Ingersoll–Ross model (2.5) does not have an explicit solution:
$$dX(t) = \kappa(\theta - X(t))\,dt + \sigma\sqrt{X(t)}\,dW(t), \qquad X_0 = x. \tag{2.5}$$
However, we can say a bit more about the moments of the process $X(t)$. Write (2.5) in its integral form:
$$X(t) = x + \kappa\int_0^t (\theta - X(s))\,ds + \sigma\int_0^t \sqrt{X(s)}\,dW(s), \tag{2.6}$$
and applying Itô's formula gives
$$X^2(t) = x^2 + (2\kappa\theta + \sigma^2)\int_0^t X(s)\,ds - 2\kappa\int_0^t X^2(s)\,ds + 2\sigma\int_0^t (X(s))^{3/2}\,dW(s). \tag{2.7}$$
From these equations and the properties of the Itô integral, we can obtain the moments of the solution.
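To make this concrete, take expectations in (2.6) and (2.7); assuming the moments involved are finite, the Itô integrals are zero-mean martingales and drop out (a step left implicit above). Writing $m_1(t) = \mathbb{E}[X(t)]$ and $m_2(t) = \mathbb{E}[X^2(t)]$, differentiation in $t$ gives
$$m_1'(t) = \kappa\big(\theta - m_1(t)\big), \quad m_1(0) = x, \qquad m_2'(t) = (2\kappa\theta + \sigma^2)\,m_1(t) - 2\kappa\,m_2(t), \quad m_2(0) = x^2.$$
The first equation yields $m_1(t) = \theta + (x - \theta)e^{-\kappa t}$, and the second is then solved with the deterministic integrating factor $e^{2\kappa t}$:
$$m_2(t) = e^{-2\kappa t}\Big(x^2 + (2\kappa\theta + \sigma^2)\int_0^t e^{2\kappa s}\,m_1(s)\,ds\Big).$$
Thus the first two moments are available in closed form even though $X(t)$ itself admits no explicit expression in terms of $W(t)$.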