Variation of Parameters

Total Pages: 16

File Type: PDF, Size: 1020 KB

CHAPTER 4  HIGHER-ORDER DIFFERENTIAL EQUATIONS

In Problems 27–34 find linearly independent functions that are annihilated by the given differential operator.

27. D^5                          28. D^2 + 4D
29. (D − 6)(2D + 3)              30. D^2 − 9D − 36
31. D^2 + 5                      32. D^2 − 6D + 10
33. D^3 − 10D^2 + 25D            34. D^2(D − 5)(D − 7)

In Problems 35–64 solve the given differential equation by undetermined coefficients.

35. y″ − 9y = 54                              36. 2y″ − 7y′ + 5y = −29
37. y″ + y′ = 3                               38. y‴ + 2y″ + y′ = 10
39. y″ + 4y′ + 4y = 2x + 6                    40. y″ + 3y′ = 4x − 5
41. y‴ + y″ = 8x^2                            42. y″ − 2y′ + y = x^3 + 4x
43. y″ − y′ − 12y = e^{4x}                    44. y″ + 2y′ + 2y = 5e^{6x}
45. y″ − 2y′ − 3y = 4e^x − 9                  46. y″ + 6y′ + 8y = 3e^{−2x} + 2x
47. y″ + 25y = 6 sin x                        48. y″ + 4y = 4 cos x + 3 sin x − 8
49. y″ + 6y′ + 9y = −xe^{4x}                  50. y″ + 3y′ − 10y = x(e^x + 1)
51. y″ − y = x^2 e^x + 5                      52. y″ + 2y′ + y = x^2 e^{−x}
53. y″ − 2y′ + 5y = e^x sin x                 54. y″ + y′ + (1/4)y = e^x (sin 3x − cos 3x)
55. y″ + 25y = 20 sin 5x                      56. y″ + y = 4 cos x − sin x
57. y″ + y′ + y = x sin x                     58. y″ + 4y = cos^2 x
59. y‴ + 8y″ = −6x^2 + 9x + 2                 60. y‴ − y″ + y′ − y = xe^x − e^{−x} + 7
61. y‴ − 3y″ + 3y′ − y = e^x − x + 16         62. 2y‴ − 3y″ − 3y′ + 2y = (e^x + e^{−x})^2
63. y^(4) − 2y‴ + y″ = e^x + 1                64. y^(4) − 4y″ = 5x^2 − e^{2x}

In Problems 65–72 solve the given initial-value problem.

65. y″ − 64y = 16,  y(0) = 1,  y′(0) = 0
66. y″ + y′ = x,  y(0) = 1,  y′(0) = 0
67. y″ − 5y′ = x − 2,  y(0) = 0,  y′(0) = 2
68. y″ + 5y′ − 6y = 10e^{2x},  y(0) = 1,  y′(0) = 1
69. y″ + y = 8 cos 2x − 4 sin x,  y(π/2) = −1,  y′(π/2) = 0
70. y‴ − 2y″ + y′ = xe^x + 5,  y(0) = 2,  y′(0) = 2,  y″(0) = −1
71. y″ − 4y′ + 8y = x^3,  y(0) = 2,  y′(0) = 4
72. y^(4) − y‴ = x + e^x,  y(0) = 0,  y′(0) = 0,  y″(0) = 0,  y‴(0) = 0

Discussion Problems

73. Suppose L is a linear differential operator that factors but has variable coefficients. Do the factors of L commute? Defend your answer.

4.6 VARIATION OF PARAMETERS

REVIEW MATERIAL
● Basic integration formulas and techniques from calculus
● Review Section 2.3

INTRODUCTION  We pointed out in the discussions in Sections 4.4 and 4.5 that the method of undetermined coefficients has two inherent weaknesses that limit its wider application to linear equations: The DE must have constant coefficients and the input function g(x) must be of the type listed in Table 4.4.1. In this section we examine a method for determining a particular solution yp of a nonhomogeneous linear DE that has, in theory, no such restrictions on it. This method, due to the eminent astronomer and mathematician Joseph Louis Lagrange (1736–1813), is known as variation of parameters.

Before examining this powerful method for higher-order equations we revisit the solution of linear first-order differential equations that have been put into standard form. The discussion under the first heading in this section is optional and is intended to motivate the main discussion of this section that starts under the second heading. If pressed for time this motivational material could be assigned for reading.
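The annihilator and undetermined-coefficients exercises above can be spot-checked with a computer algebra system. The following is a minimal sketch (assuming SymPy is available; it is not part of the textbook) applied to Problem 35; the same pattern works for the other equations.

```python
# Spot-check Problem 35: y'' - 9y = 54 (a sketch, assuming SymPy is installed).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) - 9*y(x), 54)
print(sp.dsolve(ode, y(x)))   # expect something like y(x) = C1*exp(-3*x) + C2*exp(3*x) - 6

# The constant term is the particular solution found by undetermined coefficients.
yp = sp.Integer(-6)
print(sp.simplify(yp.diff(x, 2) - 9*yp - 54))   # 0, so y_p = -6 satisfies the equation
```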
Linear First-Order DEs Revisited  In Section 2.3 we saw that the general solution of a linear first-order differential equation a1(x)y′ + a0(x)y = g(x) can be found by first rewriting it in the standard form

    dy/dx + P(x)y = f(x)     (1)

and assuming that P(x) and f(x) are continuous on a common interval I. Using the integrating factor method, the general solution of (1) on the interval I was found to be

    y = c1 e^{−∫P(x)dx} + e^{−∫P(x)dx} ∫ e^{∫P(x)dx} f(x) dx.     [See (4) of Section 2.3.]

The foregoing solution has the same form as that given in Theorem 4.1.6, namely, y = yc + yp. In this case yc = c1 e^{−∫P(x)dx} is a solution of the associated homogeneous equation

    dy/dx + P(x)y = 0     (2)

and

    yp = e^{−∫P(x)dx} ∫ e^{∫P(x)dx} f(x) dx     (3)

is a particular solution of the nonhomogeneous equation (1). As a means of motivating a method for solving nonhomogeneous linear equations of higher order, we propose to rederive the particular solution (3) by a method known as variation of parameters. [The basic procedure is that used in Section 4.2.]

Suppose that y1 is a known solution of the homogeneous equation (2), that is,

    dy1/dx + P(x)y1 = 0.     (4)

It is easily shown that y1 = e^{−∫P(x)dx} is a solution of (4), and because the equation is linear, c1 y1(x) is its general solution. Variation of parameters consists of finding a particular solution of (1) of the form yp = u1(x)y1(x). In other words, we have replaced the parameter c1 by a function u1.

Substituting yp = u1 y1 into (1) and using the Product Rule gives

    d/dx [u1 y1] + P(x) u1 y1 = f(x)

    u1 dy1/dx + y1 du1/dx + P(x) u1 y1 = f(x)

    u1 [dy1/dx + P(x) y1] + y1 du1/dx = f(x),

where the bracketed expression is zero because of (4), so

    y1 du1/dx = f(x).

By separating variables and integrating, we find u1:

    du1 = (f(x)/y1(x)) dx    yields    u1 = ∫ f(x)/y1(x) dx.

Hence the sought-after particular solution is

    yp = u1 y1 = y1 ∫ f(x)/y1(x) dx.

From the fact that y1 = e^{−∫P(x)dx} we see that the last result is identical to (3).

Linear Second-Order DEs  Next we consider the case of a linear second-order equation

    a2(x)y″ + a1(x)y′ + a0(x)y = g(x),     (5)

although, as we shall see, variation of parameters extends to higher-order equations. The method again begins by putting (5) into the standard form

    y″ + P(x)y′ + Q(x)y = f(x)     (6)

by dividing by the leading coefficient a2(x). In (6) we suppose that the coefficient functions P(x), Q(x), and f(x) are continuous on some common interval I. As we have already seen in Section 4.3, there is no difficulty in obtaining the complementary solution yc = c1 y1(x) + c2 y2(x), the general solution of the associated homogeneous equation of (6), when the coefficients are constants. Analogous to the preceding discussion, we now ask: Can the parameters c1 and c2 in yc be replaced with functions u1 and u2, or "variable parameters," so that

    y = u1(x) y1(x) + u2(x) y2(x)     (7)

is a particular solution of (6)? To answer this question we substitute (7) into (6).
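To see formula (3), or equivalently the construction yp = u1 y1, on a concrete equation, here is a short SymPy sketch (my own example, not from the text) for dy/dx + 2y = x, where P(x) = 2 and f(x) = x.

```python
# Variation of parameters for a first-order DE: dy/dx + 2y = x (illustrative choice).
import sympy as sp

x = sp.symbols('x')
P = sp.Integer(2)               # P(x) = 2
f = x                           # f(x) = x

y1 = sp.exp(-sp.integrate(P, x))        # y1 = e^{-∫P dx}, a solution of y' + P y = 0
u1 = sp.integrate(f / y1, x)            # u1 = ∫ f/y1 dx
yp = sp.simplify(u1 * y1)               # particular solution y_p = u1*y1
print(yp)                               # x/2 - 1/4

print(sp.simplify(yp.diff(x) + P*yp - f))   # 0: y_p satisfies the nonhomogeneous equation
```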
Using the Product Rule to differentiate yp twice, we get

    yp′ = u1 y1′ + y1 u1′ + u2 y2′ + y2 u2′

    yp″ = u1 y1″ + y1′ u1′ + y1 u1″ + u1′ y1′ + u2 y2″ + y2′ u2′ + y2 u2″ + u2′ y2′.

Substituting (7) and the foregoing derivatives into (6) and grouping terms yields

    yp″ + P(x) yp′ + Q(x) yp = u1[y1″ + P y1′ + Q y1] + u2[y2″ + P y2′ + Q y2]
                               + y1 u1″ + u1′ y1′ + y2 u2″ + u2′ y2′ + P[y1 u1′ + y2 u2′] + y1′ u1′ + y2′ u2′

                             = d/dx [y1 u1′] + d/dx [y2 u2′] + P[y1 u1′ + y2 u2′] + y1′ u1′ + y2′ u2′

                             = d/dx [y1 u1′ + y2 u2′] + P[y1 u1′ + y2 u2′] + y1′ u1′ + y2′ u2′ = f(x),

where the two bracketed expressions in the first line are zero because y1 and y2 are solutions of the associated homogeneous equation.
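The extract breaks off at this point. In the standard development one also imposes the auxiliary condition y1 u1′ + y2 u2′ = 0, so the last line reduces to y1′ u1′ + y2′ u2′ = f(x); solving this pair of linear equations for u1′ and u2′ gives u1′ = −y2 f/W and u2′ = y1 f/W, where W is the Wronskian of y1 and y2. The SymPy sketch below (my own illustration, not taken from the text) applies these formulas to y″ + y = sec x, an equation that undetermined coefficients cannot handle.

```python
# Second-order variation of parameters for y'' + y = sec(x) (illustrative choice).
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)    # complementary solutions of y'' + y = 0
f = 1/sp.cos(x)                  # f(x) = sec(x)

W = sp.simplify(y1*y2.diff(x) - y2*y1.diff(x))   # Wronskian; here W = 1
u1 = sp.integrate(-y2*f/W, x)                    # u1' = -y2*f/W  ->  u1 = log(cos(x))
u2 = sp.integrate( y1*f/W, x)                    # u2' =  y1*f/W  ->  u2 = x
yp = u1*y1 + u2*y2
print(yp)                                        # expect cos(x)*log(cos(x)) + x*sin(x)

print(sp.simplify(yp.diff(x, 2) + yp - f))       # 0: y_p solves the nonhomogeneous equation
```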
Recommended publications
  • Modifications of the Method of Variation of Parameters
  Available online at www.sciencedirect.com. Computers and Mathematics with Applications 51 (2006) 451-466. www.elsevier.com/locate/camwa. Modifications of the Method of Variation of Parameters. ROBERTO BARRIO, GME, Depto. Matemática Aplicada, Facultad de Ciencias, Universidad de Zaragoza, E-50009 Zaragoza, Spain, [email protected] SERGIO SERRANO, GME, Depto. Matemática Aplicada, Centro Politécnico Superior, Universidad de Zaragoza, María de Luna 3, E-50015 Zaragoza, Spain, [email protected] (Received February 2005; revised and accepted October 2005.) Abstract: In spite of being a classical method for solving differential equations, the method of variation of parameters continues having a great interest in theoretical and practical applications, as in astrodynamics. In this paper we analyse this method providing some modifications and generalised theoretical results. Finally, we present an application to the determination of the ephemeris of an artificial satellite, showing the benefits of the method of variation of parameters for this kind of problems. © 2006 Elsevier Ltd. All rights reserved. Keywords: Variation of parameters, Lagrange equations, Gauss equations, Satellite orbits, Differential equations. 1. INTRODUCTION. The method of variation of parameters was developed by Leonhard Euler in the middle of the XVIII century to describe the mutual perturbations of Jupiter and Saturn. However, his results were not absolutely correct because he did not consider the orbital elements varying simultaneously. In 1766, Lagrange improved the method developed by Euler, although he kept considering some of the orbital elements as constants, which caused some of his equations to be incorrect.
  • MTHE / MATH 237 Differential Equations for Engineering Science
  Queen's University, Mathematics and Engineering and Mathematics and Statistics. MTHE / MATH 237: Differential Equations for Engineering Science, Supplemental Course Notes. Serdar Yüksel, November 26, 2015. This document is a collection of supplemental lecture notes used for MTHE / MATH 237: Differential Equations for Engineering Science. Contents: 1 Introduction to Differential Equations (1.1 Introduction; 1.2 Classification of Differential Equations: 1.2.1 Ordinary Differential Equations, 1.2.2 Partial Differential Equations, 1.2.3 Homogeneous Differential Equations, 1.2.4 N-th order Differential Equations, 1.2.5 Linear Differential Equations; 1.3 Solutions of Differential Equations; 1.4 Direction Fields; 1.5 Fundamental Questions on First-Order Differential Equations); 2 First-Order Ordinary Differential Equations…
  • Math 6730 : Asymptotic and Perturbation Methods Hyunjoong
  Math 6730: Asymptotic and Perturbation Methods. Hyunjoong Kim and Chee-Han Tan. Last modified: January 13, 2018. Contents: Preface; 1 Introduction to Asymptotic Approximation (1.1 Asymptotic Expansion: 1.1.1 Order symbols, 1.1.2 Accuracy vs convergence, 1.1.3 Manipulating asymptotic expansions; 1.2 Algebraic and Transcendental Equations: 1.2.1 Singular quadratic equation, 1.2.2 Exponential equation, 1.2.3 Trigonometric equation; 1.3 Differential Equations: Regular Perturbation Theory: 1.3.1 Projectile motion, 1.3.2 Nonlinear potential problem, 1.3.3 Fredholm alternative; 1.4 Problems); 2 Matched Asymptotic Expansions (2.1 Introductory example: 2.1.1 Outer solution by regular perturbation, 2.1.2 Boundary layer, 2.1.3 Matching, 2.1.4 Composite expression; 2.2 Extensions: multiple boundary layers, etc.: 2.2.1 Multiple boundary layers, 2.2.2 Interior layers; 2.3 Partial differential equations; 2.4 Strongly…
  • Modified Variation of Parameters Method for Differential Equations
  World Applied Sciences Journal 6 (10): 1372-1376, 2009. ISSN 1818-4952. © IDOSI Publications, 2009. Modified Variation of Parameters Method for Differential Equations. 1 Syed Tauseef Mohyud-Din, 2 Muhammad Aslam Noor and 2 Khalida Inayat Noor. 1 HITEC University, Taxila Cantt, Pakistan. 2 Department of Mathematics, COMSATS Institute of Information Technology, Islamabad, Pakistan. Abstract: In this paper, we apply the Modified Variation of Parameters Method (MVPM) for solving nonlinear differential equations which are associated with oscillators. The proposed modification is made by the elegant coupling of the traditional Variation of Parameters Method (VPM) and He's polynomials. The suggested algorithm is more efficient and easier to handle as compared to the decomposition method. Numerical results show the efficiency of the proposed algorithm. Key words: Variation of parameters method • He's polynomials • nonlinear oscillator. INTRODUCTION: The nonlinear oscillators appear in various physical phenomena related to physics, applied and engineering sciences [1-6] and the references therein. Several techniques, including variational iteration, homotopy perturbation and expansion of parameters, have been applied for solving such problems [1-6]. He [2-9] developed the homotopy perturbation method for solving various physical problems. This reliable technique has been applied to a wide range of diversified physical problems [1-23] and the references therein. Recently, Ghorbani et al. [10, 11] introduced He's polynomials by splitting the nonlinear term into a series of polynomials. … the inbuilt deficiencies of various existing techniques. Numerical results show the complete reliability of the proposed technique. VARIATION OF PARAMETERS METHOD (VPM): Consider the following second-order partial differential equation: y_tt = f(t, x, y, z, y_x, y_y, y_z, y_xx, y_yy, y_zz)  (1), where t such that (−∞ < t < ∞) is time and f is linear or non…
  • Step Functions, Delta Functions, and the Variation of Parameters Formula
  STEP FUNCTIONS, DELTA FUNCTIONS, AND THE VARIATION OF PARAMETERS FORMULA. STEPHEN SCHECTER. 1. The unit step function and piecewise continuous functions. The Heaviside unit step function u(t) is given by u(t) = 0 if t < 0 and u(t) = 1 if t > 0. The function u(t) is not defined at t = 0. Often we will not worry about the value of a function at a point where it is discontinuous, since often it doesn't matter. [Figure 1.1: graphs of u(t), u(t − a), 1 − u(t − b), and u(t − a) − u(t − b); Heaviside unit step function.] The function u(t) turns on at t = 0. The function u(t − a) is just u(t) shifted, or dragged, so that it turns on at t = a. The function 1 − u(t − b) turns off at t = b. The function u(t − a) − u(t − b), with a < b, turns on at t = a and turns off at t = b. We can use the unit step function to crop and shift functions. Multiplying f(t) by u(t − a) crops f(t) so that it turns on at t = a: u(t − a)f(t) = 0 if t < a, and u(t − a)f(t) = f(t) if t > a. Multiplying f(t) by u(t − a) − u(t − b), with a < b, crops f(t) so that it turns on at t = a and turns off at t = b: (u(t − a) − u(t − b))f(t) = 0 if t < a, f(t) if a < t < b, and 0 if t > b.
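A small SymPy sketch (my own illustration, not part of the excerpt) of the cropping idea described above, using Heaviside to window a function on the interval (a, b):

```python
# Crop sin(t) so it "turns on" at t = a and "turns off" at t = b (illustrative values).
import sympy as sp

t = sp.symbols('t')
f = sp.sin(t)
a, b = 1, 4

window = sp.Heaviside(t - a) - sp.Heaviside(t - b)   # u(t - a) - u(t - b)
g = window * f                                       # equals f(t) on (a, b), 0 elsewhere

print(g.subs(t, sp.Rational(1, 2)))   # 0        (before the window turns on)
print(g.subs(t, 2))                   # sin(2)   (inside the window)
print(g.subs(t, 5))                   # 0        (after the window turns off)
```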
  • Variation of Parameters
  Overview. An Example. Double Check. Further Discussion. Variation of Parameters. Bernd Schröder, Louisiana Tech University, College of Engineering and Science. 1. The general solution of an inhomogeneous linear differential equation is the sum of a particular solution of the inhomogeneous equation and the general solution of the corresponding homogeneous equation: y = yp + yh. 2. Variation of Parameters is a way to obtain a particular solution of the inhomogeneous equation. 3. The particular solution can be obtained as follows. 3.1 Assume that the parameters in the solution of the homogeneous equation are functions. (Hence the name.) 3.2 Substitute the expression into the inhomogeneous equation and solve for the parameters.
  • Asymptotic Analysis and Singular Perturbation Theory
  Asymptotic Analysis and Singular Perturbation Theory. John K. Hunter, Department of Mathematics, University of California at Davis. February 2004. Copyright © 2004 John K. Hunter. Contents: Chapter 1 Introduction (1.1 Perturbation theory: 1.1.1 Asymptotic solutions, 1.1.2 Regular and singular perturbation problems; 1.2 Algebraic equations; 1.3 Eigenvalue problems: 1.3.1 Quantum mechanics; 1.4 Nondimensionalization); Chapter 2 Asymptotic Expansions (2.1 Order notation; 2.2 Asymptotic expansions: 2.2.1 Asymptotic power series, 2.2.2 Asymptotic versus convergent series, 2.2.3 Generalized asymptotic expansions, 2.2.4 Nonuniform asymptotic expansions; 2.3 Stokes phenomenon); Chapter 3 Asymptotic Expansion of Integrals (3.1 Euler's integral; 3.2 Perturbed Gaussian integrals; 3.3 The method of stationary phase; 3.4 Airy functions and degenerate stationary phase points: 3.4.1 Dispersive wave propagation; 3.5 Laplace's Method: 3.5.1 Multiple integrals; 3.6 The method of steepest descents); Chapter 4 The Method of Matched Asymptotic Expansions: ODEs (4.1 Enzyme kinetics: 4.1.1 Outer solution, 4.1.2 Inner solution, 4.1.3 Matching; 4.2 General initial layer problems; 4.3 Boundary layer problems: 4.3.1 Exact solution, 4.3.2 Outer expansion, 4.3.3 Inner expansion, 4.3.4 Matching, 4.3.5 Uniform solution, 4.3.6 Why is the boundary layer at x = 0?…
  • Who Solved the Bernoulli Differential Equation and How Did They Do It? Adam E
  Who Solved the Bernoulli Differential Equation and How Did They Do It? Adam E. Parker. Adam Parker ([email protected]) is an associate professor at Wittenberg University in Springfield, Ohio. He was an undergraduate at the University of Michigan and received his Ph.D. in algebraic geometry from the University of Texas at Austin. He teaches a wide range of classes and often tries to incorporate primary sources in his teaching. This paper grew out of just such an attempt. Everyone loves a mystery; mathematicians are no exception. Since we seek out puzzles and problems daily, and spend so much time proving things beyond any reasonable doubt, we probably enjoy a whodunit more than the next person. Here's a mystery to ponder: Who first solved the Bernoulli differential equation dy/dx + P(x)y = Q(x)y^n? The name indicates it was a Bernoulli, but which? Aren't there 20 Bernoulli mathematicians? (Twenty is probably an exaggeration but we could reasonably count nine!) Or, as is so often the case in mathematics, perhaps the name has nothing to do with the solver. The culprit could be anyone! Like every good mystery, the clues contradict each other. Here are the prime suspects. Was it Gottfried Leibniz, the German mathematician, philosopher, and developer of the calculus? According to Ince [12, p. 22], "The method of solution was discovered by Leibniz, Acta Erud. 1696, p. 145." Or was it Jacob (James, Jacques) Bernoulli, the Swiss mathematician best known for his work in probability theory? Whiteside [21, p. 97], in his notes to Newton's papers, states, "The 'generalized de Beaune' equation dy/dx = py + qy^n was given its complete solution in 1695 by Jakob Bernoulli." Or was it Johann (Jean, John) Bernoulli, Jacob's acerbic and brilliant younger brother? Varignon [11, p.
  • Nonhomogeneous Equations and Variation of Parameters
  Nonhomogeneous Equations and Variation of Parameters. June 17, 2016. 1 Nonhomogeneous Equations. 1.1 Review of First Order Equations. If we look at a first-order homogeneous constant-coefficient ordinary differential equation by′ + cy = 0, then the corresponding auxiliary equation br + c = 0 has a root r1 = −c/b and we have a solution yh(t) = c1 e^{r1 t} = c1 e^{−ct/b}. If the equation is nonhomogeneous, by′ + cy = f, then we introduce the integrating factor e^{ct/b}: d/dt (e^{ct/b} y) = e^{ct/b} f, so e^{ct/b} y(t) = c1 + ∫ e^{ct/b} f(t) dt, and therefore y(t) = c1 e^{−ct/b} + e^{−ct/b} ∫ e^{ct/b} f(t) dt, that is, y(t) = yh(t) + yp(t). The solution is a sum of: yh(t), the solution to the homogeneous equation (b yh′ + c yh = 0), which carries the constant that will be determined by the initial condition; and yp(t), a solution that involves f. Then b(yh + yp)′ + c(yh + yp) = (b yh′ + c yh) + (b yp′ + c yp) = 0 + f = f. We next take a similar, but less formal, approach to second-order equations, writing y = yh + yp, where yh is a general solution to a yh″ + b yh′ + c yh = 0 and yp is a particular solution to a yp″ + b yp′ + c yp = f. 1.2 Examples. We gain intuition in the nature of particular solutions through some illustrative examples. Example 1. For y″ + y′ + 4y = 2t, we try a particular solution yp(t) = At + B. Then yp′(t) = A and yp″(t) = 0, so yp″ + yp′ + 4yp = 4At + (A + 4B) = 2t. Thus 4A = 2 and A + 4B = 0, giving A = 1/2, B = −1/8, and yp(t) = t/2 − 1/8. We can turn this suggestion into a strategy for the case that f is a polynomial.
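The coefficient matching in Example 1 can be reproduced mechanically; the sketch below (my own check, assuming SymPy, and not part of the notes) recovers A = 1/2 and B = −1/8.

```python
# Undetermined coefficients for y'' + y' + 4y = 2t with trial y_p = A*t + B.
import sympy as sp

t, A, B = sp.symbols('t A B')
yp = A*t + B
residual = sp.expand(yp.diff(t, 2) + yp.diff(t) + 4*yp - 2*t)   # (4A - 2)*t + (A + 4B)
coeffs = sp.solve([residual.coeff(t, 1), residual.coeff(t, 0)], [A, B])
print(coeffs)   # {A: 1/2, B: -1/8}, i.e. y_p(t) = t/2 - 1/8
```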
  • Theory of Ordinary Differential Equations
  Theory of Ordinary Differential Equations. Christopher P. Grant, Brigham Young University. Contents: 1 Fundamental Theory (1.1 ODEs and Dynamical Systems; 1.2 Existence of Solutions; 1.3 Uniqueness of Solutions; 1.4 Picard-Lindelöf Theorem; 1.5 Intervals of Existence; 1.6 Dependence on Parameters); 2 Linear Systems (2.1 Constant Coefficient Linear Equations; 2.2 Understanding the Matrix Exponential; 2.3 Generalized Eigenspace Decomposition; 2.4 Operators on Generalized Eigenspaces; 2.5 Real Canonical Form; 2.6 Solving Linear Systems; 2.7 Qualitative Behavior of Linear Systems; 2.8 Exponential Decay; 2.9 Nonautonomous Linear Systems; 2.10 Nearly Autonomous Linear Systems; 2.11 Periodic Linear Systems); 3 Topological Dynamics (3.1 Invariant Sets and Limit Sets; 3.2 Regular and Singular Points; 3.3 Definitions of Stability; 3.4 Principle of Linearized Stability; 3.5 Lyapunov's Direct Method; 3.6 LaSalle's Invariance Principle); 4 Conjugacies (4.1 Hartman-Grobman Theorem: Part 1; 4.2 Hartman-Grobman Theorem: Part 2; 4.3 Hartman-Grobman Theorem: Part 3…
  • Mathematical Methods of Physics I, Instructor: Predrag Cvitanović, Fall Semester 2012
  Georgia Tech PHYS 6124, Mathematical Methods of Physics I. Instructor: Predrag Cvitanović. Fall semester 2012. Homework Set #7, due October 30, 2012. == show all your work for maximum credit, == put labels, title, legends on any graphs, == acknowledge study group member, if collective effort. [All problems in this set are from Goldbart] Problem 1) Motion of a classical particle. Consider a classical particle of unit mass moving along the x-axis. Suppose that the motion is free, except that at time t = t′, with 0 < t′ < T, the particle receives an impulse of unit strength. a) Write Newton's equation describing the motion of the particle. Suppose that at time t = 0 the particle is located at position x1 and that at time t = T it is located at position x2. b) Sketch the position, velocity and acceleration of the particle as a function of time for 0 < t < T. c) Compute the Green function for the motion of the particle, i.e., solve d²G(t, t′)/dt² = δ(t − t′). d) Consider the applied force f(t) (with 0 ≤ t ≤ T) to be a sequence of impulses. Hence establish the trajectory of the particle in terms of an integral over the applied force. e) Suppose that the force takes the form f(t) = t²/2. Find the motion of the particle. f) (optional) Rather than solve for the Green function directly, as you did in part (c), construct the Green function using the eigenfunction expansion technique. Show, by Fourier analysis, that the two schemes for computing the Green function give equivalent results.
  • Existence and Uniqueness Theorem For
  Existence and uniqueness theorem for ODE: an overview. Swarup Poria and Aman Dhiman, Department of Applied Mathematics, University of Calcutta, 92, A.P.C. Road, Kolkata-700009, India. Abstract: The study of existence and uniqueness of solutions became important due to the lack of a general formula for solving nonlinear ordinary differential equations (ODEs). A compact form of existence and uniqueness theory appeared nearly 200 years after the development of the theory of differential equations. In the article, we shall discuss briefly the differences between linear and nonlinear first-order ODEs in the context of existence and uniqueness of solutions. Special emphasis is given to Lipschitz continuous functions in the discussion. 1. Introduction: Differential equations are essential for a mathematical description of nature; many of the general laws of nature, in physics, chemistry, biology, economics and engineering, find their most natural expression in the language of differential equations. A differential equation (DE) allows us to study all kinds of evolutionary processes with the properties of determinacy, finite-dimensionality and differentiability. The study of DEs began very soon after the invention of differential and integral calculus. In 1671, Newton laid the foundation stone for the study of differential equations. He was followed by Leibnitz, who coined the name differential equation in 1676 to denote a relationship between the differentials dx and dy of two variables x and y. The fundamental law of motion in mechanics, known as Newton's second law, is a differential equation describing the state of a system. The motion of a particle of mass m moving along a straight line under the influence of a specified external force F(t, x, x′) is described by the DE m x″ = F(t, x, x′), where x′ = dx/dt and x″ = d²x/dt². (1) At an early stage, mathematicians were mostly engaged in formulating differential equations and solving them, but they did not worry about the existence and uniqueness of solutions.