Notes on Perturbation Techniques for ODEs

James A. Tzitzouris

The idea behind the perturbation method is a simple one. Faced with a problem that we cannot solve exactly, but that is close (in some sense) to an auxiliary problem that we can solve exactly, a good approximate solution to the original problem should be close (in a related sense) to the exact solution of the auxiliary problem.

Suppose we know the solution to the (possibly nonlinear) first-order differential equation given by

    f(\dot{y}_0, y_0, t) = 0, \qquad y_0(t_0) = a.    (1)

Now, consider the perturbed first-order differential equation given by

    f(\dot{y}, y, t) + \varepsilon g(\dot{y}, y, t) = 0, \qquad y(t_0) = a,    (2)

where $g(\dot{y}, y, t)$ is some known function and $|\varepsilon|$ is taken to be very small. We assume without loss of generality that $y_0$ is not a solution to the equation $g(\dot{y}, y, t) = 0$.

Suppose we cannot solve (2) exactly. For small $|\varepsilon|$, since (2) is close to (1), we should have that $y_0$ is close to $y$ (i.e., $y(t) \approx y_0(t)$). In fact, the smaller we take $|\varepsilon|$, the closer the perturbed and auxiliary problems become. However, we can derive a better approximation of $y(t)$ as follows.

Assume a solution exists for (2), although we cannot solve for it exactly. Let us denote this solution by $y(t; \varepsilon)$ to reflect the fact that $y$ is a function of $t$ but depends on the parameter $\varepsilon$. Since we know $\varepsilon$ is close to zero, we can take a first-order Taylor expansion of $y(t; \varepsilon)$ around $\varepsilon = 0$ to obtain

    y(t; \varepsilon) \approx \underbrace{y(t; 0)}_{y_0(t)} + \varepsilon \underbrace{\left. \frac{dy(t; \varepsilon)}{d\varepsilon} \right|_{\varepsilon = 0}}_{y_1(t)}.

The above approximation carries with it an error term proportional to $\varepsilon^2$, so we should ignore all terms proportional to quadratic or higher powers of $\varepsilon$. Defining $y_0$ and $y_1$ as indicated in the Taylor expansion above, we define our revised approximation $\hat{y}$ as follows:

    \hat{y} = y_0 + \varepsilon y_1.    (3)

The function $\hat{y}$ should be interpreted as follows. The function $y_0$ is the roughest approximation to $y$ that we can make; we arrive at $y_0$ by setting $\varepsilon$ to zero.
The function $y_1$ represents a correction term, a refinement to our rough estimate $y_0$. It is multiplied by $\varepsilon$ because the true solution $y$ should approach the rough estimate $y_0$ as $\varepsilon$ approaches zero; thus, we require a smaller correction as $\varepsilon$ approaches zero.

Since $\hat{y} \approx y$, we require that $\hat{y}(t_0) = y_0(t_0) + \varepsilon y_1(t_0) = a$. Since this relation must hold for any choice of $\varepsilon$, we have the modified initial conditions $y_0(t_0) = a$ and $y_1(t_0) = 0$. Substituting (3) into (2), we have

    f(\dot{y}_0 + \varepsilon \dot{y}_1, y_0 + \varepsilon y_1, t) + \varepsilon g(\dot{y}_0 + \varepsilon \dot{y}_1, y_0 + \varepsilon y_1, t) = 0, \qquad y_0(t_0) = a, \quad y_1(t_0) = 0.

Defining $f_y$ and $f_{\dot{y}}$ as the partial derivatives of $f$ with respect to $y$ and $\dot{y}$ respectively, we have the following linearization of the above equation around $\varepsilon = 0$:

    f(\dot{y}_0, y_0, t) + \varepsilon \left( f_{\dot{y}}(\dot{y}_0, y_0, t)\,\dot{y}_1 + f_y(\dot{y}_0, y_0, t)\,y_1 + g(\dot{y}_0, y_0, t) \right) = 0, \qquad y_0(t_0) = a, \quad y_1(t_0) = 0.

Since the above equation must hold for all $\varepsilon$ close to zero, we must have the following initial value system:

    f(\dot{y}_0, y_0, t) = 0, \qquad y_0(t_0) = a,
    \underbrace{f_{\dot{y}}(\dot{y}_0, y_0, t)}_{p(t)}\,\dot{y}_1 + \underbrace{f_y(\dot{y}_0, y_0, t)}_{q(t)}\,y_1 + \underbrace{g(\dot{y}_0, y_0, t)}_{r(t)} = 0, \qquad y_1(t_0) = 0.    (4)

Notice that the first equation above is simply the auxiliary equation (1). Upon first look, the second equation appears to be nonlinear. However, note that after solving the first equation, $y_0(t)$ becomes a known function of time. Thus, the second equation is a first-order linear inhomogeneous equation, which we can usually solve using some kind of integrating factor.
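To make system (4) concrete, here is a minimal numerical sketch (not part of the original notes). The choices $f(\dot{y}, y, t) = \dot{y} - y$, $g = y^2$, $a = 1$ and $\varepsilon = 0.01$ are illustrative assumptions; for them the perturbed equation $\dot{y} - y + \varepsilon y^2 = 0$ is a Bernoulli equation with a known exact solution, so we can check that $\hat{y} = y_0 + \varepsilon y_1$ misses the exact solution by only $O(\varepsilon^2)$, while $y_0$ alone misses it by $O(\varepsilon)$:

```python
import math

def y_exact(t, a, eps):
    # Exact solution of the perturbed problem y' - y + eps*y^2 = 0, y(0) = a
    # (a Bernoulli equation, solved via the substitution u = 1/y).
    return a * math.exp(t) / (1.0 + eps * a * (math.exp(t) - 1.0))

def y_hat(t, a, eps):
    # First-order perturbation approximation y_hat = y0 + eps*y1, where (4) gives
    #   y0' - y0 = 0,          y0(0) = a  =>  y0 = a*e^t,
    #   y1' - y1 + y0^2 = 0,   y1(0) = 0  =>  y1 = a^2*(e^t - e^(2t)).
    y0 = a * math.exp(t)
    y1 = a ** 2 * (math.exp(t) - math.exp(2.0 * t))
    return y0 + eps * y1

a, eps, t = 1.0, 0.01, 1.0
err_first = abs(y_exact(t, a, eps) - y_hat(t, a, eps))   # O(eps^2)
err_zeroth = abs(y_exact(t, a, eps) - a * math.exp(t))   # O(eps)
```

With $\varepsilon = 0.01$ the first-order error is well over an order of magnitude smaller than the zeroth-order error, as the $O(\varepsilon^2)$ estimate predicts.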
In fact, if we have some function $h(t)$ that satisfies

    \frac{d}{dt}\left( h(t) p(t) \right) = h(t) q(t),

then

    y_1(t) = \frac{-1}{h(t) p(t)} \int_{t_0}^{t} h(\tau) r(\tau)\,d\tau.

We leave it as an exercise to the reader to show that if, instead of (1) and (2), we have

    f(\ddot{y}_0, \dot{y}_0, y_0, t) = 0, \qquad y_0(t_0) = a,    (5)

and

    f(\ddot{y}, \dot{y}, y, t) + \varepsilon g(\ddot{y}, \dot{y}, y, t) = 0, \qquad y(t_0) = a,    (6)

then (4) becomes

    f(\ddot{y}_0, \dot{y}_0, y_0, t) = 0, \qquad y_0(t_0) = a,
    \underbrace{f_{\ddot{y}}(\ddot{y}_0, \dot{y}_0, y_0, t)}_{p(t)}\,\ddot{y}_1 + \underbrace{f_{\dot{y}}(\ddot{y}_0, \dot{y}_0, y_0, t)}_{q(t)}\,\dot{y}_1 + \underbrace{f_y(\ddot{y}_0, \dot{y}_0, y_0, t)}_{r(t)}\,y_1 + \underbrace{g(\ddot{y}_0, \dot{y}_0, y_0, t)}_{s(t)} = 0, \qquad y_1(t_0) = 0.    (7)

Note that the second equation in (7) is in general a second-order, linear, variable-coefficient, inhomogeneous equation. That may seem like a mouthful, but what it means is that we may not be able to solve this problem. In particular, the "variable-coefficient" part casts a little bit of doubt. However, there are many problems of this type that can be solved.

This leads us to an important observation. In (4) and (7) above, we may have difficulty depending on the functions $p$, $q$, $r$ and $s$. However, when these functions are constant, we can always solve these systems. This corresponds to the case when the auxiliary problems, (1) and (5), are linear, constant-coefficient differential equations. This bears repeating: if the auxiliary problem is a linear, constant-coefficient ordinary differential equation, then we can always solve the approximating system of equations for $y_0$ and $y_1$. While there certainly exist some nonlinear differential equations that we can solve exactly, we cannot solve most nonlinear differential equations exactly. In contrast, we can always solve problems involving linear, constant-coefficient differential equations.
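Returning to the integrating-factor formula for $y_1$ above, it can be spot-checked numerically. As a hypothetical test case (not from the notes), take $p(t) = 1$, $q(t) = 2t$, $r(t) = 1$ and $t_0 = 0$; then $h(t) = e^{t^2}$ satisfies $\frac{d}{dt}(hp) = hq$, and the formula should produce a $y_1$ with $p\,\dot{y}_1 + q\,y_1 + r = 0$ and $y_1(t_0) = 0$:

```python
import math

def quad(f, a, b, n=4000):
    # Composite trapezoid rule for the integral of f over [a, b].
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

def y1(t):
    # y1(t) = -(1 / (h(t) p(t))) * integral_{t0}^{t} h(tau) r(tau) dtau,
    # with the assumed p = 1, q = 2t, r = 1, t0 = 0, so h(t) = exp(t^2).
    return -math.exp(-t * t) * quad(lambda tau: math.exp(tau * tau), 0.0, t)

# Check the ODE residual p*y1' + q*y1 + r at an arbitrary point,
# estimating y1' by a central finite difference.
t, dt = 0.7, 1e-5
y1_dot = (y1(t + dt) - y1(t - dt)) / (2.0 * dt)
residual = y1_dot + 2.0 * t * y1(t) + 1.0
```

The residual is at the level of the quadrature and finite-difference error (well below $10^{-4}$ here), and $y_1(0) = 0$ holds by construction.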
For this reason, we consider solving vector initial value problems whose general form is given by

    \dot{\vec{y}} = A \vec{y} + \varepsilon g(\vec{y}, t), \qquad \vec{y}(t_0) = \vec{a},    (8)

where $\vec{y}$ is an $n$-dimensional vector-valued function of $t$, $A$ is an $n \times n$ matrix and $g$ is an $n$-dimensional vector-valued function of two arguments, $\vec{y}$ and $t$. Substituting $\hat{\vec{y}} = \vec{y}_0 + \varepsilon \vec{y}_1$ into (8) and expanding $g$ in a zeroth-order Taylor expansion around $\varepsilon = 0$, we have

    \dot{\vec{y}}_0 + \varepsilon \dot{\vec{y}}_1 = A \vec{y}_0 + \varepsilon A \vec{y}_1 + \varepsilon g(\vec{y}_0, t), \qquad \vec{y}_0(t_0) + \varepsilon \vec{y}_1(t_0) = \vec{a}.

Since the above equation must hold for all $\varepsilon$ close to zero, in order to obtain an approximate solution to the nonlinear initial value problem given by (8) we need only solve the following two linear initial value problems:

    \dot{\vec{y}}_0 = A \vec{y}_0, \qquad \vec{y}_0(t_0) = \vec{a},
    \dot{\vec{y}}_1 = A \vec{y}_1 + g(\vec{y}_0, t), \qquad \vec{y}_1(t_0) = \vec{0}.

Example 1. Consider the initial value problem

    \dot{y} = \varepsilon e^{y^2}, \qquad y(0) = 1.

We substitute $\hat{y} = y_0 + \varepsilon y_1$ into the above ODE to obtain

    \dot{y}_0 + \varepsilon \dot{y}_1 = \varepsilon e^{(y_0 + \varepsilon y_1)^2}, \qquad y_0(0) + \varepsilon y_1(0) = 1.

Expanding the nonlinear term on the right as a zeroth-order Taylor series, we obtain

    \dot{y}_0 + \varepsilon \dot{y}_1 = \varepsilon e^{y_0^2}, \qquad y_0(0) + \varepsilon y_1(0) = 1.

Since the above equation must hold for all $\varepsilon$ close to zero, we must have that

    \dot{y}_0 = 0, \qquad y_0(0) = 1,
    \dot{y}_1 = e^{y_0^2}, \qquad y_1(0) = 0.

Thus, $y_0(t) = 1$ and therefore $y_1(t) = e\,t$; thus $\hat{y} = 1 + \varepsilon e\,t$.

Example 2. Consider the initial value problem

    \ddot{y} + 4\dot{y} + 3y + \varepsilon y^p = 0, \qquad y(0) = 1, \quad \dot{y}(0) = -3,

where $p \geq 2$ is an integer.
We substitute $\hat{y} = y_0 + \varepsilon y_1$ into the above ODE to obtain

    \ddot{y}_0 + 4\dot{y}_0 + 3y_0 + \varepsilon \left( \ddot{y}_1 + 4\dot{y}_1 + 3y_1 + (y_0 + \varepsilon y_1)^p \right) = 0, \qquad y_0(0) + \varepsilon y_1(0) = 1, \quad \dot{y}_0(0) + \varepsilon \dot{y}_1(0) = -3.

Expanding the nonlinear term as a zeroth-order Taylor series, we obtain

    \ddot{y}_0 + 4\dot{y}_0 + 3y_0 + \varepsilon \left( \ddot{y}_1 + 4\dot{y}_1 + 3y_1 + y_0^p \right) = 0, \qquad y_0(0) + \varepsilon y_1(0) = 1, \quad \dot{y}_0(0) + \varepsilon \dot{y}_1(0) = -3.

Since the above equation must hold for all $\varepsilon$ close to zero, we must have that

    \ddot{y}_0 + 4\dot{y}_0 + 3y_0 = 0, \qquad y_0(0) = 1, \quad \dot{y}_0(0) = -3,
    \ddot{y}_1 + 4\dot{y}_1 + 3y_1 = -y_0^p, \qquad y_1(0) = 0, \quad \dot{y}_1(0) = 0.

Solving the above equations (left as an exercise to the reader), we have that

    y_0(t) = e^{-3t},
    y_1(t) = \frac{1}{6} \frac{e^{-3t}}{p - 1} + \frac{1}{2} \frac{e^{-t}}{1 - 3p} - \frac{1}{3} \frac{e^{-3pt}}{(1 - 3p)(1 - p)}.

Thus $\hat{y} = e^{-3t} + \varepsilon \left( \frac{1}{6} \frac{e^{-3t}}{p - 1} + \frac{1}{2} \frac{e^{-t}}{1 - 3p} - \frac{1}{3} \frac{e^{-3pt}}{(1 - 3p)(1 - p)} \right)$.

Example 3. Consider the initial value problem

    2y\dot{y} - y^2 + \varepsilon \sqrt{y} = 0, \qquad y(0) = 1.

We substitute $\hat{y} = y_0 + \varepsilon y_1$ into the above ODE to obtain

    2(y_0 + \varepsilon y_1)(\dot{y}_0 + \varepsilon \dot{y}_1) - (y_0 + \varepsilon y_1)^2 + \varepsilon \sqrt{y_0 + \varepsilon y_1} = 0, \qquad y_0(0) + \varepsilon y_1(0) = 1.

Expanding the nonlinear terms as Taylor series and ignoring higher-order terms, we obtain

    2y_0\dot{y}_0 - y_0^2 + \varepsilon \left( 2y_0\dot{y}_1 + 2y_1\dot{y}_0 - 2y_0y_1 + \sqrt{y_0} \right) = 0, \qquad y_0(0) + \varepsilon y_1(0) = 1.

Since the above equation must hold for all $\varepsilon$ close to zero, we must have that

    2y_0\dot{y}_0 - y_0^2 = 0, \qquad y_0(0) = 1,
    2y_0\dot{y}_1 + 2y_1\dot{y}_0 - 2y_0y_1 + \sqrt{y_0} = 0, \qquad y_1(0) = 0.

Thus, $y_0(t) = e^{t/2}$ and therefore $y_1(t) = \frac{2}{3}\left( e^{-t/4} - e^{t/2} \right)$; thus $\hat{y} = e^{t/2} + \frac{2\varepsilon}{3}\left( e^{-t/4} - e^{t/2} \right)$.

Example 4. Consider the initial value problem

    \dot{y} - y - \frac{\varepsilon}{y} = 0, \qquad y(0) = 1.

We substitute $\hat{y} = y_0 + \varepsilon y_1$ into the above ODE to obtain

    \dot{y}_0 + \varepsilon \dot{y}_1 - (y_0 + \varepsilon y_1) - \frac{\varepsilon}{y_0 + \varepsilon y_1} = 0, \qquad y_0(0) + \varepsilon y_1(0) = 1.

Multiplying the above equation by $y_0 + \varepsilon y_1$ and ignoring higher-order terms, we obtain

    y_0\dot{y}_0 - y_0^2 + \varepsilon \left( y_0\dot{y}_1 - 2y_0y_1 + \dot{y}_0y_1 - 1 \right) = 0, \qquad y_0(0) + \varepsilon y_1(0) = 1.

Since the above equation must hold for all $\varepsilon$ close to zero, we must have that

    y_0\dot{y}_0 - y_0^2 = 0, \qquad y_0(0) = 1,
    y_0\dot{y}_1 - 2y_0y_1 + \dot{y}_0y_1 - 1 = 0, \qquad y_1(0) = 0.

Thus, $y_0(t) = e^t$ and therefore $y_1(t) = \frac{1}{2}\left( e^t - e^{-t} \right)$; thus $\hat{y} = e^t + \frac{\varepsilon}{2}\left( e^t - e^{-t} \right)$.
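The closed forms in Examples 1 and 4 can be spot-checked against a direct numerical integration of the perturbed ODEs. The sketch below is not part of the original notes; it uses a classical fixed-step fourth-order Runge-Kutta integrator, and $\varepsilon = 0.01$, $T = 1$ are arbitrary choices:

```python
import math

def rk4(f, t0, y0, t1, n=4000):
    # Classical fourth-order Runge-Kutta with fixed step size.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return y

eps, T = 0.01, 1.0

# Example 1:  y' = eps*exp(y^2), y(0) = 1;  y_hat = 1 + eps*e*t.
y_num1 = rk4(lambda t, y: eps * math.exp(y * y), 0.0, 1.0, T)
y_hat1 = 1.0 + eps * math.e * T
err1 = abs(y_num1 - y_hat1)

# Example 4:  y' = y + eps/y, y(0) = 1;  y_hat = e^t + (eps/2)*(e^t - e^-t).
y_num4 = rk4(lambda t, y: y + eps / y, 0.0, 1.0, T)
y_hat4 = math.exp(T) + 0.5 * eps * (math.exp(T) - math.exp(-T))
err4 = abs(y_num4 - y_hat4)
```

Both discrepancies come out of order $\varepsilon^2$, consistent with the error term dropped in the first-order Taylor expansion (3).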
