Tutorial #2,#3, & #4 MAT 244 – ODE – Summer 2013
1st Order Equations
1st Order O.D.E. In general we write a first order equation in the form
$$\frac{dy}{dt} = f(t, y)$$
where $f$ is some function of $t$ and possibly $y$. We have essentially 4 cases that we'll cover in this course for 1st order ODEs. We'll start by covering how to solve them and deal with existence/justification of our work later.
(a) Separable Equations These are definitely the nicest form of 1st order ODE. They take the form
$$M(t) = N(y)\frac{dy}{dt}$$
where $M$ and $N$ are functions of the indicated single variable. Suggestively, we may rewrite this in "differential form" as
$$M(t)\,dt = N(y)\,dy$$
and integrate with respect to each differential, i.e. we turn the differential equation into an integral equation:
$$M(t) = N(y)\frac{dy}{dt} \iff \int M(t)\,dt = \int N(y)\,dy$$
These forms are equivalent, and the integral formulation gives a nice method to solve the equation explicitly as long as we can evaluate the integrals. Remark: It is sometimes impossible to get $y$ in terms of just $t$, i.e. what is $y(t)$? We may leave the solution as an implicit solution though. Think of $x^2 + y^2 = 1$, an implicit equation for a circle; rearranging for $y$ we obtain $y = \pm\sqrt{1 - x^2}$, and either choice of sign loses half the circle.
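To make the separate-and-integrate recipe concrete, here is a quick sketch in Python with sympy (an addition to these notes; the equation $y' = ty$ is an illustrative choice, not one of the examples below):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Illustrative separable equation: dy/dt = t*y, i.e. dy/y = t dt
ode = sp.Eq(y(t).diff(t), t * y(t))
sol = sp.dsolve(ode, y(t))

# Integrating each side by hand gives ln|y| = t**2/2 + c, i.e. y = C*exp(t**2/2)
print(sol)
```

Here `dsolve` does the separation for us; the printed answer matches the hand computation up to the name of the constant.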
(b) 1st Order Linear In this case we have a general form of
$$y'(t) + p(t)y(t) = g(t)$$
A method of solving this is to recall the product rule from calculus. The left-hand side looks an awful lot like
$$\frac{d}{dt}\big(\mu(t)y(t)\big) = \mu(t)y'(t) + \mu'(t)y(t)$$
Suppose we had the above form; we could make it look like the 1st order linear general form since
$$\frac{d}{dt}\big(\mu(t)y(t)\big) = \mu(t)g(t) \iff y'(t) + \frac{\mu'(t)}{\mu(t)}y(t) = g(t)$$
So this $\mu(t)$ will help us "factor" the ODE if
$$\mu'(t) = p(t)\mu(t)$$
For this reason, we'll call $\mu(t)$ an integrating factor. We may solve for an explicit formula using the separable method. Let's do this:
$$\mu' = p\mu \iff \frac{d\mu}{\mu} = p(t)\,dt \implies \ln|\mu| = \int p(t)\,dt \implies \mu(t) = \exp\left(\int p(t)\,dt\right)$$
This $\mu(t)$ will always allow us to factor the ODE. More explicitly, we have a formula for $y(t)$ since
$$\frac{d}{dt}\big(\mu(t)y(t)\big) = \mu(t)g(t) \implies y(t) = \frac{1}{\mu(t)}\int \mu(t)g(t)\,dt$$
This formula completely solves this case. As an aside, notice that as long as we can make sense of the formula we have existence and uniqueness for a solution $y$.
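As a sanity check on this formula, the following sketch computes $\mu$ and then $y$ symbolically for the illustrative choice $p(t) = 1$, $g(t) = t$ (not an example from these notes):

```python
import sympy as sp

t, C = sp.symbols('t C')
p = sp.Integer(1)   # illustrative p(t) = 1
g = t               # illustrative g(t) = t

mu = sp.exp(sp.integrate(p, t))          # integrating factor mu(t) = e^t
y = (sp.integrate(mu * g, t) + C) / mu   # y = (1/mu) * (integral of mu*g, plus constant)

# The residual y' + p*y - g should vanish identically
print(sp.simplify(y.diff(t) + p * y - g))  # 0
```

Here the integration constant $C$ is written explicitly, which is how the one-parameter family of solutions appears in the formula.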
(c) Exact Equations Suppose that $M, N, M_y, N_t$ are continuous in some open box; then
$$\frac{\partial M}{\partial y} = \frac{\partial N}{\partial t} \implies M(t, y) + N(t, y)y' = 0 \text{ has a solution}$$
Why does this happen? You can think of this as black magic for the moment, or if you have some interest in mathematics you can look up something called the exterior differential $d$, which satisfies
$$F(x, y) = \text{Const} \iff d(F(x, y)) = d(\text{Const}) \iff \underbrace{\frac{\partial F}{\partial x}}_{M}\,dx + \underbrace{\frac{\partial F}{\partial y}}_{N}\,dy = 0$$
We call equations of this form exact. Notice that the derivative condition is simply the condition that the mixed partials match, i.e.
$$M_y = N_x \iff \frac{\partial^2 F}{\partial x\,\partial y} = \frac{\partial^2 F}{\partial y\,\partial x}$$
Thus the solution to the ODE should be the sum of the linearly independent parts of the integrals of each derivative; we denote this special sum by $\oplus$, as you would with vectors as you've seen in linear algebra.
$$F(x, y) = \int M\,dx \oplus \int N\,dy \implies \boxed{\int M\,dx \oplus \int N\,dy = C} \text{ where } C \in \mathbb{R}$$
where the boxed formula is an implicit solution to the ODE.
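A small sketch of this recipe in sympy (the equation $2xy + x^2y' = 0$ is an illustrative choice, not from the notes): integrate $M$ in $x$, then append whatever part of $\int N\,dy$ is still missing, which is exactly what the "$\oplus$" above does.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative exact equation: 2xy + x**2 * y' = 0, so M = 2xy, N = x**2
M, N = 2*x*y, x**2
assert sp.diff(M, y) == sp.diff(N, x)  # exactness check: M_y = N_x

# F = int M dx, then add the part of int N dy not already present
F = sp.integrate(M, x)
F += sp.integrate(N - sp.diff(F, y), y)

print(F)  # x**2*y, so the implicit solution is x**2*y = C
```

By construction $F_x = M$ and $F_y = N$, so the level sets $F = C$ are the solution curves.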
(d) Homogeneous Equations In this context, we actually mean that
$$M(t, y)\,dt + N(t, y)\,dy = 0$$
with M and N homogeneous. A function is said to be homogeneous of degree n if we have that
$$f(\lambda t, \lambda y) = \lambda^n f(t, y) \quad \text{where } \lambda \in \mathbb{R}$$
Notice in this case that the ratio of scaled $M$ and $N$ is fixed, i.e.
$$\frac{M(t, y)}{N(t, y)} = \frac{M(\lambda t, \lambda y)}{N(\lambda t, \lambda y)}$$
Now if we fix $\lambda = 1/t$, we see that
$$\frac{M(t, y)}{N(t, y)} = \frac{M(1, y/t)}{N(1, y/t)} = f(y/t)$$
This leads us to consider the change of variables $v = y/t$. We see
$$M(t, y)\,dt + N(t, y)\,dy = 0 \iff \frac{M(t, y)}{N(t, y)} + y' = 0 \iff f(v) + tv' + v = 0$$
We can solve the ODE in $v$ since it is separable; the solution takes the form
$$\int \frac{dv}{f(v) + v} = -\int \frac{dt}{t} \implies -\ln|t| = \int \frac{dv}{f(v) + v}$$
or, in our original variables, we see
$$-\ln|t| = \int \frac{tN(t, y)}{tM(t, y) + yN(t, y)}\,d\!\left(\frac{y}{t}\right)$$
which isn't very helpful, but it shows that we can always find a solution if the integral makes sense.
Example(Separable) - Find the general solution to
$$y^2\sqrt{1 - x^2}\,dy = \arcsin(x)\,dx$$
Notice this is in differential form; it is equivalent to
$$y^2\sqrt{1 - x^2}\,y' = \arcsin(x)$$
$$y^2\sqrt{1 - x^2}\,y' = \arcsin(x) \iff \int y^2\,dy = \int \frac{\arcsin(x)}{\sqrt{1 - x^2}}\,dx \implies \frac{y^3}{3} = \frac{\arcsin^2(x)}{2} + C \text{ where } C \in \mathbb{R}$$
The $C$ is just the integration constant. It means that we have a 1-parameter family of solutions to the ODE. (Note: The integral is solved using $u$-substitution with $u = \arcsin(x)$.) Thus our solution to the ODE is
$$y(x) = \sqrt[3]{\frac{3\arcsin^2(x)}{2} + C}$$
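One can check this answer symbolically; the sketch below (a sympy addition, not part of the original notes) differentiates the claimed solution and confirms it satisfies the equation:

```python
import sympy as sp

x, C = sp.symbols('x C')

# Claimed general solution from above
y = (sp.Rational(3, 2) * sp.asin(x)**2 + C) ** sp.Rational(1, 3)

# Plug back into y**2 * sqrt(1 - x**2) * y' = arcsin(x)
residual = y**2 * sp.sqrt(1 - x**2) * y.diff(x) - sp.asin(x)
print(sp.simplify(residual))  # 0
```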
Example(1st Order Linear) - Solve the IVP
$$ty' + (t + 1)y = t, \qquad y(\ln(2)) = 1$$
To use our formula for an integrating factor, we need to rewrite the ODE in standard form, namely
$$y' + \frac{t + 1}{t}y = 1$$
Thus our integrating factor is
$$\mu(t) = \exp\left(\int \frac{t + 1}{t}\,dt\right) = te^t$$
Using our formula, we see the general solution is
$$y(t) = \frac{1}{te^t}\int te^t\,dt = \frac{1}{te^t}\big(e^t(t - 1) + C\big) = 1 - \frac{1}{t} + \frac{C}{te^t} \text{ where } C \in \mathbb{R}$$
where we used the integration by parts formula to evaluate the integral. The initial condition (data) states we want the solution such that $y(\ln(2)) = 1$. This forces our choice of $C$, namely
$$y(\ln(2)) = 1 - \frac{1}{\ln(2)} + \frac{C}{2\ln(2)} = 1 \iff C = 2$$
Thus the solution to the initial value problem (IVP) is
$$y(t) = 1 - \frac{1}{t} + \frac{2}{te^t}$$
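The solution can be verified by direct substitution; a short symbolic check (a sympy sketch added to these notes):

```python
import sympy as sp

t = sp.symbols('t')

# Solution of the IVP found above
y = 1 - 1/t + 2/(t * sp.exp(t))

# It satisfies the ODE t*y' + (t+1)*y = t ...
print(sp.simplify(t * y.diff(t) + (t + 1) * y - t))  # 0
# ... and the initial condition y(ln 2) = 1
print(sp.simplify(y.subs(t, sp.log(2))))  # 1
```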
Example(1st Order Linear) - Find the general solution to
$$y' + \frac{1}{t}y = 3\cos(2t)$$
We use our integrating factor method, i.e.
$$\mu(t) = \exp\left(\int p(t)\,dt\right) = t$$
Thus via the formula for $y(t)$ we have
$$y(t) = \frac{1}{t}\int 3t\cos(2t)\,dt = \frac{3}{4t}\big(2t\sin(2t) + \cos(2t) + C\big) \text{ where } C \in \mathbb{R}$$
using integration by parts.
Example(Exact Equation) - Find the general solution to
$$\underbrace{\frac{x}{(x^2 + y^2)^{3/2}}}_{=M} + \underbrace{\frac{y}{(x^2 + y^2)^{3/2}}}_{=N}\,y' = 0$$
It's easy to verify that
$$M_y = \frac{-3xy}{(x^2 + y^2)^{5/2}} = N_x \implies \text{Exact!}$$
Using the formula we derived, we see
$$F(x, y) = \int M\,dx \oplus \int N\,dy = \int \frac{x}{(x^2 + y^2)^{3/2}}\,dx \oplus \int \frac{y}{(x^2 + y^2)^{3/2}}\,dy = -\frac{1}{\sqrt{x^2 + y^2}} \oplus -\frac{1}{\sqrt{x^2 + y^2}} = -\frac{1}{\sqrt{x^2 + y^2}}$$
Thus the implicit solution to the ODE is (note that we hid the $-$ factor in front in the constant)
$$\frac{1}{\sqrt{x^2 + y^2}} = C \text{ where } C \in \mathbb{R}$$
Exact Integrating Factors Suppose that we have
$$M(t, y)\,dt + N(t, y)\,dy = 0$$
and the equation is not exact, i.e. $M_y \neq N_t$. The goal here is to find out what we may multiply the equation by to try and make it exact. There isn't exactly (haha) a systematic approach to always finding an integrating factor $\mu(x, y)$ to make the equation exact, but we have some special cases. Suppose that $\mu$ is a function of $y$ alone. This means that we want the following ODE to be exact:
$$\underbrace{\mu(y)M}_{\tilde{M}}\,dt + \underbrace{\mu(y)N}_{\tilde{N}}\,dy = 0$$
For this to be exact, we need ($\tilde{M}_y = \tilde{N}_t$)
$$\frac{\partial}{\partial y}(\mu(y)M) = \mu'M + \mu M_y = \mu(y)N_t$$
$$\implies \frac{\mu'(y)}{\mu(y)} = \frac{N_t - M_y}{M}$$
This equation tells us that there is an integrating factor if $(N_t - M_y)/M$ is a function only of $y$. If we assume this, we see our integrating factor takes the form
$$\mu(y) = \exp\left(\int \frac{N_t - M_y}{M}\,dy\right)$$
Similarly, we may repeat the same argument to show $\mu(t)$ is an integrating factor if $(M_y - N_t)/N$ is a function only of $t$. In this case we'd see
$$\mu(t) = \exp\left(\int \frac{M_y - N_t}{N}\,dt\right)$$
As an exercise, find the condition for $\mu(xy)$ to be an integrating factor. Remark! There isn't always just one integrating factor; consider $2\sin(y) + x\cos(y)y' = -1$. You can check that $\mu(x) = x$ or $\mu(y) = 1/\sqrt{2\sin(y) + 1}$ both work.
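The Remark's example can be checked mechanically; this sketch (assuming sympy) rewrites it as $(2\sin y + 1) + x\cos y\,y' = 0$, computes both quotients, and verifies that each resulting factor makes the equation exact:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Remark's equation rewritten as M + N*y' = 0
M = 2*sp.sin(y) + 1
N = x*sp.cos(y)

ratio_x = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)  # 1/x: function of x alone
mu_x = sp.exp(sp.integrate(ratio_x, x))                     # mu(x) = x

ratio_y = sp.simplify((sp.diff(N, x) - sp.diff(M, y)) / M)  # function of y alone
mu_y = sp.exp(sp.integrate(ratio_y, y))  # a constant multiple of 1/sqrt(2*sin(y) + 1)

# Multiplying by either factor makes the equation exact
print(sp.simplify(sp.diff(mu_x*M, y) - sp.diff(mu_x*N, x)))  # 0
print(sp.simplify(sp.diff(mu_y*M, y) - sp.diff(mu_y*N, x)))  # 0
```

An integrating factor is only determined up to a constant multiple, which is why `mu_y` may come out scaled relative to the Remark's $1/\sqrt{2\sin(y)+1}$.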
Example(Exact Integrating Factor) - Find the general solution to
$$\underbrace{y}_{M} + \underbrace{(2x - ye^y)}_{N}\,y' = 0$$
We see the equation is not exact, since
$$M_y = 1 \quad \& \quad N_x = 2$$
But notice that the derivatives are similar (i.e. constants), so maybe we can find an integrating factor. From the derivation of an exact integrating factor, we see
$$\frac{N_x - M_y}{M} = \frac{1}{y}$$
which is just a function of $y$. Thus
$$\mu(y) = \exp\left(\int \frac{N_x - M_y}{M}\,dy\right) = y$$
Let's check that $y$ makes the new ODE exact. We have
$$\underbrace{y^2}_{\tilde{M}} + \underbrace{(2xy - y^2e^y)}_{\tilde{N}}\,y' = 0$$
Clearly, we have
$$\tilde{M}_y = 2y = \tilde{N}_x$$
which means the ODE is exact. Therefore, our solution is a level set of
$$F(x, y) = \int \tilde{M}\,dx \oplus \int \tilde{N}\,dy = \int y^2\,dx \oplus \int (2xy - y^2e^y)\,dy = xy^2 \oplus \big(xy^2 - (y^2 - 2y + 2)e^y\big) = xy^2 - (y^2 - 2y + 2)e^y$$
i.e. the (implicit) general solution is
$$xy^2 - (y^2 - 2y + 2)e^y = C \text{ where } C \in \mathbb{R}$$
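A quick symbolic check (a sympy addition) that the level-set function above really has $F_x = \tilde{M}$ and $F_y = \tilde{N}$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# F from the worked example above
F = x*y**2 - (y**2 - 2*y + 2)*sp.exp(y)

print(sp.simplify(F.diff(x) - y**2))                      # 0, i.e. F_x = M~
print(sp.simplify(F.diff(y) - (2*x*y - y**2*sp.exp(y))))  # 0, i.e. F_y = N~
```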
Example(Discontinuous Coefficients) - Solve the IVP
$$y' + p(t)y = 0, \quad y(0) = 1 \quad \text{where } p(t) = \begin{cases} 2 & t \in [0, 1] \\ 1 & t > 1 \end{cases}$$
By definition of a solution, we want it to be continuous, i.e. we'll have to glue pieces together. We have two cases, $t \in [0, 1]$ and $t > 1$. For $t \in [0, 1]$ we're solving
$$y' + 2y = 0 \implies y_1(t) = C_1e^{-2t} \text{ where } C_1 \in \mathbb{R}$$
The initial data implies that $C_1 = 1$. For $t > 1$, we have
$$y' + y = 0 \implies y_2(t) = C_2e^{-t}$$
The constant here is determined by making the function continuous. We want
$$\lim_{t \to 1^+} C_2e^{-t} = y_1(1) = e^{-2} \implies C_2 = e^{-1}$$
Thus the continuous solution to the IVP is
$$y(t) = \begin{cases} e^{-2t} & t \in [0, 1] \\ e^{-(t+1)} & t > 1 \end{cases}$$
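Both pieces and the gluing condition can be verified directly (a sympy sketch, not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t')

y1 = sp.exp(-2*t)       # piece on [0, 1], solving y' + 2y = 0
y2 = sp.exp(-(t + 1))   # piece on t > 1, solving y' + y = 0

print(sp.simplify(y1.diff(t) + 2*y1))  # 0
print(sp.simplify(y2.diff(t) + y2))    # 0
print(y1.subs(t, 0))                   # 1, the initial condition
print(y1.subs(t, 1) == y2.subs(t, 1))  # True: the pieces agree at t = 1
```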
Notice that the glued solution is continuous but not differentiable at $t = 1$, where $p(t)$ jumps.
Autonomous Equations Basically, an autonomous ODE is one that doesn't depend on the independent variable ($t$, $x$, etc.). Characteristic features of systems of this type are critical points (equilibrium solutions).
We call a value $y_0$ a critical point if the autonomous system satisfies
$$y' = f(y_0) = 0$$
In situations like this it makes sense to talk about stability of a critical point. Stable critical points are points such that a perturbation like
$$y(t_0) = y_0 \pm \epsilon$$
will eventually fall back into the $y_0$ state, for $\epsilon > 0$ (not so big that it passes through another critical point). More specifically, this means that
$$f(y_0 + \epsilon) < 0 \quad \& \quad f(y_0 - \epsilon) > 0 \implies \text{Stable Critical Point}$$
Unstable critical points are exactly the opposite, i.e.
$$f(y_0 + \epsilon) > 0 \quad \& \quad f(y_0 - \epsilon) < 0 \implies \text{Unstable Critical Point}$$
Then we have the mixed case, stable on one side, unstable on the other. These are called semi-stable critical points and have the property that
$$\text{sgn}(f(y_0 + \epsilon)) = \text{sgn}(f(y_0 - \epsilon)) \implies \text{Semi-Stable Critical Point}$$
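These sign conditions translate directly into code; the sketch below classifies the critical points of the illustrative ODE $y' = y(1 - y)$ (an example chosen here, not taken from the notes):

```python
import sympy as sp

y = sp.symbols('y')

# Illustrative autonomous right-hand side: f(y) = y*(1 - y), critical points 0 and 1
f = y * (1 - y)
eps = sp.Rational(1, 10)  # perturbation small enough not to reach the other critical point

def classify(y0):
    right = f.subs(y, y0 + eps)
    left = f.subs(y, y0 - eps)
    if right < 0 and left > 0:
        return "stable"
    if right > 0 and left < 0:
        return "unstable"
    return "semi-stable"

print(classify(0), classify(1))  # unstable stable
```

Solutions just above $y = 1$ decrease and solutions just below increase, so $1$ is stable, while the flow moves away from $0$ on both sides.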
Variation of Parameters Suppose we consider the homogeneous 1st Order linear ODE, namely
$$y'(t) + p(t)y(t) = 0$$
We may find the solution to this via separation of variables, it is
$$y(t) = C\exp\left(-\int p(t)\,dt\right) = CI(t) \quad \text{where } C \in \mathbb{R}$$
with $I(t) = \exp\left[-\int p(t)\,dt\right]$. Now suppose we consider the non-homogeneous case, with $g(t)$ not identically zero:
$$y'(t) + p(t)y(t) = g(t)$$
We’ll find the solution by varying the constant on the homogeneous solution, i.e. making C into a function
$C(t)$. Plugging this into the ODE gives us (note that $I'(t) = -pI(t)$)
$$y' + py = C'I + CI' + pCI = C'I \underbrace{- pCI + pCI}_{=0} = g \implies C'(t) = \frac{g(t)}{I(t)}$$
This means we may find the $C(t)$ which solves the equation if we integrate. We see
$$C(t) = \int \frac{g(t)}{I(t)}\,dt$$
Notice that $I(t) = 1/\mu(t)$, the inverse of our integrating factor, which results in the same formula as we deduced earlier:
$$y(t) = C(t)I(t) = \frac{1}{\mu(t)}\int \mu(t)g(t)\,dt$$
Euler's Method (1st Order Taylor Approximation) Recall the definition of slope, i.e. rise over run, and call $y(x_n) = y_n$. We have
$$\frac{\Delta y}{\Delta x} = \frac{y_{n+1} - y_n}{x_{n+1} - x_n} \approx y' \quad \text{if } x_{n+1} - x_n \text{ is small}$$
Define $\epsilon$ as our step size between points, i.e. $x_{n+1} - x_n = \epsilon$ for all $n$. Now consider the 1st order ODE
$$y' = f(x, y)$$
If our $\epsilon$ is small enough, it's not a bad approximation to assume that
$$y' = \frac{y_{n+1} - y_n}{\epsilon} = f(x_n, y_n) \implies y_{n+1} = y_n + \epsilon f(x_n, y_n)$$
which is exactly a Taylor expansion up to first order. This gives us a computational method to solve an ODE iteratively (of course assuming you give me $y(x_0) = y_0$, i.e. the initial data). This works since we're essentially flowing along the direction field created by the ODE for $\epsilon$ time. Gluing together all these "slope" lines approximates the solution (as long as $f$ and $\partial_y f$ are continuous [or Lipschitz]).
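The iteration $y_{n+1} = y_n + \epsilon f(x_n, y_n)$ is only a few lines of Python (the test problem $y' = y$, $y(0) = 1$ is an illustrative choice added here):

```python
import math

def euler(f, x0, y0, eps, steps):
    """Euler's method: repeatedly step y_{n+1} = y_n + eps * f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += eps * f(x, y)
        x += eps
    return y

# Illustrative IVP y' = y, y(0) = 1, whose exact solution at x = 1 is e
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
print(approx, math.e)  # the approximation lands within ~0.002 of e
```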
Example(Convergence of Euler’s Method) Consider the IVP
$$y' = 1 - t + y, \quad y(t_0) = y_0$$
Clearly we may solve this using the methods talked about earlier to obtain
$$y(t) = (y_0 - t_0)e^{t - t_0} + t$$
But let's see how we'd implement Euler's Method here. Using the idea above, define a step size $\epsilon$ starting from the initial point $t_0$, i.e. $t_n = t_0 + n\epsilon$. We have that
$$y_{n+1} = y_n + \epsilon f(t_n, y_n) = y_n + \epsilon(1 - t_n + y_n) = (1 + \epsilon)y_n + \epsilon(1 - t_n)$$
By using the above formula recursively, we can get everything in terms of the initial data, i.e. prove by induction that
$$y_n = (1 + \epsilon)^n(y_0 - t_0) + t_n$$
The base case is immediate, so assume the formula holds for $n$ and prove that $n + 1$ holds. We leave this as an exercise. Now
fix $t > t_0$ and take
$$\epsilon = \frac{t - t_0}{n} \implies t_n = t$$
Consider the limit of the solution now as our step size $\epsilon$ approaches $0$, i.e. $n \to \infty$. We have
$$\lim_{n \to \infty} y(t_n) = \lim_{n \to \infty} \left(1 + \frac{t - t_0}{n}\right)^n(y_0 - t_0) + t_n = (y_0 - t_0)e^{t - t_0} + t$$
using the fact that
$$\lim_{n \to \infty} \left(1 + \frac{a}{n}\right)^n = e^a$$
This shows that as our steps get finer, we'll recover the actual solution.
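This limit can be watched numerically; a sketch with the illustrative data $t_0 = 0$, $y_0 = 2$, $t = 1$ (values chosen here, not from the notes):

```python
import math

# IVP y' = 1 - t + y with illustrative data t0 = 0, y0 = 2; exact y(t) = (y0 - t0)e^{t - t0} + t
t0, y0, t = 0.0, 2.0, 1.0
exact = (y0 - t0) * math.exp(t - t0) + t

for n in (10, 100, 10000):
    eps = (t - t0) / n
    yn = (1 + eps)**n * (y0 - t0) + t  # closed form of the Euler iterate y_n, with t_n = t
    print(n, yn, exact)  # yn approaches exact as n grows
```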
The Picard–Lindelöf Theorem (Existence and Uniqueness) This theorem will give us existence and uniqueness for the specific 1st order ODE. Suppose that $f(t, y(t))$ and $\partial_y f(t, y(t))$ are continuous in a box $B$ centred around $(t_0, y_0)$. Then the IVP
$$y' = f(t, y), \quad y(t_0) = y_0$$
has a unique solution locally around $(t_0, y_0)$. This basically states that we can always find a solution if $f(t, y)$ is nice enough. The idea of the proof is as follows. Turn the differential equation into an integral equation:
$$\int_{t_0}^{t} y'(s)\,ds = \int_{t_0}^{t} f(s, y(s))\,ds \implies y(t) = y_0 + \int_{t_0}^{t} f(s, y(s))\,ds$$
We want to show that the integral equation has a fixed point $y$, since if it does, that $y$ is a solution to the ODE. To do this we can define an approximating sequence (these are usually called Picard iterations)
$$\phi_0 = y_0 \quad \& \quad \phi_{n+1}(t) = y_0 + \int_{t_0}^{t} f(s, \phi_n(s))\,ds$$
We want to show that this sequence converges to something (i.e. the solution), so we first show the distance is shrinking between iterations of this sequence for small enough $t$. This is basically done with the following bound:
$$\max_{t \in B}|\phi_{i+1}(t) - \phi_{j+1}(t)| = \max_{t \in B}\left|\int_{t_0}^{t}[f(s, \phi_i(s)) - f(s, \phi_j(s))]\,ds\right| \leq \max_{t \in B}\int_{t_0}^{t}|f(s, \phi_i(s)) - f(s, \phi_j(s))|\,ds$$
$$\left(K = \max_{t \in B}\left|\frac{\partial f}{\partial y}\right|\right) \implies \leq K\max_{t \in B}\int_{t_0}^{t}|\phi_i(s) - \phi_j(s)|\,ds \leq K\max_{t \in B}(t - t_0)\,|\phi_i(t) - \phi_j(t)|$$
Then taking $B$ such that $K(t - t_0) = \tilde{K} < 1$ for all $t \in B$, this shows that this is a contraction map, which is a fancy way of saying distances are shrinking, i.e.
$$\max_{t \in B}|\phi_{n+1}(t) - \phi_n(t)| \leq \tilde{K}^n\max_{t \in B}|\phi_1(t) - \phi_0(t)|$$
One may check that $|\phi_1(t) - \phi_0(t)| < \infty$, which means
$$\lim_{n \to \infty}\max_{t \in B}|\phi_{n+1}(t) - \phi_n(t)| = 0$$
which shows there is a limit point. You can think of the $\phi_n$ like a finite Taylor expansion approximating the solution that converges to the actual solution in the limit. Uniqueness of this solution we leave as an exercise.
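The Picard iterates can be computed symbolically; for the illustrative IVP $y' = y$, $y(0) = 1$ (an example added here) they are exactly the Taylor partial sums of $e^t$, which makes the "finite Taylor expansion" intuition above concrete:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Picard iterations for y' = f(t, y) = y with y(0) = 1
phi = sp.Integer(1)  # phi_0 = y_0 = 1
for _ in range(4):
    # phi_{n+1}(t) = y0 + integral from 0 to t of f(s, phi_n(s)) ds
    phi = 1 + sp.integrate(phi.subs(t, s), (s, 0, t))

print(sp.expand(phi))  # the degree-4 Taylor polynomial of e^t: 1 + t + t**2/2 + t**3/6 + t**4/24
```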
Exercise(Existence) - Where do solutions exist?
$$\frac{dy}{dt} = \frac{y\cos(t)}{1 + y}$$
From the statement of the Picard–Lindelöf Theorem, we need to check where $f$ and $\partial_y f$ are continuous. Since
$$f(t, y(t)) = \frac{y\cos(t)}{1 + y}$$
we see that $f$ is continuous everywhere except the "blow up" at $y = -1$; thus $f$ is continuous on $\{(t, y) \in \mathbb{R} \times \mathbb{R}\setminus\{-1\}\}$. We have that
$$\frac{\partial f}{\partial y} = \frac{\cos(t)}{(1 + y)^2}$$
which is continuous everywhere but $y = -1$ again; thus $\partial_y f$ is continuous on $\{(t, y) \in \mathbb{R} \times \mathbb{R}\setminus\{-1\}\}$. Thus we have that solutions may live in either
$$\mathbb{R} \times (-\infty, -1) \quad \text{or} \quad \mathbb{R} \times (-1, \infty)$$
Remember that it may just be on a subset of this for given $(t_0, y_0)$.
Exercise(Existence) - Where do solutions exist?
$$y'\ln(t) + y = \cot(t), \quad y(2) = 3$$
Since this is 1st order linear, let’s rewrite this in the standard form
$$y' + \frac{1}{\ln(t)}y = \frac{\cot(t)}{\ln(t)}$$
From our previous algebra, we know
$$y(t) = \frac{1}{\mu(t)}\int \mu(t)g(t)\,dt$$
solves the ODE. As long as the integrals make sense, we'll have a solution. Check $\mu(t)$ first:
$$\mu(t) = \exp\left(\int \frac{dt}{\ln(t)}\right)$$
We have a singularity around $t = 1$, since $\ln(1) = 0$. This means $t$ must be larger than $1$ (since our initial data starts at $t_0 = 2$). The other integral we have to worry about is
$$\int \cot(t)\mu(t)\,dt$$
Cotangent has singularities at $t = n\pi$ with $n \in \mathbb{Z}$. Since $2 < \pi$, we see that our solution will be defined on $t \in (1, \pi)$.
Tutorial #5 & #6 MAT 244 – ODE – Summer 2013
2nd Order Equations
2nd Order Linear O.D.E. Second order linear ordinary differential equations take the form
$$y''(t) + p(t)y'(t) + q(t)y(t) = \begin{cases} 0 & \text{Homogeneous} \\ g(t) & \text{Non-Homogeneous} \end{cases}$$
How do we solve equations of this form? It depends on quite a few things, but let's stress the importance of the word linear. Suppose that $y_1$ and $y_2$ solve the homogeneous problem; then their linear combination is also a solution. Let's check this by plugging $y = Ay_1 + By_2$, $A, B \in \mathbb{R}$, into the ODE. We have
$$y'' + py' + qy = Ay_1'' + By_2'' + p(Ay_1' + By_2') + q(Ay_1 + By_2) = A\underbrace{(y_1'' + py_1' + qy_1)}_{=0} + B\underbrace{(y_2'' + py_2' + qy_2)}_{=0} = 0$$
Thus any linear combination of solutions is a solution. We'll only consider the simpler cases in this course, starting with Homogeneous with constant coefficients. It's convenient to write this case as
$$ay'' + by' + cy = 0 \quad \text{where } a, b, c \in \mathbb{R}$$
Instead of solving this directly, let's try to guess the solution here (later we'll have a more constructive approach with 1st order systems, which will show that $n$th order ODEs have $n$ linearly independent solutions). Guess
$$y(t) = e^{\lambda t}$$
as a solution with $\lambda \in \mathbb{C}$. If we plug this into the ODE, we see $y(t)$ is a solution if
$$e^{\lambda t}(a\lambda^2 + b\lambda + c) = 0$$
Since the exponential is never zero, our only hope is that $\lambda$ is a root of the quadratic polynomial. This polynomial has a special name; it is called the characteristic equation
$$P(\lambda) = a\lambda^2 + b\lambda + c$$
Clearly $\lambda$ is a root if
$$\lambda_{\pm} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
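Finding $\lambda_{\pm}$ is just solving a quadratic; a small sketch for the illustrative equation $y'' - 3y' + 2y = 0$ (an example added here, not from the notes):

```python
import sympy as sp

lam, t = sp.symbols('lambda t')

# Illustrative constant-coefficient ODE: y'' - 3y' + 2y = 0
a, b, c = 1, -3, 2
roots = sp.solve(a*lam**2 + b*lam + c, lam)
print(roots)  # [1, 2], giving the two solutions e^t and e^{2t}

# Check that e^{root*t} really solves the ODE for each root
for r in roots:
    y = sp.exp(r*t)
    print(sp.simplify(y.diff(t, 2) - 3*y.diff(t) + 2*y))  # 0
```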