<<

Intro to Differential Equations: A Subtle Subject

Big Question: Given a differential equation, does it have any solutions? (Some have many solutions. Others don't have any at all.) Which have solutions and which don't? This is a very difficult question.

Q: If our differential equation has solutions, can we find all of the solutions explicitly? (i.e.: Can we find actual formulas for the solutions?) Usually no. Only in rare cases can we actually find explicit solutions.

Best Case Scenario: Would be finding all explicit solutions. But to reiterate: This is usually not possible!

Next Best Thing: If we are lucky, we might be able to: (1) Determine qualitative behavior of solutions. (2) Prove an abstract existence-uniqueness theorem. (3) Construct approximations to solutions by a numerical method.

Note: Even if a differential equation has solutions, different solution functions might have different domains! The domain need not be all of R. For instance:

Example: The differential equation y′ = y² has infinitely many solutions.
◦ One solution (the zero solution y ≡ 0) has domain all of R.
◦ Every other solution blows up in finite time, so its domain is an interval of the form (−∞, a) or (a, ∞) for some a ∈ R.
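A minimal sympy sketch of this example (illustrative only; dsolve is sympy's symbolic ODE solver):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Solve y' = y^2 symbolically.
sol = sp.dsolve(sp.Eq(y(t).diff(t), y(t)**2), y(t))
print(sol)   # y(t) = -1/(C1 + t): each nonzero solution blows up at t = -C1
```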

Types of Differential Equations

There are four ways to distinguish differential equations:
◦ ODE vs PDE
◦ Single Equations vs Systems of Equations
◦ 1st-order vs 2nd-order vs 3rd-order etc.
◦ Linear vs Nonlinear

This Class: Only ODEs.* Mostly linear ones (but some nonlinear).
◦ 1st-Order ODEs. (Week 1)
◦ 1st-Order systems of ODEs. (Weeks 3-6)
◦ 2nd-Order ODEs. (Weeks 6-9)

(* At the end of the course, we'll examine a PDE (Laplace Equation).)

1st-Order ODEs

General Form: A first-order ODE looks like:

dy/dt = F(t, y).

There is no general method for finding explicit solutions to all first-order ODEs. (Life is unfair.) However, there are a few things we can say about 1st-Order ODEs.

(0) Special Cases: Some special kinds of first-order ODEs can be solved explicitly (more or less). A couple of these are discussed below.

(1) Geometric Insight: Direction fields. Direction fields are a geometric technique for gaining insight into the behavior of solutions to first-order ODEs. (A short numerical sketch appears after item (3) below.)

(2) Abstract Existence-Uniqueness Theorem: Q: Do all first-order ODEs have solutions? This is subtle. A first answer in this direction is given by an abstract existence-uniqueness theorem. We will discuss this much later in the course.

(3) Approximations to Solutions: There are numerical methods (e.g.: Euler’s Method) for approximating solutions. We will discuss this much later in the course.
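For item (1), here is a minimal matplotlib sketch of a direction field (the right-hand side F(t, y) = y² is just an illustrative choice):

```python
import numpy as np
import matplotlib.pyplot as plt

# Direction field for dy/dt = F(t, y), with F(t, y) = y**2 as an illustration.
T, Y = np.meshgrid(np.linspace(-3, 3, 21), np.linspace(-3, 3, 21))
S = Y**2                       # slope at each grid point
L = np.sqrt(1 + S**2)          # normalize arrows so they all have the same length
plt.quiver(T, Y, 1/L, S/L, angles='xy')
plt.xlabel('t'); plt.ylabel('y')
plt.show()
```

And for item (3), a minimal sketch of Euler's method (the step size and right-hand side are illustrative choices):

```python
def euler(F, t0, y0, h, n_steps):
    """Approximate the solution of y' = F(t, y), y(t0) = y0, by Euler's method."""
    t, y = t0, y0
    values = [(t, y)]
    for _ in range(n_steps):
        y = y + h * F(t, y)    # follow the slope F(t, y) for one step of length h
        t = t + h
        values.append((t, y))
    return values

# Example: y' = y**2 with y(0) = 1; the true solution 1/(1 - t) blows up at t = 1.
print(euler(lambda t, y: y**2, 0.0, 1.0, 0.01, 50)[-1])
```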

Special Kinds of 1st-Order ODEs

(A) Linear. These have the form

dy/dt = p(t)y + g(t).

They are called homogeneous if g(t) ≡ 0. Else, they are non-homogeneous. To solve: Method of Integrating Factors.

(B) Separable. These have the form

dy/dt = f(t)g(y)   or   dy/dt = f(t)/g(y).

To solve: Method of Separation of Variables.

Method of Integrating Factors

Goal: Explicitly solve a first-order linear ODE. That is:

dy/dt + p(t)y = g(t).

Method:
0. Start with y′ + p(t)y = g(t).
1. Multiply both sides by the integrating factor, µ(t) = exp(∫ p(t) dt).

The result is: µ(t)y′ + µ(t)p(t)y = µ(t)g(t).
2. Write the result as

(µ(t)y)′ = µ(t)g(t).

3. Integrate and solve for y(t).
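A worked sketch of these steps in sympy (the particular equation y′ + 2y = e^t, i.e. p(t) = 2 and g(t) = e^t, is an illustrative choice):

```python
import sympy as sp

t, C1 = sp.symbols('t C1')

# Illustrative linear ODE: y' + 2y = exp(t), so p(t) = 2 and g(t) = exp(t).
p, g = sp.Integer(2), sp.exp(t)

mu = sp.exp(sp.integrate(p, t))            # step 1: integrating factor mu(t) = e^(2t)
y = (sp.integrate(mu * g, t) + C1) / mu    # steps 2-3: (mu*y)' = mu*g, integrate, solve for y
print(sp.simplify(y))                      # general solution; equivalent to C1*exp(-2*t) + exp(t)/3
```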

Method of Separation of Variables

Goal: Explicitly solve a first-order separable ODE. That is:

dy/dt = f(t)g(y).

Method:
0. Start with dy/dt = f(t)g(y).
1. Move all functions of y to one side, and all functions of t to the other: dy/g(y) = f(t) dt.
2. Integrate and solve for y.
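A minimal sympy sketch of these steps for the illustrative choice f(t) = t and g(y) = y:

```python
import sympy as sp

t, C1, y = sp.symbols('t C1 y')

# dy/dt = t*y  =>  dy/y = t dt, then integrate both sides and solve for y.
lhs = sp.integrate(1/y, y)            # log(y)
rhs = sp.integrate(t, t) + C1         # t**2/2 + C1
print(sp.solve(sp.Eq(lhs, rhs), y))   # [exp(C1 + t**2/2)]
```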

Note: Separable equations can also be written in the form dy/dt = f(t)/g(y). The text describes them in the form f(t) dt + g(y) dy = 0, which is also the same thing.

Eigenvalues and Eigenvectors: Motivation

The simplest matrices are the diagonal matrices. These are of the form D = diag(d1, ..., dn), with d1, ..., dn on the diagonal and 0's everywhere else. Geometrically, D acts as follows:
◦ Vectors along the x1-axis are stretched/contracted/reflected by d1.
◦ ...
◦ Vectors along the xn-axis are stretched/contracted/reflected by dn.

Note: Stretching along the xi-axis happens if |di| > 1. Contracting along the xi-axis happens if |di| < 1. Reflection happens if di is negative.

Hopes and Dreams:
◦ Maybe every matrix is really a diagonal matrix after a "change-of-basis."
◦ Maybe every matrix A is similar to a diagonal matrix, meaning that A = TDT⁻¹ for a diagonal matrix D.
◦ Maybe every matrix geometrically acts like a diagonal matrix, stretching some line L1 by a factor d1, some other line L2 by a factor d2, and so on.

Reality Check: None of these hopes are true as stated. (Life is hard.)

But there is hope in the following two theorems.

Def: A matrix A is diagonalizable iff A is similar to a diagonal matrix. That is: A = TDT⁻¹ for some diagonal D and some invertible T.

Diagonalization Criterion: Let A be an n × n matrix. Then:

A is diagonalizable ⇐⇒ A has n linearly independent eigenvectors.

Spectral Theorem: If A is a symmetric matrix, then A is diagonalizable.

Eigenvalues and Eigenvectors: Definitions

Def: Let A be an n × n matrix. An eigenvector of A is a non-zero vector v ∈ Cⁿ, v ≠ 0, such that Av = λv for some λ ∈ C. The scale factor λ ∈ C is the eigenvalue associated to the eigenvector v.

Note: If v is an eigenvector for A, then so too is every non-zero scalar multiple cv (c ≠ 0). ∴ Every eigenvector of A determines an entire line of eigenvectors of A.

Q: How to find eigenvectors and their eigenvalues? A: Observe that:

Av = λv for some v ≠ 0 ⇐⇒ (A − λI)v = 0 for some v ≠ 0 ⇐⇒ det(A − λI) = 0.

Def: The characteristic polynomial of A is

p(λ) = det(A − λI).

So: Eigenvalues are the zeros of the characteristic polynomial! By the Fundamental Theorem of Algebra, we can factor p(λ):

p(λ) = (−1)^n (λ − λ1)^{m1} ··· (λ − λk)^{mk},

where λ1, ..., λk are the distinct eigenvalues of A.

The algebraic multiplicity of λj is the corresponding exponent mj.

Def: Let A be an n × n matrix. The eigenspace of λ ∈ C is

E(λ) = {v ∈ Cⁿ | Av = λv} = N(A − λI).

The geometric multiplicity of λ is:

GeoMult(λ) = dim(E(λ)) = # of linearly independent eigenvectors with eigenvalue λ.

Fact: For every eigenvalue λ, we have:

GeoMult(λ) ≤ AlgMult(λ).

Eigenvalues and Eigenvectors: More Facts

To Find Eigenvectors of A:
(1) First find the eigenvalues. They will be the zeros of the characteristic polynomial p(λ) = det(A − λI).
(2) For each eigenvalue λ that you found in part (1), solve the equation

(A − λI)v = 0 for vectors v. The non-zero solutions v are the eigenvectors associated to λ. (In fact, the full solution set is exactly E(λ) = N(A − λI).)
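A minimal numpy sketch of this recipe (np.linalg.eig returns the eigenvalues together with a matrix whose columns are eigenvectors; the matrix A is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)       # step (1): zeros of det(A - lambda*I)
print(eigvals)                            # eigenvalues 3 and 1 (order may vary)
for lam, v in zip(eigvals, eigvecs.T):    # step (2): each column v satisfies (A - lam*I)v = 0
    print(np.allclose(A @ v, lam * v))    # True, True
```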

Fact: Let A be an n × n matrix with eigenvalues λ1, . . . , λn (each repeated according to its algebraic multiplicity). (a) tr(A) = λ1 + ··· + λn. (b) det(A) = λ1 ··· λn.

Fact: Let A be an n × n matrix (with real entries). If λ = a + ib is an eigenvalue of A, then its complex conjugate λ̄ = a − ib is also an eigenvalue of A.

Fact: If v1,..., vk are eigenvectors associated to distinct eigenvalues, then {v1,..., vk} is linearly independent.

Diagonalization Criterion: Let A be an n × n matrix. Then:

A is diagonalizable ⇐⇒ A has n linearly independent eigenvectors ⇐⇒ Every eigenvalue λ of A has GeoMult(λ) = AlgMult(λ).

Review: Dot Products

Recall: The dot product (or inner product) of u, v ∈ Rⁿ is

u · v = ⟨u, v⟩ = u1v1 + ··· + unvn.

Also: ‖v‖² = v1² + ··· + vn² = ⟨v, v⟩.

Def: A set of vectors {v1, ..., vk} is an orthogonal set iff every pair of (distinct) vectors is orthogonal: ⟨vi, vj⟩ = 0 for all i ≠ j.

Fact: If {v1,..., vk} is an orthogonal set of non-zero vectors, then {v1,..., vk} is linearly independent.

Symmetric Matrices

Def: An n × n matrix A is symmetric iff Aᵀ = A.

Theorem: Let A be a symmetric matrix. Then:
(a) All eigenvalues of A are real.
(b) If v1, v2 are eigenvectors of A corresponding to different eigenvalues, then ⟨v1, v2⟩ = 0.
(c) (Spectral Theorem) There exists a set of n linearly independent eigenvectors for A. Thus, A is diagonalizable.
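A quick numerical illustration of (a)-(c) (np.linalg.eigh is numpy's routine for symmetric matrices; the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])                      # symmetric: A.T == A

eigvals, Q = np.linalg.eigh(A)                        # columns of Q are eigenvectors
print(eigvals)                                        # (a) all eigenvalues are real
print(np.allclose(Q.T @ Q, np.eye(3)))                # (b) eigenvectors are orthonormal
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))     # (c) A = Q D Q^T, so A is diagonalizable
```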

N.B.: Parts (a), (b), (c) are stated under the assumption that A is symmetric. If A is not symmetric, then (a), (b), (c) may or may not be true!

Jordan Form

Diagonalization Criterion: Let A be an n × n matrix. Then:
A is diagonalizable ⇐⇒ A has n linearly independent eigenvectors ⇐⇒ Every eigenvalue λ of A has GeoMult(λ) = AlgMult(λ).

Q: If A is not diagonalizable, is A still similar to a "simple" matrix?
A: Yes: If we can't diagonalize, the next best thing is Jordan form.

Def: A Jordan block is a matrix of the form

[ λ ]   or   [ λ 1 ]   or   [ λ 1 0 ]   or ...
             [ 0 λ ]        [ 0 λ 1 ]
                            [ 0 0 λ ]

(a single eigenvalue λ on the diagonal, 1's on the superdiagonal, 0's elsewhere). A matrix is in Jordan form if it is block-diagonal,

J = diag(J1, J2, ..., Jk),

where each Ji is a Jordan block (possibly of different sizes and λ's).

Example: The following two matrices are both in Jordan form:

J = diag( [4], [7 1; 0 7], [4 1 0; 0 4 1; 0 0 4] )   and   J = diag(4, 7, 7, 4, 4, 4)

(the first is built from blocks of sizes 1, 2, 3; the second from six blocks of size 1).

Notice: Diagonal matrices are exactly the Jordan form matrices in which every Jordan block has size 1.

Theorem: Every n × n matrix A is similar to a matrix in Jordan form. That is: Every n × n matrix A can be written as A = TJT⁻¹, where J is in Jordan form. The diagonal entries of J are the eigenvalues of A.

The Matrix Exponential

Def: Let A be an n × n matrix. The matrix exponential is

e^{tA} = Σ_{k=0}^∞ (t^k / k!) A^k = I + tA + (t²/2!)A² + (t³/3!)A³ + ···

Fortunately, this series always converges (so e^{tA} does make sense).
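Numerically, scipy provides the matrix exponential directly; a minimal sketch comparing it with a truncated series (the matrix A and the cutoff of 20 terms are illustrative choices):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
t = 1.0

# Truncate the defining series e^{tA} = sum_k (t^k/k!) A^k after 20 terms.
series = sum(np.linalg.matrix_power(t * A, k) / factorial(k) for k in range(20))
print(np.allclose(series, expm(t * A)))   # True: the truncated series matches expm
```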

Theorem: The solution to the initial-value problem

x′ = Ax,  x(0) = x0,

is exactly x(t) = e^{tA} x0.
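A minimal numerical check of this theorem (using scipy's expm and solve_ivp; the matrix A, the initial vector x0, and the time t are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
t = 1.5

exact = expm(t * A) @ x0                                  # x(t) = e^{tA} x0
numeric = solve_ivp(lambda s, x: A @ x, (0.0, t), x0,
                    rtol=1e-10, atol=1e-12).y[:, -1]       # integrate x' = Ax directly
print(np.allclose(exact, numeric))                         # True
```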

Main Facts:
(a) If P is invertible, then e^{PAP⁻¹} = P e^A P⁻¹.
(b) If AB = BA, then e^{A+B} = e^A e^B.
(c) e^{(t+s)A} = e^{tA} e^{sA} for all t, s ∈ R.
(d) e^{nA} = (e^A)^n for all n ∈ Z.
(e) e^{−A} = (e^A)⁻¹.
(f) det(e^A) = e^{tr(A)}.

Note: The most important are Facts (a) and (b). Notice that Fact (b) implies Facts (c), (d), (e) as special cases.
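A quick numerical sanity check of facts (b) and (f) (the matrices are illustrative; B is chosen as a multiple of A so that AB = BA):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = 3.0 * A                                              # commutes with A

print(np.allclose(expm(A + B), expm(A) @ expm(B)))       # fact (b): e^{A+B} = e^A e^B
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))   # fact (f): det(e^A) = e^{tr(A)}
```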

Example 1: Suppose D is a diagonal matrix.

If D = diag(d1, ..., dn), then e^D = diag(e^{d1}, ..., e^{dn}).

Example 2: Suppose B satisfies B^N = 0. Then:

e^B = I + B + (1/2!)B² + ··· + (1/(N−1)!)B^{N−1}.

Diagonalizable Case: Suppose A is a diagonalizable matrix. Then we can write A = TDT⁻¹ with T invertible and D diagonal. Therefore:

e^A = e^{TDT⁻¹} = T e^D T⁻¹.

Advanced: Jordan Form

Recall: Let A be an n × n matrix. The eigenspace of λ ∈ C is

E(λ) = {v ∈ Cⁿ | Av = λv} = N(A − λI).

A vector v ∈ E(λ) is an eigenvector of eigenvalue λ.

Def: The generalized eigenspace of rank k of λ ∈ C is

Ek(λ) = N((A − λI)^k).

A vector v ∈ Ek(λ) is a generalized eigenvector of eigenvalue λ.

Fact: For a given λ ∈ C, the generalized eigenspaces are nested:

E(λ) = E1(λ) ⊂ E2(λ) ⊂ ···

Fact: Let A be an n × n matrix. Then the Jordan form of A has:
◦ dim(E1(λ)) Jordan blocks of eigenvalue λ of size ≥ 1,
◦ dim(E2(λ)) − dim(E1(λ)) Jordan blocks of eigenvalue λ of size ≥ 2,
◦ dim(E3(λ)) − dim(E2(λ)) Jordan blocks of eigenvalue λ of size ≥ 3,
◦ etc.

Advanced: The Matrix Exponential

Let A be any n × n matrix. Write A in Jordan form as A = TJT⁻¹, where J has Jordan blocks J1, ..., Jk. Then:

e^A = e^{TJT⁻¹} = T e^J T⁻¹ = T diag(e^{J1}, ..., e^{Jk}) T⁻¹.

Q: How do we calculate e^{J1}, ..., e^{Jk}?

Let Ji be a Jordan block of eigenvalue λi of size Ni. We can write

Ji = λiI + Bi,

where λiI is the diagonal part and Bi has 1's on the superdiagonal and 0's everywhere else.

Notice that the matrix Bi, which is of size Ni × Ni, satisfies Bi^{Ni} = 0. Therefore:

e^{Ji} = e^{λiI + Bi} = e^{λi} e^{Bi} = e^{λi} ( I + Bi + (1/2!)Bi² + ··· + (1/(Ni−1)!)Bi^{Ni−1} ).

1st-Order Linear Systems: Intro

Def: A first-order linear ODE system is one of the form

dx/dt = P(t)x + g(t).   (†)

Here, P(t) is an n × n matrix of functions and g(t) is a vector of functions; in detail, the i-th component equation reads

xi′(t) = pi1(t)x1(t) + ··· + pin(t)xn(t) + gi(t).

For now, this is too difficult. So, we simplify things with two assumptions:

(1) Constant Coefficients: This is the special case of

dx/dt = Ax + b.   (‡)

That is, P(t) ≡ A is a constant matrix and g(t) = b is a constant function. These are sometimes called autonomous.

(2) Homogeneous: This is the special case that b = 0. That is, we study:

dx/dt = Ax.   (⋆)

Note: Studying (⋆) will ultimately be the key to understanding (†) and (‡).

Remark: Geometry of Solutions

A solution to an ODE system consists of n functions x1(t), . . . , xn(t) of t. Geometrically, there are two ways to visualize solutions:

(a) The graph of x1(t) is a curve in the (t, x1)-plane, the graph of x2(t) is a curve in the (t, x2)-plane, etc. So: A solution consists of n graphical curves (called component plots), each in a different plane.

(b) The function x(t) = (x1(t), . . . , xn(t)) is a parametric curve in the phase space (or state space) Rⁿ, which has coordinates (x1, . . . , xn). So: A solution consists of one parametric curve in Rⁿ.

Remark: Equilibrium Solutions of x′ = Ax + b

Def: An equilibrium solution (or critical point) is a solution x(t) that is constant. That is, a solution for which dx/dt = 0.

Observation: Consider the first-order autonomous linear system x′ = Ax + b. An equilibrium solution must satisfy x′ = 0, so Ax = −b. There are three possibilities:
(i) Ax = −b has ∞-many solutions ⇐⇒ ∞-many equilibrium solutions.
(ii) Ax = −b has one solution ⇐⇒ Exactly one equilibrium solution.
(iii) Ax = −b has no solutions ⇐⇒ No equilibrium solutions.

Notice: In the special case of x′ = Ax (i.e., if b = 0), case (iii) cannot occur. So, in this case, there are only two possibilities:
(i) det A = 0 ⇐⇒ Ax = 0 has ∞-many solutions.
(ii) det A ≠ 0 ⇐⇒ Ax = 0 has exactly one solution: the origin.

1st-Order Linear Systems: General Observations

Goal: Study 1st-order linear systems of the form

dx/dt = Ax   (⋆)

for n dependent variables x(t) = (x1(t), . . . , xn(t)).

1. Superposition Principle: Consider the 1st-order linear system x′ = Ax. If x1(t), ..., xk(t) are solutions on an interval I, then any linear combination x(t) = c1x1(t) + ··· + ckxk(t) is also a solution on I. (Here, c1, . . . , ck ∈ R are any real numbers.)

Def: Consider the 1st-order linear system x′ = Ax. A fundamental set of solutions on an interval I is a set of n solutions {x1(t), ..., xn(t)} that is linearly independent on I. That is: The only constants c1, . . . , cn with

c1x1(t) + ··· + cnxn(t) = 0 for all t ∈ I are c1 = ··· = cn = 0.

2. Theorem: Consider the 1st-order linear system x′ = Ax. If {x1(t), ..., xn(t)} is a fundamental set of solutions on an interval I, then every solution of x′ = Ax on I is of the form

c1x1(t) + ··· + cnxn(t).

We then call c1x1(t) + ··· + cnxn(t) the general solution of the system.

Def: Let x1(t), ..., xn(t) be n solutions of x′ = Ax. The Wronskian of these n solutions is the function

W [x1,..., xn](t) = detx1(t) ··· xn(t)

3. Fact: Consider the 1st-order linear system x′ = Ax. If x1(t), ..., xn(t) are n solutions with W[x1, ..., xn](t) ≠ 0 for all t, then {x1(t), ..., xn(t)} is a fundamental set of solutions.

1st-Order Linear Systems: Methods of Solution

Goal: Find solutions to 1st-order linear systems of the form

dx/dt = Ax   (⋆)

There are essentially two methods.

1. Matrix Exponential Method: Every solution to x′ = Ax is of the form x(t) = e^{At}c, where c is a constant vector. In other words: Every solution is

x(t) = e^{At}c = [ y1(t) ··· yn(t) ] c = c1y1(t) + ··· + cnyn(t),

where e^{At} = [ y1(t) ··· yn(t) ] (columns y1(t), ..., yn(t)) and c1, . . . , cn ∈ R are constants.

Note: This method is cool because it gives all solutions to the system. But it is annoying because calculating e^{At} can be a pain. Here's an alternative:

2. Eigenvalue Method: Consider x′ = Ax. If Av = λv, then x(t) = e^{λt}v is a solution to x′ = Ax.

Proof: Let x(t) = e^{λt}v. Notice that x′ = λe^{λt}v and

Ax = A(e^{λt}v) = e^{λt}Av = e^{λt}λv = λe^{λt}v,

so x′ = Ax holds. Yay. ♦

Concern: Does this second method give us all solutions? Not necessarily.

Special Case: Two Dependent Variables

Going forward: We will only consider the case of two dependent variables: x(t) = (x(t), y(t)) and A will be a 2 × 2 matrix. There will be three cases to consider:
(1) Eigenvalues of A are real and distinct.
(2) Eigenvalues of A are real and repeated (λ1 = λ2).
(3) Eigenvalues of A are complex.

The Eigenvalue Method

Goal: Find a fundamental set of solutions to the 1st-order ODE system x′ = Ax, in two dependent variables x(t) = (x(t), y(t)). The starting point will be the following:

Fact: If Av = λv, then x(t) = e^{λt}v is a solution to x′ = Ax.

Case 1: A has real eigenvalues & two independent eigenvectors

The Fact above gives a fundamental set:
x1(t) = e^{λ1 t} v1, where Av1 = λ1v1,
x2(t) = e^{λ2 t} v2, where Av2 = λ2v2.

Case 2: A has a real eigenvalue & one independent eigenvector

Let λ be the real (repeated) eigenvalue of A, with Av = λv. Then the Fact above gives only one solution:

x1(t) = e^{λt}v.

A second solution (independent from the first) is

x2(t) = t e^{λt}v + e^{λt}w, where w is any solution of (A − λI)w = v.
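A small sympy sanity check of this Case 2 formula (the matrix A below, with repeated eigenvalue 2 and only one independent eigenvector, is an illustrative choice):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 1],
               [0, 2]])             # repeated eigenvalue lambda = 2, one eigenvector

lam = 2
v = sp.Matrix([1, 0])               # eigenvector: (A - 2I)v = 0
w = sp.Matrix([0, 1])               # generalized eigenvector: (A - 2I)w = v

x2 = t*sp.exp(lam*t)*v + sp.exp(lam*t)*w
print(sp.simplify(x2.diff(t) - A*x2))   # zero vector, so x2' = A*x2 holds
```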

Case 3: A has complex eigenvalues

Let λ = µ + iν and λ̄ = µ − iν be the complex eigenvalues of A, with eigenvectors v = a + ib and v̄ = a − ib, respectively. The Fact above gives two solutions:

z1(t) = e^{(µ+iν)t} v = e^{µt} e^{iνt}(a + ib) = e^{µt}(cos(νt) + i sin(νt))(a + ib),
z2(t) = e^{(µ−iν)t} v̄ = e^{µt} e^{−iνt}(a − ib) = e^{µt}(cos(νt) − i sin(νt))(a − ib).

The real and imaginary parts of z1(t) (or of z2(t)) are a fundamental set:

x1(t) = e^{µt}(a cos(νt) − b sin(νt)),
x2(t) = e^{µt}(a sin(νt) + b cos(νt)).

For Fun: What even is Stability?

Setup: Consider a system x′ = Ax. Notice that the origin x(t) = 0 is an equilibrium solution.

Def: The origin is stable if for every choice of disk {x² + y² < ε}, there is a smaller "threshold disk" {x² + y² < δ} such that: Every solution that starts inside the threshold disk exists for all t > 0 and stays inside the chosen disk forever.

Def: The origin is unstable if it is not stable.

Def: The origin is asymptotically stable if: (1) It is stable, and: (2) There exists a "threshold disk" {x² + y² < η} such that every solution y(t) starting inside the threshold disk has the property that lim_{t→∞} y(t) = 0.

Classification of Critical Points of x′ = Ax

Q: Given a first-order ODE system x′ = Ax, how do solutions behave near the equilibrium solution?

Note: We only deal with systems of two equations. That is, x(t) = (x(t), y(t)) and A is a 2 × 2 matrix. We also suppose that det A ≠ 0.

A: It depends on the eigenvalues of A. There are 10 possibilities:
(1) Eigenvalues are real and distinct.
  (1a) λ1, λ2 > 0 =⇒ Unstable Node
  (1b) λ1, λ2 < 0 =⇒ Asymptotically Stable Node
  (1c) λ1 < 0 and λ2 > 0 =⇒ (Unstable) Saddle Point
(2) Eigenvalues are real and repeated (λ1 = λ2).
  (2a) λ1 > 0 =⇒ Unstable Proper/Improper Node
  (2b) λ1 < 0 =⇒ Asymptotically Stable Proper/Improper Node
(3) Eigenvalues are complex: λ1, λ2 = µ ± iν.
  (3a) µ > 0 =⇒ Unstable Spiral
  (3b) µ < 0 =⇒ Asymptotically Stable Spiral
  (3c) µ = 0 =⇒ (Stable) Center.

Corollary: Let T = tr(A), D = det(A), and ∆ = T² − 4D. Then the characteristic polynomial of A is p(λ) = λ² − Tλ + D. Thus, the eigenvalues are λ = (T ± √∆)/2.
(1) ∆ > 0 =⇒ Node or Saddle.
(2) ∆ = 0 =⇒ Proper or Improper Node.
(3) ∆ < 0 and T ≠ 0 =⇒ Spiral.
(3′) ∆ < 0 and T = 0 =⇒ Center.
Moreover:
(i) D > 0 and T < 0 =⇒ Asymptotically stable.
(ii) D > 0 and T = 0 =⇒ Stable.
(iii) D < 0 or T > 0 =⇒ Unstable.
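A small Python helper implementing this trace/determinant test (a sketch only; it assumes A is 2 × 2 with det A ≠ 0 and uses the labels from the corollary):

```python
import numpy as np

def classify(A):
    """Classify the origin for x' = Ax (A is 2x2 with det A != 0), per the corollary above."""
    T, D = np.trace(A), np.linalg.det(A)
    delta = T**2 - 4*D
    if D < 0:
        return "saddle point (unstable)"
    if delta > 0:
        kind = "node"
    elif delta == 0:
        kind = "proper/improper node"
    else:
        kind = "spiral" if T != 0 else "center"
    if T < 0:
        return kind + " (asymptotically stable)"
    if T == 0:
        return kind + " (stable)"
    return kind + " (unstable)"

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))    # center (stable)
print(classify(np.array([[-1.0, 0.0], [0.0, -2.0]])))   # node (asymptotically stable)
print(classify(np.array([[1.0, 0.0], [0.0, -2.0]])))    # saddle point (unstable)
```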

Stability Diagram: Page 190.

Solving x′ = Ax + b

Goal: We want to study x′ = Ax + b.
◦ A invertible =⇒ Exactly one equilibrium solution.
◦ A not invertible =⇒ No equilibrium solutions OR ∞-many equilibrium solutions.

Suppose A is invertible. Equilibrium solutions are those with x′ = 0, so Ax + b = 0, so

x_equil = −A⁻¹b.

Fact: Suppose A is invertible. If y(t) is a solution of x′ = Ax, then

x(t) = y(t) − A⁻¹b is a solution of x′ = Ax + b.
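A minimal numpy illustration of this shift (A and b are illustrative choices with A invertible):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
b = np.array([1.0, 1.0])

x_equil = -np.linalg.solve(A, b)        # equilibrium: solves Ax + b = 0, i.e. -A^{-1}b
x0 = np.array([2.0, 0.0])
t = 1.0

# Shift a solution of the homogeneous system by the equilibrium:
y = expm(t * A) @ (x0 - x_equil)        # y solves y' = Ay with y(0) = x0 - x_equil
x = y + x_equil                         # then x(t) solves x' = Ax + b with x(0) = x0
print(x)
```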

Q: What if A is not invertible?
A: We can use a method called "____". (Next time.)

Reminder on Jordan Form

Recall: Let A be an n × n matrix. The eigenspace of λ ∈ C is

E(λ) = {v ∈ Cⁿ | Av = λv} = N(A − λI).

The geometric multiplicity of λ is

GeoMult(λ) = dim(E(λ)) = # of linearly independent eigenvectors with eigenvalue λ.

Fact: Consider the Jordan form of A. The number of Jordan blocks of eigenvalue λ is exactly GeoMult(λ).
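A quick sympy illustration of this fact (Matrix.jordan_form returns T and J with A = TJT⁻¹; the matrix below is an illustrative non-diagonalizable example):

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 2]])                 # eigenvalue 2 with AlgMult(2) = 3

T, J = A.jordan_form()
print(J)                                    # two Jordan blocks for lambda = 2 (sizes 2 and 1)
print(len((A - 2*sp.eye(3)).nullspace()))   # GeoMult(2) = 2, matching the number of blocks
print(sp.simplify(T * J * T.inv() - A))     # zero matrix: A = T J T^{-1}
```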