
Math 285-3 ODE Notes
Revised Spring 2007
E. Kosygina, C. Robinson, M. Stein

Ordinary Differential Equations

Definition. For a vector field F on R^n, the differential equation determined by F is denoted by x' = F(x). A solution to the differential equation is a differentiable path x(t) in R^n such that x'(t) = F(x(t)), i.e., the velocity vector x'(t) of the path at time t is given by the vector field F at the point x(t). (In the book by Colley, a solution is called a flow line. See p. 211.) We often specify the initial condition x_0 at some time t_0, i.e., we seek a solution x(t) such that x(t_0) = x_0. We shall concentrate on the following two questions:
• How can we describe all possible solutions to a given differential equation?
• How can we find a solution with given initial condition (t_0, x_0)?
We will mainly consider linear differential equations of the form x' = Ax, but will consider a few nonlinear examples. We will consider first n = 1 and then n ≥ 2.

1. Differential Equations

When n = 1 and we have a vector field on the line, we shall denote it by a non-boldfaced F. Even though in this case F is just a function from R to R, we shall think of F(x) as a one-dimensional vector emanating from the point x on the line.

Theorem 1. a. For a constant a ∈ R, all solutions y(t) to the linear differential equation y' = ay are of the form y(t) = C e^{at}, where C is an arbitrary constant.
b. Given an initial condition (t_0, y_0), there is exactly one solution y(t) = y_0 e^{a(t−t_0)} to y' = ay with y(t_0) = y_0.

Proof. (a) It is very easy to check that y(t) of the form C e^{at} are solutions. (Do it!) We shall show that there are no other solutions. Let y(t) be an arbitrary solution of our equation. We compare this unknown solution y(t) to the known solution e^{at} by defining z(t) = e^{−at} y(t). Then

z'(t) = −a e^{−at} y(t) + e^{−at} y'(t) = −a e^{−at} y(t) + e^{−at} a y(t) = 0,

since y'(t) = a y(t). Therefore z(t) has to be a constant. Denoting the constant by C, we have that C = e^{−at} y(t), or y(t) = C e^{at} for all t.
(b) Suppose that we want to find a solution y(t) = C e^{at} that satisfies the initial condition (t_0, y_0), i.e., y_0 = y(t_0) = C e^{a t_0}, so C = y_0 e^{−a t_0}. Thus, our solution is

y(t) = y_0 e^{−a t_0} e^{at} = y_0 e^{a(t−t_0)}. □

Problem 1. Find the solution of the differential equation y' = 2y which satisfies the following initial conditions: a. y(0) = 0; b. y(0) = −1; c. y(2) = 3.
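One can check part (b) numerically. The following sketch (Python with NumPy/SciPy, which these notes do not otherwise assume; the tolerances are illustrative) integrates y' = 2y from the data of Problem 1c and compares with y_0 e^{a(t−t_0)}:

import numpy as np
from scipy.integrate import solve_ivp

a, t0, y0 = 2.0, 2.0, 3.0                     # Problem 1c: y' = 2y, y(2) = 3
sol = solve_ivp(lambda t, y: a * y, (t0, 4.0), [y0],
                dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(t0, 4.0, 5)
exact = y0 * np.exp(a * (t - t0))             # Theorem 1b: y(t) = y0 e^{a(t-t0)}
print(np.max(np.abs(sol.sol(t)[0] - exact)))  # agrees to solver tolerance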

Problem 2. Assume that a : R → R is a continuous function. Imitate the proof of Theorem 1 to show that the solution y(t) to the differential equation y' = a(t) y with y(t_0) = y_0 is given by y(t) = y_0 e^{b(t)}, where b(t) = ∫_{t_0}^t a(s) ds. Hint: b'(t) = a(t).

Problem 3. Find the solution of the differential equation y' = 2ty which satisfies the initial condition y(0) = 1.

Problem 4. Find the solution of the differential equation y' = cos(t) y which satisfies the initial condition y(−π/2) = −1.

Theorem 2. Let a, g : R → R be continuous functions. Then the solution of the nonhomogeneous differential equation y' = a(t) y + g(t) with y(t_0) = y_0 is given by

(1)   y(t) = y_0 e^{b(t)} + e^{b(t)} ∫_{t_0}^t e^{−b(s)} g(s) ds,

where b(t) = ∫_{t_0}^t a(s) ds.

Proof. Let y(t) be the solution with y(t_0) = y_0. We use the solution of the corresponding homogeneous equation found in Problem 2 to form what is called an integrating factor e^{−b(t)}. We define z(t) = e^{−b(t)} y(t). Note that b(t_0) = 0, so e^{−b(t_0)} = 1 and z(t_0) = y_0. Also,

z'(t) = −e^{−b(t)} b'(t) y(t) + e^{−b(t)} y'(t) = −e^{−b(t)} a(t) y(t) + e^{−b(t)} [a(t) y(t) + g(t)] = e^{−b(t)} g(t).

Since g(t) and a(t) are given, and b(t) is determined by integration, we know the derivative of z(t). Integrating from t_0 to t,

z(t) − z(t_0) = ∫_{t_0}^t e^{−b(s)} g(s) ds   and
y(t) = e^{b(t)} z(t) = e^{b(t)} [ y_0 + ∫_{t_0}^t e^{−b(s)} g(s) ds ],

where we used the fact that z(t_0) = y_0. Notice that (i) y_0 e^{b(t)} is the solution of the associated homogeneous equation with y(t_0) = y_0 given in Problem 2, and (ii) e^{b(t)} ∫_{t_0}^t e^{−b(s)} g(s) ds is a particular solution of the nonhomogeneous equation with initial condition (t_0, 0). □
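Formula (1) can also be tested numerically. A sketch, with the illustrative choices a(t) = cos(t), g(t) ≡ 1, and (t_0, y_0) = (0, 1), compares the formula (evaluated by numerical quadrature) against a direct integration of the differential equation:

import numpy as np
from scipy.integrate import quad, solve_ivp

t0, y0 = 0.0, 1.0
a = np.cos
g = lambda s: 1.0
b = lambda t: quad(a, t0, t)[0]                     # b(t) = int_{t0}^t a(s) ds

def y_formula(t):                                   # formula (1)
    integral = quad(lambda s: np.exp(-b(s)) * g(s), t0, t)[0]
    return y0 * np.exp(b(t)) + np.exp(b(t)) * integral

sol = solve_ivp(lambda t, y: a(t) * y + g(t), (t0, 3.0), [y0],
                dense_output=True, rtol=1e-10, atol=1e-12)
print(y_formula(3.0), sol.sol(3.0)[0])              # the two values agree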

Example 1. An initial amount of money y_0 is put in an account on which interest is compounded continuously at a rate r > 0, for an effective annual rate of e^r − 1. (For r = 0.05, the effective rate is 0.0513.) Assume that money is added continuously to the account at the rate g. The differential equation governing the amount of money in the account is given by y' = r y + g. The solution is determined as follows: a(t) = r, b(t) = rt, and

y(t) = y_0 e^{rt} + e^{rt} ∫_0^t e^{−rs} g ds
     = y_0 e^{rt} + e^{rt} [ −(g/r) e^{−rs} ]_{s=0}^{s=t}
     = y_0 e^{rt} + e^{rt} (g/r)(1 − e^{−rt})
     = (y_0 + g/r) e^{rt} − g/r.

In the long term, the amount in the account approaches the amount produced by an initial deposit of y_0 + g/r with no money added. □

Problem 5. Find the solution of the differential equation y' = 2ty + t which satisfies the condition y(0) = 0.

Problem 6. For k = −2, −1, 0, 1, 2, find the solution (for t > 0) of the differential equation y' = y/t + t which satisfies the initial condition y(1) = k. Graph these solutions in the (t, y) plane. What happens to these solutions when t → 0? Notice that at t = 0 the function a(t) = 1/t is undefined!

Problem 7. Consider the nonhomogeneous linear equation (NH) y' = a(t) y + g(t) with the associated homogeneous linear equation (H) y' = a(t) y.
a. If y_p(t) is one (particular) solution of the nonhomogeneous equation (NH) and y_h(t) is a solution of the homogeneous equation (H), show that y_p(t) + y_h(t) is a solution of the nonhomogeneous equation (NH).
b. Assume that y_p(t) is a particular solution of the nonhomogeneous equation (NH). Show that the general solution of the nonhomogeneous equation (NH) is y_p(t) + C e^{b(t)}, where b(t) is given as in Problem 2 and C is an arbitrary constant. Hint: For any solution y(t) of (NH), show that y(t) − y_p(t) satisfies (H).

Example 2 (Logistic equation). We consider one nonlinear differential equation, the so-called logistic differential equation,

y' = r y (1 − y/K),

where K > 0 is a given constant. There are two constant solutions, y_1(t) ≡ 0 and y_2(t) ≡ K. To find solutions with y(0) ≠ 0, K, this equation is solved by the method of separation of variables, which converts it into a problem of integrals, and then we use partial fractions:

K y' = r y (K − y)
y'/y + y'/(K − y) = r.

Integrating with respect to t, the term y' dt changes it to an integral with respect to y:

∫ dy/y + ∫ dy/(K − y) = ∫ r dt
ln|y| − ln|K − y| = rt + C_1
|y| / |K − y| = C_2 e^{rt},   where C_2 = e^{C_1}.

Assuming 0 < y < K so we can drop the absolute value signs, we solve for y:

y = C_2 K e^{rt} − C_2 e^{rt} y
(1 + C_2 e^{rt}) y = C_2 K e^{rt}
y = C_2 K e^{rt} / (1 + C_2 e^{rt}) = K / (C e^{−rt} + 1),

where C = 1/C_2. If y_0 is the initial condition at t = 0, then some more algebra shows that C = (K − y_0)/y_0, so

y(t; y_0) = y_0 K / ( y_0 + (K − y_0) e^{−rt} ).

It can be shown that this form of the solution is valid for any y_0 and not just those with 0 < y_0 < K.
For the logistic equation, a solution y(t; y_0) for an initial condition 0 < y_0 < K has y' > 0, and it increases toward K. Also, for y_0 > K, y' < 0, and the solution y(t; y_0) decreases toward K. So we can conclude, even without solving the differential equation, that for any y_0 > 0 the solution y(t; y_0) tends toward K as t goes to infinity. For this reason, K is called the carrying capacity. For y_0 < 0, the denominator becomes 0 at t_1 = (1/r) ln( (K − y_0)/(−y_0) ), and the solution goes to −∞ as t goes to t_1. □

Remark. Notice that the solution y(t) of a linear differential equation or a nonhomogeneous linear differential equation is equal to an expression found by means of integrals. A nonlinear equation of the form y' = f(y) is solved by separation of variables and then integrals. However, the result gives an expression relating some combination of y to t. It is then necessary to solve this implicit solution for y(t). In general, this can be difficult.

Problem 8. Solve the nonlinear differential equation y' = y^2 − 1 = (y + 1)(y − 1) by separation of variables.

Example 3 (Economic growth). The Solow-Swan model of economic growth is given by

K' = s F(K, L) − δ K   and   L' = n L,

where K is the capital, L is the labor force, F(K, L) is the production function, which we take to be A K^a L^{1−a} with 0 < a < 1, 0 < s ≤ 1 is the rate of reinvestment of income, δ > 0 is the rate of depreciation of capital, and n > 0 is the rate of growth of the labor force. A new variable k = K/L is introduced, the capital per capita (of labor). The differential equation that k satisfies is as follows:

k' = (1/L) K' − (K/L^2) L'
   = (1/L) s A K^a L^{1−a} − δ (K/L) − (K/L^2) n L
   = s A k^a − (δ + n) k
   = k^a [ s A − (δ + n) k^{1−a} ].

Notice the similarity to the logistic equation. The equilibrium where k' = 0 and k > 0 occurs for s A = (n + δ) k^{1−a},

k* = ( s A / (n + δ) )^{1/(1−a)}.

For 0 < k < k*, k' > 0 and k increases toward k*. For k > k*, k' < 0 and k decreases toward k*. It can be shown that for any initial capital k_0 > 0, the solution k(t; k_0) limits to

k* as t goes to infinity. Therefore, all solutions with k_0 > 0 tend to the steady state k* of the capital-to-labor ratio. □

2. Exponential Solutions of Constant Coefficient Systems

We now turn to the case of the differential equation for a linear vector field on R^n,

(2)   x' = Ax,

where A is an n × n matrix. This is called a constant coefficient linear differential equation or system of linear differential equations. We will show that the solution x(t) of (2) that satisfies the condition x(t_0) = x_0 is given by

(3)   x(t) = e^{(t−t_0)A} x_0.

This expression is similar to the one-dimensional case, and the only problem is to make sense out of e^{(t−t_0)A}, where A is a matrix. Even though Theorems 6, 7, and 8 give a more efficient way to solve the differential equation, the exponential is a convenient notation for solutions and helps us understand the form of solutions.
We shall use the following fact from calculus: for every z ∈ R,

e^z = 1 + z + z^2/2 + ··· + z^k/k! + ··· = Σ_{k=0}^∞ z^k/k!.

For A a square n × n matrix, define

e^A = I + A + A^2/2 + ··· + A^k/k! + ··· = Σ_{k=0}^∞ A^k/k!,

where I is the n × n identity matrix. Notice that each term on the right hand side is an n × n matrix, so at least the finite partial sums make perfect sense. The following result states that this infinite sum converges.

Theorem 3. Let A, B, and P be n × n matrices.
a. The series Σ_{k=0}^∞ A^k/k! converges to a matrix that we denote by e^A.
b. Assume that P is non-singular. Then e^{PAP^{−1}} = P e^A P^{−1}.
c. If AB = BA, then e^{A+B} = e^A e^B.

Proof. (a) We indicate the ingredients of the proof and skip the details. First, there are several norms on square matrices:

‖A‖ = sup_{x≠0} ‖Ax‖/‖x‖,
‖A‖_1 = Σ_{i,j=1}^n |a_{ij}|,
‖A‖_2 = max_{i,j=1,...,n} |a_{ij}|.

Any of these norms has the following properties for two n × n matrices A and B and a vector x ∈ R^n:

(i) ‖A + B‖ ≤ ‖A‖ + ‖B‖
(ii) ‖AB‖ ≤ ‖A‖ · ‖B‖
(iii) ‖Ax‖ ≤ ‖A‖ · ‖x‖

Given an n × n matrix A, the exponential e^{‖A‖} converges (as the exponential of a real number), so given an ε > 0, there is a K(‖A‖, ε) such that Σ_{k=k_0}^∞ ‖A‖^k/k! < ε for any k_0 ≥ K(‖A‖, ε). Therefore, for k_0 ≥ K(‖A‖, ε), the tail of the series has norm less than ε,

‖ Σ_{k=k_0}^∞ A^k/k! ‖ ≤ Σ_{k=k_0}^∞ ‖A‖^k/k! < ε,

and the sequence of partial sums converges in the set of all n × n matrices.
The proofs of parts (b) and (c) are left to Problems 12 and 13. □

Example 4. Let A = [ a 0; 0 d ] be a 2 × 2 diagonal matrix. Then it is easy to see that e^{tA} is also a diagonal matrix:

e^{tA} = Σ_{k=0}^∞ (1/k!) [ t^k a^k 0; 0 t^k d^k ] = [ e^{at} 0; 0 e^{dt} ].

Now, further assume that a = 2, d = −1, t_0 = 1, and x_0 = (1, 3)^T. Then

x(t) = e^{(t−1)A} x_0 = [ e^{2(t−1)} 0; 0 e^{1−t} ] (1, 3)^T = ( e^{2(t−1)}, 3 e^{1−t} )^T.

Moreover, x(1) = (1, 3)^T, and x'(t) = ( 2 e^{2(t−1)}, −3 e^{1−t} )^T = Ax(t). Therefore, we have shown that in this particular case formula (3) gives a solution of (2). Notice that x(t) = e^{2(t−1)} e^1 + 3 e^{1−t} e^2, where the standard unit vectors e^1 and e^2 are the eigenvectors for the eigenvalues 2 and −1 respectively.
Also notice that in this case, where A is a 2 × 2 diagonal matrix, the system of two equations splits into two independent one-dimensional equations x_1' = 2 x_1 and x_2' = −x_2, which have solutions x_1(t) = e^{2(t−1)} and x_2(t) = 3 e^{1−t} with x_1(1) = 1 and x_2(1) = 3. □
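Theorem 3a can be seen in action numerically: the partial sums of the series converge quickly to the matrix exponential. A sketch, with an arbitrary 2 × 2 matrix, compared against SciPy's built-in expm:

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.5, -1.0]])   # an arbitrary example matrix
S = np.eye(2)                             # partial sum, starting with I
term = np.eye(2)
for k in range(1, 30):
    term = term @ A / k                   # term = A^k / k!
    S = S + term
print(np.max(np.abs(S - expm(A))))        # the tail bound ||A||^k/k! makes this tiny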

Theorem 4. Let A be an n × n matrix.
a. If I_n and 0_n are the n × n identity matrix and zero matrix, then e^{0_n} = I_n.
b. e^{tA} is a matrix solution of (2): (d/dt) e^{tA} = A e^{tA}.
c. x(t) = e^{(t−t_0)A} x_0 is a solution of (2) with x(t_0) = x_0.

Proof. (a) e^{0_n} = I_n + 0_n + (1/2) 0_n^2 + ··· = I_n.
(b) Differentiating the series for e^{tA} term by term (which can be justified by the convergence of the series),

(d/dt) e^{tA} = (d/dt) [ I_n + tA + (t^2/2) A^2 + (t^3/3!) A^3 + ··· + (t^j/j!) A^j + ··· ]
             = 0_n + A + (2t/2) A^2 + (3t^2/3!) A^3 + ··· + (j t^{j−1}/j!) A^j + ···
             = A [ I_n + tA + (t^2/2) A^2 + ··· + (t^{j−1}/(j−1)!) A^{j−1} + ··· ]
             = A e^{tA}.

(c) By part (b), if x(t) = e^{(t−t_0)A} x_0, then x'(t) = A e^{(t−t_0)A} x_0 = Ax(t). Moreover, by part (a), x(t_0) = e^{0A} x_0 = I x_0 = x_0. This proves that x(t) = e^{(t−t_0)A} x_0 is a solution of (2) which satisfies the condition x(t_0) = x_0. □

The following examples and problems give a few cases where it is possible to calculate the exponential directly, although it is not always easy to do.

Problem 9. a. Let A = [ λ_1 0 0; 0 λ_2 0; 0 0 λ_3 ]. Show that e^{tA} = [ e^{λ_1 t} 0 0; 0 e^{λ_2 t} 0; 0 0 e^{λ_3 t} ].
b. Let A be a diagonal matrix with diagonal entries λ_1, λ_2, ..., λ_n. What is e^{tA}?

Example 5. Let J = [ 0 1; −1 0 ]. Then J^2 = [ −1 0; 0 −1 ] = −I, J^3 = −J, J^4 = I, J^5 = J, J^6 = J^2 = −I, etc. Therefore,

e^{tJ} = I + tJ − (t^2/2!) I − (t^3/3!) J + (t^4/4!) I + (t^5/5!) J − (t^6/6!) I − (t^7/7!) J + ···
      = [ 1 − t^2/2! + t^4/4! − t^6/6! + ··· ] I + [ t − t^3/3! + t^5/5! − t^7/7! + ··· ] J
      = cos(t) I + sin(t) J
      = [ cos(t) sin(t); −sin(t) cos(t) ]. □

Example 6. Let N = [ 0 1; 0 0 ]. Then N^2 = 0, so N^k = 0 for k ≥ 2. Then, using the power series,

e^{tN} = I + tN + (t^2/2) 0 + ··· = [ 1 t; 0 1 ]. □
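Examples 5 and 6 are easy to confirm numerically; a sketch using scipy.linalg.expm:

import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # Example 5: J^2 = -I
t = 0.7
print(np.max(np.abs(expm(t * J) - (np.cos(t) * np.eye(2) + np.sin(t) * J))))

N = np.array([[0.0, 1.0], [0.0, 0.0]])    # Example 6: N^2 = 0
print(expm(t * N))                        # [[1, t], [0, 1]]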

0 1 0 ... 0 0 0 0 1 ... 0 0   0 0 0 ... 0 0 ×   c. Let N be an n n matrix of the form ...... . What is the smallest ......    0 0 0 ... 0 1 0 0 0 ... 0 0 integer k satisfying Nk = 0? What is etN? λ 1 Problem 11. a. Let A = . Calculate etA. Hint: Use A = λI + (A − λI). Note that 0 λ λI and N = A − λI commute. Also, use Example 6. λ 1 0 b. Let A = 0 λ 1. Calculate etA. Hint: Use Problem 10a. 0 0 λ λ 1 0 ... 0 0 0 λ 1 ... 0 0   0 0 λ . . . 0 0 ×   tA = c. Let A be an n n matrix of the form  ...... . What is e  ......    0 0 0 . . . λ 1 0 0 0 ... 0 λ eλt et(A−λI)? Hint: Use Problem 10c. Problem 12. Let A be an n × n matrix, and let P be a non-singular n × n matrix. Show that − − ePAP 1 = PeAP−1. Hint: Write out the power series for ePAP 1 . Problem 13. Let A and B be two n × n matrices such that AB = BA. Prove that eA+B = eAeB. Hint: You may assume that the binomial theorem applies to commuting matrices, + k = P k! i j i.e., (A B) i+ j=k i! j! A B . 0 1  0 0  Problem 14. Let A = and B = . 0 0 1 0 a. Show that AB 6= BA. 1 1  1 0  b. Show that eA = and eB = . Hint: Use Example 6 with t = 1 for 0 1 1 1 eA and a similar calculation for eB. c. Show that eAeB 6= eA+B. Hint: A + B = J, where etJ was calculated in Example 5.

3. Systems of Linear Differential Equations: Some General Facts

The following theorem shows that the set of solutions forms a finite dimensional vector space.

Theorem 5. Assume that A is an n × n matrix.
a. For every pair (t_0, x_0) ∈ R^{n+1}, there is a unique solution x(t) to (2) which satisfies the initial condition x(t_0) = x_0, and this solution is defined for all t ∈ R.
b. The set of all solutions of equation (2) forms a subspace of the vector space of C^1 functions from R to R^n, C^1(R, R^n).

c. Assume that B = {v^1, ..., v^n} is any basis of R^n, t_0 ∈ R, and x^i : R → R^n is the solution of (2) with the initial condition x^i(t_0) = v^i for each i = 1, 2, ..., n. Then {x^i(t)}_{i=1}^n forms a basis of the set of solutions of (2). Therefore, the set of solutions is an n-dimensional subspace of C^1(R, R^n). Also, any solution x(t) can be written as c_1 x^1(t) + ··· + c_n x^n(t) for some choice of the constants c_j. To get a solution with the initial condition x(t_0) = x_0, the c_j are taken to be the coordinates of the vector x_0 relative to the basis B, (c_1, ..., c_n)^T = [x_0]_B.

Definition. Any basis {x^i(t)}_{i=1}^n for the set of solutions of (2) is called a fundamental set of solutions, and c_1 x^1(t) + ··· + c_n x^n(t) is called the general solution.

Proof. (a) Given any initial condition (t_0, x_0), Theorem 4c proves that x(t) = e^{(t−t_0)A} x_0 is a solution with the correct initial condition, i.e., this theorem proves existence. Assume y(t) is another solution with y(t_0) = x_0. Then we compare this solution with the exponential:

(d/dt) [ e^{−(t−t_0)A} y(t) ] = −e^{−(t−t_0)A} A y(t) + e^{−(t−t_0)A} A y(t) = 0.

Therefore, e^{−(t−t_0)A} y(t) is a constant. Evaluating it at t = t_0, e^{−(t−t_0)A} y(t) ≡ x_0, so y(t) = e^{(t−t_0)A} x_0. This proves uniqueness.
The proof of part (b) is left to Problem 15.
(c) We need to show that the set {x^i(t)}_{i=1}^n is linearly independent. Assume that {d_i}_{i=1}^n are scalars such that

0 = d_1 x^1(t) + d_2 x^2(t) + ··· + d_n x^n(t)   for all t.

Since the above equality holds for all t, it holds for t = t0 and

0 = d_1 x^1(t_0) + d_2 x^2(t_0) + ··· + d_n x^n(t_0) = d_1 v^1 + d_2 v^2 + ··· + d_n v^n,

because x^i(t_0) = v^i. Since the set of vectors {v^i}_{i=1}^n is linearly independent, d_1 = d_2 = ··· = d_n = 0. This shows that the set of solutions is linearly independent.
The next step is to show that the set {x^i(t)}_{i=1}^n spans the subspace of solutions. Let x(t) be any solution of (2). Taking the initial condition at t = t_0, the vector x(t_0) can be written as a linear combination of the basis {v^1, ..., v^n} of R^n: x(t_0) = c_1 v^1 + ··· + c_n v^n. Using these coefficients, set y(t) ≡ c_1 x^1(t) + c_2 x^2(t) + ··· + c_n x^n(t). Then, since each x^i(t) is a solution, y(t) is a solution by Problem 15 or part (b). Moreover, y(t) and x(t) have the same initial conditions at t = t_0,

y(t_0) = c_1 x^1(t_0) + c_2 x^2(t_0) + ··· + c_n x^n(t_0) = c_1 v^1 + c_2 v^2 + ··· + c_n v^n = x(t_0).

By part (a) such a solution is unique, so x(t) ≡ y(t). This proves that every solution of (2) is in the span of {x^i(t)}_{i=1}^n. We have also shown how to prescribe the initial conditions. □

Problem 15. Show that the sum of any two solutions of (2) is again a solution, and any scalar multiple of a solution is a solution. This means that the set of solutions of (2) is closed under addition and scalar multiplication, and therefore is a subspace of C^1(R, R^n).

4. Eigenvalue and Eigenvector Solutions of Constant Coefficient Equations

Rather than calculate the matrix exponential e^{tA}, we want to find vectors for which e^{tA} v has a simple form. The vectors we seek are real and complex eigenvectors and generalized eigenvectors.
For a matrix A with characteristic equation 0 = (r_1 − λ)^{m_1} ··· (r_k − λ)^{m_k} with r_j ≠ r_ℓ for j ≠ ℓ, the exponent m_j is called the algebraic multiplicity of the eigenvalue r_j. The dimension of the eigenspace E(r_j) = { v : (A − r_j I) v = 0 } is called the geometric multiplicity of the eigenvalue r_j. We know that the geometric multiplicity satisfies 1 ≤ dim(E(r_j)) ≤ m_j. The generalized eigenspace for the eigenvalue r_j is defined to be E^gen(r_j) = { v : (A − r_j I)^{m_j} v = 0 }. A vector in E^gen(r_j) is called a generalized eigenvector for r_j. A theorem in linear algebra states that the dimension of E^gen(r_j) is always equal to the algebraic multiplicity of r_j, so there are always enough generalized eigenvectors.
We break down the consideration of constructing solutions to equation (2) into four cases:
Case 1: A has a real eigenvector for a real eigenvalue.
Case 2: A has a complex eigenvector for a complex eigenvalue.
Case 3: A has a generalized real eigenvector for a real eigenvalue.
Case 4: A has a generalized complex eigenvector for a complex eigenvalue.

Case 1: A real eigenvector for a real eigenvalue.

Theorem 6. Assume that r is a real eigenvalue of A with eigenvector v. Then x(t) = e^{rt} v and y(t) = e^{r(t−t_0)} v are solutions of x' = Ax with x(0) = v and y(t_0) = v.

Proof. We will prove this result in two ways, first using the exponential. Since Av = rv, A^k v = r^k v for all k ≥ 1, and

e^{tA} v = Iv + tAv + (t^2/2!) A^2 v + ··· + (t^k/k!) A^k v + ···
        = v + t r v + (t^2 r^2/2!) v + ··· + (t^k r^k/k!) v + ···
        = e^{rt} v.

This shows that e^{rt} v = e^{tA} v is a solution. The calculation for e^{r(t−t_0)} v is similar.
Alternatively, we can say that the solution of scalar equations gives an indication that a solution should be of the form x(t) = e^{rt} v. Using this function, the following calculation shows that it is a solution provided that v is an eigenvector for the eigenvalue r:

(d/dt) e^{rt} v = e^{rt} r v = e^{rt} Av = A e^{rt} v. □

Example 7. Find the general solution of the equation

x' = [ 1 1; 4 1 ] x.

Then find a solution which passes through (3, 2)^T at time t_0 = 1.
First we need the eigenvalues and eigenvectors of the matrix A. We find that the eigenvalues are r_1 = 3 and r_2 = −1. Since the eigenvalues are distinct, the corresponding eigenvectors v^1 = (1, 2)^T and v^2 = (1, −2)^T are independent and form a basis of R^2.

By Theorem 5c, the general solution is given by

x(t) = c_1 e^{3t} (1, 2)^T + c_2 e^{−t} (1, −2)^T.

To find a particular solution we can either set t = 1, x(1) = (3, 2)^T in the above formula and solve the linear system for c_1 and c_2 (do it!), or use our theorem, and write (3, 2)^T = 2 (1, 2)^T + (1, −2)^T (we also solved a linear system here), and set

x(t) = 2 e^{3(t−1)} (1, 2)^T + e^{−(t−1)} (1, −2)^T. □
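Example 7 can also be reproduced with a numerical eigensolver; a sketch (numpy.linalg.eig may scale and order the eigenpairs differently than the hand computation):

import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])
evals, evecs = np.linalg.eig(A)
print(evals)                    # 3 and -1
print(evecs)                    # columns proportional to (1, 2)^T and (1, -2)^T

def x(t, c):                    # general solution c1 e^{r1 t} v^1 + c2 e^{r2 t} v^2
    return evecs @ (c * np.exp(evals * t))

c = np.array([2.0, 1.0])
h = 1e-6                        # check x' = Ax by a difference quotient
print((x(1.0 + h, c) - x(1.0 - h, c)) / (2 * h) - A @ x(1.0, c))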

Problem 16. In each of problems (a) through (c), find the general solution of the equation x' = Ax and a particular solution for the given initial condition.
(a) A = [ 3 −2; 2 −2 ] and x(0) = (1, 1)^T;   (b) A = [ 4 3; −8 −6 ] and x(−1) = (1, 0)^T;

(c) A = [ −2 1; 1 −2 ] and x(2) = (1, 5)^T.

Case 2: A complex eigenvector for a complex eigenvalue.

We always assume that A is an n × n matrix with real entries. We want to consider the case when λ = a + ib is a complex eigenvalue, where a and b ≠ 0 are real.

Problem 17. Assume that u + iw is an eigenvector for the eigenvalue a + ib with b ≠ 0.
a. Show that u − iw is an eigenvector for the eigenvalue a − ib.
b. The eigenvectors u + iw and u − iw must be linearly independent with complex scalars since they correspond to distinct eigenvalues. Use this fact to show that u and w are linearly independent.

As noted in the Linear Algebra book by Lay, e^{at+ibt} = e^{at} [cos(bt) + i sin(bt)], as can be seen by comparing power series expansions. The following theorem indicates how to get two real solutions from the pair of complex eigenvalues a ± ib.

Theorem 7. Let A be a real matrix.
a. If z(t) = x(t) + i y(t) is a complex solution to (2), where x(t) and y(t) are real, then x(t) and y(t) are each real solutions to (2).
b. If v = u + iw is a complex eigenvector for a complex eigenvalue a + ib, then

x^1(t) = e^{tA} u = e^{at} [cos(bt) u − sin(bt) w]   and
x^2(t) = e^{tA} w = e^{at} [sin(bt) u + cos(bt) w]

are two real solutions for the pair of complex eigenvalues a ± ib with x^1(0) = u and x^2(0) = w.

Proof. (a) The real and imaginary parts of the two sides of the equation (d/dt) x(t) + i (d/dt) y(t) = Ax(t) + i Ay(t) must be equal. This proves part (a).
Part (b) follows from part (a) and the following calculation:

e^{at+ibt} [u + iw] = e^{at} [cos(bt) u − sin(bt) w] + i e^{at} [sin(bt) u + cos(bt) w]

is a complex solution. Taking the real and imaginary parts gives the two real solutions. Note that the complex solution for a − ib is

e^{at−ibt} [u − iw] = e^{at} [cos(bt) u − sin(bt) w] − i e^{at} [sin(bt) u + cos(bt) w].

This gives the same two real solutions. Because they have initial conditions u and w respectively, they equal e^{tA} u and e^{tA} w respectively. Compare with the exponential in Example 5. □

Example 8. The matrix of the linear system of differential equations

x' = [ −3 0 2; 1 −1 0; −2 −1 0 ] x

has characteristic equation 0 = −(λ + 2)(λ^2 + 2λ + 3) and eigenvalues −2 and −1 ± √2 i. The eigenvector for −2 is (2, −2, 1)^T. To find the eigenvector for −1 + √2 i, we need to row reduce the following matrix:

A − (−1 + √2 i) I = [ −2 − √2 i, 0, 2; 1, −√2 i, 0; −2, −1, 1 − √2 i ]

multiplying row 1 by −2 + √2 i

~ [ 6, 0, −4 + 2√2 i; 1, −√2 i, 0; −2, −1, 1 − √2 i ]

interchanging rows 1 & 2 and dividing the new row 2 by 2

~ [ 1, −√2 i, 0; 3, 0, −2 + √2 i; −2, −1, 1 − √2 i ]

clearing column 1

~ [ 1, −√2 i, 0; 0, 3√2 i, −2 + √2 i; 0, −1 − 2√2 i, 1 − √2 i ]

multiplying row 2 by −√2 i and row 3 by −1 + 2√2 i

~ [ 1, −√2 i, 0; 0, 6, 2 + 2√2 i; 0, 9, 3 + 3√2 i ]

dividing row 2 by 2 and eliminating row 3

~ [ 1, −√2 i, 0; 0, 3, 1 + √2 i; 0, 0, 0 ].

These give us the equations v_1 = √2 i v_2 and (1 + √2 i) v_3 = −3 v_2, so one solution is v_3 = 3, v_2 = −1 − √2 i, and v_1 = √2 i (−1 − √2 i) = 2 − √2 i:

v = (2, −1, 3)^T − i (√2, √2, 0)^T.

We have the three desired independent solutions, and the general solution is

x(t) = c_1 e^{−2t} (2, −2, 1)^T + c_2 e^{−t} [ cos(√2 t) (2, −1, 3)^T + sin(√2 t) (√2, √2, 0)^T ]
      + c_3 e^{−t} [ sin(√2 t) (2, −1, 3)^T − cos(√2 t) (√2, √2, 0)^T ]. □

Problem 18. Consider the matrix A = [ 2 0 1; 0 2 1; −2 0 0 ], with eigenvalues 2, 1 + i, and 1 − i. Find the general real solution of the differential equation x' = Ax.
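Theorem 7b can be checked numerically on the matrix of Problem 18; a sketch that extracts u and w from a computed eigenvector for 1 + i and compares e^{tA} w with e^{at}[sin(bt) u + cos(bt) w]:

import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 0.0, 1.0], [0.0, 2.0, 1.0], [-2.0, 0.0, 0.0]])
evals, evecs = np.linalg.eig(A)
i = np.argmin(np.abs(evals - (1 + 1j)))        # locate the eigenvalue 1 + i
u, w = evecs[:, i].real, evecs[:, i].imag      # v = u + i w
a, b, t = 1.0, 1.0, 0.8
x2 = np.exp(a * t) * (np.sin(b * t) * u + np.cos(b * t) * w)
print(np.max(np.abs(expm(t * A) @ w - x2)))    # essentially zero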
Case 3: A generalized real eigenvector for a real eigenvalue.

Consider the simplest case where A has a real eigenvalue r with algebraic multiplicity 2 and geometric multiplicity 1. Let v^1 be an eigenvector corresponding to r. Since E^gen(r) must have dimension 2, there must be a generalized eigenvector w that is not an eigenvector, so (A − rI) w ≠ 0 but (A − rI)^2 w = 0. It follows that (A − rI) w must be a scalar multiple of v^1. By scaling w, we can find a generalized eigenvector v^2 such that (A − rI) v^2 = v^1.

Problem 19. If (A − rI) v^2 = v^1 ≠ 0 and (A − rI) v^1 = 0, show that v^1 and v^2 are linearly independent.

Theorem 8. a. Assume that A has a real eigenvalue r with multiplicity m and generalized eigenvector w. Then there is a solution of x' = Ax with x(0) = w of the form

x(t) = e^{tA} w = e^{rt} [ w + t (A − rI) w + ··· + (t^{m−1}/(m−1)!) (A − rI)^{m−1} w ].

b. Assume A has a real eigenvalue r with algebraic multiplicity 2 but geometric multiplicity 1. Assume that v is an eigenvector and w is a generalized eigenvector solving (A − rI) w = v. Then (2) has two independent solutions

x^1(t) = e^{tA} v = e^{rt} v   and   x^2(t) = e^{tA} w = e^{rt} w + t e^{rt} v.

Notice that x^1(0) = v and x^2(0) = w.

Proof. (a) We note that rI and A − rI commute, so that e^{tA} = e^{rtI + t(A−rI)} = e^{rtI} e^{t(A−rI)} = e^{rt} e^{t(A−rI)}, and

e^{tA} w = e^{rt} e^{t(A−rI)} w
         = e^{rt} [ I w + t (A − rI) w + ··· + (t^j/j!) (A − rI)^j w + ··· ]
         = e^{rt} [ w + t (A − rI) w + ··· + (t^{m−1}/(m−1)!) (A − rI)^{m−1} w ],

since (t^j/j!) (A − rI)^j w = 0 for j ≥ m. Compare with the exponential in Problem 11.
Part (b) follows from part (a). □

Remark. The proof of this theorem should be compared with the exponential matrices of Problem 11(a-c). For each of those matrices, there is a single eigenvalue of algebraic multiplicity 2, 3, and n respectively, and there is only one independent eigenvector. The exponential contains terms of degree 1, 2, and n − 1 in t respectively. This is similar to the solution given in the last theorem.

Example 9. The matrix of the linear system of differential equations

x' = [ 0 −1 1; 2 −3 1; 1 −1 −1 ] x

has eigenvalues −1, −1, −2. An eigenvector for −2 is v^2 = (0, 1, 1)^T. Since dim(E^gen(−2)) = 1, this eigenvector v^2 spans both E^gen(−2) and E(−2). The other eigenvalue −1 has dim(E^gen(−1)) = 2.

(A + I) = [ 1 −1 1; 2 −2 1; 1 −1 0 ] ~ [ 1 −1 1; 0 0 −1; 0 0 −1 ] ~ [ 1 −1 0; 0 0 1; 0 0 0 ]

Thus, there is only one independent eigenvector, which can be taken to be v^1 = (1, 1, 0)^T. This eigenvector v^1 spans E(−1) but not E^gen(−1).
To find another generalized eigenvector for λ = −1, we solve the nonhomogeneous equation (A + I) w = v^1 by considering the following augmented matrix:

[ 1 −1 1 | 1; 2 −2 1 | 1; 1 −1 0 | 0 ] ~ [ 1 −1 1 | 1; 0 0 1 | 1; 0 0 1 | 1 ] ~ [ 1 −1 0 | 0; 0 0 1 | 1; 0 0 0 | 0 ].

Thus, the solution has w_1 = w_2 and w_3 = 1, or w = w_2 (1, 1, 0)^T + (0, 0, 1)^T = w_2 v^1 + (0, 0, 1)^T. Notice that the solution involves an arbitrary multiple of the eigenvector v^1: this is always the case. We take w_2 = 0 and get w = (0, 0, 1)^T as the generalized eigenvector. The vectors v^1 and w span E^gen(−1).
We have found three independent solutions, and the general solution is

x(t) = c_1 e^{−2t} (0, 1, 1)^T + c_2 e^{−t} (1, 1, 0)^T + c_3 e^{−t} [ (0, 0, 1)^T + t (1, 1, 0)^T ]. □

Problem 20. For the linear system x' = [ 3 4; −1 −1 ] x, find the general solution and the solution with initial condition (5, 1)^T at time t = 0.

Problem 21. Find the general solution of the system x' = [ 0 6 6; 5 3 1; 1 5 1 ] x. Hint: The eigenvalues are 6, 6, and 6.
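Returning to Example 9, the eigenvector and generalized eigenvector relations, and the solution of Theorem 8b, can be confirmed numerically; a sketch:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0, 1.0], [2.0, -3.0, 1.0], [1.0, -1.0, -1.0]])
v = np.array([1.0, 1.0, 0.0])              # eigenvector for r = -1
w = np.array([0.0, 0.0, 1.0])              # generalized eigenvector
print((A + np.eye(3)) @ w - v)             # zero vector: (A + I)w = v
t = 0.9
print(expm(t * A) @ w - np.exp(-t) * (w + t * v))   # zero: x(t) = e^{rt}(w + t v)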

Case 4: A generalized complex eigenvector for a complex eigenvalue.

Problem 22. Assume that A is a real matrix with a complex eigenvalue r = a + bi with one eigenvector v = v_r + i v_i, and a generalized eigenvector w = w_r + i w_i such that (A − rI) w = v. Find two real solutions associated with the complex solution e^{tA} w. Hint: Write e^{t(A−rI)} w as a finite sum involving v and w, and then expand e^{tA} w = e^{(a+bi)t} e^{t(A−rI)} w to find the real and imaginary parts.

Fundamental matrix solution and e^{tA}.

By Theorem 5, for an n × n matrix A, we need n solutions of (2) x' = Ax with linearly independent initial conditions. We write the characteristic equation of A as 0 = (r_1 − λ)^{m_1} ··· (r_k − λ)^{m_k} with r_j ≠ r_ℓ for j ≠ ℓ. We have denoted the eigenspace and generalized eigenspace by E(r_j) and E^gen(r_j). For a complex eigenvalue r_j, we denote the real vector space generated by the real and imaginary parts of the eigenvectors by E(r_j, r̄_j), and the subspace generated by the generalized complex eigenvectors by E^gen(r_j, r̄_j). For r_j real, m_j = dim(E^gen(r_j)) ≥ dim(E(r_j)) ≥ 1; for r_j complex, 2 m_j = dim(E^gen(r_j, r̄_j)) ≥ dim(E(r_j, r̄_j)) ≥ 2. For each eigenvector, each real and imaginary part of a complex eigenvector, and each generalized eigenvector v^i, by Theorems 6, 7b, 8a and Problem 22, we can find a real solution x^i(t) of (2) with x^i(0) = v^i that has the form e^{rt} v^i, e^{rt}(v^i + tz), or e^{at}[cos(bt) v^i + sin(bt) w]. There are n such solutions with linearly independent initial conditions, so they form a basis of solutions for the system of differential equations.
There are several situations where we want to put these different solutions together into a single matrix solution. A curve of matrices M(t) is called a fundamental matrix solution of (d/dt) x = Ax provided that (d/dt) M(t) = A M(t) and M(0) is invertible, i.e., det(M(0)) ≠ 0. We have shown that e^{tA} is always a fundamental matrix solution, but it is not always easy to compute. The next theorem indicates how the solutions for eigenvectors and generalized eigenvectors can be used to form a fundamental matrix solution and e^{tA}. The point is that the solutions x^i(t) are found by the (generalized) eigenvector method and not by exponentiating the matrix.

Theorem 9. Let A be a real n × n matrix. Let {v^i}_{i=1}^n be a basis of R^n of eigenvectors, real and imaginary parts of complex eigenvectors, and generalized eigenvectors. By Theorems 6, 7b, 8a and Problem 22, we know solutions x^i(t) = e^{tA} v^i of (2) with x^i(0) = v^i. Let M(t) = [x^1(t), ..., x^n(t)] be the matrix whose columns are the solutions x^i(t).
a. M(t) is a fundamental matrix solution, and M(t)c = c_1 x^1(t) + ··· + c_n x^n(t) is the general solution.
b. e^{tA} = M(t) M(0)^{−1}.
c. The solution of (d/dt) x = Ax with initial condition x(0) = x_0 is given by x(t) = M(t) M(0)^{−1} x_0, or M(t)c = c_1 x^1(t) + ··· + c_n x^n(t), where c = [x_0]_B is the vector of coefficients of x_0 with respect to the basis B.

Proof. (a) The derivative of the matrix M(t) satisfies the following:

(d/dt) M(t) = [ (d/dt) x^1(t), ..., (d/dt) x^n(t) ] = [ A x^1(t), ..., A x^n(t) ] = A [ x^1(t), ..., x^n(t) ] = A M(t).

This shows that M(t) is a matrix solution. Also, det(M(0)) = det[v^1, ..., v^n] ≠ 0 because the vectors {v^i}_{i=1}^n are a basis. The fact that M(t)c is the general solution follows from Theorem 5.
(b) The matrix M̃(t) = M(t) M(0)^{−1} satisfies

(d/dt) M̃(t) = (d/dt) [ M(t) M(0)^{−1} ] = A M(t) M(0)^{−1} = A M̃(t),

so M̃(t) is a fundamental matrix solution. At t = 0, M̃(0) = M(0) M(0)^{−1} = I = e^{0A}, so M̃(t) = e^{tA} for all t.
(c) For an initial condition x_0 at t = 0, a direct calculation shows that M̃(t) x_0 is a solution and it satisfies the initial condition M̃(0) x_0 = x_0. The vector x_0 = [v^1, ..., v^n][x_0]_B = M(0)[x_0]_B, so e^{tA} x_0 = M(t) M(0)^{−1} M(0)[x_0]_B = M(t)[x_0]_B is a solution with the correct initial condition. □

Example 10. For the differential equation in Example 9, a fundamental matrix solution is

M(t) = [ 0, e^{−t}, t e^{−t}; e^{−2t}, e^{−t}, t e^{−t}; e^{−2t}, 0, e^{−t} ].

The exponential is

e^{tA} = M(t) M(0)^{−1} = [ 0, e^{−t}, t e^{−t}; e^{−2t}, e^{−t}, t e^{−t}; e^{−2t}, 0, e^{−t} ] [ −1, 1, 0; 1, 0, 0; 1, −1, 1 ]
       = [ e^{−t} + t e^{−t}, −t e^{−t}, t e^{−t}; −e^{−2t} + e^{−t} + t e^{−t}, e^{−2t} − t e^{−t}, t e^{−t}; −e^{−2t} + e^{−t}, e^{−2t} − e^{−t}, e^{−t} ]. □

5. Geometry of Solutions: Phase Plane

For x' = Ax, the x-space R^n is called the phase plane for n = 2 and the phase space for n > 2. For n = 2, we can plot the curves that are traced by solutions x(t) in the phase plane. We consider a few simple examples, which represent generic types of behavior.

Example 11. A solution of the differential equation x' = [ −1 0; 0 −2 ] x is of the form

x(t) = (x_1(t), x_2(t))^T = (c_1 e^{−t}, c_2 e^{−2t})^T = c_1 e^{−t} e^1 + c_2 e^{−2t} e^2.

As t goes to infinity, any solution x(t) converges to the origin, so the origin is called stable or an attractor. For c_1 ≠ 0, x(t) = e^{−t} [c_1 e^1 + c_2 e^{−t} e^2], and the solution approaches the origin in a direction asymptotic to e^1. In fact, we can express x_2 as a function of x_1: x_2 = c_2 x_1^2 / c_1^2. Thus, each solution lies on a (half) parabola. If c_1 = 0, then we get the half line x_1 = 0 with x_2 > 0 if c_2 > 0 and the half line x_1 = 0 with x_2 < 0 if c_2 < 0. If c_2 = 0, the solution lies on half of the line x_2 = 0. See Figure 1. When there are two distinct negative eigenvalues, the origin is called a stable node. □

FIGURE 1. Example 11: Stable node.

Remark. For any linear system in R^2 with two real negative eigenvalues r_1 and r_2, the origin will be an attractor, but the shape of the flow lines depends on the ratio of r_1 and r_2. If we take r_1 = 1 and r_2 = 2 (x_1'(t) = x_1(t), x_2'(t) = 2 x_2(t)), then the flow lines will still be parabolas, but the origin becomes unstable, a repeller.

Example 12. A solution of the differential equation x' = [ −1 0; 0 2 ] x is of the form

x(t) = (x_1(t), x_2(t))^T = (c_1 e^{−t}, c_2 e^{2t})^T = c_1 e^{−t} e^1 + c_2 e^{2t} e^2.

If both c_1 ≠ 0 and c_2 ≠ 0, then the position vector is asymptotic to the x_2-axis as t goes to infinity and is asymptotic to the x_1-axis as t goes to minus infinity. See Figure 2.

FIGURE 2. Example 12: Saddle

For c_1 ≠ 0, so x_1 ≠ 0, we can express x_2 as a function of x_1: x_2 = c_2 c_1^2 / x_1^2. These curves look similar to hyperbolas. If c_1 = 0, then we get the half line x_1 = 0 with x_2 > 0 if c_2 > 0 and the half line x_1 = 0 with x_2 < 0 if c_2 < 0. If c_2 = 0, the solution lies on half of the line x_2 = 0. When there is one positive and one negative eigenvalue, the origin is called a saddle. □

Remark. For a linear system in R^2 with one negative and one positive real eigenvalue, r_1 and r_2, the shape of the flow lines depends on the ratio of the absolute values of r_1 and r_2. For example, if we take r_1 = −1 and r_2 = 1, then the flow lines will be the usual hyperbolas.

Example 13. The general solution of x' = [ −2 1; 0 −2 ] x is c_1 e^{−2t} (1, 0)^T + c_2 e^{−2t} (t, 1)^T.

FIGURE 3. Example 13: Improper stable node

As t goes to infinity, the quantity t e^{−2t} goes to 0, so all the solutions go to the origin. Also, e^{−2t} (t, 1)^T = t e^{−2t} (1, 1/t)^T approaches the direction of the eigenvector (1, 0)^T, so all solutions approach the origin asymptotic to the line of the eigenspace. See Figure 3. When there is a repeated negative eigenvalue with geometric multiplicity one, the origin is called an improper stable node. □

Example 14. The system x' = [ −2 1; −1 −2 ] x has complex eigenvalues −2 ± i, and a general solution

c_1 e^{−2t} (cos(t), −sin(t))^T + c_2 e^{−2t} (sin(t), cos(t))^T.

These solutions spiral around because of the imaginary part of the eigenvalue, and they approach the origin since the real part of the eigenvalue is negative, −2. See Figure 4. When there is a pair of complex eigenvalues with negative real part, the origin is called a stable focus. □

FIGURE 4. Example 14: Stable focus

Problem 23. Show that all solutions to the system x' = [ a b; c d ] x approach zero as t → +∞ if and only if τ = tr(A) = a + d < 0 and Δ = det(A) = ad − bc > 0. Hint: look at the signs of the real parts of the eigenvalues.
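The criterion of Problem 23 is easy to test by random sampling; a sketch (the entry range and sample size are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.uniform(-2.0, 2.0, size=(2, 2))
    stable = bool(np.all(np.linalg.eigvals(A).real < 0))
    criterion = (np.trace(A) < 0) and (np.linalg.det(A) > 0)
    assert stable == criterion
print("trace-determinant criterion matches the eigenvalue test")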

6. Nonhomogeneous Linear Systems

The solution of a nonhomogeneous linear system is similar to that of the nonhomogeneous linear scalar differential equation, and the proof is basically the same.

Theorem 10 (Variation of Parameters). Let A be an n × n matrix and g : R → R^n a continuous function. Then the solution of the nonhomogeneous equation x' = Ax + g(t) with x(0) = x_0 is given by

x(t) = e^{tA} x_0 + e^{tA} ∫_0^t e^{−sA} g(s) ds = M(t) M(0)^{−1} x_0 + M(t) ∫_0^t M(s)^{−1} g(s) ds,

where M(t) is any fundamental matrix solution.

The calculation of a solution using the formula in the theorem is tedious, but we work one simple example.

Example 15. Consider the nonhomogeneous linear system of differential equations

(d/dt) (x_1, x_2)^T = [ 0 1; −1 0 ] (x_1, x_2)^T + B (0, sin(ωt))^T,

with ω ≠ 1. The fundamental matrix solution of the homogeneous equation is

e^{At} = [ cos(t) sin(t); −sin(t) cos(t) ]   and   e^{−As} = [ cos(s) −sin(s); sin(s) cos(s) ].

The integral term in the expression for the solution gives a particular solution x^p(t). Using some trigonometric identities,

x^p(t) = B e^{At} ∫_0^t e^{−As} (0, sin(ωs))^T ds
       = B e^{At} ∫_0^t ( −sin(s) sin(ωs), cos(s) sin(ωs) )^T ds
       = (B/2) e^{At} ∫_0^t ( cos((1 + ω)s) − cos((1 − ω)s), sin((1 + ω)s) + sin((−1 + ω)s) )^T ds
       = (B/2) e^{At} ( sin((1 + ω)t)/(1 + ω) − sin((1 − ω)t)/(1 − ω), −cos((1 + ω)t)/(1 + ω) − cos((ω − 1)t)/(ω − 1) )^T
         + (B/2) e^{At} ( 0, 1/(1 + ω) + 1/(ω − 1) )^T
       = (B/(1 − ω^2)) ( sin(ωt), ω cos(ωt) )^T − (B/(1 − ω^2)) e^{At} (0, ω)^T.

(The last equality requires some calculation.)

The solution with the initial condition x_0 is

x(t) = e^{At} x_0 + x^p(t) = e^{At} [ x_0 − (B/(1 − ω^2)) (0, ω)^T ] + (B/(1 − ω^2)) ( sin(ωt), ω cos(ωt) )^T. □

Problem 24. Consider the nonhomogeneous equation (NH) (d/dt) x = Ax + g(t) with the associated homogeneous equation (H) (d/dt) x = Ax.
a. If x^p(t) is one (particular) solution of the nonhomogeneous equation (NH) and x^h(t) is a solution of the homogeneous equation (H), show that x^p(t) + x^h(t) is a solution of the nonhomogeneous equation (NH).
b. Assume that x^p(t) is a particular solution of the nonhomogeneous equation (NH). Show that the general solution of the nonhomogeneous equation (NH) is given by x^p(t) + e^{tA} c for vectors of constants c. Hint: For any solution x(t) of (NH), x(t) − x^p(t) satisfies (H).

Problem 25. Find the general solution of

(d/dt) x = [ 1 0; 0 2 ] x + (e^{−t}, 0)^T

with x(0) = (1, 3)^T by using the variation of parameters formula given in Theorem 10.
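Theorem 10 and the answer of Example 15 can be cross-checked numerically; a sketch with the illustrative values B = 1 and ω = 2 and initial condition x_0 = (1, 0)^T:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B, w = 1.0, 2.0
x0 = np.array([1.0, 0.0])
c = B / (1 - w**2)

def x_formula(t):   # Example 15: e^{tA}(x0 - c (0, w)) + c (sin wt, w cos wt)
    return (expm(t * A) @ (x0 - c * np.array([0.0, w]))
            + c * np.array([np.sin(w * t), w * np.cos(w * t)]))

sol = solve_ivp(lambda t, x: A @ x + np.array([0.0, B * np.sin(w * t)]),
                (0.0, 5.0), x0, rtol=1e-10, atol=1e-12, dense_output=True)
print(np.max(np.abs(sol.sol(5.0) - x_formula(5.0))))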

7. Second Order Scalar Equations

There is a close relationship between second order scalar linear differential equations and systems of linear differential equations. Consider

(4)   y'' + a y' + b y = 0,

where a and b are constants. "Solve" means we are looking for a C^2 function y(t) which satisfies the above equation. We say that this equation is second order since it involves derivatives up to order two. Assume that y(t) is a solution of (4), set x_1(t) = y(t), x_2(t) = y'(t), and consider the vector x(t) = (y(t), y'(t))^T = (x_1(t), x_2(t))^T. Then

x'(t) = ( y'(t), y''(t) )^T = ( x_2(t), −a x_2(t) − b x_1(t) )^T = [ 0 1; −b −a ] (x_1(t), x_2(t))^T,

since y''(t) = −a y'(t) − b y(t) = −a x_2(t) − b x_1(t). We have shown that if y(t) is a solution of the equation (4), then x(t) = (x_1(t), x_2(t))^T = (y(t), y'(t))^T is a solution of the equation (2), where

(5)   A = [ 0 1; −b −a ].

Problem 26. Show that if x(t) = (x_1(t), x_2(t))^T is a solution of the differential equation (2), with A given in equation (5), then the first coordinate y(t) = x_1(t) is a solution of (4).

Remark. Since an equation of the form (4) can be converted to a system of the type (2) and vice versa, we do not need to consider equations of type (4) separately. We can use the solutions of (2) to give solutions of (4). For a second order equation (4), we have to specify the initial conditions for both y(t_0) and y'(t_0). Why?

Problem 27. Consider the second order scalar differential equation y'' − 5y' + 4y = 0, with initial conditions y(0) = 3, y'(0) = 6.
a. Write down the corresponding 2 × 2 linear system and solve it for the general vector solution and the solution with the given initial conditions. Use this vector solution to get the general scalar solution of the second order scalar differential equation and the solution with the given initial conditions.
b. Solve the second order scalar equation a direct way as follows: Look for a solution in the form y(t) = e^{rt}, where r is to be determined. Put it in the equation and solve for all possible values of r. You will get a quadratic equation and find two solutions y^(1)(t) = e^{r_1 t} and y^(2)(t) = e^{r_2 t}. Convince yourself that the general solution is found by taking all linear combinations of these two solutions. To find a particular solution, solve for the unknown constants in that linear combination.

Problem 28. Consider the second order scalar differential equation y'' − 2y' + y = 0.
a. Write down the corresponding 2 × 2 linear system and solve it for the general vector solution. Show that two solutions of the scalar equation are of the form e^{rt} and t e^{rt} for the correct choice of r.
b. Find the general solution to this equation a second, direct way as follows: Try to find a solution in the form y^(1)(t) = e^{rt}. You will see that r is a double root of the quadratic equation. To obtain all solutions you need to find another linearly independent solution. Try to look for it by setting y^(2)(t) = t e^{rt} and substituting into the second order scalar equation. Take an arbitrary linear combination of y^(1)(t) and y^(2)(t). This will be the general solution.
c. Find a solution which satisfies the conditions y(0) = 2, y'(0) = 5.

Problem 29. Consider the differential equation y'' − 4y' + 25y = 0. Find two real solutions by looking for solutions of the form e^{rt}. Hint: For r = a + ib complex, what does e^{at+ibt} equal?

Problem 30. For the 2 × 2 linear system x' = Jx given in Example 5, write down the corresponding second order equation. Find the solution of this equation which satisfies the conditions y(0) = 2 and y'(0) = 1.
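The correspondence between (4) and the system (2) with matrix (5) is easy to illustrate numerically. A sketch for the sample equation y'' + 3y' + 2y = 0 (not one of the problems above), whose companion matrix has eigenvalues at the roots −1 and −2 of r^2 + 3r + 2:

import numpy as np
from scipy.linalg import expm

a, b = 3.0, 2.0
A = np.array([[0.0, 1.0], [-b, -a]])      # equation (5)
print(np.linalg.eigvals(A))               # -1 and -2

t = 0.75
x0 = np.array([1.0, 0.0])                 # y(0) = 1, y'(0) = 0
print((expm(t * A) @ x0)[0])              # y(t) from the system
print(2 * np.exp(-t) - np.exp(-2 * t))    # y(t) = 2 e^{-t} - e^{-2t} directly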

8. Applications

Example 16 (Market model with price expectations). A dynamic market model with price expectations is given in [1]. For a given product, the quantity demanded is denoted by Q_d, the quantity supplied by Q_s, and the price by P. The assumption is that Q_s and Q_d are determined by

Q_s = −γ + δ P   and   Q_d = α − β P + m (dP/dt) + n (d^2P/dt^2).

We are assuming that the supply depends only on the current price, but that the demand increases when the price is increasing and when its second derivative is positive. The market is assumed to clear at each time, Q_s = Q_d. The result is the following nonhomogeneous second order differential equation:

n (d^2P/dt^2) + m (dP/dt) − (β + δ) P = −(α + γ).

A dynamic equilibrium occurs for P* = (α + γ)/(β + δ) > 0. This is a particular solution of the nonhomogeneous equation. The characteristic equation of the homogeneous equation is 0 = n r^2 + m r − (β + δ). If all the parameters are positive, then there are two real roots

r_1, r_2 = −(m/2n) ± (1/2) √( (m/n)^2 + 4 (β + δ)/n ),

one positive and one negative. The general solution of the nonhomogeneous differential equation is

P(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t} + (α + γ)/(β + δ),

and the equilibrium is dynamically unstable. However, if n < 0, m < 0, and the rest of the parameters are positive, then both roots have negative real part, and the equilibrium is dynamically stable. The eigenvalues can be real and distinct, real and equal, or complex, depending on the size of the constants. □

Example 17 (Model for inflation and unemployment). A model for inflation and unemployment is given in [1], based on a Phillips relation as applied by M. Friedman. The variables are the expected rate of inflation π and the rate of unemployment U. There are two other auxiliary variables, the actual rate of inflation p and the rate of growth of wages w. The rate of monetary expansion is m > 0, and the increase in productivity is T > 0; these are taken as parameters. The model assumes the following:

w = α − β U,
p = h π − β U + α − T   with 0 < h ≤ 1,
dπ/dt = j(p − π) = −j(1 − h) π − jβ U + j(α − T),
dU/dt = k(p − m) = kh π − kβ U + k(α − T − m),

so

(d/dt) (π, U)^T = [ −j(1 − h)  −jβ; kh  −kβ ] (π, U)^T + ( j(α − T), k(α − T − m) )^T.

All the parameters are assumed to be positive. The equilibrium is obtained by finding the values where the time derivatives are zero, dπ/dt = 0 = dU/dt. This yields a system of two linear equations that can be solved to give π* = m and U* = (1/β)[α − T − m(1 − h)]. Then the variables x_1 = π − π* and x_2 =

U − U* satisfy x' = Ax, where A is the coefficient matrix given above. A direct calculation shows that the trace and determinant are as follows:

tr(A) = −j(1 − h) − kβ < 0,
det(A) = jkβ > 0.
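A numerical illustration (a sketch; the parameter values are made up, subject only to j, k, β > 0 and 0 < h ≤ 1):

import numpy as np

j, k, h, beta = 0.5, 0.4, 0.8, 1.2
A = np.array([[-j * (1 - h), -j * beta], [k * h, -k * beta]])
print(np.trace(A), np.linalg.det(A))   # trace < 0, det = j k beta > 0
print(np.linalg.eigvals(A))            # both eigenvalues have negative real part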

Since r_1 + r_2 = tr(A) < 0 and r_1 r_2 = det(A) > 0, the real parts of both eigenvalues must be negative. Therefore, any solution of the linear equations in the x-variables goes to 0, and (π(t), U(t))^T goes to (π*, U*)^T as t goes to infinity. At the equilibrium, the expected rate of inflation equals the rate of monetary expansion, the unemployment is given by U*, and the rate of inflation p* can be calculated by its formula. □

Example 18 (Monetary policy). A model of N. Obst for inflation and monetary policy is given in [1]. The amount of the national product is Q, the "price" of the national product is P, and the value of the national product is PQ. The money supply and demand are given by M_s and M_d, and the variable µ = M_d/M_s is their ratio. It is assumed that M_d = a P Q, where a > 0 is a constant, so µ = a P Q/M_s. The following rates of change are given:

(1/P) dP/dt = p, the rate of inflation,
(1/Q) dQ/dt = q, the rate of growth of the national product, and
(1/M_s) dM_s/dt = m, the rate of monetary expansion.

We assume q is a constant, and p and m are variables. The rate of change of inflation is assumed to be given by

dp/dt = h (M_s − M_d)/M_s = h(1 − µ),

where h > 0 is a constant. Obst argued that the policy that sets m should not be a function of p but a function of dp/dt, so of µ − 1. For simplicity of discussion, we assume that m = m_1 µ + m_0 with m_1 > 0. Differentiating ln(µ) with respect to t, we get the following:

ln(µ) = ln(a) + ln(P) + ln(Q) − ln(M_s),
(1/µ) dµ/dt = 0 + (1/P) dP/dt + (1/Q) dQ/dt − (1/M_s) dM_s/dt = p + q − m.

Combining, we have the system of nonlinear equations

dp/dt = h (1 − µ),
dµ/dt = (p + q − m_1 µ − m_0) µ.

At the equilibrium, µ* = 1 and p* = m_1 + m_0 − q. To avoid deflation, the monetary policy should be made with m_1 + m_0 ≥ q, i.e., the monetary growth needs to be large enough

to sustain the growth of the national product. The system can be linearized at (p*, µ*) by forming the matrix of partial derivatives,

[ 0, −h; µ, (p + q − m_1 µ − m_0) − m_1 µ ] evaluated at (p*, µ*), which equals [ 0, −h; 1, −m_1 ].

The determinant is h > 0, and the trace is −m_1 < 0. Therefore, the eigenvalues at the equilibrium have negative real parts. For 4h > m_1^2, they are complex, −m_1/2 ± i √( h − m_1^2/4 ). Just like for critical points of a real valued function, the linearization dominates the behavior near the equilibrium, and solutions near the equilibrium spiral in toward the equilibrium. See Figure 5.


FIGURE 5. Example 18: Monetary policy
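The linearized eigenvalues are easy to compute; a sketch with the illustrative values h = 1 and m_1 = 0.5 (so 4h > m_1^2):

import numpy as np

h, m1 = 1.0, 0.5
A = np.array([[0.0, -h], [1.0, -m1]])       # linearization at (p*, mu*)
print(np.linalg.eigvals(A))                 # -m1/2 +- i sqrt(h - m1^2/4)
print(-m1 / 2, np.sqrt(h - m1**2 / 4))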

By itself, the linearization does not tell what happens to solutions far from the equilibrium, but another method shows that all solutions with µ_0 > 0 do converge to the equilibrium. We form a real valued Lyapunov function (by judicious guessing and integrating),

L(p, µ) = p^2/2 + (q − m_1 − m_0) p + hµ − h ln(µ).

This function is defined on the upper half plane, µ > 0, and has a minimum at (p*, µ*). The time derivative of L along solutions of the differential equation is given as follows:

(d/dt) L = p dp/dt + (q − m_1 − m_0) dp/dt + h dµ/dt − (h/µ) dµ/dt
        = p h(1 − µ) + (q − m_1 − m_0) h(1 − µ) + h(p + q − m_1 µ − m_0) µ − h(p + q − m_1 µ − m_0)
        = h(1 − µ)(p + q − m_1 − m_0) − h(1 − µ)(p + q − m_1 µ − m_0)
        = h(1 − µ)(m_1 µ − m_1)
        = −h m_1 (µ − 1)^2 ≤ 0.

This function decreases along solutions, and it can be shown that it must go to the minimum value, so the solution goes to the equilibrium. □
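The Lyapunov argument can also be watched numerically; a sketch (with made-up values h = 1, m_1 = 0.5, m_0 = q = 0.2, so p* = 0.5 and µ* = 1) that prints L along a solution started away from the equilibrium:

import numpy as np
from scipy.integrate import solve_ivp

h, m1, m0, q = 1.0, 0.5, 0.2, 0.2

def f(t, z):
    p, mu = z
    return [h * (1 - mu), (p + q - m1 * mu - m0) * mu]

L = lambda p, mu: p**2 / 2 + (q - m1 - m0) * p + h * mu - h * np.log(mu)
sol = solve_ivp(f, (0.0, 20.0), [1.0, 2.0], dense_output=True, rtol=1e-9)
ts = np.linspace(0.0, 20.0, 9)
print(L(sol.sol(ts)[0], sol.sol(ts)[1]))   # non-increasing values of L
print(sol.sol(20.0))                       # close to (p*, mu*) = (0.5, 1)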

Problem 31. Consider the nonlinear system of equations

x' = x + y,
y' = xy − 1.

a. Find all the fixed points, i.e., points where x' = 0 = y'.
b. Linearize the system at each fixed point to determine its stability type (attracting, repelling, or saddle).

Problem 32. Consider the nonlinear system of equations

x' = −x^3 + x y^2,
y' = −3 x^2 y − y^3.

a. Let L(x, y) = (x^2 + y^2)/2. Show that (d/dt) L(x, y) < 0 for (x, y) ≠ (0, 0).
b. Explain why all solutions converge to the origin and the origin is an attracting fixed point.

References

[1] Chiang, A., Fundamental Methods of Mathematical Economics, McGraw-Hill, Inc., New York, 1984.