
MATH 8430

Fundamental Theory of Ordinary Differential Equations

Lecture Notes

Julien Arino
University of Manitoba

Fall 2006

Contents

1 General theory of ODEs
  1.1 ODEs, IVPs, solutions
    1.1.1 Ordinary differential equation, initial value problem
    1.1.2 Solutions to an ODE
    1.1.3 Geometric interpretation
  1.2 Existence and uniqueness theorems
    1.2.1 Successive approximations
    1.2.2 Local existence and uniqueness – Proof by fixed point
    1.2.3 Local existence and uniqueness – Proof by successive approximations
    1.2.4 Local existence (non-Lipschitz case)
    1.2.5 Some examples of existence and uniqueness
  1.3 Continuation of solutions
    1.3.1 Maximal interval of existence
    1.3.2 Maximal and global solutions
  1.4 Continuous dependence on initial data, on parameters
  1.5 Generality of first order systems
  1.6 Generality of autonomous systems
  1.7 Suggested reading, Further problems

2 Linear systems
  2.1 Existence and uniqueness of solutions
  2.2 Linear systems
    2.2.1 The space of solutions
    2.2.2 Fundamental solution
    2.2.3 Resolvent matrix
    2.2.4 ......
    2.2.5 Autonomous linear systems
  2.3 Affine systems
    2.3.1 The space of solutions
    2.3.2 Construction of solutions
    2.3.3 Affine systems with constant coefficients
  2.4 Systems with periodic coefficients
    2.4.1 Linear systems: Floquet theory
    2.4.2 Affine systems: the Fredholm alternative
  2.5 Further developments, bibliographical notes
    2.5.1 A variation of constants formula for a nonlinear system with a linear component

3 Stability of linear systems
  3.1 Stability at fixed points
  3.2 Affine systems with small coefficients

4 Linearization
  4.1 Some linear stability theory
  4.2 The stable manifold theorem
  4.3 The Hartman-Grobman theorem
  4.4 Example of application
    4.4.1 A chemostat model
    4.4.2 A second example

5 Exponential dichotomy
  5.1 Exponential dichotomy
  5.2 Existence of exponential dichotomy
  5.3 First approximate theory
  5.4 Stability of exponential dichotomy
  5.5 Generality of exponential dichotomy

References

A Definitions and results
  A.1 Vector spaces, norms
    A.1.1 Norm
    A.1.2 Matrix norms
    A.1.3 Supremum (or operator) norm
  A.2 An inequality involving norms and integrals
  A.3 Types of convergences
  A.4 Asymptotic Notations
  A.5 Types of continuities
  A.6 Lipschitz function
  A.7 Gronwall's lemma
  A.8 Fixed point theorems
  A.9 ......
  A.10 Matrix exponentials
  A.11 Matrix logarithms
  A.12 Spectral theorems

B Problem sheets
  Homework sheet 1 – 2003
  Homework sheet 2 – 2003
  Homework sheet 3 – 2003
  Homework sheet 4 – 2003
  Final examination – 2003
  Homework sheet 1 – 2006

Introduction

This course deals with the elementary theory of ordinary differential equations. The word elementary should not be understood as simple. The underlying assumption here is that, to understand the more advanced topics in the analysis of nonlinear systems, it is important to have a good understanding of how solutions to differential equations are constructed. If you are taking this course, you most likely know how to analyze systems of nonlinear ordinary differential equations. You know, for example, that in order for solutions to a system to exist and be unique, the system must have a $C^1$ vector field. What you do not necessarily know is why that is. This is the object of Chapter 1, where we consider the general theory of existence and uniqueness of solutions. We also consider the continuation of solutions as well as continuous dependence on initial data and on parameters. In Chapter 2, we explore linear systems. We first consider homogeneous linear systems, then linear systems in full generality. Homogeneous linear systems are linked to the theory of nonlinear systems by means of linearization, which we study in Chapter 4, where we show that the behavior of a nonlinear system can be approximated, in the vicinity of a hyperbolic equilibrium point, by a homogeneous linear system. Just as autonomous systems are linked to a linearized form through linearization, nonautonomous nonlinear systems are linked to one through exponential dichotomy, which is explained in Chapter 5.


Chapter 1

General theory of ODEs

We begin with the general theory of ordinary differential equations (ODEs). First, we define ODEs, initial value problems (IVPs) and solutions to ODEs and IVPs in Section 1.1. In Section 1.2, we discuss existence and uniqueness of solutions to IVPs.

1.1 ODEs, IVPs, solutions

1.1.1 Ordinary differential equation, initial value problem

Definition 1.1.1 (ODE). An $n$th order ordinary differential equation (ODE) is a functional relationship taking the form
$$F\left(t, x(t), \frac{d}{dt}x(t), \frac{d^2}{dt^2}x(t), \ldots, \frac{d^n}{dt^n}x(t)\right) = 0,$$
that involves an independent variable $t \in I \subset \mathbb{R}$, an unknown function $x(t) \in D \subset \mathbb{R}^n$ of the independent variable, its derivative and derivatives of order up to $n$. For simplicity, the time dependence of $x$ is often omitted, and we in general write equations as
$$F\left(t, x, x', x'', \ldots, x^{(n)}\right) = 0, \qquad (1.1)$$
where $x^{(n)}$ denotes the $n$th order derivative of $x$. An equation such as (1.1) is said to be in general (or implicit) form. An equation is said to be in normal (or explicit) form when it is written as
$$x^{(n)} = f\left(t, x, x', x'', \ldots, x^{(n-1)}\right).$$
Note that it is not always possible to write a differential equation in normal form, as it can be impossible to solve $F(t, x, \ldots, x^{(n)}) = 0$ in terms of $x^{(n)}$.

Definition 1.1.2 (First-order ODE). In the following, we consider for simplicity the more restrictive case of a first-order ordinary differential equation in normal form
$$x' = f(t, x). \qquad (1.2)$$


Note that the theory developed here usually holds for $n$th order equations as well; see Section 1.5. The function $f$ is assumed continuous and real valued on a set $\mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n$.

Definition 1.1.3 (Initial value problem). An initial value problem (IVP) for equation (1.2) is given by
$$x' = f(t, x), \qquad x(t_0) = x_0, \qquad (1.3)$$
where $f$ is continuous and real valued on a set $\mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n$, with $(t_0, x_0) \in \mathcal{U}$.

Remark – The assumption that $f$ be continuous can be relaxed: only piecewise continuity is needed. However, this leads in general to much more complicated problems and is beyond the scope of this course. Hence, unless otherwise stated, we assume that $f$ is at least continuous. The function $f$ could also be complex valued, but this too is beyond the scope of this course. ◦

Remark – An IVP for an $n$th order differential equation takes the form
$$x^{(n)} = f(t, x, x', \ldots, x^{(n-1)}), \qquad x(t_0) = x_0,\; x'(t_0) = x'_0,\; \ldots,\; x^{(n-1)}(t_0) = x_0^{(n-1)},$$
i.e., initial conditions have to be given for derivatives up to order $n-1$. ◦

We have already seen that the order of an ODE is the order of the highest derivative involved in the equation. An equation is further classified according to its linearity. A linear equation is one in which the vector field $f$ takes the form
$$f(t, x) = a(t)x(t) + b(t).$$
If $b(t) = 0$ for all $t$, the equation is linear homogeneous; otherwise it is linear nonhomogeneous. If the vector field $f$ depends only on $x$, i.e., $f(t, x) = f(x)$ for all $t$, then the equation is autonomous; otherwise, it is nonautonomous. Thus, a linear equation is autonomous if $a(t) = a$ and $b(t) = b$ for all $t$. Nonlinear equations are those that are not linear; they, too, can be autonomous or nonautonomous. Other classifications exist for ODEs, which we shall not deal with here, the previous ones being the only ones we will need.

1.1.2 Solutions to an ODE

Definition 1.1.4 (Solution). A function $\phi(t)$ (or $\phi$, for short) is a solution to the ODE (1.2) if it satisfies this equation, that is, if
$$\phi'(t) = f(t, \phi(t)) \quad \text{for all } t \in I \subset \mathbb{R},$$
an open interval such that $(t, \phi(t)) \in \mathcal{U}$ for all $t \in I$. The notations $\phi$ and $x$ are used interchangeably for the solution. However, in this chapter, to emphasize the difference between the equation and its solution, we will try as much as possible to use the notation $x$ for the unknown and $\phi$ for the solution.

Definition 1.1.5 (Integral form of the solution). The function

$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \qquad (1.4)$$
is called the integral form of the solution to the IVP (1.3).

Let R = R((t0, x0), a, b) be the domain defined, for a > 0 and b > 0, by

$$R = \{(t, x) : |t - t_0| \le a,\; \|x - x_0\| \le b\},$$
where $\|\cdot\|$ is any appropriate norm on $\mathbb{R}^n$. This domain is illustrated in Figures 1.1 and 1.2; it is sometimes called a security system, i.e., the union of a security interval (for the independent variable) and a security domain (for the dependent variables) [19]. Suppose

Figure 1.1: The domain $R$ for $D \subset \mathbb{R}$.

Figure 1.2: The domain $R$ for $D \subset \mathbb{R}^2$: "security tube".

that $f$ is continuous on $R$, and let
$$M = \max_{(t,x) \in R} \|f(t, x)\|,$$
which exists since $f$ is continuous on the compact set $R$. In what follows, existence of solutions will generally be obtained in relation to the domain $R$ by considering a subset of the time interval $|t - t_0| \le a$ defined by $|t - t_0| \le \alpha$, with
$$\alpha = \begin{cases} a & \text{if } M = 0, \\ \min(a, b/M) & \text{if } M > 0. \end{cases}$$
This choice of $\alpha = \min(a, b/M)$ is natural. We endow $f$ with specific properties (continuity, Lipschitz, etc.) on the domain $R$. Thus, in order to be able to use the definition of $\phi(t)$ as the solution of $x' = f(t, x)$, we must be working in $R$. So we require that $|t - t_0| \le a$ and $\|x - x_0\| \le b$. The first condition is satisfied at once: choosing $\alpha \le a$ and working on $|t - t_0| \le \alpha$ implies of course that $|t - t_0| \le a$. The requirement that $\alpha \le b/M$ comes from the following argument. If we assume that $\phi(t)$ is a solution of (1.3) defined on $[t_0, t_0 + \alpha]$, then we have, for $t \in [t_0, t_0 + \alpha]$,
$$\|\phi(t) - x_0\| = \left\| \int_{t_0}^{t} f(s, \phi(s))\,ds \right\| \le \int_{t_0}^{t} \|f(s, \phi(s))\|\,ds \le \int_{t_0}^{t} M\,ds = M(t - t_0),$$
where the first inequality is a consequence of the definition of the integral by Riemann sums (Lemma A.2.1 in Appendix A.2). Similarly, we have $\|\phi(t) - x_0\| \le -M(t - t_0)$ for all $t \in [t_0 - \alpha, t_0]$. Thus, for $|t - t_0| \le \alpha$, $\|\phi(t) - x_0\| \le M|t - t_0|$. Suppose now that $\alpha \le b/M$. It follows that
$$\|\phi(t) - x_0\| \le M|t - t_0| \le M\alpha \le M\frac{b}{M} = b.$$
Taking $\alpha = \min(a, b/M)$ then ensures that both $|t - t_0| \le a$ and $\|\phi(t) - x_0\| \le b$ hold simultaneously.

The following two theorems deal with the localization of the solutions to an IVP. They make the previous discussion more precise. Note that for the moment, the existence of a solution is only assumed. First, we establish that the security system described above performs properly, in the sense that a solution on a smaller time interval stays within the security domain.
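The choice of $\alpha$ can be made concrete numerically. The following is a minimal sketch, not part of the notes: the vector field $f(t, x) = t + x^2$ and the bounds $a = b = 1$ around $(t_0, x_0) = (0, 0)$ are illustrative choices, and $M$ is estimated by sampling $|f|$ on a grid over $R$.

```python
# Sketch: estimating M = max_R |f(t, x)| on a grid over the security
# domain R, then computing alpha = min(a, b/M).  The field
# f(t, x) = t + x**2 and the bounds a = b = 1 are illustrative choices.

def alpha_bound(f, t0, x0, a, b, n=200):
    """Approximate M = max |f| over R = [t0-a, t0+a] x [x0-b, x0+b]
    on an (n+1) x (n+1) grid, and return alpha = min(a, b/M)."""
    M = 0.0
    for i in range(n + 1):
        t = t0 - a + 2 * a * i / n
        for j in range(n + 1):
            x = x0 - b + 2 * b * j / n
            M = max(M, abs(f(t, x)))
    return a if M == 0 else min(a, b / M)

f = lambda t, x: t + x ** 2
alpha = alpha_bound(f, t0=0.0, x0=0.0, a=1.0, b=1.0)
# For this field, M = max |t + x^2| = 2 on R, attained at t = 1, x = ±1.
print(alpha)
```

Here the grid contains the maximizing corner, so the estimate $M = 2$ is exact and $\alpha = \min(1, 1/2) = 1/2$; in general a grid only underestimates $M$, hence overestimates $\alpha$.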

Theorem 1.1.6. If $\phi(t)$ is a solution of the IVP (1.3) on an interval $|t - t_0| < \tilde\alpha \le \alpha$, then $\|\phi(t) - x_0\| < b$ on $|t - t_0| < \tilde\alpha$, i.e., $(t, \phi(t)) \in R((t_0, x_0), \tilde\alpha, b)$ for $|t - t_0| < \tilde\alpha$.

Proof. Assume that $\phi$ is a solution with $(t, \phi(t)) \not\in R((t_0, x_0), \tilde\alpha, b)$. Since $\phi$ is continuous, there exists $0 < \beta < \tilde\alpha$ such that
$$\|\phi(t) - x_0\| < b \text{ for } |t - t_0| < \beta \quad \text{and} \quad \big( \|\phi(t_0 + \beta) - x_0\| = b \text{ or } \|\phi(t_0 - \beta) - x_0\| = b \big), \qquad (1.5)$$
i.e., the solution escapes the security domain at $t = t_0 \pm \beta$. Since $\tilde\alpha \le \alpha \le a$, we have $\beta < a$. Thus
$$(t, \phi(t)) \in R \quad \text{for } |t - t_0| \le \beta,$$
and so $\|f(t, \phi(t))\| \le M$ for $|t - t_0| \le \beta$. Since $\phi$ is a solution, we have $\phi'(t) = f(t, \phi(t))$ and $\phi(t_0) = x_0$. Thus
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \quad \text{for } |t - t_0| \le \beta.$$
Hence
$$\|\phi(t) - x_0\| = \left\| \int_{t_0}^{t} f(s, \phi(s))\,ds \right\| \le M|t - t_0| \quad \text{for } |t - t_0| \le \beta.$$
As a consequence,
$$\|\phi(t) - x_0\| \le M\beta < M\tilde\alpha \le M\alpha \le M\frac{b}{M} = b \quad \text{for } |t - t_0| \le \beta.$$
In particular, $\|\phi(t_0 \pm \beta) - x_0\| < b$, a contradiction with (1.5).

The following theorem is proved using the same sort of technique as in the proof of Theorem 1.1.6. It links the variation of the solution to the nature of the vector field.

Theorem 1.1.7. If $\phi(t)$ is a solution of the IVP (1.3) on an interval $|t - t_0| < \tilde\alpha \le \alpha$, then $\|\phi(t_1) - \phi(t_2)\| \le M|t_1 - t_2|$ whenever $t_1, t_2$ are in the interval $|t - t_0| < \tilde\alpha$.

Proof. Let us begin by considering $t \ge t_0$. On $t_0 \le t \le t_0 + \tilde\alpha$,
$$\phi(t_1) - \phi(t_2) = x_0 + \int_{t_0}^{t_1} f(s, \phi(s))\,ds - x_0 - \int_{t_0}^{t_2} f(s, \phi(s))\,ds = \int_{t_2}^{t_1} f(s, \phi(s))\,ds,$$
whether $t_1 > t_2$ or $t_2 > t_1$. Since $\|f(t, \phi(t))\| \le M$ on $R$, it follows that
$$\|\phi(t_1) - \phi(t_2)\| \le \left| \int_{t_2}^{t_1} \|f(s, \phi(s))\|\,ds \right| \le M|t_1 - t_2|.$$
The case $t \le t_0$ is treated similarly.

Now we can see formally what is needed for a solution.

Theorem 1.1.8. Suppose $f$ is continuous on an open set $\mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n$. Let $(t_0, x_0) \in \mathcal{U}$, and let $\phi$ be a function defined on an open set $I$ of $\mathbb{R}$ such that $t_0 \in I$. Then $\phi$ is a solution of the IVP (1.3) if, and only if,

i) $(t, \phi(t)) \in \mathcal{U}$ for all $t \in I$;

ii) $\phi$ is continuous on $I$;

iii) $\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds$ for all $t \in I$.

Proof. ($\Rightarrow$) Suppose that $\phi' = f(t, \phi)$ for all $t \in I$ and that $\phi(t_0) = x_0$. Then for all $t \in I$, $(t, \phi(t)) \in \mathcal{U}$, giving (i). Also, $\phi$ is differentiable and thus continuous on $I$, giving (ii). Finally, $\phi'(s) = f(s, \phi(s))$, so integrating both sides from $t_0$ to $t$,
$$\phi(t) - \phi(t_0) = \int_{t_0}^{t} f(s, \phi(s))\,ds,$$
and thus
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds,$$
hence (iii).

($\Leftarrow$) Assume (i), (ii) and (iii). Then, by the fundamental theorem of calculus (the integrand in (iii) being continuous), $\phi$ is differentiable on $I$ and $\phi'(t) = f(t, \phi(t))$ for all $t \in I$. From (iii), $\phi(t_0) = x_0 + \int_{t_0}^{t_0} f(s, \phi(s))\,ds = x_0$.

Note that Theorem 1.1.8 only requires that $\phi$ be continuous, whereas the solution should of course be $C^1$, since its derivative needs to be continuous. However, this is implied by point (iii). In fact, more generally, the following result holds about the regularity of solutions.

Theorem 1.1.9 (Regularity). Let $f : \mathcal{U} \to \mathbb{R}^n$, with $\mathcal{U}$ an open set of $\mathbb{R} \times \mathbb{R}^n$. Suppose that $f \in C^k$. Then all solutions of (1.2) are of class $C^{k+1}$.

Proof. The proof is immediate by induction: a solution $\phi$ is continuous, so $\phi' = f(t, \phi)$ is continuous and $\phi \in C^1$; if $f \in C^1$, then $\phi' \in C^1$ and $\phi \in C^2$; iterating, $\phi' \in C^k$, hence $\phi \in C^{k+1}$.

1.1.3 Geometric interpretation

The function $f$ is the vector field of the equation. At every point of $(t, x)$-space, the graph of a solution $\phi$ is tangent to the direction prescribed by the vector field at that point. A particular consequence of this fact is the following theorem.

Theorem 1.1.10. Let $x' = f(x)$ be a scalar autonomous differential equation. Then the solutions of this equation are monotone.

Proof. The direction field of an autonomous scalar differential equation consists of vectors that are parallel for all $t$ (since $f(t, x) = f(x)$ for all $t$). Suppose that a solution $\phi$ of $x' = f(x)$ is not monotone. Then, given an initial point $(t_0, x_0)$, one of the following two situations occurs, as illustrated in Figure 1.3:

i) $f(x_0) \ne 0$ and there exists $t_1$ such that $\phi(t_1) = x_0$;

ii) $f(x_0) = 0$ and there exists $t_1$ such that $\phi(t_1) \ne x_0$.

Figure 1.3: Situations that would lead to a scalar autonomous differential equation having nonmonotone solutions.

Suppose we are in case i), and assume $f(x_0) > 0$ (the case $f(x_0) < 0$ is similar). Then the solution curve $\phi$ is increasing at $(t_0, x_0)$, i.e., $\phi'(t_0) > 0$. As $\phi$ is continuous, i) implies that there exists $t_2 \in (t_0, t_1)$ at which $\phi(t_2)$ is a maximum, with $\phi$ increasing for $t \in [t_0, t_2)$ and $\phi$ decreasing for $t \in (t_2, t_1]$. It follows that $\phi'(t_1) \le 0$. But $\phi(t_1) = x_0$, so $\phi'(t_1) = f(\phi(t_1)) = f(x_0) = \phi'(t_0) > 0$, a contradiction.

Now assume that we are in case ii). Then there exists $t_2 \in (t_0, t_1)$ with $\phi(t_2) = x_0$ but such that $\phi'(t_2) \ne 0$. Since $\phi'(t_2) = f(\phi(t_2)) = f(x_0) = 0$, this is a contradiction.

Remark – If we have uniqueness of solutions, it follows from this theorem that if $\phi_1$ and $\phi_2$ are two solutions of the scalar autonomous differential equation $x' = f(x)$, then $\phi_1(t_0) < \phi_2(t_0)$ implies that $\phi_1(t) < \phi_2(t)$ for all $t$. ◦

Remark – Be careful: Theorem 1.1.10 is only true for scalar equations. ◦

1.2 Existence and uniqueness theorems

Several approaches can be used to show existence and/or uniqueness of solutions. In Sec- tions 1.2.2 and 1.2.3, we take a direct path: using either a fixed point method (Section 1.2.2) or an iterative approach (Section 1.2.3), we obtain existence and uniqueness of solutions under the assumption that the vector field is Lipschitz. In Section 1.2.4, the Lipschitz assumption is dropped and therefore a different approach must be used, namely that of approximate solutions, with which only existence can be established.

1.2.1 Successive approximations

Picard's successive approximation method consists in using the integral form (1.4) of the solution to the IVP (1.3) to construct a sequence of approximations of the solution that converges to the solution. The steps followed in constructing this approximating sequence are the following.

Step 1. Start with an initial estimate of the solution; say, the constant function $\phi_0(t) \equiv x_0$, for $|t - t_0| \le \alpha$. Evidently, this function satisfies the initial condition.

Step 2. Use $\phi_0$ in (1.4) to define the second element in the sequence:
$$\phi_1(t) = x_0 + \int_{t_0}^{t} f(s, \phi_0(s))\,ds.$$

Step 3. Use $\phi_1$ in (1.4) to define the third element in the sequence:
$$\phi_2(t) = x_0 + \int_{t_0}^{t} f(s, \phi_1(s))\,ds.$$

...

Step n. Use $\phi_{n-1}$ in (1.4) to define the $n$th element in the sequence:
$$\phi_n(t) = x_0 + \int_{t_0}^{t} f(s, \phi_{n-1}(s))\,ds.$$

At this stage, there are two major ways to tackle the problem, both resting on the same idea: if we can prove that the sequence $\{\phi_n\}$ converges, and that the limit happens to satisfy the differential equation, then we have the solution to the IVP (1.3). The first method (Section 1.2.2) uses a fixed point approach. The second method (Section 1.2.3) studies the limit explicitly.
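The steps above can be carried out numerically by storing each iterate on a time grid and evaluating the integral with the trapezoidal rule. This is a sketch only, not part of the notes: the test problem $x' = x$, $x(0) = 1$ (whose solution is $e^t$) and all numerical parameters are illustrative choices.

```python
import math

# Sketch of Picard's successive approximations on a grid:
#   phi_n(t) = x0 + int_{t0}^{t} f(s, phi_{n-1}(s)) ds,
# with the integral evaluated by the cumulative trapezoidal rule.
# Test problem x' = x, x(0) = 1 (solution e^t): an illustrative choice.

def picard(f, t0, x0, T, n_iter=30, n_grid=400):
    h = (T - t0) / n_grid
    ts = [t0 + k * h for k in range(n_grid + 1)]
    phi = [x0] * (n_grid + 1)          # phi_0: the constant initial estimate
    for _ in range(n_iter):
        vals = [f(t, p) for t, p in zip(ts, phi)]
        new, acc = [x0], 0.0
        for k in range(n_grid):        # cumulative trapezoidal integral
            acc += 0.5 * h * (vals[k] + vals[k + 1])
            new.append(x0 + acc)
        phi = new
    return ts, phi

ts, phi = picard(lambda t, x: x, 0.0, 1.0, 1.0)
print(abs(phi[-1] - math.e))           # small: the iterates approach e^t
```

The residual error left after the iterates have converged is that of the trapezoidal quadrature, of order $h^2$.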

1.2.2 Local existence and uniqueness – Proof by fixed point Here are two slightly different formulations of the same theorem, which establishes that if the vector field is continuous and Lipschitz, then the solutions exist and are unique. We prove the result in the second case. For the definition of a Lipschitz function, see Section A.6 in the Appendix.

Theorem 1.2.1 (Picard local existence and uniqueness). Assume $f : \mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n \to D \subset \mathbb{R}^n$ is continuous, and that $f(t, x)$ satisfies a Lipschitz condition in $\mathcal{U}$ with respect to $x$. Then, given any point $(t_0, x_0) \in \mathcal{U}$, there exists a unique solution of (1.3) on some interval containing $t_0$ in its interior.

Theorem 1.2.2 (Picard local existence and uniqueness). Consider the IVP (1.3), and assume $f$ is (piecewise) continuous in $t$ and satisfies the Lipschitz condition
$$\|f(t, x_1) - f(t, x_2)\| \le L\|x_1 - x_2\|$$
for all $x_1, x_2 \in D = \{x : \|x - x_0\| \le b\}$ and all $t$ such that $|t - t_0| \le a$. Then there exists $0 < \delta \le \alpha = \min(a, b/M)$ such that (1.3) has a unique solution on $|t - t_0| \le \delta$.

To set up the proof, we proceed as follows. Define the operator $F$ by
$$F : x \mapsto x_0 + \int_{t_0}^{t} f(s, x(s))\,ds.$$

Note that the function $(F\phi)(t)$ is a continuous function of $t$. Picard's successive approximations then take the form $\phi_1 = F\phi_0$, $\phi_2 = F\phi_1 = F^2\phi_0$, where $F^2$ represents $F \circ F$. Iterating, the general term is given for $k = 0, 1, \ldots$ by
$$\phi_k = F^k \phi_0.$$
Therefore, finding the limit $\lim_{k\to\infty} \phi_k$ is equivalent to finding the function $\phi$, solution of the fixed point problem $x = Fx$, with $x$ a continuously differentiable function. Thus, a solution of (1.3) is a fixed point of $F$, and we aim to use the contraction mapping principle to verify the existence (and uniqueness) of such a fixed point. We follow the proof of [14, p. 56-58].

Proof. We show the result on the interval $t_0 \le t \le t_0 + \delta$. The proof for the interval $t_0 - \delta \le t \le t_0$ is similar. Let $X$ be the space of continuous functions defined on the interval $[t_0, t_0 + \delta]$, $X = C([t_0, t_0 + \delta])$, which we endow with the sup norm, i.e., for $x \in X$,
$$\|x\|_c = \max_{t \in [t_0, t_0 + \delta]} \|x(t)\|.$$
Recall that this norm is the norm of uniform convergence. Let then
$$S = \{x \in X : \|x - x_0\|_c \le b\}.$$
Of course, $S \subset X$. Furthermore, $S$ is closed, and $X$ with the sup norm is a complete metric space. Note that we have transformed the problem into a problem involving the space of continuous functions; hence we are now in an infinite-dimensional setting. The proof proceeds in three steps.

Step 1. We begin by showing that $F : S \to S$. From (1.4),
$$(F\phi)(t) - x_0 = \int_{t_0}^{t} f(s, \phi(s))\,ds = \int_{t_0}^{t} \big( f(s, \phi(s)) - f(s, x_0) + f(s, x_0) \big)\,ds.$$
Therefore, by the triangle inequality,
$$\|F\phi - x_0\| \le \int_{t_0}^{t} \big( \|f(s, \phi(s)) - f(s, x_0)\| + \|f(s, x_0)\| \big)\,ds.$$
As $f$ is (piecewise) continuous in $t$, the function $t \mapsto f(t, x_0)$ is bounded on $[t_0, t_0 + \delta]$ and there exists $M = \max_{t \in [t_0, t_0 + \delta]} \|f(t, x_0)\|$. Thus
$$\|F\phi - x_0\| \le \int_{t_0}^{t} \big( \|f(s, \phi(s)) - f(s, x_0)\| + M \big)\,ds \le \int_{t_0}^{t} \big( L\|\phi(s) - x_0\| + M \big)\,ds,$$
since $f$ is Lipschitz. Since $\|\phi(s) - x_0\| \le b$ for all $\phi \in S$, we have that, for all $\phi \in S$,
$$\|F\phi - x_0\| \le \int_{t_0}^{t} (Lb + M)\,ds = (t - t_0)(Lb + M).$$
As $t \in [t_0, t_0 + \delta]$, $(t - t_0) \le \delta$, and thus
$$\|F\phi - x_0\|_c = \max_{[t_0, t_0 + \delta]} \|F\phi - x_0\| \le (Lb + M)\delta.$$
Choose then $\delta$ such that $\delta \le b/(Lb + M)$, i.e., $t$ sufficiently close to $t_0$. Then we have
$$\|F\phi - x_0\|_c \le b.$$
This implies that $F\phi \in S$ for $\phi \in S$, i.e., $F : S \to S$.

Step 2. We now show that $F$ is a contraction. Let $\phi_1, \phi_2 \in S$. Then
$$\|(F\phi_1)(t) - (F\phi_2)(t)\| = \left\| \int_{t_0}^{t} \big( f(s, \phi_1(s)) - f(s, \phi_2(s)) \big)\,ds \right\| \le \int_{t_0}^{t} \|f(s, \phi_1(s)) - f(s, \phi_2(s))\|\,ds \le \int_{t_0}^{t} L\|\phi_1(s) - \phi_2(s)\|\,ds \le \int_{t_0}^{t} L\|\phi_1 - \phi_2\|_c\,ds,$$
and thus
$$\|F\phi_1 - F\phi_2\|_c \le L\delta\|\phi_1 - \phi_2\|_c \le \rho\|\phi_1 - \phi_2\|_c \quad \text{for } \delta \le \frac{\rho}{L}.$$
Thus, choosing $\rho < 1$ and $\delta \le \rho/L$, $F$ is a contraction. Since, by Step 1, $F : S \to S$, the contraction mapping principle (Theorem A.11) implies that $F$ has a unique fixed point in $S$, and (1.3) has a unique solution in $S$.

Step 3. It remains to be shown that any solution in $X$ is in fact in $S$ (since it is on $X$ that we want to show the result). Consider a solution starting at $x_0$ at time $t_0$. The solution leaves $S$ only if there exists $t > t_0$ such that $\|\phi(t) - x_0\| = b$, i.e., the solution crosses the border of $D$. Let $\tau > t_0$ be the first such $t$. For all $t_0 \le t \le \tau$,
$$\|\phi(t) - x_0\| \le \int_{t_0}^{t} \big( \|f(s, \phi(s)) - f(s, x_0)\| + \|f(s, x_0)\| \big)\,ds \le \int_{t_0}^{t} \big( L\|\phi(s) - x_0\| + M \big)\,ds \le \int_{t_0}^{t} (Lb + M)\,ds.$$
As a consequence,
$$b = \|\phi(\tau) - x_0\| \le (Lb + M)(\tau - t_0).$$
Writing $\tau = t_0 + \mu$, with $\mu > 0$, it follows that $\mu \ge b/(Lb + M)$: the solution cannot leave $D$ before time $t_0 + b/(Lb + M)$. Hence, with $\delta \le b/(Lb + M)$ as chosen in Step 1, every solution in $X$ remains in $S$ on $[t_0, t_0 + \delta]$.
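The contraction of Step 2 can be observed numerically: applying the integral operator $F$ to two functions shrinks their sup-distance by at most a factor $L\delta$. This is an illustrative sketch, not part of the proof; the field $f(t, x) = -x$ (Lipschitz constant $L = 1$), the interval length $\delta = 1/2$, and the two test functions are all arbitrary choices.

```python
# Sketch: the Picard operator (F phi)(t) = x0 + int_{t0}^{t} f(s, phi(s)) ds
# contracts the sup norm by at most L * delta.  The field f(t, x) = -x
# (L = 1), delta = 0.5, and the test functions are illustrative choices.

L, delta, x0 = 1.0, 0.5, 1.0
n = 1000
h = delta / n
ts = [k * h for k in range(n + 1)]

def F(phi):
    out, acc = [x0], 0.0
    for k in range(n):                 # trapezoidal cumulative integral of -phi
        acc += 0.5 * h * (-phi[k] - phi[k + 1])
        out.append(x0 + acc)
    return out

phi1 = [1.0 for t in ts]               # two arbitrary continuous functions on the grid
phi2 = [1.0 + t for t in ts]

d_before = max(abs(a - b) for a, b in zip(phi1, phi2))
d_after = max(abs(a - b) for a, b in zip(F(phi1), F(phi2)))
print(d_after <= L * delta * d_before + 1e-9)   # contraction by a factor <= L*delta
```

Here $\phi_1 - \phi_2 = -t$, so $F\phi_1 - F\phi_2 = t^2/2$: the distance drops from $1/2$ to $1/8$, well within the bound $L\delta \cdot 1/2 = 1/4$.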

Note that the condition $x_1, x_2 \in D = \{x : \|x - x_0\| \le b\}$ in the statement of the theorem refers to a local Lipschitz condition. If the function $f$ is Lipschitz on the whole of $\mathcal{U}$, then the following theorem holds.

Theorem 1.2.3 (Global existence). Suppose that $f$ is piecewise continuous in $t$ and Lipschitz on $\mathcal{U} = I \times D$. Then (1.3) admits a unique solution on $I$.

1.2.3 Local existence and uniqueness – Proof by successive approximations

Using the method of successive approximations, we can prove the following theorem.

Theorem 1.2.4. Suppose that $f$ is continuous on a domain $R$ of the $(t, x)$-plane defined, for $a, b > 0$, by $R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\}$, and that $f$ is locally Lipschitz in $x$ on $R$. Let, as previously defined,
$$M = \sup_{(t,x) \in R} \|f(t, x)\| < \infty \qquad \text{and} \qquad \alpha = \min\left(a, \frac{b}{M}\right).$$
Then the sequence defined by
$$\phi_0(t) = x_0, \qquad |t - t_0| \le \alpha,$$
$$\phi_i(t) = x_0 + \int_{t_0}^{t} f(s, \phi_{i-1}(s))\,ds, \qquad i \ge 1,\; |t - t_0| \le \alpha,$$
converges uniformly on the interval $|t - t_0| \le \alpha$ to $\phi$, the unique solution of (1.3).

Proof. We follow [20, p. 3-6].

Existence. Suppose that $|t - t_0| \le \alpha$. Then
$$\|\phi_1(t) - \phi_0(t)\| = \left\| \int_{t_0}^{t} f(s, \phi_0(s))\,ds \right\| \le M|t - t_0| \le \alpha M \le b$$

from the definitions of $M$ and $\alpha$; thus $\|\phi_1 - \phi_0\| \le b$. So $\int_{t_0}^{t} f(s, \phi_1(s))\,ds$ is defined for $|t - t_0| \le \alpha$, and, for $|t - t_0| \le \alpha$,
$$\|\phi_2(t) - \phi_0\| = \left\| \int_{t_0}^{t} f(s, \phi_1(s))\,ds \right\| \le \int_{t_0}^{t} \|f(s, \phi_1(s))\|\,ds \le \alpha M \le b.$$
All subsequent terms in the sequence can be similarly defined, and, by induction, for $|t - t_0| \le \alpha$,
$$\|\phi_k(t) - \phi_0\| \le \alpha M \le b, \qquad k = 1, 2, \ldots$$

Now, for $|t - t_0| \le \alpha$,
$$\|\phi_{k+1}(t) - \phi_k(t)\| = \left\| \int_{t_0}^{t} \big( f(s, \phi_k(s)) - f(s, \phi_{k-1}(s)) \big)\,ds \right\| \le L \int_{t_0}^{t} \|\phi_k(s) - \phi_{k-1}(s)\|\,ds,$$
where the inequality results from the fact that $f$ is locally Lipschitz in $x$ on $R$. We now prove that, for all $k$,
$$\|\phi_{k+1} - \phi_k\| \le b\,\frac{(L|t - t_0|)^k}{k!} \quad \text{for } |t - t_0| \le \alpha. \qquad (1.6)$$
Indeed, (1.6) holds for $k = 1$, as previously established. Assume that (1.6) holds for $k = n$. Then
$$\|\phi_{n+2} - \phi_{n+1}\| = \left\| \int_{t_0}^{t} \big( f(s, \phi_{n+1}(s)) - f(s, \phi_n(s)) \big)\,ds \right\| \le L \int_{t_0}^{t} \|\phi_{n+1}(s) - \phi_n(s)\|\,ds \le Lb \int_{t_0}^{t} \frac{(L|s - t_0|)^n}{n!}\,ds \le b\,\frac{(L|t - t_0|)^{n+1}}{(n+1)!},$$
and thus (1.6) holds for all $k \ge 1$. Thus, for $N > n$ we have
$$\|\phi_N(t) - \phi_n(t)\| \le \sum_{k=n}^{N-1} \|\phi_{k+1}(t) - \phi_k(t)\| \le b \sum_{k=n}^{N-1} \frac{(L|t - t_0|)^k}{k!} \le b \sum_{k=n}^{N-1} \frac{(L\alpha)^k}{k!}.$$
The rightmost term in this expression tends to zero as $n \to \infty$. Therefore, $\{\phi_k(t)\}$ converges uniformly to a function $\phi(t)$ on the interval $|t - t_0| \le \alpha$. As the convergence is uniform, the limit function is continuous. Moreover, $\phi(t_0) = x_0$: indeed, $\phi_N(t) = \phi_0(t) + \sum_{k=1}^{N} (\phi_k(t) - \phi_{k-1}(t))$, so $\phi(t) = \phi_0(t) + \sum_{k=1}^{\infty} (\phi_k(t) - \phi_{k-1}(t))$, and every term of the series vanishes at $t_0$.

The fact that $\phi$ is a solution of (1.3) follows from the following result: if a sequence of continuous functions $\{\phi_k(t)\}$ converges uniformly on the interval $|t - t_0| \le \alpha$, then
$$\lim_{n \to \infty} \int_{t_0}^{t} \phi_n(s)\,ds = \int_{t_0}^{t} \lim_{n \to \infty} \phi_n(s)\,ds.$$
Hence,
$$\phi(t) = \lim_{n \to \infty} \phi_n(t) = x_0 + \lim_{n \to \infty} \int_{t_0}^{t} f(s, \phi_{n-1}(s))\,ds = x_0 + \int_{t_0}^{t} \lim_{n \to \infty} f(s, \phi_{n-1}(s))\,ds = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds,$$
which is to say that
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \quad \text{for } |t - t_0| \le \alpha.$$
As the integrand $f(t, \phi)$ is a continuous function, $\phi$ is differentiable (with respect to $t$), with $\phi'(t) = f(t, \phi(t))$, so $\phi$ is a solution to the IVP (1.3).

Uniqueness. Let $\phi$ and $\psi$ be two solutions of (1.3), i.e., for $|t - t_0| \le \alpha$,
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds, \qquad \psi(t) = x_0 + \int_{t_0}^{t} f(s, \psi(s))\,ds.$$

Then, for $|t - t_0| \le \alpha$,
$$\|\phi(t) - \psi(t)\| = \left\| \int_{t_0}^{t} \big( f(s, \phi(s)) - f(s, \psi(s)) \big)\,ds \right\| \le L \int_{t_0}^{t} \|\phi(s) - \psi(s)\|\,ds. \qquad (1.7)$$
We now apply Gronwall's lemma (Lemma A.7) to this inequality, using $K = 0$ and $g(t) = \|\phi(t) - \psi(t)\|$. First, applying the lemma for $t_0 \le t \le t_0 + \alpha$, we get
$$0 \le \|\phi(t) - \psi(t)\| \le 0, \quad \text{that is,} \quad \|\phi(t) - \psi(t)\| = 0,$$
and thus $\phi(t) = \psi(t)$ for $t_0 \le t \le t_0 + \alpha$. Similarly, for $t_0 - \alpha \le t \le t_0$, $\|\phi(t) - \psi(t)\| = 0$. Therefore, $\phi(t) = \psi(t)$ on $|t - t_0| \le \alpha$.

Example – Consider the IVP $x' = -x$, $x(0) = x_0 = c$, $c \in \mathbb{R}$. As initial estimate, we choose $\phi_0(t) = c$. Then
$$\phi_1(t) = x_0 + \int_0^t f(s, \phi_0(s))\,ds = c + \int_0^t (-\phi_0(s))\,ds = c - \int_0^t c\,ds = c - ct.$$
To find $\phi_2$, we use $\phi_1$ in (1.4):
$$\phi_2(t) = x_0 + \int_0^t f(s, \phi_1(s))\,ds = c - \int_0^t (c - cs)\,ds = c - ct + c\frac{t^2}{2}.$$
Continuing this method, we find a general term of the form
$$\phi_n(t) = c \sum_{i=0}^{n} \frac{(-1)^i t^i}{i!}.$$
This is the partial sum of the power series expansion of $ce^{-t}$, so $\phi_n \to \phi = ce^{-t}$ (and the approximation is valid on all of $\mathbb{R}$), which is the solution of the initial value problem.

Note that the method of successive approximations is a very general method that can be used in a much more general context; see [8, p. 264-269].
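The computation in this example can be reproduced exactly by representing each iterate by its polynomial coefficients: one Picard step integrates the polynomial term by term. A minimal pure-Python sketch (the value $c = 2$ and the representation, a list whose index $i$ holds the coefficient of $t^i$, are illustrative choices):

```python
from math import factorial

# Sketch: Picard iterates for x' = -x, x(0) = c, computed exactly on
# polynomial coefficients.  Since phi_{n+1}(t) = c + int_0^t (-phi_n(s)) ds,
# the coefficient of t^(i+1) in phi_{n+1} is -a_i/(i+1), where a_i is the
# coefficient of t^i in phi_n.

def picard_step(coeffs, c):
    # integrate -phi_n term by term and add the constant of integration c
    return [c] + [-a / (i + 1) for i, a in enumerate(coeffs)]

c = 2.0
phi = [c]                              # phi_0(t) = c
for _ in range(10):
    phi = picard_step(phi, c)

# phi_10 should carry the coefficients c * (-1)^i / i!, i.e. the degree-10
# partial sum of c * e^(-t)
expected = [c * (-1) ** i / factorial(i) for i in range(11)]
print(max(abs(a - b) for a, b in zip(phi, expected)))  # tiny (rounding only)
```

Each step raises the degree by one and reproduces the pattern $c(1 - t + t^2/2 - \cdots)$ found by hand above.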

1.2.4 Local existence (non-Lipschitz case)

The following theorem is often called Peano's existence theorem. Because the vector field is not assumed to be Lipschitz, something is lost, namely, uniqueness.

Theorem 1.2.5 (Peano). Suppose that $f$ is continuous on some region
$$R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\},$$
with $a, b > 0$, and let $M = \max_R \|f(t, x)\|$. Then there exists a continuous function $\phi(t)$, differentiable on $R$, such that

i) $\phi(t_0) = x_0$,

ii) $\phi'(t) = f(t, \phi)$ on $|t - t_0| \le \alpha$, where
$$\alpha = \begin{cases} a & \text{if } M = 0, \\ \min(a, b/M) & \text{if } M > 0. \end{cases}$$

Before we can prove this result, we need a number of preliminary notations and results. The definition of equicontinuity and a statement of the Ascoli lemma are given in Section A.5. To construct a solution without the Lipschitz condition, we approximate the differential equation by another one that does satisfy the Lipschitz condition. The unique solution of such an approximate problem is an $\varepsilon$-approximate solution. It is formally defined as follows [8, p. 285].

Definition 1.2.6 ($\varepsilon$-approximate solution). A differentiable mapping $u$ of an open ball $J \subset I$ into $\mathcal{U}$ is an approximate solution of $x' = f(t, x)$ with approximation $\varepsilon$ (or an $\varepsilon$-approximate solution) if
$$\|u'(t) - f(t, u(t))\| \le \varepsilon \quad \text{for any } t \in J.$$

Lemma 1.2.7. Suppose that f(t, x) is continuous on a region

$$R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\}.$$

Then, for every positive number ε, there exists a function Fε(t, x) such that

i) Fε is continuous for |t − t0| ≤ a and all x,

ii) Fε has continuous partial derivatives of all orders with respect to x1, . . . , xn for |t−t0| ≤ a and all x,

iii) $\|F_\varepsilon(t, x)\| \le \max_R \|f(t, x)\| = M$ for $|t - t_0| \le a$ and all $x$,

iv) $\|F_\varepsilon(t, x) - f(t, x)\| \le \varepsilon$ on $R$.

See a proof in [12, p. 10-12]; note that in this proof, the property that $f$ defines a differential equation is not used. Hence Lemma 1.2.7 can be used in a more general context than that of differential equations. We now prove Theorem 1.2.5.

Proof of Theorem 1.2.5. The proof takes four steps.

1. We construct, for every positive number $\varepsilon$, a function $F_\varepsilon(t, x)$ that satisfies the requirements given in Lemma 1.2.7. Using an existence-uniqueness result in the Lipschitz case (such as Theorem 1.2.2), we construct a function $\phi_\varepsilon(t)$ such that

(P1) φε(t0) = x0,

(P2) $\phi'_\varepsilon(t) = F_\varepsilon(t, \phi_\varepsilon(t))$ on $|t - t_0| < \alpha$,

(P3) $(t, \phi_\varepsilon(t)) \in R$ on $|t - t_0| \le \alpha$.

2. The set $\mathcal{F} = \{\phi_\varepsilon : \varepsilon > 0\}$ is bounded and equicontinuous on $|t - t_0| \le \alpha$. Indeed, property (P3) of $\phi_\varepsilon$ implies that $\mathcal{F}$ is bounded on $|t - t_0| \le \alpha$ and that $\|F_\varepsilon(t, \phi_\varepsilon(t))\| \le M$ on $|t - t_0| \le \alpha$. Hence property (P2) of $\phi_\varepsilon$ implies that
$$\|\phi_\varepsilon(t_1) - \phi_\varepsilon(t_2)\| \le M|t_1 - t_2|$$
if $|t_1 - t_0| \le \alpha$ and $|t_2 - t_0| \le \alpha$ (this is a consequence of Theorem 1.1.7). Therefore, for a given positive number $\mu$, we have $\|\phi_\varepsilon(t_1) - \phi_\varepsilon(t_2)\| \le \mu$ whenever $|t_1 - t_0| \le \alpha$, $|t_2 - t_0| \le \alpha$, and $|t_1 - t_2| \le \mu/M$.

3. Using Lemma A.5, choose a sequence $\{\varepsilon_k : k = 1, 2, \ldots\}$ of positive numbers such that $\lim_{k\to\infty} \varepsilon_k = 0$ and such that the sequence $\{\phi_{\varepsilon_k} : k = 1, 2, \ldots\}$ converges uniformly on $|t - t_0| \le \alpha$ as $k \to \infty$. Then set
$$\phi(t) = \lim_{k \to \infty} \phi_{\varepsilon_k}(t) \quad \text{on } |t - t_0| \le \alpha.$$

4. Observe that
$$\phi_\varepsilon(t) = x_0 + \int_{t_0}^{t} F_\varepsilon(s, \phi_\varepsilon(s))\,ds = x_0 + \int_{t_0}^{t} f(s, \phi_\varepsilon(s))\,ds + \int_{t_0}^{t} \big( F_\varepsilon(s, \phi_\varepsilon(s)) - f(s, \phi_\varepsilon(s)) \big)\,ds,$$
and that it follows from iv) in Lemma 1.2.7 that
$$\left\| \int_{t_0}^{t} \big( F_\varepsilon(s, \phi_\varepsilon(s)) - f(s, \phi_\varepsilon(s)) \big)\,ds \right\| \le \varepsilon|t - t_0| \quad \text{on } |t - t_0| \le \alpha.$$
This is true for all $\varepsilon > 0$, so letting $\varepsilon \to 0$ along the sequence $\{\varepsilon_k\}$, we obtain
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds,$$
which completes the proof.

See [12, p. 13-14] for the outline of two other proofs of this result. A proof, by Hartman [11, p. 10-11], now follows.

Proof. Let $\delta > 0$ and let $\phi_\ell(t)$ be a $C^1$ $n$-dimensional vector-valued function on $[t_0 - \delta, t_0]$ satisfying $\phi_\ell(t_0) = x_0$, $\phi'_\ell(t_0) = f(t_0, x_0)$, and $\|\phi_\ell(t) - x_0\| \le b$, $\|\phi'_\ell(t)\| \le M$. For $0 < \varepsilon \le \delta$, define a function $\phi_\varepsilon(t)$ on $[t_0 - \delta, t_0 + \alpha]$ by putting $\phi_\varepsilon(t) = \phi_\ell(t)$ on $[t_0 - \delta, t_0]$ and
$$\phi_\varepsilon(t) = x_0 + \int_{t_0}^{t} f(s, \phi_\varepsilon(s - \varepsilon))\,ds \quad \text{on } [t_0, t_0 + \alpha]. \qquad (1.8)$$

The function $\phi_\varepsilon$ can indeed be thus defined on $[t_0 - \delta, t_0 + \alpha]$. To see this, remark first that this formula is meaningful and defines $\phi_\varepsilon(t)$ for $t_0 \le t \le t_0 + \alpha_1$, $\alpha_1 = \min(\alpha, \varepsilon)$, so that $\phi_\varepsilon(t)$ is $C^1$ on $[t_0 - \delta, t_0 + \alpha_1]$ and, on this interval,
$$\|\phi_\varepsilon(t) - x_0\| \le b, \qquad \|\phi_\varepsilon(t) - \phi_\varepsilon(s)\| \le M|t - s|. \qquad (1.9)$$

1 It then follows that (1.8) can be used to extend φε(t) as a C function over [t0 − δ, t0 + α2], where α2 = min(α, 2ε), satisfying relation (1.9). Continuing in this fashion, (1.8) serves to 0 define φε(t) over [t0, t0 +α] so that φε(t) is a C function on [t0 −δ, t0 +α], satisfying relation (1.9). 0 Since kφε(t)k ≤ M, M can be used as a Lipschitz constant for φε, giving uniform con- tinuity of φε. It follows that the family of functions, φε(t), 0 < ε ≤ δ, is equicontinuous. Thus, using Ascoli’s Lemma (Lemma A.5), there exists a sequence ε(1) > ε(2) > . . ., such that ε(n) → 0 as n → ∞ and

φ(t) = lim φε(n)(t) exists uniformly n→∞ on [t0 − δ, t0 + α]. The continuity of f implies that f(t, φε(n)(t − ε(n)) tends uniformly to f(t, φ(t)) as n → ∞; thus term-by-term integration of (1.8) where ε = ε(n) gives Z t φ(t) = x0 + f(s, φ(s))ds t0 and thus φ(t) is a solution of (1.3). An important corollary follows. Corollary 1.2.8. Let f(t, x) be continuous on an open set E and satisfy kf(t, x)k ≤ M. Let E0 be a compact subset of E. Then there exists an α = α(E,E0,M) > 0 with the property that if (t0, x0) ∈ E0, then the IVP (1.3) has a solution and every solution exists on |t − t0| ≤ α. In fact, hypotheses can be relaxed a little. Coddington and Levinson [5] define an ε- approximate solution as Definition 1.2.9. An ε-approximate solution of the differential equation (1.2), where f is continuous, on a t interval I is a function φ ∈ C on I such that i) (t, φ(t)) ∈ U for t ∈ I; ii) φ ∈ C1 on I, except possibly for a finite set of points S on I, where φ0 may have simple discontinuities (g has finite discontinuities at c if the left and right limits of g at c exist but are not equal); iii) kφ0(t) − f(t, φ(t))k ≤ ε for t ∈ I \ S. Hence it is assumed that φ has a piecewise continuous derivative on I, which is denoted 1 by φ ∈ Cp (I). Theorem 1.2.10. Let f ∈ C on the rectangle

R = {(t, x) : |t − t0| ≤ a, ‖x − x0‖ ≤ b}.

Given any ε > 0, there exists an ε-approximate solution φ of (1.3) on |t − t0| ≤ α such that φ(t0) = x0.

Proof. Let ε > 0 be given. We construct an ε-approximate solution on the interval [t0, t0 + α]; the construction works in a similar way for [t0 − α, t0]. The ε-approximate solution that we construct is a polygonal path starting at (t0, x0). Since f ∈ C on R, it is uniformly continuous on R, and therefore for the given value of ε, there exists δε > 0 such that

‖f(t, φ) − f(t̃, φ̃)‖ ≤ ε (1.10)

if (t, φ) ∈ R, (t̃, φ̃) ∈ R, |t − t̃| ≤ δε and ‖φ − φ̃‖ ≤ δε.

Now divide the interval [t0, t0 + α] into n parts t0 < t1 < · · · < tn = t0 + α, in such a way that

max_k |tk − tk−1| ≤ min(δε, δε/M). (1.11)

From (t0, x0), construct a line segment with slope f(t0, x0) intercepting the line t = t1 at (t1, x1). From the definition of α and M, it is clear that this line segment lies inside the triangular region T bounded by the line segments with slopes ±M from (t0, x0) to their intercepts with the line t = t0 + α, and the line t = t0 + α. In particular, (t1, x1) ∈ T. At the point (t1, x1), construct a line segment with slope f(t1, x1) until the line t = t2, obtaining the point (t2, x2). Continuing similarly, a polygonal path φ is constructed that meets the line t = t0 + α in a finite number of steps, and lies entirely in T. The function φ, which can be expressed as

φ(t0) = x0,
φ(t) = φ(tk−1) + f(tk−1, φ(tk−1))(t − tk−1), t ∈ [tk−1, tk], k = 1, . . . , n, (1.12)

is the ε-approximate solution that we seek. Clearly, φ ∈ C^1_p([t0, t0 + α]) and

‖φ(t) − φ(t̃)‖ ≤ M|t − t̃| for t, t̃ ∈ [t0, t0 + α]. (1.13)

If t ∈ [tk−1, tk], then (1.13) together with (1.11) implies that ‖φ(t) − φ(tk−1)‖ ≤ δε. But from (1.12) and (1.10),

‖φ′(t) − f(t, φ(t))‖ = ‖f(tk−1, φ(tk−1)) − f(t, φ(t))‖ ≤ ε.

Therefore, φ is an ε-approximate solution.
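The polygonal path (1.12) is precisely the Euler scheme with a uniform step. As a numerical illustration (an addition to the notes; the right hand side f and the data t0, x0, α, n below are hypothetical choices), here is a minimal Python sketch:

```python
import math

def euler_polygon(f, t0, x0, alpha, n):
    """Build the polygonal path (1.12) on [t0, t0 + alpha]:
    on each subinterval the slope is frozen at the left endpoint."""
    h = alpha / n
    ts, xs = [t0], [x0]
    for _ in range(n):
        t, x = ts[-1], xs[-1]
        # phi(t) = phi(t_{k-1}) + f(t_{k-1}, phi(t_{k-1})) (t - t_{k-1})
        xs.append(x + f(t, x) * h)
        ts.append(t + h)
    return ts, xs

# Hypothetical data: x' = x, x(0) = 1, whose exact solution is e^t.
ts, xs = euler_polygon(lambda t, x: x, 0.0, 1.0, 1.0, 1000)
print(abs(xs[-1] - math.e))  # shrinks as n grows
```

Refining the partition drives the defect ‖φ′(t) − f(t, φ(t))‖ to zero, which is how the ε-approximate solutions of Theorem 1.2.10 are produced.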

We can now turn to their proof of Theorem 1.2.5.

Proof. Let {εn} be a monotone decreasing sequence of positive real numbers with εn → 0 as n → ∞. By Theorem 1.2.10, for each εn, there exists an εn-approximate solution φn of (1.3) on |t − t0| ≤ α such that φn(t0) = x0. Choose one such solution φn for each εn. From (1.13), it follows that

‖φn(t) − φn(t̃)‖ ≤ M|t − t̃|. (1.14)

Applying (1.14) with t̃ = t0, it is clear that the sequence {φn} is uniformly bounded by ‖x0‖ + b, since M|t − t0| ≤ Mα ≤ b. Moreover, (1.14) implies that {φn} is an equicontinuous set. By Ascoli's lemma (Lemma A.5), there exists a subsequence {φnk}, k = 1, . . ., of {φn}, converging uniformly on [t0 − α, t0 + α] to a limit function φ, which must be continuous since each φn is continuous.

This limit function φ is a solution to (1.3) which meets the required specifications. To see this, write

φn(t) = x0 + ∫_{t0}^{t} [f(s, φn(s)) + ∆n(s)] ds, (1.15)

where ∆n(t) = φn′(t) − f(t, φn(t)) at those points where φn′ exists, and ∆n(t) = 0 otherwise. Because φn is an εn-approximate solution, ‖∆n(t)‖ ≤ εn. Since f is uniformly continuous on R, and φnk → φ uniformly on [t0 − α, t0 + α] as k → ∞, it follows that f(t, φnk(t)) → f(t, φ(t)) uniformly on [t0 − α, t0 + α] as k → ∞. Replacing n by nk in (1.15) and letting k → ∞ gives

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s)) ds. (1.16)

Clearly, φ(t0) = x0 when evaluated using (1.16), and also φ′(t) = f(t, φ(t)) since f is continuous. Thus φ as defined by (1.16) is a solution to (1.3) on |t − t0| ≤ α.

1.2.5 Some examples of existence and uniqueness

Example – Consider the IVP

x′ = 3|x|^{2/3}, x(t0) = x0. (1.17)

Here, Theorem 1.2.5 applies, since f(t, x) = 3|x|^{2/3} is continuous. However, Theorem 1.2.2 does not apply, since f(t, x) is not locally Lipschitz in x at x = 0 (that is, f is not Lipschitz on any interval containing 0). This means that we have existence of solutions to this IVP, but not necessarily uniqueness of the solution.

The fact that f is not Lipschitz on any interval containing 0 is established using the following argument. Suppose that f is Lipschitz on an interval I = (−ε, ε), with ε > 0. Then there exists L > 0 such that for all x1, x2 ∈ I,

‖f(t, x1) − f(t, x2)‖ ≤ L|x1 − x2|,

that is,

3 | |x1|^{2/3} − |x2|^{2/3} | ≤ L|x1 − x2|.

Since this has to hold true for all x1, x2 ∈ I, it must hold true in particular for x2 = 0. Thus

3|x1|^{2/3} ≤ L|x1|.

Given an ε > 0, it is possible to find Nε > 0 such that 1/n < ε for all n ≥ Nε. Let x1 = 1/n. Then for n ≥ Nε, if f is Lipschitz there must hold

3 (1/n)^{2/3} ≤ L/n.

So, for all n ≥ Nε,

n^{1/3} ≤ L/3.

This is a contradiction, since lim_{n→∞} n^{1/3} = ∞, and so f is not Lipschitz on I.

Let us consider the set

E = {t ∈ R : x(t) = 0}.

The set E can take several forms, depending on the situation.

1) E = ∅,

2) E = [a, b], (closed since x is continuous and thus reaches its bounds),

3) E = (−∞, b],

4) E = [a, +∞),

5) E = R.

Note that case 2) includes the case of a single intersection point, when a = b, giving E = {a}. Let us now consider the nature of x in these different situations. Recall that from Theorem 1.1.10, since (1.17) is defined by a scalar autonomous equation, its solutions are monotone. For simplicity, we consider here the case of monotone increasing solutions. The case of monotone decreasing solutions can be treated in a similar fashion.

1) Here, there is no intersection with the x = 0 axis. Thus it follows that x(t) > 0 if x0 > 0, and x(t) < 0 if x0 < 0.

2) In this case, x(t) < 0 if t < a, x(t) = 0 if t ∈ [a, b], and x(t) > 0 if t > b.

3) Here, x(t) = 0 if t ≤ b, and x(t) > 0 if t > b.

4) In this case, x(t) < 0 if t < a, and x(t) = 0 if t ≥ a.

5) In this last case, x(t) = 0 for all t ∈ R.

Now, depending on the sign of x, we can integrate the equation. First, if x > 0, then |x| = x and so

x′ = 3x^{2/3} ⇔ (1/3) x^{−2/3} x′ = 1 ⇔ x^{1/3} = t + k1 ⇔ x(t) = (t + k1)^3

for k1 ∈ R. If x < 0, then |x| = −x, and

x′ = 3(−x)^{2/3} ⇔ (1/3) (−x)^{−2/3} (−x′) = −1 ⇔ (−x)^{1/3} = −t + k2 ⇔ x(t) = −(−t + k2)^3

for k2 ∈ R. We can now use these computations with the different cases that were discussed earlier, depending on the value of t0 and x0. We begin with the case of t0 > 0 and x0 > 0.

1) The case E = ∅ is impossible, for all initial conditions (t0, x0). Indeed, as x0 > 0, we have x(t) = (t + k1)^3. Using the initial condition, we find that x(t0) = x0 = (t0 + k1)^3, i.e., k1 = x0^{1/3} − t0, and x(t) = (t + x0^{1/3} − t0)^3. This function vanishes at t = t0 − x0^{1/3}, so E is not empty.

2) If E = [a, b], then

x(t) = −(−t + k2)^3 if t < a, x(t) = 0 if t ∈ [a, b], x(t) = (t + k1)^3 if t > b.

Since x0 > 0, we have to be in the t > b region, so t0 > b, and (t0 + k1)^3 = x0, which implies that k1 = x0^{1/3} − t0. Thus

x(t) = −(−t + k2)^3 if t < a, x(t) = 0 if t ∈ [a, b], x(t) = (t + x0^{1/3} − t0)^3 if t > b.

Since x is continuous,

lim_{t→b, t>b} (t + x0^{1/3} − t0)^3 = 0 and lim_{t→a, t<a} −(−t + k2)^3 = 0.

This implies that b = t0 − x0^{1/3} and k2 = a. So finally,

x(t) = −(−t + a)^3 if t < a, x(t) = 0 if t ∈ [a, t0 − x0^{1/3}] (with a ≤ t0 − x0^{1/3}), x(t) = (t + x0^{1/3} − t0)^3 if t > t0 − x0^{1/3}.

Thus, choosing a ≤ t0 − x0^{1/3}, we have solutions of the form shown in Figure 1.4. Indeed, any a satisfying this property yields a solution.

3) The case [a, +∞) is impossible. Indeed, there does not exist a solution through (t0, x0) such that x(t) = 0 for all t ∈ [a, +∞); since we are in the case of monotone increasing functions, if x0 > 0 then x(t) ≥ x0 for all t ≥ t0.

4) E = R is also impossible, for the same reason.

Figure 1.4: Case t0, x0 > 0, subcase 2, in the resolution of (1.17).

5) For the case E = (−∞, b], we have

x(t) = 0 if t ∈ (−∞, b], and x(t) = (t + k1)^3 if t > b.

Since x(t0) = x0, k1 = x0^{1/3} − t0, and since x is continuous, b = −k1 = t0 − x0^{1/3}. So,

x(t) = 0 if t ∈ (−∞, t0 − x0^{1/3}], and x(t) = (t + x0^{1/3} − t0)^3 if t > t0 − x0^{1/3}.

The other cases are left as an exercise. 
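To make the non-uniqueness concrete, the following sketch (an addition to the notes; the finite-difference step h is an arbitrary choice) checks numerically that both φ(t) = 0 and φ(t) = t^3 solve x′ = 3|x|^{2/3} through (0, 0):

```python
def f(t, x):
    return 3.0 * abs(x) ** (2.0 / 3.0)

def residual(phi, t, h=1e-6):
    """|phi'(t) - f(t, phi(t))| with a centered finite difference."""
    dphi = (phi(t + h) - phi(t - h)) / (2.0 * h)
    return abs(dphi - f(t, phi(t)))

# Two distinct solutions through (0, 0):
sol_zero = lambda t: 0.0
sol_cubic = lambda t: t ** 3

for t in [0.5, 1.0, 2.0]:
    print(residual(sol_zero, t), residual(sol_cubic, t))
# both residuals are tiny at every sampled t, yet the solutions differ
```

The same check applied to the whole family of case 2) solutions (zero on [a, t0 − x0^{1/3}], cubic afterwards) shows each of them solves the IVP as well.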

Example – Consider the IVP

x′ = 2tx², x(0) = 0. (1.18)

Here, we have existence and uniqueness of the solution to (1.18). Indeed, f(t, x) = 2tx² is continuous, and locally Lipschitz in x, on R × R. 

1.3 Continuation of solutions

The results we have seen so far deal with the local existence (and uniqueness) of solutions to an IVP, in the sense that solutions are shown to exist in a neighborhood of the initial data. The continuation of solutions consists in studying criteria which allow one to extend solutions to possibly larger intervals. Consider the IVP

x′ = f(t, x), x(t0) = x0, (1.19)

with f continuous on a domain U of the (t, x) space, and the initial point (t0, x0) ∈ U.

Lemma 1.3.1. Let the function f(t, x) be continuous in an open set U in (t, x)-space, and assume that a function φ(t) satisfies the condition φ0(t) = f(t, φ(t)) and (t, φ(t)) ∈ U, in an open interval I = {t1 < t < t2}. Under this assumption, if limj→∞(τj, φ(τj)) = (t1, η) ∈ U for some sequence {τj : j = 1, 2,...} of points in the interval I, then limτ→t1 (τ, φ(τ)) = (t1, η). Similarly, if limj→∞(τj, φ(τj)) = (t2, η) ∈ U for some sequence {τj : j = 1, 2,...} of points in the interval I, then limτ→t2 (τ, φ(τ)) = (t2, η).

Proof. Let W be an open neighborhood of (t1, η). We show that (t, φ(t)) ∈ W on an interval t1 < t < τ(W), for some τ(W) determined by W. Assume that the closure W̄ ⊂ U, and that ‖f(t, x)‖ ≤ M in W̄ for some positive number M. For every positive integer j and every positive number ε, consider the rectangular region

Rj(ε) = {(t, x) : |t − τj| ≤ ε, ‖x − φ(τj)‖ ≤ Mε}.

Then there exist an ε > 0 and a j such that (t1, η) ∈ Rj(ε) ⊂ W and τj − ε ≤ t1. From Theorem 1.1.6 applied to the solution φ of the IVP x′ = f(t, x), x(τj) = φ(τj), we obtain that (τ, φ(τ)) ∈ Rj(ε) ⊂ W on the interval t1 < τ ≤ τj. Since W is an arbitrary open neighborhood of (t1, η), we conclude that lim_{τ→t1} (τ, φ(τ)) = (t1, η).

From the previous result, we can derive a result concerning the maximal interval over which a solution can be extended. To emphasize the fact that the solution φ of an ODE exists on some interval I, we denote it (φ, I). We need the notion of extension of a solution. It is defined in the classical manner (see Figure 1.5).

Definition 1.3.2 (Extension). Let (φ, I) and (φ̃, Ĩ) be two solutions of the same ODE. We say that (φ̃, Ĩ) is an extension of (φ, I) if, and only if,

I ⊂ Ĩ and φ̃|I = φ,

where ·|I denotes the restriction to I.

Theorem 1.3.3. Let f(t, x) be continuous in an open set U in (t, x)-space, and let φ(t) be a function satisfying the condition φ′(t) = f(t, φ(t)) and (t, φ(t)) ∈ U, in an open interval I = {t1 < t < t2}. If the following two conditions are satisfied:

i) φ(t) cannot be extended to the left of t1 (or, respectively, to the right of t2),

ii) limj→∞(τj, φ(τj)) = (t1, η) (or, respectively, (t2, η)) exists for some sequence {τj : j = 1, 2, . . .} of points in the interval I, then the limit point (t1, η) (or, respectively, (t2, η)) must be on the boundary of U.

Figure 1.5: The extension φ̃ on the interval Ĩ of the solution φ (defined on the interval I).

Proof. Suppose that the hypotheses of the theorem are satisfied, and that (t1, η) ∈ U (respectively, (t2, η) ∈ U). Then, from Lemma 1.3.1, it follows that

lim_{τ→t1} (τ, φ(τ)) = (t1, η)

(or, respectively, lim_{τ→t2} (τ, φ(τ)) = (t2, η)). Thus we can apply Theorem 1.2.5 (Peano's theorem) to the IVP

x′ = f(t, x), x(t1) = η

(or, respectively, x′ = f(t, x), x(t2) = η). This implies that the solution φ can be extended to the left of t1 (respectively, to the right of t2), since Theorem 1.2.5 implies existence in a neighborhood of t1. This is a contradiction.

A particularly important consequence of the previous theorem is the following corollary.

Corollary 1.3.4. Assume that f(t, x) is continuous for t1 < t < t2 and all x ∈ R^n. Also, assume that there exists a function φ(t) satisfying the following conditions:

a) φ and φ′ are continuous in a subinterval I of the interval t1 < t < t2,

b) φ′(t) = f(t, φ(t)) in I.

Then, either

i) φ(t) can be extended to the entire interval t1 < t < t2 as a solution of the differential equation x′ = f(t, x), or

ii) lim_{t→τ} ‖φ(t)‖ = ∞ for some τ in the interval t1 < t < t2.

1.3.1 Maximal interval of existence

Another way of formulating these results is with the notion of maximal intervals of existence. Consider the differential equation

x′ = f(t, x). (1.20)

Let x = x(t) be a solution of (1.20) on an interval I.

Definition 1.3.5 (Right maximal interval of existence). The interval I is a right maximal interval of existence for x if there does not exist an extension of x(t) over an interval I1 so that x remains a solution of (1.20), with I ⊂ I1 (and I and I1 having different right endpoints). A left maximal interval of existence is defined in a similar way.

Definition 1.3.6 (Maximal interval of existence). An interval which is both a left and a right maximal interval of existence is called a maximal interval of existence.

Theorem 1.3.7. Let f(t, x) be continuous on an open set U and φ(t) be a solution of (1.20) on some interval. Then φ(t) can be extended (as a solution) over a maximal interval of existence (ω−, ω+). Also, if (ω−, ω+) is a maximal interval of existence, then φ(t) tends to the boundary ∂U of U as t → ω− and t → ω+.

Remark – The extension need not be unique, and ω± depends on the extension. Also, to say, for example, that φ → ∂U as t → ω+ is interpreted to mean that either ω+ = ∞, or that ω+ < ∞ and, if U^0 is any compact subset of U, then (t, φ(t)) ∉ U^0 when t is near ω+. ◦

Two interesting corollaries, from [11].

Corollary 1.3.8. Let f(t, x) be continuous on a strip t0 ≤ t ≤ t0 + a (a < ∞), x ∈ R^n arbitrary. Let φ be a solution of (1.3) on a right maximal interval J. Then either

i) J = [t0, t0 + a],

ii) or J = [t0, δ), δ ≤ t0 + a, and ‖φ(t)‖ → ∞ as t → δ.

Corollary 1.3.9. Let f(t, x) be continuous on the closure Ū of an open (t, x)-set U, and let (1.3) possess a solution φ on a maximal right interval J. Then either

i) J = [t0, ∞),

ii) or J = [t0, δ), with δ < ∞ and (δ, φ(δ)) ∈ ∂U,

iii) or J = [t0, δ) with δ < ∞ and ‖φ(t)‖ → ∞ as t → δ.

1.3.2 Maximal and global solutions Linked to the notion of maximal intervals of existence of solutions is the notion of maximal and global solutions.

Definition 1.3.10 (Maximal solution). Let I1 ⊂ R and I2 ⊂ R be two intervals such that I1 ⊂ I2. A solution (φ, I1) is maximal in I2 if φ has no extension (φ̃, Ĩ), solution of the ODE, such that I1 ⊊ Ĩ ⊂ I2.

Definition 1.3.11 (Global solution). A solution (φ, I1) is global on I2 if φ admits an extension φ̃ defined on the whole interval I2.


Figure 1.6: φ1 is a global and maximal solution on I; φ2 is a maximal solution on I, but it is not global on I.

Every global solution on a given interval I is maximal on that same interval. The converse is false.

Example – Consider the equation x′ = −2tx² on R. If x ≠ 0, x^{−2} x′ = −2t, which implies that x(t) = 1/(t² − c), with c ∈ R. Depending on c, there are several cases.

• if c < 0, then x(t) = 1/(t² − c) is a global solution on R,

• if c > 0, the solutions are defined on (−∞, −√c), (−√c, √c) and (√c, ∞). These solutions are maximal solutions on R, but are not global solutions.

• if c = 0, then the maximal non global solutions on R are defined on (−∞, 0) and (0, ∞).

Another solution is x ≡ 0, which is a global solution on R. 

Lemma 1.3.12. Every solution φ of the differential equation x′ = f(t, x) is contained in a maximal solution φ̃.
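The example of x′ = −2tx² above can be checked numerically. The sketch below (an addition to the notes; the value c = 1 and the sampled points are arbitrary choices) verifies that x(t) = 1/(t² − c) satisfies the equation away from ±√c, and exhibits the blow-up that makes these solutions maximal but not global:

```python
import math

c = 1.0  # hypothetical choice, c > 0
x = lambda t: 1.0 / (t * t - c)

def residual(t, h=1e-6):
    """|x'(t) - (-2 t x(t)^2)| via a centered finite difference."""
    dx = (x(t + h) - x(t - h)) / (2.0 * h)
    return abs(dx + 2.0 * t * x(t) ** 2)

for t in [-2.0, 0.5, 2.0]:          # one point in each maximal interval
    print(residual(t))               # all residuals are tiny

print(abs(x(math.sqrt(c) + 1e-8)))  # huge: blow-up near t = sqrt(c)
```

The three intervals (−∞, −√c), (−√c, √c) and (√c, ∞) each carry a solution that cannot be continued across ±√c, in accordance with Corollary 1.3.4 ii).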

The following theorem extends the uniqueness property to an interval of existence of the solution.

Theorem 1.3.13. Let φ1, φ2 : I → R^n be two solutions of the equation x′ = f(t, x), with f locally Lipschitz in x on U. If φ1 and φ2 coincide at a point t0 ∈ I, then φ1 = φ2 on I.

Proof. Under the assumptions of the theorem, φ1(t0) = φ2(t0). Suppose that there exists a t1, t1 ≠ t0, such that φ1(t1) ≠ φ2(t1). For simplicity, let us assume that t1 > t0. By the local uniqueness of the solution, it follows from φ1(t0) = φ2(t0) that there exists a neighborhood N of t0 such that φ1(t) = φ2(t) for all t ∈ N. Let

E = {t ∈ [t0, t1] : φ1(t) ≠ φ2(t)}.

Since t1 ∈ E, E ≠ ∅. Let α = inf(E); we have α ∈ (t0, t1], and for all t ∈ [t0, α), φ1(t) = φ2(t). By continuity of φ1 and φ2, we thus have φ1(α) = φ2(α). This implies that there exists a neighborhood W of α on which φ1 = φ2. This is a contradiction, since, α being the infimum of E, there are points t > α of E arbitrarily close to α with φ1(t) ≠ φ2(t). Hence there exists no such t1, and φ1 = φ2 on I.

Corollary 1.3.14 (Global uniqueness). Let f(t, x) be locally Lipschitz in x on U. Then through any point (t0, x0) ∈ U there passes a unique maximal solution φ : I → R^n. If there exists a global solution on I, then it is unique.

1.4 Continuous dependence on initial data, on parameters

Let φ be a solution of (1.3). To emphasize the fact that this solution depends on the initial condition (t0, x0), we denote it φt0,x0. Let η be a parameter of (1.3). When we study the dependence of φt0,x0 on η, we denote the solution as φt0,x0,η. We suppose that ‖f(t, x)‖ ≤ M and |∂f(t, x)/∂xi| ≤ K for i = 1, . . . , n, for (t, x) ∈ U, with U ⊂ R × R^n. Note that these conditions are automatically satisfied on a closed bounded region of the form R = {(t, x) : |t − t0| ≤ a, ‖x − x0‖ ≤ b}, where a, b > 0. Our objective here is to characterize the nature of the dependence of the solution on the initial time t0 and the initial data x0.

Theorem 1.4.1. Suppose that f and ∂f/∂x are continuous and bounded in a given region

U. Let φt0,x0 be a solution of (1.3) passing through (t0, x0) and ψt̃0,x̃0 be a solution of (1.3) passing through (t̃0, x̃0). Suppose that φ and ψ exist on some interval I. Then, for each ε > 0, there exists δ > 0 such that if |t − t̃| < δ and ‖x0 − x̃0‖ < δ, then ‖φ(t) − ψ(t̃)‖ < ε, for t, t̃ ∈ I.

Proof. The proof is from [2, p. 135-136]. Since φ is the solution of (1.3) through the point (t0, x0), we have, for all t ∈ I,

φ(t) = x0 + ∫_{t0}^{t} f(s, φ(s)) ds. (1.21)

As ψ is the solution of (1.3) through the point (t̃0, x̃0), we have, for all t ∈ I,

ψ(t) = x̃0 + ∫_{t̃0}^{t} f(s, ψ(s)) ds. (1.22)

Since

∫_{t0}^{t} f(s, φ(s)) ds = ∫_{t0}^{t̃0} f(s, φ(s)) ds + ∫_{t̃0}^{t} f(s, φ(s)) ds,

subtracting (1.22) from (1.21) gives

φ(t) − ψ(t) = x0 − x̃0 + ∫_{t0}^{t̃0} f(s, φ(s)) ds + ∫_{t̃0}^{t} [f(s, φ(s)) − f(s, ψ(s))] ds

and therefore

‖φ(t) − ψ(t)‖ ≤ ‖x0 − x̃0‖ + ‖ ∫_{t0}^{t̃0} f(s, φ(s)) ds ‖ + ‖ ∫_{t̃0}^{t} [f(s, φ(s)) − f(s, ψ(s))] ds ‖.

Using the boundedness assumptions on f and ∂f/∂x to evaluate the right hand side of the latter inequality, we obtain

‖φ(t) − ψ(t)‖ ≤ ‖x0 − x̃0‖ + M|t̃0 − t0| + K ∫_{t̃0}^{t} ‖φ(s) − ψ(s)‖ ds.

If |t0 − t̃0| < δ and ‖x0 − x̃0‖ < δ, then we have

‖φ(t) − ψ(t)‖ ≤ δ + Mδ + K ∫_{t̃0}^{t} ‖φ(s) − ψ(s)‖ ds. (1.23)

Applying Gronwall’s inequality (Appendix A.7) to (1.23) gives

‖φ(t) − ψ(t)‖ ≤ δ(1 + M)e^{K|t−t̃0|} ≤ δ(1 + M)e^{K(τ2−τ1)},

using the fact that |t − t̃0| < τ2 − τ1, if we denote I = (τ1, τ2). Since

‖ψ(t) − ψ(t̃)‖ ≤ ‖ ∫_{t̃}^{t} f(s, ψ(s)) ds ‖ ≤ M|t − t̃| ≤ Mδ

if |t − t̃| < δ, we have

‖φ(t) − ψ(t̃)‖ ≤ ‖φ(t) − ψ(t)‖ + ‖ψ(t) − ψ(t̃)‖ ≤ δ(1 + M)e^{K(τ2−τ1)} + δM.

Now, given ε > 0, we need only choose δ < ε/[M + (1 + M)e^{K(τ2−τ1)}] to obtain the desired inequality, completing the proof.
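The Gronwall estimate at the heart of the proof can be observed numerically. The following sketch (an addition to the notes; the equation x′ = sin x, the constants and the RK4 integrator are hypothetical choices) integrates two nearby initial conditions and checks that their distance stays below δ e^{K|t − t0|}, with K = 1 a Lipschitz constant for sin:

```python
import math

def rk4(f, t0, x0, t1, n):
    """Classical fourth-order Runge-Kutta on [t0, t1] with n steps."""
    h = (t1 - t0) / n
    t, x, traj = t0, x0, [x0]
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        traj.append(x)
    return traj

f = lambda t, x: math.sin(x)   # |df/dx| <= K = 1
delta, K, n = 1e-3, 1.0, 1000
phi = rk4(f, 0.0, 1.0, 5.0, n)
psi = rk4(f, 0.0, 1.0 + delta, 5.0, n)
for i in range(n + 1):
    t = 5.0 * i / n
    # the Gronwall bound, with a small slack for floating-point error
    assert abs(phi[i] - psi[i]) <= delta * math.exp(K * t) * 1.001
print("Gronwall bound holds on [0, 5]")
```

For this particular equation the solutions actually converge toward each other, so the exponential bound holds with much room to spare; the bound itself is what Theorem 1.4.1 guarantees in general.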

What we have shown is that the solution passing through the point (t0, x0) is a continuous function of the triple (t, t0, x0). We now consider the case where the parameters also vary, comparing solutions of two different but "close" equations.

Theorem 1.4.2. Let f, g be defined in a domain U and satisfy the hypotheses of Theorem 1.4.1. Let φ and ψ be solutions of x′ = f(t, x) and x′ = g(t, x), respectively, such that φ(t0) = x0, ψ(t0) = x̂0, existing on a common interval α < t < β. Suppose that ‖f(t, x) − g(t, x)‖ ≤ ε for (t, x) ∈ U. Then the solutions φ and ψ satisfy

‖φ(t) − ψ(t)‖ ≤ ‖x0 − x̂0‖ e^{K|t−t0|} + ε(β − α) e^{K|t−t0|}

for all t, α < t < β.

The following theorem [6, p. 58] is less restrictive in its hypotheses than the previous one, requiring only uniqueness of the solution of the IVP.

Theorem 1.4.3. Let U be a domain of (t, x) space, Iµ the domain |µ − µ0| < c, with c > 0, and Uµ the set of all (t, x, µ) satisfying (t, x) ∈ U, µ ∈ Iµ. Suppose f is a continuous function on Uµ, bounded by a constant M there. For µ = µ0, let

x′ = f(t, x, µ), x(t0) = x0 (1.24)

have a unique solution φ0 on the interval [a, b], where t0 ∈ [a, b]. Then there exists a δ > 0 such that, for any fixed µ with |µ − µ0| < δ, every solution φµ of (1.24) exists over [a, b], and as µ → µ0, φµ → φ0 uniformly over [a, b].

Proof. We begin by considering t0 ∈ (a, b). First, choose an α > 0 small enough that the region R = {|t − t0| ≤ α, ‖x − x0‖ ≤ Mα} is in U; note that R is a slight modification of the usual security domain. All solutions of (1.24) with µ ∈ Iµ exist over [t0 − α, t0 + α] and remain in R. Let φµ denote a solution. Then the set of functions {φµ}, µ ∈ Iµ, is a uniformly bounded and equicontinuous set on |t − t0| ≤ α. This follows from the integral equation

φµ(t) = x0 + ∫_{t0}^{t} f(s, φµ(s), µ) ds (|t − t0| ≤ α) (1.25)

and the inequality ‖f‖ ≤ M. Suppose that for some t̃ ∈ [t0 − α, t0 + α], φµ(t̃) does not tend to φ0(t̃). Then there exists a sequence {µk}, k = 1, 2, . . ., for which µk → µ0, and corresponding solutions φµk, such that φµk converges uniformly over [t0 − α, t0 + α] as k → ∞ to a limit function ψ, with ψ(t̃) ≠ φ0(t̃). From the fact that f ∈ C on Uµ, that ψ ∈ C on [t0 − α, t0 + α], and that φµk converges uniformly to ψ, (1.25) for the solutions φµk yields

ψ(t) = x0 + ∫_{t0}^{t} f(s, ψ(s), µ0) ds (|t − t0| ≤ α).

Thus ψ is a solution of (1.24) with µ = µ0. By the uniqueness hypothesis, it follows that ψ(t) = φ0(t) on |t − t0| ≤ α. Thus ψ(t̃) = φ0(t̃), a contradiction. Thus all solutions φµ on |t − t0| ≤ α tend to φ0 as µ → µ0. Because of the equicontinuity, the convergence is uniform.

Let us now prove that the result holds over [a, b]. For this, let us consider the interval [t0, b]. Let τ ∈ [t0, b), and suppose that the result is valid for every small h > 0 over [t0, τ − h] but not over [t0, τ + h]. It is clear that τ ≥ t0 + α. By the above assumption, for any small ε > 0, there exists a δε > 0 such that

‖φµ(τ − ε) − φ0(τ − ε)‖ < ε (1.26)

for |µ − µ0| < δε. Let H ⊂ U be defined as the region

H = {(t, x) : |t − τ| ≤ γ, ‖x − φ0(τ − γ)‖ ≤ γ + M|t − τ + γ|},

with γ small enough that H ⊂ U. Any solution of x′ = f(t, x, µ) starting on t = τ − γ with initial value ξ0, ‖ξ0 − φ0(τ − γ)‖ ≤ γ, will remain in H as t increases. Thus all solutions can be continued to τ + γ. By choosing ε = γ in (1.26), it follows that for |µ − µ0| < δε, the solutions φµ can all be continued to τ + ε. Thus over [t0, τ + ε] these solutions are in U, so that the argument that φµ → φ0 which has been given for |t − t0| ≤ α also applies over [t0, τ + ε]. Thus the assumption about the existence of τ < b is false. The case τ = b is treated in similar fashion on τ − γ ≤ t ≤ τ. A similar argument applies to the left of t0, and therefore the result is valid over [a, b].

Definition 1.4.4 (Well-posedness). A problem is said to be well-posed if solutions exist, are unique, and depend continuously on the initial conditions.

1.5 Generality of first order systems

Consider an nth order differential equation in normal form

x^(n) = f(t, x, x′, . . . , x^(n−1)). (1.27)

This equation can be reduced to a system of n first order ordinary differential equations, by proceeding as follows. Let y0 = x, y1 = x′, y2 = x″, . . . , yn−1 = x^(n−1). Then (1.27) is equivalent to

y′ = F(t, y) (1.28)

with y = (y0, y1, . . . , yn−1)^T and

F(t, y) = (y1, y2, . . . , yn−1, f(t, y0, . . . , yn−1))^T.

Similarly, the IVP associated to (1.27) is given by

x^(n) = f(t, x, x′, . . . , x^(n−1)), x(t0) = x0, x′(t0) = x1, . . . , x^(n−1)(t0) = xn−1, (1.29)

is equivalent to the IVP

y′ = F(t, y), y(t0) = y0 = (x0, . . . , xn−1)^T. (1.30)

As a consequence, all results in this chapter are true for equations of order higher than 1.

Example – Consider the second order IVP

x″ = −2x′ + 4x − 3, x(0) = 2, x′(0) = 1.

To transform it into a system of first-order differential equations, we let y = x′. Substituting (where possible) y for x′ in the equation gives

y′ = −2y + 4x − 3.

The initial condition becomes x(0) = 2, y(0) = 1. So finally, the following IVP is equivalent to the original one:

x′ = y
y′ = 4x − 2y − 3
x(0) = 2, y(0) = 1.

Note that the linearity of the initial problem is preserved. 
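The reduction above mechanizes directly. The sketch below (an addition to the notes; the RK4 integrator and step count are arbitrary choices) integrates the equivalent first order system and compares with the closed-form solution of the original second order IVP, whose characteristic roots are −1 ± √5 and whose particular solution is 3/4:

```python
import math

def rk4_system(F, t0, X0, t1, n):
    """RK4 for a first-order system X' = F(t, X), X a list."""
    h = (t1 - t0) / n
    t, X = t0, list(X0)
    for _ in range(n):
        k1 = F(t, X)
        k2 = F(t + h/2, [x + h*k/2 for x, k in zip(X, k1)])
        k3 = F(t + h/2, [x + h*k/2 for x, k in zip(X, k2)])
        k4 = F(t + h, [x + h*k for x, k in zip(X, k3)])
        X = [x + h*(a + 2*b + 2*c + d)/6
             for x, a, b, c, d in zip(X, k1, k2, k3, k4)]
        t += h
    return X

# x'' = -2x' + 4x - 3 as the system x' = y, y' = 4x - 2y - 3
F = lambda t, X: [X[1], 4*X[0] - 2*X[1] - 3]
x1, y1 = rk4_system(F, 0.0, [2.0, 1.0], 1.0, 1000)

# Closed form: roots of r^2 + 2r - 4 = 0 are -1 +/- sqrt(5); x_p = 3/4.
r1, r2 = -1 + math.sqrt(5), -1 - math.sqrt(5)
A = (1 - (5.0/4.0)*r2) / (r1 - r2)   # from x(0) = 2, x'(0) = 1
B = 5.0/4.0 - A
exact = A*math.exp(r1) + B*math.exp(r2) + 0.75
print(abs(x1 - exact))  # small: the reduction preserves the solution
```

The constants A, B come from imposing x(0) = A + B + 3/4 = 2 and x′(0) = Ar1 + Br2 = 1 on the general solution x(t) = Ae^{r1 t} + Be^{r2 t} + 3/4.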

Example – The differential equation

x^(n)(t) = an−1(t) x^(n−1)(t) + · · · + a1(t) x′(t) + a0(t) x(t) + b(t)

is an nth order nonhomogeneous linear differential equation. Together with the initial condition

x^(n−1)(t0) = x0^(n−1), . . . , x′(t0) = x0′, x(t0) = x0,

where x0, x0′, . . . , x0^(n−1) ∈ R, it forms an IVP. We can transform it to a system of linear first order equations by setting

y0 = x
y1 = x′
⋮
yn−1 = x^(n−1).

The nth order linear equation is then equivalent to the following system of n first order linear equations

y0′ = y1
y1′ = y2
⋮
yn−2′ = yn−1
yn−1′ = an−1(t)yn−1 + an−2(t)yn−2 + · · · + a1(t)y1 + a0(t)y0 + b(t)

under the initial conditions

yn−1(t0) = x0^(n−1), . . . , y1(t0) = x0′, y0(t0) = x0.



1.6 Generality of autonomous systems

A nonautonomous system x′(t) = f(t, x(t)) can be transformed into an autonomous system of equations by setting an auxiliary variable, say y, equal to t, giving

x′ = f(y, x)
y′ = 1.

However, this transformation does not always make the system any easier to study.
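In code, this trick simply appends the clock to the state vector. A minimal sketch (an addition to the notes; the right hand side f below is a hypothetical example):

```python
def autonomize(f):
    """Turn a nonautonomous RHS x' = f(t, x) into an autonomous one:
    state z = (x, y) with y = t, so z' = (f(y, x), 1)."""
    return lambda z: (f(z[1], z[0]), 1.0)

# Hypothetical example: x' = t*x becomes z' = (y*x, 1)
g = autonomize(lambda t, x: t * x)
print(g((2.0, 3.0)))  # (6.0, 1.0): slope of x at x = 2, y = t = 3, and y' = 1
```

Any integrator for autonomous systems can then be applied to g, at the cost of one extra dimension.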

1.7 Suggested reading, Further problems

Most of these results are treated one way or another in Coddington and Levinson [6] (first edition published in 1955), and the current text, as many others, does little but paraphrase them.

We have not seen here any results specific to complex valued differential equations. As complex numbers are two-dimensional real vectors, the results carry through to the complex case by simply assuming that if, in (1.2), we consider an n-dimensional complex vector, then this is equivalent to a 2n-dimensional real problem. Furthermore, if f(t, x) is analytic in t and x, then analytic solutions can be constructed. See Section I-4 in [12], ..., for example.

Chapter 2

Linear systems

Let I be an interval of R, E a normed vector space over a field K (E = K^n, with K = R or C), and L(E) the space of continuous linear maps from E to E. Let ‖·‖ be a norm on E, and |||·||| be the induced supremum norm on L(E) (see Appendix A.1). Consider a map A : I → L(E) and a map B : I → E. A linear system of first order equations is defined by

x′(t) = A(t)x(t) + B(t) (2.1)

where the unknown x is a map on I, taking values in E, defined and differentiable on a subinterval of I. We restrict ourselves to the finite dimensional case (E = K^n). Hence we consider A ∈ Mn(K), the n × n matrices over the field K, and B ∈ K^n. We suppose that A and B have continuous entries. In most of what follows, we assume K = R.

The name linear for system (2.1) is an abuse of language. System (2.1) should be called an affine system, with associated linear system

x′(t) = A(t)x(t). (2.2)

Another way to distinguish systems (2.1) and (2.2) is to refer to the former as a nonhomogeneous linear system and the latter as a homogeneous linear system. In order to lighten the language, since there will be other qualificatives added to both (2.1) and (2.2), we use in this chapter the names affine system for (2.1) and linear system for (2.2). The exception to this naming convention is that we refer to (2.1) as a linear system if we consider the generic properties of (2.1), with (2.2) as a particular case, as in this chapter's title or in the next section, for example.

2.1 Existence and uniqueness of solutions

Theorem 2.1.1. Let A and B be defined and continuous on I ∋ t0. Then, for all x0 ∈ E, there exists a unique solution φt(x0) of (2.1) through (t0, x0), defined on the interval I.


Proof. Let k(t) = |||A(t)||| = sup_{‖x‖≤1} ‖A(t)x‖. Then for all t ∈ I and all x1, x2 ∈ E,

‖f(t, x1) − f(t, x2)‖ = ‖A(t)(x1 − x2)‖ ≤ |||A(t)||| ‖x1 − x2‖ = k(t) ‖x1 − x2‖,

where the inequality ‖A(t)(x1 − x2)‖ ≤ |||A(t)||| ‖x1 − x2‖ results from the nature of the norm |||·||| (see Appendix A.1). Furthermore, k is continuous on I. Therefore the conditions of Theorem 1.2.2 hold, leading to existence and uniqueness on the interval I.

With linear systems, it is possible to extend solutions easily, as is shown by the next theorem.

Theorem 2.1.2. Suppose that the entries of A(t) and the entries of B(t) are continuous on an open interval I. Then every solution of (2.1) which is defined on a subinterval J of the interval I can be extended uniquely to the entire interval I as a solution of (2.1).

Proof. Suppose that I = (t1, t2), and that a solution φ of (2.1) is defined on J = (τ1, τ2), with J ⊊ I. Then

‖φ(t)‖ ≤ ‖φ(t0)‖ + ‖ ∫_{t0}^{t} [A(s)φ(s) + B(s)] ds ‖

for all t ∈ J, where t0 ∈ J. Let

K = ‖φ(t0)‖ + (τ2 − τ1) max_{τ1≤t≤τ2} ‖B(t)‖
L = max_{τ1≤t≤τ2} |||A(t)|||.

Then, for t0, t ∈ J,

‖φ(t)‖ ≤ K + L ‖ ∫_{t0}^{t} φ(s) ds ‖ ≤ K + L ∫_{t0}^{t} ‖φ(s)‖ ds.

Thus, using Gronwall's Lemma (Lemma A.7), the following estimate holds in J:

‖φ(t)‖ ≤ K e^{L|t−t0|} ≤ K e^{L(τ2−τ1)} < ∞.

This implies that case ii) in Corollary 1.3.4 is ruled out, leaving only the possibility for φ to be extendable over I, since the vector field in (2.1) is Lipschitz.

2.2 Linear systems

We begin our study of linear systems of ordinary differential equations by considering homogeneous systems of the form (2.2) (linear systems), with x ∈ R^n and A ∈ Mn(R), the set of n × n matrices over the field R, A having continuous entries on an interval I.

2.2.1 The vector space of solutions

Theorem 2.2.1 (Superposition principle). Let S^0 be the set of solutions of (2.2) that are defined on some interval I ⊂ R. Let φ1, φ2 ∈ S^0, and λ1, λ2 ∈ R. Then λ1φ1 + λ2φ2 ∈ S^0.

0 Proof. Let φ1, φ2 ∈ S be two solutions of (2.2), λ1, λ2 ∈ R. Then for all t ∈ I,

0 φ1 = A(t)φ1 0 φ2 = A(t)φ2, from which it comes that d (λ φ + λ φ ) = A(t)[λ φ + λ φ ], dt 1 1 2 2 1 1 2 2

0 implying that λ1φ1 + λ2φ2 ∈ S . Thus the linear combination of any two solutions of (2.2) is in S0. This is a hint that S0 must be a vector space of n on K. To show this, we need to find a of S0. We proceed in the classical manner, with the notable difference from classical that the basis is here composed of time-dependent functions.

Definition 2.2.2 (Fundamental set of solutions). A set of $n$ solutions of the linear differential equation (2.2), all defined on the same open interval $I$, is called a fundamental set of solutions on $I$ if the solutions are linearly independent functions on $I$.

Proposition 2.2.3. If $A(t)$ is defined and continuous on the interval $I$, then the system (2.2) has a fundamental set of solutions defined on $I$.

Proof. Let $t_0\in I$, and let $e_1,\dots,e_n$ denote the canonical basis of $K^n$. From Theorem 2.1.1, for each $i=1,\dots,n$ there exists a unique solution $\varphi_i$ such that $\varphi_i(t_0) = e_i$. Furthermore, from Theorem 2.1.1, each function $\varphi_i$ is defined on the interval $I$. Assume that $\{\varphi_i\}$, $i=1,\dots,n$, is linearly dependent. Then there exist $\alpha_i\in K$, $i=1,\dots,n$, not all zero, such that $\sum_{i=1}^n\alpha_i\varphi_i(t) = 0$ for all $t$. In particular, this is true for $t=t_0$, and thus $\sum_{i=1}^n\alpha_i\varphi_i(t_0) = \sum_{i=1}^n\alpha_ie_i = 0$, which implies that the canonical basis of $K^n$ is linearly dependent. Hence a contradiction, and the $\varphi_i$ are linearly independent.

Proposition 2.2.4. If $\mathcal{F}$ is a fundamental set of solutions of the linear system (2.2) on the open interval $I$, then every solution defined on $I$ can be expressed as a linear combination of the elements of $\mathcal{F}$.

Let $t_0\in I$; we consider the map
\[
\Phi_{t_0}: S^0 \to K^n,\qquad x \mapsto \Phi_{t_0}(x) = x(t_0).
\]

Lemma 2.2.5. $\Phi_{t_0}$ is a linear isomorphism.

Proof. $\Phi_{t_0}$ is bijective. Indeed, let $v\in K^n$; from Theorem 2.1.1, there exists a unique solution passing through $(t_0,v)$, i.e.,
\[
\forall v\in K^n,\ \exists!\,x\in S^0,\ x(t_0) = v,
\]
and $\Phi_{t_0}(x) = v$, so $\Phi_{t_0}$ is surjective. That $\Phi_{t_0}$ is injective follows from uniqueness of solutions to an ODE. Furthermore, $\Phi_{t_0}(\lambda_1x_1+\lambda_2x_2) = \lambda_1\Phi_{t_0}(x_1)+\lambda_2\Phi_{t_0}(x_2)$, so $\Phi_{t_0}$ is linear. Therefore $\dim S^0 = \dim K^n = n$.

2.2.2 Fundamental matrix solution

Definition 2.2.6. An $n\times n$ matrix function $t\mapsto\Phi(t)$, defined on an open interval $I$, is called a matrix solution of the homogeneous linear system (2.2) if each of its columns is a (vector) solution. A matrix solution $\Phi$ is called a fundamental matrix solution if its columns form a fundamental set of solutions. If, in addition, $\Phi(t_0) = I$, then $\Phi$ is called the principal fundamental matrix solution at $t_0$.

An important property of fundamental matrix solutions is the following, known as Abel's formula.

Theorem 2.2.7 (Abel's formula). Let $A(t)$ be continuous on $I$ and $\Phi\in M_n(K)$ be such that $\Phi'(t) = A(t)\Phi(t)$ on $I$. Then $\det\Phi$ satisfies on $I$ the differential equation
\[
(\det\Phi)' = (\operatorname{tr}A)(\det\Phi),
\]
or, in integral form, for $t,\tau\in I$,
\[
\det\Phi(t) = \det\Phi(\tau)\exp\left(\int_\tau^t \operatorname{tr}A(s)\,ds\right). \tag{2.3}
\]

Proof. Writing the differential equation $\Phi'(t) = A(t)\Phi(t)$ in terms of the elements $\varphi_{ij}$ and $a_{ij}$ of, respectively, $\Phi$ and $A$,
\[
\varphi_{ij}'(t) = \sum_{k=1}^n a_{ik}(t)\varphi_{kj}(t), \tag{2.4}
\]
for $i,j = 1,\dots,n$. Writing
\[
\det\Phi =
\begin{vmatrix}
\varphi_{11}(t) & \varphi_{12}(t) & \dots & \varphi_{1n}(t)\\
\varphi_{21}(t) & \varphi_{22}(t) & \dots & \varphi_{2n}(t)\\
\vdots & \vdots & & \vdots\\
\varphi_{n1}(t) & \varphi_{n2}(t) & \dots & \varphi_{nn}(t)
\end{vmatrix},
\]
we see that
\[
(\det\Phi)' =
\begin{vmatrix}
\varphi_{11}' & \varphi_{12}' & \dots & \varphi_{1n}'\\
\varphi_{21} & \varphi_{22} & \dots & \varphi_{2n}\\
\vdots & \vdots & & \vdots\\
\varphi_{n1} & \varphi_{n2} & \dots & \varphi_{nn}
\end{vmatrix}
+
\begin{vmatrix}
\varphi_{11} & \varphi_{12} & \dots & \varphi_{1n}\\
\varphi_{21}' & \varphi_{22}' & \dots & \varphi_{2n}'\\
\vdots & \vdots & & \vdots\\
\varphi_{n1} & \varphi_{n2} & \dots & \varphi_{nn}
\end{vmatrix}
+ \cdots +
\begin{vmatrix}
\varphi_{11} & \varphi_{12} & \dots & \varphi_{1n}\\
\varphi_{21} & \varphi_{22} & \dots & \varphi_{2n}\\
\vdots & \vdots & & \vdots\\
\varphi_{n1}' & \varphi_{n2}' & \dots & \varphi_{nn}'
\end{vmatrix}.
\]
Indeed, write $\det\Phi(t) = \Gamma(r_1,r_2,\dots,r_n)$, where $r_i$ is the $i$th row in $\Phi(t)$. $\Gamma$ is then a linear function of each of its arguments, if all other rows are held constant, which implies that
\[
\frac{d}{dt}\det\Phi(t) = \Gamma\!\left(\frac{d}{dt}r_1, r_2,\dots,r_n\right) + \Gamma\!\left(r_1, \frac{d}{dt}r_2,\dots,r_n\right) + \cdots + \Gamma\!\left(r_1, r_2,\dots,\frac{d}{dt}r_n\right).
\]
(To show this, use the definition of the derivative as a limit.) Using (2.4) on the first of the $n$ determinants in $(\det\Phi)'$ gives
\[
\begin{vmatrix}
\sum_k a_{1k}\varphi_{k1} & \sum_k a_{1k}\varphi_{k2} & \dots & \sum_k a_{1k}\varphi_{kn}\\
\varphi_{21} & \varphi_{22} & \dots & \varphi_{2n}\\
\vdots & \vdots & & \vdots\\
\varphi_{n1} & \varphi_{n2} & \dots & \varphi_{nn}
\end{vmatrix}.
\]
Adding $-a_{12}$ times the second row, $-a_{13}$ times the third row, …, $-a_{1n}$ times the $n$th row to the first row does not change the determinant, and thus
\[
\begin{vmatrix}
\sum_k a_{1k}\varphi_{k1} & \dots & \sum_k a_{1k}\varphi_{kn}\\
\varphi_{21} & \dots & \varphi_{2n}\\
\vdots & & \vdots\\
\varphi_{n1} & \dots & \varphi_{nn}
\end{vmatrix}
=
\begin{vmatrix}
a_{11}\varphi_{11} & \dots & a_{11}\varphi_{1n}\\
\varphi_{21} & \dots & \varphi_{2n}\\
\vdots & & \vdots\\
\varphi_{n1} & \dots & \varphi_{nn}
\end{vmatrix}
= a_{11}\det\Phi.
\]
Repeating this for each of the $n$ determinants in $(\det\Phi)'$, we obtain $(\det\Phi)' = (a_{11}+a_{22}+\cdots+a_{nn})\det\Phi$, giving finally $(\det\Phi)' = (\operatorname{tr}A)(\det\Phi)$. Note that this equation takes the form $u' - \alpha(t)u = 0$, which implies that
\[
u\exp\left(-\int_\tau^t\alpha(s)\,ds\right) = \text{constant},
\]
which in turn implies the integral form of the formula.
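Abel's formula lends itself to a direct numerical check. The following sketch is an illustration added to these notes, not part of the original text; the $2\times2$ matrix $A(t)$ is an arbitrary choice, and `numpy`/`scipy` are assumed available. It integrates the matrix equation $\Phi' = A(t)\Phi$ with $\Phi(\tau) = I$ and compares $\det\Phi(t)$ with the right-hand side of (2.3).

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Arbitrary (illustrative) 2x2 time-dependent coefficient matrix
def A(t):
    return np.array([[np.sin(t), 1.0],
                     [0.5, -t]])

# Matrix ODE Phi' = A(t) Phi, with Phi flattened to a vector of length 4
def rhs(t, y):
    return (A(t) @ y.reshape(2, 2)).flatten()

tau, t1 = 0.0, 2.0
sol = solve_ivp(rhs, (tau, t1), np.eye(2).flatten(), rtol=1e-10, atol=1e-12)
det_numeric = np.linalg.det(sol.y[:, -1].reshape(2, 2))

# Abel's formula (2.3): det Phi(t) = det Phi(tau) * exp(int_tau^t tr A(s) ds)
integral, _ = quad(lambda s: np.trace(A(s)), tau, t1)
det_abel = np.exp(integral)  # det Phi(tau) = det I = 1

assert abs(det_numeric - det_abel) < 1e-6
```

Both quantities agree to within the integration tolerance, reflecting the fact that (2.3) holds for any matrix solution, fundamental or not.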

Remark – Consider (2.3). Suppose that $\tau\in I$ is such that $\det\Phi(\tau)\neq0$. Then, since $e^a\neq0$ for any $a$, it follows that $\det\Phi(t)\neq0$ for all $t\in I$. In short, linear independence of the solutions at one $t\in I$ is equivalent to linear independence of the solutions at every $t\in I$. As a consequence, the column vectors of a fundamental matrix are linearly independent at every $t\in I$. ◦

Theorem 2.2.8. A matrix solution $\Phi$ of (2.2) is a fundamental matrix solution on $I$ if, and only if, $\det\Phi(t)\neq0$ for all $t\in I$.

Proof. Let $\Phi$ be a fundamental matrix with column vectors $\varphi_i$, and suppose that $\varphi$ is any nontrivial solution of (2.2). Then there exist $c_1,\dots,c_n$, not all zero, such that
\[
\varphi = \sum_{j=1}^n c_j\varphi_j,
\]
or, writing this equation in terms of $\Phi$, $\varphi = \Phi c$, where $c = (c_1,\dots,c_n)^T$. At any point $t_0\in I$, this is a system of $n$ linear equations in the $n$ unknowns $c_1,\dots,c_n$. This system has a unique solution for any choice of $\varphi(t_0)$. Thus $\det\Phi(t_0)\neq0$, and by the remark above, $\det\Phi(t)\neq0$ for all $t\in I$.

Conversely, let $\Phi$ be a matrix solution of (2.2), and suppose that $\det\Phi(t)\neq0$ for $t\in I$. Then the column vectors are linearly independent at every $t\in I$, and hence form a fundamental set of solutions.

From the remark above, the condition "$\det\Phi(t)\neq0$ for all $t\in I$" in Theorem 2.2.8 is equivalent to the condition "there exists $t\in I$ such that $\det\Phi(t)\neq0$". A frequent candidate for this role is $t_0$.

To conclude on fundamental matrix solutions, remark that a given linear system has infinitely many of them. However, since each fundamental matrix solution provides a basis for the vector space of solutions, the fundamental matrices associated with a given problem must be linked. Indeed, we have the following result.

Theorem 2.2.9. Let $\Phi$ be a fundamental matrix solution to (2.2). Let $C\in M_n(K)$ be a constant nonsingular matrix. Then $\Phi C$ is a fundamental matrix solution to (2.2). Conversely, if $\Psi$ is another fundamental matrix solution to (2.2), then there exists a constant nonsingular $C\in M_n(K)$ such that $\Psi(t) = \Phi(t)C$ for all $t\in I$.

Proof. Since $\Phi$ is a fundamental matrix solution to (2.2), we have
\[
(\Phi C)' = \Phi'C = (A(t)\Phi)C = A(t)(\Phi C),
\]
and thus $\Phi C$ is a matrix solution to (2.2). Since $\Phi$ is a fundamental matrix solution to (2.2), Theorem 2.2.8 implies that $\det\Phi\neq0$. Also, since $C$ is nonsingular, $\det C\neq0$. Thus, $\det(\Phi C) = \det\Phi\det C\neq0$, and by Theorem 2.2.8, $\Phi C$ is a fundamental matrix solution to (2.2).

Conversely, assume that $\Phi$ and $\Psi$ are two fundamental matrix solutions. Since $\Phi\Phi^{-1} = I$, taking the derivative of this expression gives $\Phi'\Phi^{-1} + \Phi(\Phi^{-1})' = 0$, and therefore $(\Phi^{-1})' = -\Phi^{-1}\Phi'\Phi^{-1}$. We now consider the product $\Phi^{-1}\Psi$. There holds
\[
(\Phi^{-1}\Psi)' = (\Phi^{-1})'\Psi + \Phi^{-1}\Psi'
= -\Phi^{-1}\Phi'\Phi^{-1}\Psi + \Phi^{-1}A(t)\Psi
= \left(-\Phi^{-1}A(t) + \Phi^{-1}A(t)\right)\Psi = 0,
\]
using $\Phi' = A(t)\Phi$. Therefore, integrating $(\Phi^{-1}\Psi)'$ gives $\Phi^{-1}\Psi = C$, with $C\in M_n(K)$ a constant matrix; that is, $\Psi = \Phi C$. Furthermore, as $\Phi$ and $\Psi$ are fundamental matrix solutions, $\det\Phi\neq0$ and $\det\Psi\neq0$, and therefore $\det C\neq0$.

Remark – Note that if $\Phi$ is a fundamental matrix solution to (2.2) and $C\in M_n(K)$ is a constant nonsingular matrix, then it is not necessarily true that $C\Phi$ is a fundamental matrix solution to (2.2). See Exercise 2.3. ◦

2.2.3 Resolvent matrix

If $t\mapsto\Phi(t)$ is a matrix solution of (2.2) on the interval $I$, then $\Phi'(t) = A(t)\Phi(t)$ on $I$. Moreover, by Proposition 2.2.3, there exists a fundamental matrix solution.

Definition 2.2.10 (Resolvent matrix). Let $t_0\in I$ and $\Phi(t)$ be a fundamental matrix solution of (2.2) on $I$. Since the columns of $\Phi$ are linearly independent, it follows that $\Phi(t_0)$ is invertible. The resolvent (or state transition matrix) of (2.2) is then defined as
\[
R(t,t_0) = \Phi(t)\Phi(t_0)^{-1}.
\]

It is evident that $R(t,t_0)$ is the principal fundamental matrix solution at $t_0$ (since $R(t_0,t_0) = \Phi(t_0)\Phi(t_0)^{-1} = I$). Thus system (2.2) has a principal fundamental matrix solution at each point in $I$.

Proposition 2.2.11. The resolvent matrix satisfies the Chapman-Kolmogorov identities

1) $R(t,t) = I$,

2) $R(t,s)R(s,u) = R(t,u)$,

as well as the identities

3) $R(t,s)^{-1} = R(s,t)$,

4) $\dfrac{\partial}{\partial s}R(t,s) = -R(t,s)A(s)$,

5) $\dfrac{\partial}{\partial t}R(t,s) = A(t)R(t,s)$.

Proof. First, for the Chapman-Kolmogorov identities: 1) is $R(t,t) = \Phi(t)\Phi^{-1}(t) = I$. Also, 2) gives $R(t,s)R(s,u) = \Phi(t)\Phi^{-1}(s)\Phi(s)\Phi^{-1}(u) = \Phi(t)\Phi^{-1}(u) = R(t,u)$. The other equalities are equally easy to establish. Indeed,
\[
R(t,s)^{-1} = \left(\Phi(t)\Phi^{-1}(s)\right)^{-1} = \left(\Phi^{-1}(s)\right)^{-1}\Phi(t)^{-1} = \Phi(s)\Phi^{-1}(t) = R(s,t),
\]
whence 3). Also,
\[
\frac{\partial}{\partial s}R(t,s) = \frac{\partial}{\partial s}\left(\Phi(t)\Phi^{-1}(s)\right) = \Phi(t)\left(\frac{\partial}{\partial s}\Phi^{-1}(s)\right).
\]
As $\Phi$ is a fundamental matrix solution, $\Phi'$ exists and $\Phi$ is nonsingular, and differentiating $\Phi\Phi^{-1} = I$ gives
\[
\frac{\partial}{\partial s}\left(\Phi(s)\Phi^{-1}(s)\right) = 0
\ \Leftrightarrow\ \left(\frac{\partial}{\partial s}\Phi(s)\right)\Phi^{-1}(s) + \Phi(s)\left(\frac{\partial}{\partial s}\Phi^{-1}(s)\right) = 0
\ \Leftrightarrow\ \frac{\partial}{\partial s}\Phi^{-1}(s) = -\Phi^{-1}(s)\left(\frac{\partial}{\partial s}\Phi(s)\right)\Phi^{-1}(s).
\]
Therefore,
\[
\frac{\partial}{\partial s}R(t,s) = -\Phi(t)\Phi^{-1}(s)\left(\frac{\partial}{\partial s}\Phi(s)\right)\Phi^{-1}(s) = -R(t,s)\left(\frac{\partial}{\partial s}\Phi(s)\right)\Phi^{-1}(s).
\]
Now, since $\Phi(s)$ is a fundamental matrix solution, it follows that $\partial\Phi(s)/\partial s = A(s)\Phi(s)$, and thus
\[
\frac{\partial}{\partial s}R(t,s) = -R(t,s)A(s)\Phi(s)\Phi^{-1}(s) = -R(t,s)A(s),
\]
giving 4). Finally,
\[
\frac{\partial}{\partial t}R(t,s) = \frac{\partial}{\partial t}\left(\Phi(t)\Phi^{-1}(s)\right) = A(t)\Phi(t)\Phi^{-1}(s) = A(t)R(t,s),
\]
since $\Phi$ is a fundamental matrix solution, giving 5).
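The Chapman-Kolmogorov identities can be checked numerically by computing the resolvent as the solution of a matrix initial value problem (this is the characterization given in Proposition 2.2.12 below). A minimal sketch, added here as an illustration: the nonautonomous coefficient matrix is an arbitrary choice, and `scipy` is assumed available.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2x2 nonautonomous coefficient matrix
def A(t):
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.3 * np.cos(t), 0.0]])

def resolvent(t, s):
    """R(t, s): integrate M' = A(tau) M from M(s) = I up to time t."""
    sol = solve_ivp(lambda tau, y: (A(tau) @ y.reshape(2, 2)).flatten(),
                    (s, t), np.eye(2).flatten(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

t, s, u = 3.0, 1.5, 0.0
# Identity 2): R(t, s) R(s, u) = R(t, u)
assert np.allclose(resolvent(t, s) @ resolvent(s, u), resolvent(t, u), atol=1e-6)
# Identity 3): R(t, s)^{-1} = R(s, t)
assert np.allclose(np.linalg.inv(resolvent(t, s)), resolvent(s, t), atol=1e-6)
```

Note that `resolvent(s, t)` with $s < t$ integrates backward in time, which `solve_ivp` supports by passing a decreasing time span.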

The role of the resolvent matrix is the following. Recall that, from Lemma 2.2.5, the map
\[
\Phi_{t_0}: S^0\to K^n,\qquad x\mapsto x(t_0),
\]
is a $K$-linear isomorphism from the space of solutions $S^0$ to the space $K^n$. Then $R(t,t_0)$ defines a map from $K^n$ to $K^n$,
\[
R(t,t_0): K^n\to K^n,\qquad v\mapsto R(t,t_0)v = w,
\]
such that
\[
R(t,t_0) = \Phi_t\circ\Phi_{t_0}^{-1},
\]
i.e.,
\[
\left(R(t,t_0)v = w\right) \ \Leftrightarrow\ \left(\exists x\in S^0,\ w = x(t),\ v = x(t_0)\right).
\]
Since $\Phi_t$ and $\Phi_{t_0}$ are $K$-linear isomorphisms, $R(t,t_0)$ is a $K$-linear isomorphism on $K^n$. Thus $R(t,t_0)\in M_n(K)$ and is invertible.

Proposition 2.2.12. $R(t,t_0)$ is the only solution in $M_n(K)$ of the initial value problem
\[
\frac{d}{dt}M(t) = A(t)M(t),\qquad M(t_0) = I,
\]
with $M(t)\in M_n(K)$.

Proof. Since $d(R(t,t_0)v)/dt = A(t)R(t,t_0)v$,
\[
\left(\frac{d}{dt}R(t,t_0)\right)v = \left(A(t)R(t,t_0)\right)v
\]
for all $v\in K^n$. Therefore, $R(t,t_0)$ is a solution to $M' = A(t)M$. But, by Theorem 2.1.1, we know the solution to the associated IVP to be unique, hence the result.

From this, the following theorem follows immediately.

Theorem 2.2.13. The solution to the IVP consisting of the linear homogeneous nonautonomous system (2.2) with initial condition $x(t_0) = x_0$ is given by
\[
\varphi(t) = R(t,t_0)x_0.
\]

2.2.4 Wronskian

Definition 2.2.14. The Wronskian of a system $\{x_1,\dots,x_n\}$ of solutions to (2.2) is given by
\[
W(t) = \det(x_1(t),\dots,x_n(t)).
\]

Let $v_i = x_i(t_0)$. Then we have $x_i(t) = R(t,t_0)v_i$, and it follows that
\[
W(t) = \det\left(R(t,t_0)v_1,\dots,R(t,t_0)v_n\right) = \det R(t,t_0)\,\det(v_1,\dots,v_n).
\]

The following formulae hold:
\[
\Delta(t,t_0) := \det R(t,t_0) = \exp\left(\int_{t_0}^t \operatorname{tr}A(s)\,ds\right), \tag{2.5a}
\]
\[
W(t) = \exp\left(\int_{t_0}^t \operatorname{tr}A(s)\,ds\right)\det(v_1,\dots,v_n). \tag{2.5b}
\]

2.2.5 Autonomous linear systems

At this point, we know that solutions to (2.2) take the form $\varphi(t) = R(t,t_0)x_0$, but this was obtained formally: we have no indication whatsoever as to the precise form of $R(t,t_0)$. Typically, finding $R(t,t_0)$ can be difficult, if not impossible. There are however cases where the resolvent can be explicitly computed. One such case is that of autonomous linear systems, which take the form
\[
x'(t) = Ax(t), \tag{2.6}
\]
that is, where $A(t)\equiv A$. Our objective here is to establish the following result.

Lemma 2.2.15. If $A(t)\equiv A$, then $R(t,t_0) = e^{(t-t_0)A}$ for all $t,t_0\in I$.

This result is deduced easily as a corollary of another result developed below, namely Theorem 2.2.16. Note that in Lemma 2.2.15, the notation $e^{(t-t_0)A}$ involves the notion of the exponential of a matrix, which is detailed in Appendix A.10.

Because the reasoning used in constructing solutions to (2.6) is fairly straightforward, we now detail this derivation. Using the intuition from one-dimensional linear equations, we seek $\lambda\in K$ such that $\varphi_\lambda(t) = e^{\lambda t}v$ be a solution to (2.6), with $v\in K^n\setminus\{0\}$. We have $\varphi_\lambda' = \lambda e^{\lambda t}v$, and thus $\varphi_\lambda$ is a solution if, and only if,
\[
\lambda e^{\lambda t}v = Ae^{\lambda t}v = e^{\lambda t}Av \ \Leftrightarrow\ \lambda v = Av \ \Leftrightarrow\ (A-\lambda I)v = 0
\]
(with $I$ the identity matrix). As we require $v\neq0$, the matrix $A-\lambda I$ must not be invertible, and so
\[
\varphi_\lambda \text{ is a solution} \ \Leftrightarrow\ \det(A-\lambda I) = 0,
\]
i.e., $\lambda$ is an eigenvalue of $A$.

In the simple case where $A$ is diagonalizable, there exists a basis $(v_1,\dots,v_n)$ of $K^n$, with $v_1,\dots,v_n$ the eigenvectors of $A$ corresponding to the eigenvalues $\lambda_1,\dots,\lambda_n$. We then obtain $n$ linearly independent solutions $\varphi_{\lambda_i}(t) = e^{\lambda_i(t-t_0)}v_i$, $i=1,\dots,n$. The general solution is given by
\[
\varphi(t) = \left(e^{\lambda_1(t-t_0)}x_{01},\dots,e^{\lambda_n(t-t_0)}x_{0n}\right),
\]
where $x_{0i}$ is the $i$th component of $x_0$ in the basis of eigenvectors, $i=1,\dots,n$.

In the general case, we need the notion of matrix exponentials. Defining the exponential of the matrix $A$ as
\[
e^A = \sum_{k=0}^\infty \frac{A^k}{k!}
\]
(see Appendix A.10), we have the following result.

Theorem 2.2.16. The global solution $\varphi$ on $\mathbb{R}$ of (2.6) such that $\varphi(t_0) = x_0$ is given by
\[
\varphi(t) = e^{(t-t_0)A}x_0.
\]

Proof. Assume $\varphi(t) = e^{(t-t_0)A}x_0$. Then $\varphi(t_0) = e^{0A}x_0 = Ix_0 = x_0$. Also,
\[
\varphi(t) = \left(\sum_{n=0}^\infty \frac{1}{n!}(t-t_0)^nA^n\right)x_0 = \sum_{n=0}^\infty \frac{1}{n!}(t-t_0)^nA^nx_0,
\]
so $\varphi$ is a power series with radius of convergence $R = \infty$. Therefore, $\varphi$ is differentiable on $\mathbb{R}$ and
\[
\varphi'(t) = \sum_{n=1}^\infty \frac{n}{n!}(t-t_0)^{n-1}A^nx_0
= \sum_{n=0}^\infty \frac{n+1}{(n+1)!}(t-t_0)^nA^{n+1}x_0
= \sum_{n=0}^\infty \frac{1}{n!}(t-t_0)^nA^{n+1}x_0
= A\left(\sum_{n=0}^\infty \frac{1}{n!}(t-t_0)^nA^nx_0\right)
= A\varphi(t),
\]
so $\varphi$ is a solution of (2.6). Since (2.6) is linear, solutions are unique and global.

The problem is now to evaluate the matrix $e^{tA}$. We have seen that in the case where $A$ is diagonalizable, solutions take the form
\[
\varphi(t) = \left(e^{\lambda_1(t-t_0)}x_{01},\dots,e^{\lambda_n(t-t_0)}x_{0n}\right),
\]
which implies that, in this case, the matrix $R(t,t_0)$ takes the form
\[
R(t,t_0) = \begin{pmatrix}
e^{\lambda_1(t-t_0)} & 0 & \cdots & 0\\
0 & e^{\lambda_2(t-t_0)} & & \vdots\\
\vdots & & \ddots & 0\\
0 & \cdots & 0 & e^{\lambda_n(t-t_0)}
\end{pmatrix}.
\]
In the general case, we need the notion of generalized eigenvectors.

Definition 2.2.17 (Generalized eigenvectors). Let $\lambda$ be an eigenvalue of the $n\times n$ matrix $A$, with multiplicity $m\le n$. Then, for $k=1,\dots,m$, any nonzero solution $v$ of
\[
(A-\lambda I)^k v = 0
\]
is called a generalized eigenvector of $A$.

Theorem 2.2.18. Let $A$ be a real $n\times n$ matrix with real eigenvalues $\lambda_1,\dots,\lambda_n$ repeated according to their multiplicity. Then there exists a basis of generalized eigenvectors for $\mathbb{R}^n$. And if $\{v_1,\dots,v_n\}$ is any basis of generalized eigenvectors for $\mathbb{R}^n$, the matrix $P = [v_1\cdots v_n]$ is invertible, $A = D + N$, where
\[
P^{-1}DP = \operatorname{diag}(\lambda_j),
\]
the matrix $N = A - D$ is nilpotent of order $k\le n$, and $D$ and $N$ commute.
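For autonomous systems, Lemma 2.2.15 and Theorem 2.2.16 can be illustrated concretely, since the matrix exponential is directly computable with `scipy.linalg.expm`. The sketch below is an added illustration (the stable matrix $A$ and the data are arbitrary choices): it compares $e^{(t-t_0)A}x_0$ with a direct numerical integration of (2.6).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Illustrative autonomous system x' = A x
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])
t0, t = 0.0, 1.5

# Closed form (Theorem 2.2.16): x(t) = e^{(t - t0) A} x0
x_expm = expm((t - t0) * A) @ x0

# Direct numerical integration of (2.6) for comparison
sol = solve_ivp(lambda s, y: A @ y, (t0, t), x0, rtol=1e-10, atol=1e-12)
assert np.allclose(x_expm, sol.y[:, -1], atol=1e-6)
```

Since both eigenvalues of this particular $A$ are negative, the solution contracts toward the origin, consistent with the stability results of Chapter 3.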

2.3 Affine systems

We consider the general (affine) problem (2.1), which we restate here for convenience. Let $x\in\mathbb{R}^n$, $A: I\to L(E)$ and $B: I\to E$, where $I\subset\mathbb{R}$ and $E$ is a normed vector space; we consider the system
\[
x'(t) = A(t)x(t) + B(t). \tag{2.1}
\]

2.3.1 The space of solutions

The first problem that we are faced with when considering system (2.1) is that the set of solutions does not constitute a vector space; in particular, the superposition principle does not hold. However, we have the following result.

Proposition 2.3.1. Let $x_1, x_2$ be two solutions of (2.1). Then $x_1 - x_2$ is a solution of the associated homogeneous equation (2.2).

Proof. Since $x_1$ and $x_2$ are solutions of (2.1),
\[
x_1' = A(t)x_1 + B(t),\qquad x_2' = A(t)x_2 + B(t).
\]
Therefore,
\[
\frac{d}{dt}(x_1 - x_2) = A(t)(x_1 - x_2).
\]

Theorem 2.3.2. The global solutions of (2.1) that are defined on $I$ form an $n$-dimensional affine subspace of the vector space of maps from $I$ to $K^n$.

Theorem 2.3.3. Let $V$ be the vector space over $\mathbb{R}$ of solutions to the linear system $x' = A(t)x$. If $\psi$ is a particular solution of the affine system (2.1), then the set of all solutions of (2.1) is precisely $\{\varphi + \psi,\ \varphi\in V\}$.

Practical rules:

1. To obtain all solutions of (2.1), all solutions of (2.2) must be added to a particular solution of (2.1).

2. To obtain all solutions of (2.2), it is sufficient to know a basis of $S^0$. Such a basis is called a fundamental system of solutions of (2.2).

2.3.2 Construction of solutions

We have the following variation of constants formula.

Theorem 2.3.4. Let $R(t,t_0)$ be the resolvent of the homogeneous equation $x' = A(t)x$ associated to (2.1). Then the solution $x$ of (2.1) satisfying $x(t_0) = x_0$ is given by
\[
x(t) = R(t,t_0)x_0 + \int_{t_0}^t R(t,s)B(s)\,ds. \tag{2.7}
\]

Proof. Let $R(t,t_0)$ be the resolvent of $x' = A(t)x$. Any solution of the latter equation is given by
\[
x(t) = R(t,t_0)v,\qquad v\in\mathbb{R}^n.
\]
Let us now seek a particular solution to (2.1) of the form $x(t) = R(t,t_0)v(t)$, i.e., using a variation of constants approach. Taking the derivative of this expression of $x$, we have
\[
x'(t) = \frac{d}{dt}\left[R(t,t_0)\right]v(t) + R(t,t_0)v'(t) = A(t)R(t,t_0)v(t) + R(t,t_0)v'(t).
\]
Thus $x$ is a solution to (2.1) if
\[
A(t)R(t,t_0)v(t) + R(t,t_0)v'(t) = A(t)R(t,t_0)v(t) + B(t)
\ \Leftrightarrow\ R(t,t_0)v'(t) = B(t)
\ \Leftrightarrow\ v'(t) = R(t_0,t)B(t),
\]
since $R(t,s)^{-1} = R(s,t)$. Therefore, $v(t) = \int_{t_0}^t R(t_0,s)B(s)\,ds$, and a particular solution is given by
\[
x_p(t) = R(t,t_0)\int_{t_0}^t R(t_0,s)B(s)\,ds
= \int_{t_0}^t R(t,t_0)R(t_0,s)B(s)\,ds
= \int_{t_0}^t R(t,s)B(s)\,ds.
\]
Adding the solution $R(t,t_0)x_0$ of the homogeneous equation, which accounts for the initial condition, gives (2.7).

2.3.3 Affine systems with constant coefficients

We consider the affine equation (2.1), but with the matrix $A(t)\equiv A$.

Theorem 2.3.5. The solution to the IVP
\[
x'(t) = Ax(t) + B(t),\qquad x(t_0) = x_0, \tag{2.8}
\]
is given by
\[
x(t) = e^{(t-t_0)A}x_0 + \int_{t_0}^t e^{(t-s)A}B(s)\,ds. \tag{2.9}
\]

Proof. Use Lemma 2.2.15 and the variation of constants formula (2.7).
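Formula (2.9) can be verified numerically against direct integration of the affine system. The sketch below is an added illustration (the choices of $A$, $B(t)$ and $x_0$ are arbitrary; `scipy` assumed available): it evaluates the matrix exponential and computes the integral term componentwise by quadrature.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad

# Illustrative affine system x' = A x + B(t) in R^2
A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
def B(t):
    return np.array([np.cos(t), 1.0])

x0 = np.array([0.5, -0.5])
t0, t1 = 0.0, 2.0

# Formula (2.9): x(t1) = e^{(t1-t0)A} x0 + int_{t0}^{t1} e^{(t1-s)A} B(s) ds
def integrand(s, i):
    return (expm((t1 - s) * A) @ B(s))[i]
integral = np.array([quad(lambda s: integrand(s, i), t0, t1)[0] for i in range(2)])
x_formula = expm((t1 - t0) * A) @ x0 + integral

# Direct integration of the affine system for comparison
sol = solve_ivp(lambda t, y: A @ y + B(t), (t0, t1), x0, rtol=1e-10, atol=1e-12)
assert np.allclose(x_formula, sol.y[:, -1], atol=1e-5)
```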

2.4 Systems with periodic coefficients

2.4.1 Linear systems: Floquet theory

We consider the linear system (2.2) in the following case:
\[
x' = A(t)x,\qquad A(t+\omega) = A(t)\ \ \forall t, \tag{2.10}
\]
with the entries of $A(t)$ continuous on $\mathbb{R}$.

Definition 2.4.1 (Monodromy operator). Associated to system (2.10) is the resolvent R(t, s). For all s ∈ R, the operator C(s) := R(s + ω, s) is called the monodromy operator.

Theorem 2.4.2. If X(t) is a fundamental matrix for (2.10), then there exists a nonsingular constant matrix V such that, for all t,

X(t + ω) = X(t)V.

This matrix takes the form V = X−1(0)X(ω), and is called the monodromy matrix.

Proof. Since X is a fundamental matrix solution, there holds that X0(t) = A(t)X(t) for all t. Therefore X0(t+ω) = A(t+ω)X(t+ω), and by periodicity of A(t), X0(t+ω) = A(t)X(t+ω), which implies that X(t + ω) is a fundamental matrix of (2.10). As a consequence, by Theorem 2.2.9, there exists a matrix V such that X(t + ω) = X(t)V . Since at t = 0, X(ω) = X(0)V , it follows that V = X−1(0)X(ω).
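In practice, the monodromy matrix is obtained by integrating the principal fundamental matrix over one period. The sketch below is an added illustration (the $2\pi$-periodic, upper-triangular $A(t)$ is an arbitrary choice; `scipy` assumed available): it computes $V = X(\omega)$ with $X(0) = I$, its eigenvalues, and checks the relation $X(t+\omega) = X(t)V$ of Theorem 2.4.2.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi  # period of the coefficients

# Illustrative omega-periodic (upper-triangular) coefficient matrix
def A(t):
    return np.array([[-0.5 + np.sin(t), 1.0],
                     [0.0, -1.0 + 0.5 * np.cos(t)]])

def X(t):
    """Principal fundamental matrix: X' = A(t) X, X(0) = I."""
    if t == 0.0:
        return np.eye(2)
    sol = solve_ivp(lambda s, y: (A(s) @ y.reshape(2, 2)).flatten(),
                    (0.0, t), np.eye(2).flatten(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

V = X(omega)                      # monodromy matrix (since X(0) = I)
multipliers = np.linalg.eigvals(V)

# Theorem 2.4.2: X(t + omega) = X(t) V
t = 1.3
assert np.allclose(X(t + omega), X(t) @ V, atol=1e-6)
# For this triangular example the multipliers are exp(int_0^omega a_ii),
# both inside the unit circle, so all solutions decay
assert np.all(np.abs(multipliers) < 1)
```

The decay of solutions when all multipliers lie inside the unit circle is exactly the content of Corollary 2.4.12 below.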

Theorem 2.4.3 (Floquet's theorem, complex case). Any fundamental matrix solution $\Phi$ of (2.10) takes the form
\[
\Phi(t) = P(t)e^{tB} \tag{2.11}
\]
where $P(t)$ and $B$ are $n\times n$ complex matrices such that

i) $P(t)$ is invertible, continuous, and periodic of period $\omega$ in $t$;

ii) $B$ is a constant matrix such that $\Phi(\omega) = e^{\omega B}$.

Proof. Let $\Phi$ be a fundamental matrix solution. From Theorem 2.4.2, the monodromy matrix $V = \Phi^{-1}(0)\Phi(\omega)$ is such that $\Phi(t+\omega) = \Phi(t)V$. By Theorem A.11.1, there exists $B\in M_n(\mathbb{C})$ such that $e^{B\omega} = V$. Let $P(t) = \Phi(t)e^{-Bt}$, so $\Phi(t) = P(t)e^{Bt}$. It is clear that $P$ is continuous and nonsingular. Also,
\[
P(t+\omega) = \Phi(t+\omega)e^{-B(t+\omega)} = \Phi(t)Ve^{-B\omega}e^{-Bt} = \Phi(t)e^{B\omega}e^{-B\omega}e^{-Bt} = \Phi(t)e^{-Bt} = P(t),
\]
proving that $P$ is $\omega$-periodic.

Theorem 2.4.4 (Floquet's theorem, real case). Any fundamental matrix solution $\Phi$ of (2.10) takes the form
\[
\Phi(t) = P(t)e^{tB} \tag{2.12}
\]
where $P(t)$ and $B$ are $n\times n$ real matrices such that

i) $P(t)$ is invertible, continuous, and periodic of period $2\omega$ in $t$;

ii) $B$ is a constant matrix such that $\Phi(\omega)^2 = e^{2\omega B}$.

Proof. The proof works similarly as in the complex case, except that here, Theorem A.11.1 implies that there exists $B\in M_n(\mathbb{R})$ such that $e^{2\omega B} = V^2$. Let $P(t) = \Phi(t)e^{-tB}$, so $\Phi(t) = P(t)e^{tB}$. It is clear that $P$ is continuous and nonsingular. Also,
\[
P(t+2\omega) = \Phi(t+2\omega)e^{-(t+2\omega)B} = \Phi(t+\omega)Ve^{-(2\omega+t)B} = \Phi(t)V^2e^{-(2\omega+t)B} = \Phi(t)e^{2\omega B}e^{-2\omega B}e^{-tB} = \Phi(t)e^{-tB} = P(t),
\]
proving that $P$ is $2\omega$-periodic. See [12, p. 87-90], [4, p. 162-179].

Theorem 2.4.5 (Floquet's theorem, [4]). If $\Phi(t)$ is a fundamental matrix solution of the $\omega$-periodic system (2.10), then, for all $t\in\mathbb{R}$,
\[
\Phi(t+\omega) = \Phi(t)\Phi^{-1}(0)\Phi(\omega).
\]
In addition, for each possibly complex matrix $B$ such that $e^{\omega B} = \Phi^{-1}(0)\Phi(\omega)$, there is a possibly complex $\omega$-periodic matrix function $t\mapsto P(t)$ such that $\Phi(t) = P(t)e^{tB}$ for all $t\in\mathbb{R}$. Also, there is a real matrix $R$ and a real $2\omega$-periodic matrix function $t\mapsto Q(t)$ such that $\Phi(t) = Q(t)e^{tR}$ for all $t\in\mathbb{R}$.

Definition 2.4.6 (Floquet normal form). The representation Φ(t) = P (t)etR is called a Floquet normal form.

In the case where $\Phi(t) = P(t)e^{tB}$, differentiating $\Phi' = A(t)\Phi$ gives
\[
\frac{dP(t)}{dt} = A(t)P(t) - P(t)B.
\]
Therefore, letting $x = P(t)z$, we have $x' = P'(t)z + P(t)z'$, and since $x' = A(t)x = A(t)P(t)z$,
\[
P(t)z' = A(t)P(t)z - P'(t)z = A(t)P(t)z - \left(A(t)P(t) - P(t)B\right)z = P(t)Bz,
\]
so that $z' = Bz$. The change of variables $x = P(t)z$ thus reduces the periodic system (2.10) to a linear system with constant coefficients.

Definition 2.4.7 (Characteristic multipliers). The eigenvalues $\lambda_1,\dots,\lambda_n$ of a monodromy matrix are called the characteristic multipliers of equation (2.10).

Definition 2.4.8 (Characteristic exponents). Numbers µ such that eµω is a characteristic multiplier of (2.10) are called the Floquet exponents of (2.10).

Theorem 2.4.9 (Spectral mapping theorem). Let K = R or C. If C ∈ GLn(K) is written C = eB, then the eigenvalues of C coincide with the exponentials of the eigenvalues of B, with same multiplicity.

Definition 2.4.10 (Characteristic exponents). The eigenvalues $\lambda_1,\dots,\lambda_n$ of the matrix $B$ in the real Floquet normal form are called the characteristic exponents of equation (2.10). The numbers $\rho_1 = \exp(2\omega\lambda_1),\dots,\rho_n = \exp(2\omega\lambda_n)$, the eigenvalues of the matrix $\Phi(\omega)^2$, are called the (Floquet) multipliers of (2.10).

Proposition 2.4.11. Suppose that $X, Y$ are fundamental matrices for (2.10) and that $X(t+\omega) = X(t)V$, $Y(t+\omega) = Y(t)U$. Then the monodromy matrices $U$ and $V$ are similar.

Proof. Suppose that $X(t+\omega) = X(t)V$ and $Y(t+\omega) = Y(t)U$. By Theorem 2.2.9, since $X$ and $Y$ are fundamental matrices for (2.10), there exists an invertible matrix $C$ such that $X(t) = Y(t)C$ for all $t$. Thus, in particular, $X(t+\omega) = Y(t+\omega)C$, and so
\[
X(t+\omega) = Y(t)UC = X(t)C^{-1}UC,
\]
since $Y(t) = X(t)C^{-1}$. It follows that $V = C^{-1}UC$, so $U$ and $V$ are similar.

From this Proposition, it follows that monodromy matrices share the same spectrum.

Corollary 2.4.12. All solutions of (2.10) tend to $0$ as $t\to\infty$ if and only if $|\rho_j| < 1$ for all $j$ (equivalently, $\Re(\lambda_j) < 0$ for all $j$).

Let $p$ be an eigenvector of $\Phi(\omega)^2$ associated with a multiplier $\rho$. Then the solution $\varphi(t) = \Phi(t)p$ of (2.10) satisfies the condition $\varphi(t+2\omega) = \rho\varphi(t)$. This is the origin of the term multiplier.

2.4.2 Affine systems: the Fredholm alternative

We discuss here an extension of a theorem that was proved implicitly in Exercise 4, Assignment 2. Let us start by stating the result in question. We consider here the system
\[
x' = A(t)x + b(t), \tag{2.13}
\]
where $x\in\mathbb{R}^n$, $A\in M_n(\mathbb{R})$ and $b\in\mathbb{R}^n$, with $A$ and $b$ continuous and $\omega$-periodic.

Theorem 2.4.13. If the homogeneous equation
\[
x' = A(t)x \tag{2.14}
\]
associated to (2.13) has no nonzero solution of period $\omega$, then (2.13) has, for each such function $b$, a unique $\omega$-periodic solution.

The Fredholm alternative concerns the case where there exists a nonzero periodic solution of (2.14). We give some needed results before going into details. Consider (2.14). Associated to this system is the so-called adjoint system, which is defined by the following differential equation:
\[
y' = -A^T(t)y. \tag{2.15}
\]

Proposition 2.4.14. The adjoint equation has the following properties.

i) Let $R(t,t_0)$ be the resolvent matrix of (2.14). Then the resolvent matrix of (2.15) is $R^T(t_0,t)$.

ii) There are as many independent periodic solutions of (2.14) as there are of (2.15).

iii) If $x$ is a solution of (2.14) and $y$ is a solution of (2.15), then the scalar product $\langle x(t), y(t)\rangle$ is constant.

Proof. i) We know that $\frac{\partial}{\partial s}R(t,s) = -R(t,s)A(s)$. Therefore, $\frac{\partial}{\partial s}R^T(t,s) = -A^T(s)R^T(t,s)$. As $R(s,s) = I$, the first point is proved.

ii) The solution of (2.15) with initial value $y_0$ is $R^T(0,t)y_0$. The initial value of a periodic solution of (2.15) is a $y_0$ such that
\[
R^T(0,\omega)y_0 = y_0.
\]
This can also be written as
\[
\left[R^T(0,\omega) - I\right]y_0 = 0,
\]
or, taking the transpose,
\[
y_0^T\left[R(0,\omega) - I\right] = 0.
\]
Now, since $R(0,\omega) = R^{-1}(\omega,0)$, multiplying on the right by $R(\omega,0)$ shows that this is equivalent to
\[
y_0^T\left[R(\omega,0) - I\right] = 0.
\]
The latter equation has as many independent solutions as $\left[R(\omega,0) - I\right]x_0 = 0$, since a matrix and its transpose have the same rank; the number of these solutions depends on the rank of $R(\omega,0) - I$.

iii) Recall that for differentiable functions $a, b$,
\[
\frac{d}{dt}\langle a(t), b(t)\rangle = \left\langle \frac{d}{dt}a(t), b(t)\right\rangle + \left\langle a(t), \frac{d}{dt}b(t)\right\rangle.
\]
Thus
\[
\frac{d}{dt}\langle x(t), y(t)\rangle = \langle A(t)x(t), y(t)\rangle + \langle x(t), -A^T(t)y(t)\rangle = 0.
\]

Before we carry on to the actual Fredholm alternative in the context of ordinary differential equations, let us consider the problem in a more general setting. Let $H$ be a Hilbert space. If $A\in L(H,H)$, the adjoint operator $A^*$ of $A$ is the element of $L(H,H)$ such that
\[
\forall u, v\in H,\quad \langle Au, v\rangle = \langle u, A^*v\rangle.
\]
Let $\mathrm{Img}(A)$ be the image of $A$, and $\mathrm{Ker}(A^*)$ be the kernel of $A^*$. Then we have $H = \mathrm{Img}(A)\oplus\mathrm{Ker}(A^*)$.

Theorem 2.4.15 (Fredholm alternative). For the equation $Af = g$ to have a solution, it is necessary and sufficient that $g$ be orthogonal to every element of $\mathrm{Ker}(A^*)$.

We now use this very general setting to prove the following theorem, in the context of ODEs.

Theorem 2.4.16 (Fredholm alternative for ODEs). Consider (2.13) with $A$ and $b$ continuous and $\omega$-periodic. Suppose that the homogeneous equation (2.14) has $p$ independent solutions of period $\omega$. Then the adjoint equation (2.15) also has $p$ independent solutions of period $\omega$, which we denote $y_1,\dots,y_p$. Then:

i) if
\[
\int_0^\omega \langle y_k(t), b(t)\rangle\,dt = 0,\qquad k = 1,\dots,p, \tag{2.16}
\]
then there exist $p$ independent solutions of (2.13) of period $\omega$, and

ii) if this condition is not fulfilled, (2.13) has no nontrivial solution of period $\omega$.

Proof. First, remark that $x_0$ is the initial condition of an $\omega$-periodic solution of (2.13) if, and only if,
\[
\left[R(0,\omega) - I\right]x_0 = \int_0^\omega R(0,s)b(s)\,ds. \tag{2.17}
\]
Indeed, by Theorem 2.3.4, the solution of (2.13) through $(0,x_0)$ is given by
\[
x(t) = R(t,0)x_0 + \int_0^t R(t,s)b(s)\,ds.
\]
Hence, at time $\omega$,
\[
x(\omega) = R(\omega,0)x_0 + \int_0^\omega R(\omega,s)b(s)\,ds.
\]
If $x_0$ is the initial condition of an $\omega$-periodic solution, then $x(\omega) = x_0$, and so
\[
x_0 - R(\omega,0)x_0 = \int_0^\omega R(\omega,s)b(s)\,ds;
\]
multiplying both sides on the left by $R(0,\omega)$ gives (2.17).

On the other hand, $y_k(0)$ is the initial condition of an $\omega$-periodic solution $y_k$ if, and only if,
\[
\left[R^T(0,\omega) - I\right]y_k(0) = 0.
\]
Let $C = R(0,\omega) - I$. We have that $\mathbb{R}^n = \mathrm{Img}(C)\oplus\mathrm{Ker}(C^T)$. We now use the Fredholm alternative in this context. There exists $x_0$ such that
\[
Cx_0 = \int_0^\omega R(0,s)b(s)\,ds
\]
if, and only if,
\[
\int_0^\omega R(0,s)b(s)\,ds \in \mathrm{Img}(C).
\]
Indeed, from the Fredholm alternative, setting $f = x_0$ and $g = \int_0^\omega R(0,s)b(s)\,ds$, we have that $Cf = g$ has a solution if, and only if, $g$ is orthogonal to every element of $\mathrm{Ker}(C^T)$, i.e., since $\mathbb{R}^n = \mathrm{Img}(C)\oplus\mathrm{Ker}(C^T)$, if, and only if, $g\in\mathrm{Img}(C)$.

Now, $y_1(0),\dots,y_p(0)$ is a basis of $\mathrm{Ker}(C^T)$. It follows that there exists an $\omega$-periodic solution of (2.13) if, and only if, for all $k = 1,\dots,p$,
\[
\left\langle \int_0^\omega R(0,s)b(s)\,ds,\ y_k(0)\right\rangle = 0
\ \Leftrightarrow\ \int_0^\omega \langle R(0,s)b(s), y_k(0)\rangle\,ds = 0
\ \Leftrightarrow\ \int_0^\omega \langle b(s), R^T(0,s)y_k(0)\rangle\,ds = 0
\ \Leftrightarrow\ \int_0^\omega \langle b(s), y_k(s)\rangle\,ds = 0.
\]
If these relations are satisfied, the set of vectors $v$ such that
\[
Cv = \int_0^\omega R(0,s)b(s)\,ds
\]
is of the form $v_0 + \mathrm{Ker}(C)$, where $v_0$ is one of these vectors; since $\dim\mathrm{Ker}(C) = p$, there exist $p$ of them which are independent and are the initial conditions of the $p$ independent $\omega$-periodic solutions of (2.13).

Example – The equation
\[
x'' = f(t), \tag{2.18}
\]
where $f$ is $\omega$-periodic, has solutions of period $\omega$ if, and only if,
\[
\int_0^\omega f(s)\,ds = 0.
\]
Indeed, let $y = x'$. Then, differentiating $y$ and substituting into (2.18), we have $y' = f(t)$. Hence the system is
\[
\begin{pmatrix} x\\ y\end{pmatrix}' = \begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix} + \begin{pmatrix} 0\\ f(t)\end{pmatrix}.
\]
Hence
\[
A^T = \begin{pmatrix} 0 & 0\\ 1 & 0\end{pmatrix},
\]
and the adjoint equation $\xi' = -A^T\xi$ has the periodic solution $(0,a)^T$. Condition (2.16) then reads $\int_0^\omega \langle (0,a)^T, (0,f(s))^T\rangle\,ds = a\int_0^\omega f(s)\,ds = 0$.
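The solvability condition in this example can be observed numerically: for $f$ of zero mean over a period, a suitable initial condition yields a periodic solution, while for $f$ of nonzero mean, $x'(\omega) - x'(0) = \int_0^\omega f(s)\,ds \neq 0$ for every solution, so no periodic solution exists. A sketch, added as an illustration (assuming `scipy`):

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi

def solve_x(f, x0, v0):
    """Integrate x'' = f(t) over one period as the first-order system (x, y)."""
    sol = solve_ivp(lambda t, z: [z[1], f(t)], (0.0, omega), [x0, v0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# f of zero mean: x'' = cos t has the 2*pi-periodic solution x = -cos t,
# so starting from (x, x') = (-1, 0) the state returns to itself
end = solve_x(np.cos, -1.0, 0.0)
assert np.allclose(end, [-1.0, 0.0], atol=1e-6)

# f of nonzero mean: x'(omega) - x'(0) = int_0^omega (1 + cos s) ds = 2*pi,
# so the velocity cannot return to its initial value for any solution
end = solve_x(lambda t: 1.0 + np.cos(t), 0.0, 0.0)
assert abs(end[1]) > 1.0
```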

2.5 Further developments, bibliographical notes

2.5.1 A variation of constants formula for a nonlinear system with a linear component

The variation of constants formula given in Theorem 2.3.4 can be extended.

Theorem 2.5.1 (Variation of constants formula). Consider the IVP
\[
x' = A(t)x + g(t,x), \tag{2.19a}
\]
\[
x(t_0) = x_0, \tag{2.19b}
\]
where $g:\mathbb{R}\times\mathbb{R}^n\to\mathbb{R}^n$ is a smooth function, and let $R(t,t_0)$ be the resolvent associated to the homogeneous system $x' = A(t)x$, with $R$ defined on some interval $I\ni t_0$. Then the solution $\varphi$ of (2.19) is given by
\[
\varphi(t) = R(t,t_0)x_0 + \int_{t_0}^t R(t,s)g(s,\varphi(s))\,ds, \tag{2.20}
\]
on some subinterval of $I$.

Proof. We proceed using a variation of constants approach. It is known that the general solution to the homogeneous equation $x' = A(t)x$ associated to (2.19) is given by
\[
\varphi(t) = R(t,t_0)v,\qquad v\in\mathbb{R}^n.
\]
We seek a solution to (2.19) by assuming that $\varphi(t) = R(t,t_0)v(t)$. We have
\[
\varphi'(t) = \left(\frac{d}{dt}R(t,t_0)\right)v(t) + R(t,t_0)v'(t) = A(t)R(t,t_0)v(t) + R(t,t_0)v'(t),
\]
from Proposition 2.2.11. For $\varphi$ to be a solution, it must satisfy the differential equation (2.19), and thus
\[
\varphi'(t) = A(t)\varphi(t) + g(t,\varphi(t))
\ \Leftrightarrow\ R(t,t_0)v'(t) = g(t,\varphi(t))
\ \Leftrightarrow\ v'(t) = R(t,t_0)^{-1}g(t,\varphi(t)) = R(t_0,t)g(t,\varphi(t)),
\]
using Proposition 2.2.11 again, whence
\[
v(t) = \int_{t_0}^t R(t_0,s)g(s,\varphi(s))\,ds + C.
\]
Therefore,
\[
\varphi(t) = R(t,t_0)\left(\int_{t_0}^t R(t_0,s)g(s,\varphi(s))\,ds + C\right).
\]
Evaluating this expression at $t = t_0$ gives $\varphi(t_0) = C$, so $C = x_0$. Therefore,
\[
\varphi(t) = R(t,t_0)x_0 + R(t,t_0)\int_{t_0}^t R(t_0,s)g(s,\varphi(s))\,ds
= R(t,t_0)x_0 + \int_{t_0}^t R(t,t_0)R(t_0,s)g(s,\varphi(s))\,ds
= R(t,t_0)x_0 + \int_{t_0}^t R(t,s)g(s,\varphi(s))\,ds,
\]
from Proposition 2.2.11.
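Note that (2.20) is an integral equation rather than an explicit solution, since $\varphi$ appears on both sides; it can be solved by successive approximations. The following sketch is an added scalar illustration, with $A\equiv-1$ (so that $R(t,s) = e^{-(t-s)}$) and a hypothetical nonlinearity $g(t,x) = 0.1x^2$; it iterates (2.20) on a grid and compares the result with direct integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar illustration: x' = -x + 0.1 x^2, so A = -1, g(t, x) = 0.1 x^2,
# and the resolvent is R(t, s) = exp(-(t - s)).
x0, t1, n = 1.0, 1.0, 2001
ts = np.linspace(0.0, t1, n)
g = lambda x: 0.1 * x**2

phi = np.full(n, x0)                 # initial guess for the iteration
for _ in range(20):                  # successive approximations on (2.20)
    # phi(t) = e^{-t} x0 + e^{-t} * int_0^t e^{s} g(phi(s)) ds (trapezoid rule)
    integrand = np.exp(ts) * g(phi)
    integral = np.concatenate(([0.0],
        np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(ts))))
    phi = np.exp(-ts) * (x0 + integral)

# Direct integration of the ODE for comparison
sol = solve_ivp(lambda t, y: -y + 0.1 * y**2, (0.0, t1), [x0],
                t_eval=[t1], rtol=1e-10, atol=1e-12)
assert abs(phi[-1] - sol.y[0, -1]) < 1e-4
```

On a short interval the iteration is a contraction, which is the same mechanism as in the Picard successive approximations of Chapter 1.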

Chapter 3

Stability of linear systems

3.1 Stability at fixed points

We consider here the autonomous equation (not necessarily linear)
\[
x' = f(x). \tag{3.1}
\]
To emphasize the fact that we are dealing with flows, we write $x(t,x_0)$ for the solution to (3.1) at time $t$ and satisfying at time $t = 0$ the initial condition $x(0) = x_0$.

Definition 3.1.1 (Fixed point). A fixed point of (3.1) is a point $x^*$ such that $f(x^*) = 0$.

This is evident, as a point such that $f(x^*) = 0$ satisfies $(x^*)' = f(x^*) = 0$, so that the solution is constant when $x = x^*$. Note also that this implies that $x(t) = x^*$ is a solution defined on $\mathbb{R}$.

Definition 3.1.2 (Stable equilibrium point). The fixed point $x^*$ is (positively) stable if the following two conditions hold:

i) There exists $r > 0$ such that if $\|x_0 - x^*\| < r$, then the solution $x(t,x_0)$ is defined for all $t\ge0$. (This is automatically satisfied for flows.)

ii) For any $\varepsilon > 0$, there exists $\delta > 0$ such that $\|x_0 - x^*\| < \delta$ implies $\|x(t,x_0) - x^*\| < \varepsilon$ for all $t\ge0$.

Definition 3.1.3 (Asymptotically stable equilibrium point). If the equilibrium $x^*$ is (positively) stable and, additionally, there exists $\gamma > 0$ such that $\|x_0 - x^*\| < \gamma$ implies $\lim_{t\to\infty}x(t,x_0) = x^*$, then $x^*$ is (positively) asymptotically stable.

3.2 Affine systems with small coefficients

We consider here a linear system of the form

x' = Ax + b(t)x (3.2)

where A ∈ Mn(R) and b : R → R is continuous and small in the sense that limt→∞ b(t) = 0.


Theorem 3.2.1. Suppose that all eigenvalues of A have negative real parts, and that b is continuous and such that limt→∞ b(t) = 0. Then 0 is a globally asymptotically stable (g.a.s.) equilibrium of (3.2). The proof comes from [2, p. 156-157].

Proof. For any given (t0, x0), t0 > 0, we have, from [2, Th 2.1, p. 37] on the existence and uniqueness of solutions of the linear equation x' = A(t)x + g(t), that the (unique) solution φt(x0) satisfying the initial condition φt0(x0) = x0 exists for all t ≥ t0. By the variation of constants formula, treating b(t)x as the inhomogeneous term, we can express the solution by means of the equivalent integral equation, for t0 ≤ t < ∞,

φt(x0) = e^{(t−t0)A} x0 + ∫_{t0}^t e^{(t−s)A} b(s) φs(x0) ds. (3.3)

By the hypothesis on A, the fundamental matrix R(t, t0) = e^{(t−t0)A} of the homogeneous part of (3.2) satisfies, for t0 ≤ t < ∞,

‖e^{(t−t0)A}‖ ≤ Ke^{−σ(t−t0)}

for some K > 0, σ > 0. Since limt→∞ b(t) = 0, given any η > 0, there exists a number T ≥ t0 such that |b(t)| < η for t ≥ T. We now use the variation of constants formula (3.3) with the point (T, φT(x0)) as initial condition. We have, for T ≤ t < ∞,

φt(x0) = e^{(t−T)A} φT(x0) + ∫_T^t e^{(t−s)A} b(s) φs(x0) ds.

Thus, using ‖e^{(t−s)A}‖ ≤ Ke^{−σ(t−s)} and |b(t)| < η for t ≥ T, we obtain, for T ≤ t < ∞,

‖φt(x0)‖ ≤ Ke^{−σ(t−T)}‖φT(x0)‖ + Kη ∫_T^t e^{−σ(t−s)}‖φs(x0)‖ ds.

Multiplying both sides of this inequality by e^{σt} and applying Gronwall's inequality (Appendix A.7) to the function e^{σt}‖φt(x0)‖, we obtain, for T ≤ t < ∞,

‖φt(x0)‖ ≤ K‖φT(x0)‖ e^{−(σ−Kη)(t−T)}. (3.4)

From this we conclude that if 0 < η < σ/K, the solution φt(x0) approaches zero exponentially. This does not yet prove that the zero solution of (3.2) is stable. To do this, we compute a bound on ‖φT(x0)‖. Returning to (3.3) and restricting t to the interval t0 ≤ t ≤ T, we have

‖φt(x0)‖ ≤ Ke^{−σ(t−t0)}‖x0‖ + K1K ∫_{t0}^t e^{−σ(t−s)}‖φs(x0)‖ ds,

where K1 = max_{t0≤t≤T} |b(t)|. Multiplying by e^{σt} and applying the Gronwall inequality, we obtain

‖φt(x0)‖ ≤ K‖x0‖ e^{−σ(t−t0)+K1K(t−t0)}
         ≤ K‖x0‖ e^{K1K(t−t0)}, t0 ≤ t ≤ T. (3.5)

Therefore,

‖φT(x0)‖ ≤ K‖x0‖ e^{K1K(T−t0)}, t0 ≤ T. (3.6)

Thus we can make ‖φT(x0)‖ small by choosing ‖x0‖ sufficiently small. This together with (3.4) gives the stability. Indeed, substituting (3.6) into (3.4) gives, for T ≤ t < ∞,

‖φt(x0)‖ ≤ K²‖x0‖ e^{K1K(T−t0)} e^{−(σ−Kη)(t−T)}. (3.7)

Let then K2 = max{Ke^{K1K(T−t0)}, K²e^{K1K(T−t0)}}. From (3.5) and (3.7) we have

‖φt(x0)‖ ≤ K2‖x0‖                         if t0 ≤ t ≤ T,
‖φt(x0)‖ ≤ K2‖x0‖ e^{−(σ−Kη)(t−T)}       if T ≤ t < ∞. (3.8)

For a given matrix A, we can compute K and σ; we next pick any 0 < η < σ/K, and then T ≥ t0 so that |b(t)| < η for t ≥ T. We then compute K1 and K2. Now, given any ε > 0, choose δ < ε/K2. Then from (3.8), if ‖x0‖ < δ, then ‖φt(x0)‖ < ε for all t ≥ t0, so that the zero solution is stable. Since σ − Kη > 0, it is also clear from (3.8) that the zero solution is globally asymptotically stable.
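The following numerical sketch illustrates Theorem 3.2.1 on a hypothetical example: A has the double eigenvalue −1 and b(t) = 2/(1 + t) → 0. The perturbed system can grow at first (b(0) = 2 dominates the decay), but once b(t) is small enough the exponential estimate takes over and the solution decays to 0.

```python
import numpy as np

A = np.array([[-1.0, 1.0],
              [0.0, -1.0]])       # both eigenvalues have real part -1 < 0

def b(t):
    return 2.0 / (1.0 + t)        # continuous, b(t) -> 0 as t -> infinity

def f(t, x):
    return A @ x + b(t) * x       # right-hand side of system (3.2)

# Fixed-step RK4 integration up to t = 40.
t, x = 0.0, np.array([1.0, 1.0])
h = 0.01
norms = [np.linalg.norm(x)]
while t < 40.0:
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    x = x + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    t += h
    norms.append(np.linalg.norm(x))

# Despite possible early growth, the solution ends up far smaller than it started.
assert norms[-1] < 1e-8 * norms[0]
```

The specific matrix, perturbation, and initial condition are arbitrary choices; any A with spectrum in the open left half-plane and any continuous b(t) → 0 should exhibit the same behavior.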

Corollary 3.2.2. Let all eigenvalues of A have negative real part, so that ‖e^{At}‖ ≤ Ke^{−σt} for some constants K > 0, σ > 0 and all t ≥ 0. Let b(t) be continuous for 0 ≤ t < ∞ and suppose that there exists T > 0 such that |b(t)| < σ/K for t ≥ T. Then the zero solution of (3.2) is globally asymptotically stable.

Theorem 3.2.3. Let all eigenvalues of A have negative real part, and let b(t) be continuous for 0 ≤ t < ∞ and such that ∫_0^∞ |b(s)| ds < ∞. Then the zero solution of (3.2) is globally asymptotically stable.


Chapter 4

Linearization

We consider here the autonomous nonlinear system in R^n

x' = f(x). (4.1)

The object of this chapter is to show two results which link the behavior of (4.1) near a hyperbolic equilibrium point x∗ to the behavior of the linearized system

x' = Df(x∗)(x − x∗) (4.2)

about that same equilibrium.

4.1 Some linear stability theory

We now give some notions of linear stability theory, in the case of the autonomous linear system (2.6), repeated here for convenience:

x'(t) = Ax(t). (2.6)

We let wj = uj + ivj be a generalized eigenvector of A corresponding to an eigenvalue λj = aj + ibj, with vj = 0 if bj = 0, and

B = {u1, . . . , uk, uk+1, vk+1, . . . , um, vm} be a basis of Rn, with n = 2m − k. Definition 4.1.1 (Stable, unstable and center subspaces). The stable, unstable and center subspaces of the linear system (2.6) are given, respectively, by

E^s = Span{uj, vj : aj < 0},

E^u = Span{uj, vj : aj > 0}, and

E^c = Span{uj, vj : aj = 0}.
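These subspaces are easy to compute numerically: take an eigendecomposition and collect the real and imaginary parts uj, vj of the eigenvectors according to the sign of Re λj. A minimal numpy sketch, on an arbitrary 4 × 4 example with one stable direction, one unstable direction, and one center pair (eigenvalues ±i):

```python
import numpy as np

# Block-diagonal example: eigenvalues -2 (stable), +1 (unstable), +/- i (center).
A = np.array([[-2.0, 0.0, 0.0, 0.0],
              [ 0.0, 1.0, 0.0, 0.0],
              [ 0.0, 0.0, 0.0,-1.0],
              [ 0.0, 0.0, 1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)

def real_basis(select):
    """Real spanning vectors u_j (and v_j for complex pairs) of eigenvectors
    whose eigenvalue real part satisfies the predicate `select`."""
    rows = []
    for lam, w in zip(eigvals, eigvecs.T):
        if select(lam.real):
            rows.append(w.real)            # u_j
            if abs(lam.imag) > 1e-12:
                rows.append(w.imag)        # v_j, only for complex eigenvalues
    return np.array(rows)

Es = real_basis(lambda a: a < -1e-12)      # stable subspace spanning set
Eu = real_basis(lambda a: a > 1e-12)       # unstable subspace spanning set
Ec = real_basis(lambda a: abs(a) <= 1e-12) # center subspace spanning set

# dim E^s = 1, dim E^u = 1, dim E^c = 2, and the dimensions sum to n.
dims = [int(np.linalg.matrix_rank(M)) for M in (Es, Eu, Ec)]
assert dims == [1, 1, 2] and sum(dims) == A.shape[0]
```

Note that conjugate eigenvalue pairs each contribute the same real plane, so the rank (rather than the number of collected vectors) gives the subspace dimension.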


Definition 4.1.2. The mapping e^{At} : R^n → R^n is called the flow of the linear system (2.6).

The term flow is used since e^{At} describes the motion of points x0 ∈ R^n along trajectories of (2.6).

Definition 4.1.3. If all eigenvalues of A have nonzero real part, that is, if E^c = {0}, then the flow e^{At} of system (2.6) is called a hyperbolic flow, and the system (2.6) is a hyperbolic linear system.

Definition 4.1.4. A subspace E ⊂ R^n is invariant with respect to the flow e^{At}, or invariant under the flow of (2.6), if e^{At}E ⊂ E for all t ∈ R.

Theorem 4.1.5. Let E be the generalized eigenspace of A associated to the eigenvalue λ. Then AE ⊂ E.

Theorem 4.1.6. Let A ∈ Mn(R). Then

R^n = E^s ⊕ E^u ⊕ E^c.

Furthermore, if the matrix A is the matrix of the linear autonomous system (2.6), then E^s, E^u and E^c are invariant under the flow of (2.6).

Definition 4.1.7. If all the eigenvalues of A have negative (resp. positive) real parts, then the origin is a sink (resp. source) for the linear system (2.6).

Theorem 4.1.8. The stable, center and unstable subspaces E^s, E^c and E^u, respectively, are invariant with respect to e^{At}, i.e., if x0 ∈ E^s, y0 ∈ E^c and z0 ∈ E^u, then e^{At}x0 ∈ E^s, e^{At}y0 ∈ E^c and e^{At}z0 ∈ E^u.

Definition 4.1.9 (Homeomorphism). Let X be a metric space and let A and B be subsets of X. A homeomorphism h : A → B of A onto B is a continuous one-to-one map of A onto B such that h^{-1} : B → A is continuous. The sets A and B are called homeomorphic or topologically equivalent if there is a homeomorphism of A onto B.

Definition 4.1.10 (Differentiable manifold). An n-dimensional differentiable manifold M (or a manifold of class C^k) is a connected metric space with an open covering {Uα} (i.e., M = ∪αUα) such that

i) for all α, Uα is homeomorphic to the open unit ball B = {x ∈ R^n : |x| < 1}, i.e., for all α there exists a homeomorphism of Uα onto B, hα : Uα → B,

ii) if Uα ∩ Uβ ≠ ∅ and hα : Uα → B, hβ : Uβ → B are homeomorphisms, then hα(Uα ∩ Uβ) and hβ(Uα ∩ Uβ) are subsets of R^n and the map

h = hα ∘ hβ^{-1} : hβ(Uα ∩ Uβ) → hα(Uα ∩ Uβ)

is differentiable (or of class C^k) and for all x ∈ hβ(Uα ∩ Uβ), the determinant of the Jacobian satisfies det Dh(x) ≠ 0.

Remark – The manifold is analytic if the maps h = hα ∘ hβ^{-1} are analytic. ◦

Next, recall that x∗ is an equilibrium point of (4.1) if f(x∗) = 0. An equilibrium point x∗ is hyperbolic if the Jacobian matrix Df of (4.1) evaluated at x∗, denoted Df(x∗), has no eigenvalues with zero real part. Also, recall that the solutions of (4.1) form a one-parameter group that defines the flow of the nonlinear differential equation (4.1). To be more precise, consider the IVP consisting of (4.1) and an initial condition x(t0) = x0. Let I(x0) be the maximal interval of existence of the solution to the IVP. Let then φ : R × R^n → R^n be defined as follows: for x0 ∈ R^n and t ∈ I(x0), φ(t, x0) = φt(x0) is the solution of the IVP defined on its maximal interval of existence I(x0).

Example – Consider the (linear) ordinary differential equation x' = ax, with a, x ∈ R. The solution is φ(t, x0) = e^{at}x0, and satisfies the group property

φ(t + s, x0) = e^{a(t+s)}x0 = e^{at}(e^{as}x0) = φ(t, e^{as}x0) = φ(t, φ(s, x0)). 

For simplicity, and without loss of generality since both results are local, we assume henceforth that x∗ = 0, i.e., that a change of coordinates has been performed translating x∗ to the origin. We also assume that t0 = 0.
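The group property in this example can be checked in a few lines; the values of a, x0, t and s below are arbitrary choices.

```python
import math

a, x0 = -0.7, 2.0

def phi(t, x):
    """Flow of x' = a x: phi(t, x) = e^{a t} x."""
    return math.exp(a * t) * x

t, s = 1.3, 2.1
lhs = phi(t + s, x0)            # phi(t + s, x0)
rhs = phi(t, phi(s, x0))        # phi(t, phi(s, x0))
assert abs(lhs - rhs) < 1e-12   # equal up to floating-point rounding
```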

4.2 The stable manifold theorem

Theorem 4.2.1 (Stable manifold theorem). Let E be an open subset of R^n containing the origin, let f ∈ C¹(E), and let φt be the flow of the nonlinear system (4.1). Suppose that f(0) = 0 and that Df(0) has k eigenvalues with negative real part and n − k eigenvalues with positive real part. Then there exists a k-dimensional differentiable manifold S tangent to the stable subspace E^s of the linear system (4.2) at 0 such that for all t ≥ 0, φt(S) ⊂ S, and for all x0 ∈ S,

lim_{t→∞} φt(x0) = 0;

and there exists an (n − k)-dimensional differentiable manifold U tangent to the unstable subspace E^u of (4.2) at 0 such that for all t ≤ 0, φt(U) ⊂ U, and for all x0 ∈ U,

lim_{t→−∞} φt(x0) = 0.

There are several approaches to the proof of this result. Hale [10] gives a proof which uses functional analysis. The proof we give here comes from [18, p. 108-111], who derives it from [6, p. 330-335]. It consists in showing that there exists a real nonsingular constant matrix C such that, if y = C^{-1}x, then there are n − k real continuous functions yj = ψj(y1, ..., yk), defined for small |yi|, i ≤ k, such that

yj = ψj(y1, ..., yk) (j = k + 1, ..., n)

define a k-dimensional differentiable manifold S̃ in y space. The stable manifold S in x space is then obtained by transforming back, so that x = Cy defines S in terms of the k curvilinear coordinates y1, ..., yk.

Proof. System (4.1) can be written as

x' = Df(0)x + F(x)

with F(x) = f(x) − Df(0)x. Since f ∈ C¹(E), F ∈ C¹(E), and F(0) = f(0) = 0 since f(0) = 0. Also, DF(x) = Df(x) − Df(0), and so DF(0) = 0. To continue, we use the following lemma (which we will not prove).

Lemma 4.2.2. Let E be an open subset of R^n containing the origin. If F ∈ C¹(E), then for all x, y ∈ Nδ(0) ⊂ E, there exists a ξ ∈ Nδ(0) such that

|F(x) − F(y)| ≤ ‖DF(ξ)‖ |x − y|.

From Lemma 4.2.2, it follows that for all ε > 0 there exists a δ > 0 such that |x| ≤ δ and |y| ≤ δ imply that

|F(x) − F(y)| ≤ ε|x − y|.

Let C be an invertible n × n matrix such that

B = C^{-1}Df(0)C = ( P  0
                     0  Q ),

where the eigenvalues λ1, ..., λk of the k × k matrix P have negative real part and the eigenvalues λk+1, ..., λn of the (n − k) × (n − k) matrix Q have positive real part. Let α > 0 be chosen small enough that for j = 1, ..., k, Re(λj) < −α < 0. Letting y = C^{-1}x, we have

y' = C^{-1}x' = C^{-1}Df(0)x + C^{-1}F(x) = C^{-1}Df(0)Cy + C^{-1}F(Cy) = By + G(y),

where G(y) = C^{-1}F(Cy). Since F ∈ C¹(E), we have G ∈ C¹(Ẽ), where Ẽ = C^{-1}(E). Also, Lemma 4.2.2 applies to G. Now consider the system

y' = By + G(y) (4.3)

and let

U(t) = ( e^{Pt}  0        V(t) = ( 0  0
         0       0 ),             0  e^{Qt} ).

Then U' = BU, V' = BV and e^{Bt} = U(t) + V(t). With α chosen sufficiently small as noted above, we can then choose K > 0 large enough and σ > 0 small enough that

kU(t)k ≤ Ke−(α+σ)t for all t ≥ 0 and kV (t)k ≤ Ke−σt for all t ≤ 0 Fund. Theory ODE Lecture Notes – J. Arino 4.2. The stable manifold theorem 67

Consider now the integral equation

u(t, a) = U(t)a + ∫_0^t U(t − s)G(u(s, a)) ds − ∫_t^∞ V(t − s)G(u(s, a)) ds, (4.4)

where a, u ∈ R^n and a is a constant vector. We can solve this equation using the method of successive approximations. Indeed, let

u^{(0)}(t, a) = 0

and

u^{(j+1)}(t, a) = U(t)a + ∫_0^t U(t − s)G(u^{(j)}(s, a)) ds − ∫_t^∞ V(t − s)G(u^{(j)}(s, a)) ds. (4.5)

We show by induction that for all j = 1, 2, ... and t ≥ 0,

|u^{(j)}(t, a) − u^{(j−1)}(t, a)| ≤ K|a|e^{−αt}/2^{j−1}. (4.6)

Clearly, (4.6) holds for j = 1 since

|u^{(1)}(t, a) − u^{(0)}(t, a)| = |U(t)a| ≤ Ke^{−(α+σ)t}|a| ≤ K|a|e^{−αt},

the two integral terms in (4.5) vanishing because G(u^{(0)}) = G(0) = 0. Now suppose that (4.6) holds for j = k. We have

|u^{(k+1)}(t, a) − u^{(k)}(t, a)| = | ∫_0^t U(t − s)[G(u^{(k)}(s, a)) − G(u^{(k−1)}(s, a))] ds
− ∫_t^∞ V(t − s)[G(u^{(k)}(s, a)) − G(u^{(k−1)}(s, a))] ds |
≤ ∫_0^t ‖U(t − s)‖ |G(u^{(k)}(s, a)) − G(u^{(k−1)}(s, a))| ds
+ ∫_t^∞ ‖V(t − s)‖ |G(u^{(k)}(s, a)) − G(u^{(k−1)}(s, a))| ds,

which, since G satisfies the Lipschitz-type condition given by Lemma 4.2.2, implies that

|u^{(k+1)}(t, a) − u^{(k)}(t, a)| ≤ ε ∫_0^t ‖U(t − s)‖ |u^{(k)}(s, a) − u^{(k−1)}(s, a)| ds
+ ε ∫_t^∞ ‖V(t − s)‖ |u^{(k)}(s, a) − u^{(k−1)}(s, a)| ds.

Using the bounds on ‖U‖ and ‖V‖ as well as the induction hypothesis (4.6), it follows that

|u^{(k+1)}(t, a) − u^{(k)}(t, a)| ≤ ε ∫_0^t Ke^{−(α+σ)(t−s)} (K|a|e^{−αs}/2^{k−1}) ds + ε ∫_t^∞ Ke^{σ(t−s)} (K|a|e^{−αs}/2^{k−1}) ds
≤ εK²|a|e^{−αt}/(σ2^{k−1}) + εK²|a|e^{−αt}/(σ2^{k−1}),

which, if we choose ε < σ/(4K), i.e., εK/σ < 1/4, implies that

|u^{(k+1)}(t, a) − u^{(k)}(t, a)| < (1/4 + 1/4) K|a|e^{−αt}/2^{k−1} = K|a|e^{−αt}/2^k, (4.7)

i.e., (4.6) holds for j = k + 1. Note that for G to satisfy the Lipschitz-type condition with constant ε, we must choose K|a| < δ/2, i.e., |a| < δ/(2K), so that, by (4.6), the iterates satisfy |u^{(j)}(t, a)| ≤ 2K|a| < δ. Then, by induction, (4.6) holds for all t ≥ 0 and j = 1, 2, .... As a consequence, for t ≥ 0 and n > m > N,

|u^{(n)}(t, a) − u^{(m)}(t, a)| ≤ Σ_{j=N}^∞ |u^{(j+1)}(t, a) − u^{(j)}(t, a)| ≤ K|a| Σ_{j=N}^∞ 1/2^j = K|a|/2^{N−1}.

As this last quantity approaches 0 as N → ∞, it follows that {u^{(j)}(t, a)} is a Cauchy sequence (of continuous functions), and thus

lim_{j→∞} u^{(j)}(t, a) = u(t, a)

uniformly for all t ≥ 0 and |a| < δ/(2K). From the uniform convergence, we deduce that u(t, a) is continuous. Now taking the limit as j → ∞ on both sides of (4.5), it follows that u(t, a) satisfies the integral equation (4.4) and, as a consequence, the differential equation (4.3). Since G ∈ C¹(Ẽ), it follows by induction on (4.5) that u^{(j)}(t, a) is a differentiable function of a for |a| < δ/(2K) and t ≥ 0. Since u^{(j)}(t, a) → u(t, a) uniformly, it then follows that u(t, a) is differentiable for t ≥ 0 and |a| < δ/(2K). The estimate (4.7) implies that

|u(t, a)| ≤ 2K|a|e^{−αt} (4.8)

for t ≥ 0 and |a| < δ/(2K). It is clear from (4.4) that the last n − k components of the vector a do not enter the computation of u(t, a) and may thus be taken as zero. So the components uj(t, a) of the solution u(t, a) satisfy the initial conditions

uj(0, a) = aj for j = 1, . . . , k Fund. Theory ODE Lecture Notes – J. Arino 4.3. The Hartman-Grobman theorem 69 and Z ∞  uj(0, a) = − V (−s)G(u(s, a1, . . . , ak, 0,..., 0))ds for j = k + 1, . . . , n 0 j where ( )j denotes the jth component. For j = k + 1, . . . , n, define the functions

ψj(a1, . . . , ak) = uj(0, a1, . . . , ak, 0,..., 0) (4.9)

The initial values yj = uj(0, a1, . . . , ak, 0,..., 0) then satisfy

yj = ψj(y1, . . . , yk) for j = k + 1, . . . , n

which defines a differentiable manifold S̃ in y space for √(y1² + ··· + yk²) < δ/(2K). Furthermore, if y(t) is a solution of the differential equation (4.3) with y(0) ∈ S̃, i.e., with y(0) = u(0, a), then

y(t) = u(t, a).

It follows from the estimate (4.8) that if y(t) is a solution of (4.3) with y(0) ∈ S̃, then y(t) → 0 as t → ∞. It can also be shown that if y(t) is a solution of (4.3) with y(0) ∉ S̃, then y(t) does not tend to 0 as t → ∞; see [6, p. 332]. This implies, as φt satisfies the group property φ_{s+t}(x0) = φs(φt(x0)), that if y(0) ∈ S̃, then y(t) ∈ S̃ for all t ≥ 0. And it can be shown as in [6, Th 4.2, p. 333] that

∂ψj/∂yi (0) = 0

for i = 1, ..., k and j = k + 1, ..., n, i.e., that the differentiable manifold S̃ is tangent to the stable subspace E^s = {y ∈ R^n : y_{k+1} = ··· = yn = 0} of the linear system y' = By at 0.

The existence of the unstable manifold Ũ of (4.3) is established in the same way, but considering a reversal of time, t → −t, i.e., considering the system

y' = −By − G(y).

The stable manifold for this system is the unstable manifold U˜ of (4.3). In order to determine the (n − k)-dimensional manifold U˜ using the above process, the vector y has to be replaced by the vector (yk+1, . . . , yn, y1, . . . , yk).

4.3 The Hartman-Grobman theorem

Adapted from [4, p. 311].

Theorem 4.3.1 (Hartman-Grobman). Suppose that 0 is an equilibrium point of the nonlinear system (4.1). Let ϕt be the flow of (4.1), and ψt be the flow of the linearized system x' = Df(0)x. If 0 is a hyperbolic equilibrium, then there exists an open subset D of R^n containing 0, and a homeomorphism G with domain in D, such that G(ϕt(x)) = ψt(G(x)) whenever x ∈ D and both sides of the equation are defined.

We follow here [18].

Theorem 4.3.2 (Hartman-Grobman). Let E be an open subset of R^n containing the origin, let f ∈ C¹(E), and let φt be the flow of the nonlinear system (4.1). Suppose that f(0) = 0 and that the matrix A = Df(0) has no eigenvalue with zero real part. Then there exists a homeomorphism H of an open set U containing the origin onto an open set V containing the origin such that for each x0 ∈ U, there is an open interval I0 ⊂ R containing 0 such that for all x0 ∈ U and t ∈ I0,

H ∘ φt(x0) = e^{At} H(x0);

i.e., H maps trajectories of (4.1) near the origin onto trajectories of x' = Df(0)x near the origin and preserves the parametrization by time.

Proof. Suppose that f ∈ C¹(E), f(0) = 0 (i.e., 0 is an equilibrium) and let A = Df(0) be the Jacobian matrix of f at 0.

1. As we have assumed that the matrix A has no eigenvalues with zero real part (i.e., 0 is a hyperbolic equilibrium point), we can write A in the form

A = ( P  0
      0  Q ),

where P has only eigenvalues with negative real parts and Q has only eigenvalues with positive real parts.

2. Let φt be the flow of the nonlinear system (4.1). Write the solution as

x(t, x0) = φt(x0) = ( y(t, y0, z0), z(t, y0, z0) ),

where x0 = (y0, z0) ∈ R^n has been decomposed with y0 ∈ E^s, the stable subspace of A, and z0 ∈ E^u, the unstable subspace of A.

3. Let

Ỹ(y0, z0) = y(1, y0, z0) − e^P y0

and

Z̃(y0, z0) = z(1, y0, z0) − e^Q z0.

Then Ỹ(0) = Ỹ(0, 0) = y(1, 0, 0) = 0, and similarly Z̃(0) = 0. Also, DỸ(0) = DZ̃(0) = 0. Since f ∈ C¹(E), Ỹ(y0, z0) and Z̃(y0, z0) are continuously differentiable. Thus

‖DỸ(y0, z0)‖ ≤ a and ‖DZ̃(y0, z0)‖ ≤ a

on the compact set |y0|² + |z0|² ≤ s0². By choosing s0 sufficiently small, we can make a as small as we like. We let Y(y0, z0) and Z(y0, z0) be smooth functions, defined by

Y(y0, z0) = Ỹ(y0, z0) for |y0|² + |z0|² ≤ s0²/2, Y(y0, z0) = 0 for |y0|² + |z0|² ≥ s0²,

and

Z(y0, z0) = Z̃(y0, z0) for |y0|² + |z0|² ≤ s0²/2, Z(y0, z0) = 0 for |y0|² + |z0|² ≥ s0².

By the mean value theorem,

|Y(y0, z0)| ≤ a√(|y0|² + |z0|²) ≤ a(|y0| + |z0|)

and

|Z(y0, z0)| ≤ a√(|y0|² + |z0|²) ≤ a(|y0| + |z0|)

for all (y0, z0) ∈ R^n. Let B = e^P and C = e^Q. Assuming that P and Q have been normalized in a proper way, we have

b = ‖B‖ < 1 and c = ‖C^{-1}‖ < 1.

4. For x = (y, z) ∈ R^n, define the transformations

L(y, z) = (By, Cz)

and

T(y, z) = (By + Y(y, z), Cz + Z(y, z)),

i.e., L(x) = e^A x and, locally, T(x) = φ1(x). Then the following lemma holds, which we prove later.

Lemma 4.3.3. There exists a homeomorphism H of an open set U containing the origin onto an open set V containing the origin such that

H ◦ T = L ◦ H

5. We let H0 be the homeomorphism defined above and let L^t and T^t be the one-parameter families of transformations defined by

L^t(x0) = e^{At} x0 and T^t(x0) = φt(x0).

Define

H = ∫_0^1 L^{−s} H0 T^s ds.

It follows from the above lemma that there exists a neighborhood of the origin for which

L^t H = ∫_0^1 L^{t−s} H0 T^{s−t} ds T^t
      = ∫_{−t}^{1−t} L^{−s} H0 T^s ds T^t
      = ( ∫_{−t}^0 L^{−s} H0 T^s ds + ∫_0^{1−t} L^{−s} H0 T^s ds ) T^t
      = ∫_0^1 L^{−s} H0 T^s ds T^t
      = H T^t,

since, by the above lemma, H0 = L^{−1} H0 T, which implies that

∫_{−t}^0 L^{−s} H0 T^s ds = ∫_{−t}^0 L^{−s−1} H0 T^{s+1} ds = ∫_{1−t}^1 L^{−s} H0 T^s ds.

Thus H ∘ T^t = L^t H, or equivalently

H ∘ φt(x0) = e^{At} H(x0),

and it can be shown that H is a homeomorphism on R^n. The outline of the proof is complete. We now prove Lemma 4.3.3.

Proof. We use the method of successive approximations. For x ∈ R^n, let

H(x) = ( Φ(y, z), Ψ(y, z) ).

Then H ◦ T = L ◦ H is equivalent to the pair of equations

BΦ(y, z) = Φ(By + Y(y, z), Cz + Z(y, z)) (4.10a)

CΨ(y, z) = Ψ(By + Y(y, z), Cz + Z(y, z)) (4.10b)

Successive approximations for (4.10b) are defined by

Ψ0(y, z) = z,
Ψk+1(y, z) = C^{-1} Ψk(By + Y(y, z), Cz + Z(y, z)). (4.11)

It can be shown by induction that for k = 0, 1,... the functions Ψk are continuous and such that Ψk(y, z) = z for |y| + |z| ≥ 2s0. Fund. Theory ODE Lecture Notes – J. Arino 4.3. The Hartman-Grobman theorem 73

Let us now prove that {Ψk} is a Cauchy sequence. For this, we show by induction that for all j ≥ 1,

|Ψj(y, z) − Ψj−1(y, z)| ≤ Mr^j (|y| + |z|)^δ, (4.12)

where r = c[2 max(a, b, c)]^δ, with δ ∈ (0, 1) chosen sufficiently small that r < 1 (which is possible since c < 1), and M = ac(2s0)^{1−δ}/r. Inequality (4.12) is satisfied for j = 1 since

|Ψ1(y, z) − Ψ0(y, z)| = |C^{-1}Ψ0(By + Y(y, z), Cz + Z(y, z)) − z|
= |C^{-1}(Cz + Z(y, z)) − z|
= |C^{-1}Z(y, z)|
≤ ‖C^{-1}‖ |Z(y, z)|
≤ ca(|y| + |z|)
≤ Mr(|y| + |z|)^δ,

since Z(y, z) = 0 for |y| + |z| ≥ 2s0. Now assuming that (4.12) holds for j = k gives

|Ψk+1(y, z) − Ψk(y, z)| = |C^{-1}Ψk(By + Y(y, z), Cz + Z(y, z)) − C^{-1}Ψk−1(By + Y(y, z), Cz + Z(y, z))|
= |C^{-1}(Ψk − Ψk−1)(By + Y(y, z), Cz + Z(y, z))|
≤ ‖C^{-1}‖ |(Ψk − Ψk−1)(By + Y(y, z), Cz + Z(y, z))|,

which, using the induction hypothesis (4.12) and c = ‖C^{-1}‖, gives

|Ψk+1(y, z) − Ψk(y, z)| ≤ cMr^k ( |By + Y(y, z)| + |Cz + Z(y, z)| )^δ
≤ cMr^k ( b|y| + 2a(|y| + |z|) + c|z| )^δ
≤ cMr^k [2 max(a, b, c)]^δ (|y| + |z|)^δ
= Mr^{k+1} (|y| + |z|)^δ.

Using the same type of argument as in the proof of the stable manifold theorem, Ψk is thus a Cauchy sequence of continuous functions that converges uniformly as k → ∞ to a continuous function Ψ(y, z). Also, Ψ(y, z) = z for |y| + |z| ≥ 2s0. Taking limits in (4.11) and left-multiplying by C shows that Ψ(y, z) is a solution of (4.10b). Now for (4.10a). This equation can be written

B^{-1}Φ(y, z) = Φ(B^{-1}y + Y1(y, z), C^{-1}z + Z1(y, z)), (4.13)

where Y1 and Z1 occur in the inverse of T, which exists provided that a is small enough (i.e., s0 is sufficiently small):

T^{-1}(y, z) = ( B^{-1}y + Y1(y, z), C^{-1}z + Z1(y, z) ).

Successive approximations with Φ0(y, z) = y can then be used as above (since b = ‖B‖ < 1) to solve (4.13).

Therefore, we obtain the continuous map

H(y, z) = ( Φ(y, z), Ψ(y, z) ),

and it follows as in [11, p. 248-249] that H is a homeomorphism of R^n onto R^n.
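The successive approximation scheme (4.11) is easy to run numerically in the scalar-block case. The sketch below uses hypothetical choices B = 1/2, C = 2 and small nonlinearities Y, Z (the smooth cutoff step is omitted, since we only evaluate near the origin), computes Ψk by direct recursion, and checks the conjugacy equation (4.10b) at a sample point.

```python
B, C = 0.5, 2.0                  # b = |B| < 1 and c = |C^{-1}| < 1
eps = 0.05                       # small nonlinearity strength (arbitrary)

def Y(y, z):
    return eps * y * y           # vanishes at 0 with zero derivative

def Z(y, z):
    return eps * y * z           # vanishes at 0 with zero derivative

def T(y, z):
    """T(y, z) = (B y + Y(y, z), C z + Z(y, z))."""
    return B * y + Y(y, z), C * z + Z(y, z)

def Psi(k, y, z):
    """k-th successive approximation of (4.11):
    Psi_0 = z, Psi_{k+1}(y, z) = C^{-1} Psi_k(T(y, z))."""
    if k == 0:
        return z
    ty, tz = T(y, z)
    return Psi(k - 1, ty, tz) / C

# Check C * Psi(y, z) ~= Psi(T(y, z)), i.e. equation (4.10b), at a sample point.
y0, z0 = 0.2, 0.1
lhs = C * Psi(25, y0, z0)
ty, tz = T(y0, z0)
rhs = Psi(25, ty, tz)
assert abs(lhs - rhs) < 1e-9
```

The residual is of the size of the difference between the 25th and 26th iterates, which (4.12) predicts to be geometrically small.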

4.4 Example of application

4.4.1 A chemostat model

To illustrate the use of the theorems in this chapter, we take an example of a nonlinear system: a system of two nonlinear differential equations modeling a biological device called a chemostat. Without going into details, the system is the following:

dS/dt = D(S0 − S) − μ(S)x (4.14a)

dx/dt = (μ(S) − D)x (4.14b)

The parameters S0 and D, respectively the input concentration and the dilution rate, are real and positive. The function μ is the growth function. It is generally assumed to satisfy μ(0) = 0, μ' > 0 and μ'' < 0. To be complete, one should verify that the positive quadrant is positively invariant under the flow of (4.14), i.e., that for S(0) ≥ 0 and x(0) ≥ 0, solutions remain nonnegative for all positive times, and similar properties. But since we are here only interested in applications of the stable manifold theorem, we proceed to a very crude analysis and will not deal with this point. Note that in vector form, the system is written

ξ' = f(ξ)

with ξ = (S, x)^T and

f(ξ) = ( D(S0 − S) − μ(S)x
         (μ(S) − D)x ).

Equilibria of the system are found by solving f(ξ) = 0. We find two: the first one is situated on one of the boundaries of the positive quadrant,

ξ∗T = (S∗T, x∗T) = (S0, 0),

the second one in the interior,

ξ∗I = (S∗, x∗) = (λ, S0 − λ),

where λ is such that μ(λ) = D. Note that this implies that if λ ≥ S0, then ξ∗T is the only equilibrium of the system.

At an arbitrary point ξ = (S, x), the Jacobian matrix is given by

Df(ξ) = ( −D − μ'(S)x   −μ(S)
          μ'(S)x        μ(S) − D ).

Thus, at the trivial equilibrium ξ∗T,

Df(ξ∗T) = ( −D   −μ(S0)
            0    μ(S0) − D ).

We have two eigenvalues, −D and μ(S0) − D. Let us suppose that μ(S0) − D < 0. Note that this implies that ξ∗T is the only equilibrium, since, as we have seen before, ξ∗I is not feasible if λ > S0. As the system has dimension 2 and the matrix Df(ξ∗T) has two negative eigenvalues, the stable manifold theorem (Theorem 4.2.1) states that there exists a 2-dimensional differentiable manifold M such that

• φt(M) ⊂ M

• for all ξ0 ∈ M, limt→∞ φt(ξ0) = ξ∗T;

• at ξ∗T, M is tangent to the stable subspace E^s of the linearized system ξ' = Df(ξ∗T)(ξ − ξ∗T).

Since there are no eigenvalues with positive real part, there does not exist an unstable manifold in this case. Let us now characterize the nature of the stable subspace E^s. It is obtained by studying the linear system

ξ' = Df(ξ∗T)(ξ − ξ∗T) = ( −D(S − S0) − μ(S0)x
                          (μ(S0) − D)x ). (4.15)

Of course, the Jacobian matrix associated to this system is the same as that of the nonlinear system (at ξ∗T). Associated to the eigenvalue −D is the eigenvector v1 = (1, 0)^T; to μ(S0) − D is associated v2 = (−1, 1)^T. The stable subspace is thus given by Span(v1, v2), i.e., the whole of R². In fact, the stable manifold of ξ∗T is the whole positive quadrant, since all solutions limit to this equilibrium. But let us pretend that we do not have this information, and let us try to find an approximation of the stable manifold.
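A simulation supports this global picture. The sketch below uses a hypothetical Monod growth function μ(S) = mS/(k + S) (a standard choice, not specified in the text) with parameters for which μ(S0) < D, and checks that a solution started inside the positive quadrant converges to the washout equilibrium ξ∗T = (S0, 0).

```python
# Hypothetical parameters; mu is a Monod uptake function.
m, k = 1.0, 1.0
S0, D = 1.0, 0.6                   # mu(S0) = 0.5 < D = 0.6: washout regime

def mu(S):
    return m * S / (k + S)

def f(state):
    S, x = state
    return (D * (S0 - S) - mu(S) * x,   # dS/dt, equation (4.14a)
            (mu(S) - D) * x)            # dx/dt, equation (4.14b)

# Fixed-step RK4 integration of (4.14) up to t = 200.
S, x = 0.5, 0.5
h = 0.01
for _ in range(20000):
    k1 = f((S, x))
    k2 = f((S + h/2*k1[0], x + h/2*k1[1]))
    k3 = f((S + h/2*k2[0], x + h/2*k2[1]))
    k4 = f((S + h*k3[0], x + h*k3[1]))
    S += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    x += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

# The solution approaches the washout equilibrium (S0, 0).
assert abs(S - S0) < 1e-4 and x < 1e-6
```

Since μ(S) − D ≤ μ(S0) − D < 0 along the solution, x decays exponentially, after which S relaxes to S0 at rate D; the tolerances reflect that.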

4.4.2 A second example

This example is adapted from [18, p. 111]. Consider the nonlinear system

x' = −x − y² (4.16)
y' = x² + y

From the nullcline equations, it is clear that the origin (x, y) = (0, 0) is an equilibrium point. At (0, 0), the Jacobian matrix of (4.16) is given by

J = ( −1  0
      0   1 ).

The linearized system at 0 is

x' = −x (4.17)
y' = y

The eigenvalues are −1 and 1, with associated eigenvectors (1, 0)^T and (0, 1)^T, respectively. Therefore, the stable manifold theorem (Theorem 4.2.1) implies that there exists a 1-dimensional stable (differentiable) manifold S such that φt(S) ⊂ S and limt→∞ φt(x0) = 0 for all x0 ∈ S, and a 1-dimensional unstable (differentiable) manifold U such that φt(U) ⊂ U and limt→−∞ φt(x0) = 0 for all x0 ∈ U. Furthermore, at 0, S is tangent to the stable subspace E^s of (4.17), and U is tangent to the unstable subspace E^u of (4.17).

The stable subspace E^s is given by Span(v1), with v1 = (1, 0)^T, i.e., the x-axis. The unstable subspace E^u is Span(v2), with v2 = (0, 1)^T, i.e., the y-axis. The behavior of this system is illustrated in Figure 4.1.


Figure 4.1: Vector field of system (4.16) in the neighborhood of 0.
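Numerically, one can test the approximation y = −x²/3 of S that is obtained at the end of this subsection: a trajectory started on this curve should decay toward the origin, while one started the same distance on the other side of the x-axis should be repelled along the unstable direction. A rough RK4 sketch:

```python
import math

def f(x, y):
    """Right-hand side of (4.16): x' = -x - y^2, y' = x^2 + y."""
    return -x - y * y, x * x + y

def flow(x, y, t_end, h=0.001):
    """Approximate the flow phi_t of (4.16) with fixed-step RK4."""
    for _ in range(int(t_end / h)):
        k1 = f(x, y)
        k2 = f(x + h/2*k1[0], y + h/2*k1[1])
        k3 = f(x + h/2*k2[0], y + h/2*k2[1])
        k4 = f(x + h*k3[0], y + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x, y

x0 = 0.1
# Start on the approximation y = -x^2/3 of S: drawn toward the origin.
xs, ys = flow(x0, -x0**2 / 3, 3.0)
norm_on = math.hypot(xs, ys)
# Start mirrored across the x-axis: repelled along the unstable direction.
xu, yu = flow(x0, +x0**2 / 3, 3.0)
norm_off = math.hypot(xu, yu)

assert norm_on < 0.02 and norm_off > 5 * norm_on
```

The integration time and thresholds are loose, since the starting point only lies on S up to the O(x⁵) error of the approximation.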

To be more precise about the nature of the stable manifold S, we proceed as follows. First of all, as A is in diagonal form, we have

A = B = ( −1  0
          0   1 )

and C = I. Also,

F(ξ) = G(ξ) = ( −y², x² )^T.

Here, the matrices P and Q are in fact scalars, P = −1 and Q = 1. Thus

U(t) = ( e^{−t}  0        V(t) = ( 0  0
         0       0 ),             0  e^t ).

Finally, a = (a1, 0)^T. So now we can use successive approximations to find an approximate solution to the integral equation (4.4), which here takes the form

u(t, a) = ( e^{−t}a1, 0 )^T + ∫_0^t ( −e^{−(t−s)} u2(s)², 0 )^T ds − ∫_t^∞ ( 0, e^{t−s} u1(s)² )^T ds.

To construct the sequence of successive approximations, we start with u^{(0)}(t, a) = (0, 0)^T, then compute the successive terms using equation (4.5), which takes the form

u^{(j+1)}(t, a) = ( e^{−t}a1, 0 )^T + ∫_0^t ( −e^{−(t−s)} (u2^{(j)}(s))², 0 )^T ds − ∫_t^∞ ( 0, e^{t−s} (u1^{(j)}(s))² )^T ds.

Therefore,

u^{(1)}(t, a) = U(t)a = ( e^{−t}a1, 0 )^T,

since u^{(0)}(t, a) = (0, 0)^T. Then,

u^{(2)}(t, a) = ( e^{−t}a1, 0 )^T − ∫_t^∞ ( 0, e^{t−s}(e^{−s}a1)² )^T ds
             = ( e^{−t}a1, 0 )^T − ( 0, (1/3)a1²e^{−2t} )^T
             = ( e^{−t}a1, −(1/3)a1²e^{−2t} )^T,

and continuing this process, we find

u^{(3)}(t, a) = ( e^{−t} a_1 + (1/27)(e^{−4t} − e^{−t}) a_1⁴ ; −(1/3) a_1² e^{−2t} )

Also, it is possible to show that u^{(4)}(t, a) − u^{(3)}(t, a) = O(a_1⁵). The stable manifold S is 1-dimensional, so here it has the form ψ_2(a_1) = u_2(0, a_1, 0), and is here approximated by

ψ_2(a_1) = −(1/3) a_1² + O(a_1⁵)

as a_1 → 0. Thus S is approximated by

y = −x²/3 + O(x⁵)

as x → 0.

Chapter 5
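This approximation of S can be tested numerically. The sketch below assumes, consistently with the computation above, that system (4.16) reads x′ = −x − y², y′ = y + x²; it integrates the system with a Runge-Kutta scheme from a point on the parabola y = −x²/3, which should converge to 0, and from a point off it, which should be carried away along the unstable direction.

```python
import numpy as np

# Assumed form of system (4.16): x' = -x - y**2, y' = y + x**2,
# whose linearization at 0 is (4.17).
def f(u):
    x, y = u
    return np.array([-x - y**2, y + x**2])

def rk4(u, dt, steps):
    # classical fourth-order Runge-Kutta integrator
    for _ in range(steps):
        k1 = f(u)
        k2 = f(u + dt/2*k1)
        k3 = f(u + dt/2*k2)
        k4 = f(u + dt*k3)
        u = u + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return u

a1 = 0.05
u_end = rk4(np.array([a1, -a1**2/3]), 1e-3, 5000)   # starts (nearly) on S
u_off = rk4(np.array([a1, +a1**2/3]), 1e-3, 5000)   # starts off S
print(np.linalg.norm(u_end), np.linalg.norm(u_off))
```

At t = 5 the trajectory started on the approximate manifold has essentially reached the origin, while the one started off it has been pushed away by the unstable component.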

Exponential dichotomy

Our aim here is to show an analogue of the Hartman-Grobman theorem for linear systems with variable coefficients. Compared to other results we have seen so far, this is a much more recent field; the first results were shown in the 1960s by Lin. We give here only the most elementary results. For more details, see, e.g., [13]. We consider the linear system of differential equations

dx/dt = A(t)x    (5.1)

where the n × n matrix A(t) is continuous on the real axis.

5.1 Exponential dichotomy

Definition 5.1.1 (Exponential dichotomy). Let X(t) be the fundamental matrix solution of (5.1). Suppose that X(t) and X^{−1}(s) can be decomposed into the following forms

X(t) = X_1(t) + X_2(t)
X^{−1}(s) = Z_1(s) + Z_2(s)
X(t)X^{−1}(s) = X_1(t)Z_1(s) + X_2(t)Z_2(s)

and satisfy the condition that there exist positive constants α, β such that

‖X_1(t)Z_1(s)‖ ≤ β e^{−α(t−s)},  t ≥ s    (5.2)
‖X_2(t)Z_2(s)‖ ≤ β e^{α(t−s)},  s ≥ t

where

X_1(t) = (X_{11}(t), 0),  X_2(t) = (0, X_{12}(t)),
Z_1(s) = ( Z_{11}(s) ; 0 ),  Z_2(s) = ( 0 ; Z_{21}(s) ),

or that there is a projection P on the stable manifold such that

‖X(t)P X^{−1}(s)‖ ≤ β e^{−α(t−s)},  t ≥ s    (5.3)
‖X(t)(I − P) X^{−1}(s)‖ ≤ β e^{α(t−s)},  s ≥ t.

Then we say that the system (5.1) admits an exponential dichotomy.

Remark – A matrix P defines a projection if P² = P. ◦

Another definition is given in [1].

Definition 5.1.2. Let Φ(t, s), Φ(t, t) = I, be the principal matrix solution of (5.1). We say that (5.1) has an exponential dichotomy on the interval I if there are projections P(t) : R^n → R^n, t ∈ I, continuous in t, such that if Q(t) = I − P(t), then

i) Φ(t, s)P(s) = P(t)Φ(t, s), for t, s ∈ I.

ii) kΦ(t, s)P (s)k ≤ Ke−α(t−s), for t ≥ s ∈ I.

iii) kΦ(t, s)Q(s)k ≤ Keα(t−s), for s ≥ t ∈ I.

A more general definition is the following (see, e.g., [16]).

Definition 5.1.3 ((µ1, µ2)-dichotomy). If µ1, µ2 are continuous real-valued functions on the real interval I = (ω−, ω+), system (5.1) is said to have a (µ1, µ2)-dichotomy if there exist supplementary projections P_1, P_2 on R^n such that

‖X(t)P_i X^{−1}(s)‖ ≤ K_i exp( ∫_s^t µ_i ),  if (−1)^i (s − t) ≥ 0,  i = 1, 2,

where K_1, K_2 are positive constants. In the case that µ1, µ2 are constants, the system (5.1) is said to have an exponential dichotomy if µ1 < 0 < µ2, and an ordinary dichotomy if µ1 = µ2 = 0.

The following remark is from [7].

Remark – The autonomous system x′ = Ax has an exponential dichotomy on R+ if and only if no eigenvalue of A has zero real part. It has an ordinary dichotomy if and only if all eigenvalues of A with zero real part are semisimple (their algebraic multiplicities are equal to their geometric multiplicities, i.e., to the dimension of their eigenspaces). In each case, X(t) = e^{tA} and we can take P to be the spectral projection defined by

P = (1/(2πi)) ∫_γ (zI − A)^{−1} dz

with γ any rectifiable simple closed curve in the open left half-plane which contains in its interior all eigenvalues of A with negative real part. ◦
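For a diagonalizable matrix, the spectral projection of the remark can be computed from an eigendecomposition rather than from the contour integral; the following numpy sketch builds P this way for a hypothetical hyperbolic example and checks the dichotomy estimate (5.3) numerically.

```python
import numpy as np

# Hypothetical hyperbolic example: eigenvalues -2, -1, 3, none on the
# imaginary axis, so x' = Ax has an exponential dichotomy on R+.
A = np.array([[-2.0, 1.0, 0.0],
              [0.0, -1.0, 1.0],
              [0.0, 0.0, 3.0]])

lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

# spectral projection onto the stable eigenspace (equal to the contour
# integral above when A is diagonalizable)
P = (V @ np.diag((lam.real < 0).astype(float)) @ Vinv).real

def expm(t):
    # e^{tA} via the eigendecomposition, playing the role of X(t)
    return (V @ np.diag(np.exp(lam * t)) @ Vinv).real

t, s = 4.0, 1.0
decay = np.linalg.norm(expm(t) @ P @ np.linalg.inv(expm(s)))
print(np.allclose(P @ P, P), decay)   # P is a projection; decay is small
```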

5.2 Existence of exponential dichotomy

To check that the previous definitions hold can be a very tedious task. Some authors have thus worked on deriving simpler conditions that imply exponential dichotomy.

Theorem 5.2.1. If the matrix A(t) in (5.1) is continuous and bounded on R, and there exists a quadratic form V (t, x) = xT G(t)x, where the matrix G(t) is symmetric, regular, bounded and C1, such that the derivative of V (t, x) with respect to (5.1) is positive definite, then (5.1) admits exponential dichotomy. The converse is true, without the requirement that A(t) be bounded.

A result of [7].

Theorem 5.2.2. Let A(t) be a continuous n × n matrix function defined on an interval I such that

i) A(t) has k eigenvalues with real part ≤ −α < 0 and n − k eigenvalues with real part ≥ β > 0 for all t ∈ I,

ii) kA(t)k ≤ M for all t ∈ I.

For any positive constant ε < min(α, β), there exists a positive constant δ = δ(M, α + β, ε) such that, if kA(t2) − A(t1)k ≤ δ for |t2 − t1| ≤ h where h > 0 is a fixed number not greater than the length of I, then the equation

x0 = A(t)x has a fundamental matrix X(t) satisfying the inequalities

‖X(t) P̃ X^{−1}(s)‖ ≤ K e^{−(α−ε)(t−s)} for t ≥ s

‖X(t) (I − P̃) X^{−1}(s)‖ ≤ L e^{−(β−ε)(s−t)} for s ≥ t, where K, L are positive constants depending only on M, α + β, ε, and

P̃ = ( I_k 0 ; 0 0 ).

The following result, due to Muldowney [16], gives a criterion for the existence of a (µ1, µ2)-dichotomy.

Proposition 5.2.3. Suppose there is a continuous real-valued function ρ on I and constants l_i, 0 ≤ l_i < 1, i = 1, 2, such that, for some m, 0 ≤ m ≤ n,

max { l_1 ℜ(a_jj) + l_1 Σ_{i=1, i≠j}^{m} |a_ij| + Σ_{i=m+1}^{n} |a_ij| : j = 1, …, m } ≤ l_1 ρ,

min { l_2 ℜ(a_jj) − Σ_{i=1}^{m} |a_ij| − l_2 Σ_{i=m+1, i≠j}^{n} |a_ij| : j = m + 1, …, n } ≥ l_2 ρ.

Then the system (5.1) has a (µ1, µ2)-dichotomy, where

µ1 = max { l_1 ℜ(a_jj) + Σ_{i=1, i≠j}^{m} |a_ij| + l_2 Σ_{i=m+1}^{n} |a_ij| : j = 1, …, m },

µ2 = min { ℜ(a_jj) − l_1 Σ_{i=1}^{m} |a_ij| − Σ_{i=m+1, i≠j}^{n} |a_ij| : j = m + 1, …, n }.

The same sort of theorem can be proved with the sums over columns replaced by sums over rows.

Example – Consider

A(t) = ( −1  0  1/2 ; t/2  t  t² ; t/2  −t²  t ),  t > 0.

5.3 First approximate theory

We consider the nonlinear system

dx = A(t)x + f(t, x) (5.4) dt

where f : R × R^n → R^n is continuous, f(t, x) = O(‖x‖²) as ‖x‖ → 0, and ‖f(t, x_1) − f(t, x_2)‖ ≤ L‖x_1 − x_2‖ with L small enough. Let x(t) be a non-trivial solution of (5.1); define

λ̄_u(x(t)) = limsup_{t−s→∞} (1/(t − s)) log( ‖x(t)‖ / ‖x(s)‖ )

and

λ_u(x(t)) = liminf_{t−s→∞} (1/(t − s)) log( ‖x(t)‖ / ‖x(s)‖ ).

The numbers λ̄_u(x(t)) and λ_u(x(t)) are called the uniform upper characteristic exponent and uniform lower characteristic exponent of x(t), respectively.

Remark – If λ̄_u(x) ≤ −α < 0, then lim_{s→−∞} ‖x(s)‖ = ∞. If λ_u(x) ≥ α > 0, then lim_{t→∞} ‖x(t)‖ = ∞. ◦

Then we have the following theorem.

Theorem 5.3.1. If (5.1) admits an exponential dichotomy, then the linear system (5.1) and the nonlinear system (5.4) are topologically equivalent, i.e.,

i) if the solution x(t) of (5.4) remains in a neighborhood of the origin for t ≥ 0, or t ≤ 0, then limt→∞ x(t) = 0, or limt→−∞ x(t) = 0, respectively;

ii) there exist positive constants α_0 and β_0 such that if a solution x(t) of (5.4) is such that lim_{t→∞} x(t) = 0, or lim_{t→−∞} x(t) = 0, then

‖x(t)‖ ≤ β_0 ‖x(s)‖ e^{−α_0(t−s)},  t ≥ s

or

‖x(t)‖ ≤ β_0 ‖x(s)‖ e^{α_0(t−s)},  s ≥ t,

respectively. In this case, λ̄_u(x(t)) ≤ −α_0 < 0, or λ_u(x(t)) ≥ α_0 > 0;

iii) for a k-dimensional solution x(t) of (5.1) with λ̄_u(x(t)) ≤ −α < 0, or an (n − k)-dimensional solution x(t) of (5.1) with λ_u(x(t)) ≥ α > 0, there is a unique k-dimensional or (n − k)-dimensional solution y(t) of (5.4) such that λ̄_u(y(t)) ≤ −α < 0, or λ_u(y(t)) ≥ α > 0, respectively.

A different statement of the same sort of result is given by Palmer [17].

Theorem 5.3.2. Suppose that A(t) is a continuous matrix function such that the linear equation x0 = A(t)x has an exponential dichotomy. Suppose that f(t, x) is a continuous function of R × Rn into Rn such that

‖f(t, x)‖ ≤ µ,  ‖f(t, x_1) − f(t, x_2)‖ ≤ L‖x_1 − x_2‖

for all t, x, x_1, x_2. Then, if 4LK ≤ α, there is a unique function H(t, x) of R × R^n into R^n satisfying

i) H(t, x) − x is bounded in R × Rn, ii) if x(t) is any solution of the differential equation x0 = A(t)x + f(t, x), then H(t, x(t)) is a solution of x0 = A(t)x.

Moreover, H is continuous in R × Rn and

‖H(t, x) − x‖ ≤ 4Kµα^{−1}

for all t, x. For each fixed t, H_t(x) = H(t, x) is a homeomorphism of R^n. The map L(t, x) = H_t^{−1}(x) is continuous in R × R^n, and if y(t) is any solution of x′ = A(t)x, then L(t, y(t)) is a solution of x′ = A(t)x + f(t, x).

5.4 Stability of exponential dichotomy

Theorem 5.4.1. Suppose that the linear system (5.1) admits an exponential dichotomy. Then there exists a constant η > 0 such that the linear system

dx/dt = (A(t) + B(t))x    (5.5)

also admits an exponential dichotomy, whenever B(t) is continuous on R and ‖B(t)‖ ≤ η.

Another version of this theorem is the following.

Theorem 5.4.2. Let B : R+ → M_n(R) be a bounded, continuous matrix function. Suppose that (5.1) has an exponential dichotomy on R+ with constants K and α. If δ = sup ‖B(t)‖ < α/(4K²), then the perturbed equation

x′ = (A(t) + B(t))x

also has an exponential dichotomy on R+, with constants K̃ and α̃ determined by K, α and δ. Moreover, if P̃(t) is the corresponding projection, then ‖P(t) − P̃(t)‖ = O(δ) uniformly in t ∈ R+. Also, |α − α̃| = O(δ).

5.5 Generality of exponential dichotomy

The exposition has been done here in the case of a system of ODEs. But it is important to realize that exponential dichotomies exist in a much more general setting. Bibliography

[1] A. Acosta and P. García. Synchronization of non-identical chaotic systems: an exponential dichotomies approach. J. Phys. A: Math. Gen., 34:9143–9151, 2001.

[2] F. Brauer and J.A. Nohel. The Qualitative Theory of Ordinary Differential Equations. Dover, 1989.

[3] H. Cartan. Cours de calcul différentiel. Hermann, Paris, 1997. Reprint of the 1977 edition.

[4] C. Chicone. Ordinary Differential Equations with Applications. Springer, 1999.

[5] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. McGraw-Hill, 1955.

[6] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. Krieger, 1984.

[7] W.A. Coppel. Dichotomies in Stability Theory, volume 629 of Lecture Notes in Mathematics. Springer-Verlag, 1978.

[8] J. Dieudonné. Foundations of Modern Analysis. Academic Press, 1969.

[9] N.B. Haaser and J.A. Sullivan. Real Analysis. Dover, 1991. Reprint of the 1971 edition.

[10] J.K. Hale. Ordinary Differential Equations. Krieger, 1980.

[11] P. Hartman. Ordinary Differential Equations. John Wiley & Sons, 1964.

[12] P.-F. Hsieh and Y. Sibuya. Basic Theory of Ordinary Differential Equations. Springer, 1999.

[13] Z. Lin and Y-X. Lin. Linear Systems. Exponential Dichotomy and Structure of Sets of Hyperbolic Points. World Scientific, 2000.

[14] H.J. Marquez. Nonlinear Control Systems. Wiley, 2003.

[15] R.K. Miller and A.N. Michel. Ordinary Differential Equations. Academic Press, 1982.


[16] J.S. Muldowney. Dichotomies and asymptotic behaviour for linear differential systems. Transactions of the AMS, 283(2):465–484, 1984.

[17] K.J. Palmer. A generalization of Hartman’s linearization theorem. J. Math. Anal. Appl., 41:753–758, 1973.

[18] L. Perko. Differential Equations and Dynamical Systems. Springer, 2001.

[19] L. Schwartz. Cours d’analyse, volume I. Hermann, Paris, 1967.

[20] K. Yosida. Lectures on Differential and Integral Equations. Dover, 1991. Appendix A

A few useful definitions and results

Here, some results that are important for the course are given with a somewhat random ordering.

A.1 Vector spaces, norms

A.1.1 Norm

Consider a vector space E over a field K. A norm is a map, denoted ‖·‖, from E to R+ that satisfies the following:

1) ∀x ∈ E, ‖x‖ ≥ 0, with ‖x‖ = 0 if and only if x = 0.

2) ∀x ∈ E, ∀a ∈ K, ‖ax‖ = |a| ‖x‖.

3) ∀x, y ∈ E, ‖x + y‖ ≤ ‖x‖ + ‖y‖.

A vector space E equipped with a norm ‖·‖ is a normed vector space.

A.1.2 Matrix norms

A.1.3 Supremum (or operator) norm

The supremum norm is defined by

∀L ∈ L(E),  |||L||| = sup_{x∈E\{0}} ‖L(x)‖/‖x‖ = sup_{‖x‖≤1} ‖L(x)‖.
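For the Euclidean norm on R^n, the operator norm |||A||| of a matrix A is its largest singular value; the following numpy sketch compares it with the defining supremum, estimated by sampling random unit vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))

# induced 2-norm = largest singular value of A
op_norm = np.linalg.norm(A, 2)

# estimate the supremum of ||Ax|| over the unit sphere by sampling
xs = rng.normal(size=(3, 10000))
xs /= np.linalg.norm(xs, axis=0)          # random unit vectors
sampled = np.linalg.norm(A @ xs, axis=0).max()
print(op_norm, sampled)                    # sampled <= op_norm, and close to it
```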

The inequality

‖A(t)(x_1 − x_2)‖ ≤ |||A(t)||| ‖x_1 − x_2‖

results from the nature of the norm ||| |||, and is best understood by indicating the spaces in which the various norms are taken. We have

‖Ax‖_a = ‖x‖_b ‖A(x/‖x‖_b)‖_a ≤ ‖x‖_b |||A|||,

since ‖ x/‖x‖_b ‖_b = 1.

A.2 An inequality involving norms and integrals

Lemma A.2.1. Suppose f : R^n → R^n is a continuous function. Then

‖ ∫_a^b f(x) dx ‖ ≤ ∫_a^b ‖f(x)‖ dx.

Proof. First, note that we have

∫_a^b f(x) dx = ( ∫_a^b f_1(x) dx, …, ∫_a^b f_n(x) dx ).

For a given component function f_i, i = 1, …, n, using the definition of the Riemann integral,

∫_a^b f_i(x) dx = lim_{k→∞} Σ_{j=1}^{k} f_i(x_j^*) Δx_j,

where x_j^* is a sample point in the interval [x_{j−1}, x_j] of width Δx_j. Therefore,

‖ lim_{k→∞} Σ_{j=1}^{k} f_i(x_j^*) Δx_j ‖ = lim_{k→∞} ‖ Σ_{j=1}^{k} f_i(x_j^*) Δx_j ‖,

since the norm is a continuous function. The result then follows from the triangle inequality.

A.3 Types of convergences

Definition A.1 (Pointwise convergence). Let X be any set, and let Y be a topological space. A sequence f1, f2,... of mappings from X to Y is said to be pointwise convergent (or simply convergent) to a mapping f : X → Y , if the sequence fn(x) converges to f(x) for each x in X. This is usually denoted by fn → f. In other words,

(f_n → f) ⇔ (∀x ∈ X, ∀ε > 0, ∃N ≥ 0, ∀n ≥ N, d(f_n(x), f(x)) < ε).

Definition A.2 (Uniform convergence). Let X be any set, and let Y be a topological space. A sequence f1, f2,... of mappings from X to Y is said to be uniformly convergent to a mapping f : X → Y , if given ε > 0, there exists N such that for all n ≥ N and all x ∈ X,

d(f_n(x), f(x)) < ε.

This is usually denoted by f_n →_u f. In other words,

(f_n →_u f) ⇔ (∀ε > 0, ∃N ≥ 0, ∀n ≥ N, ∀x ∈ X, d(f_n(x), f(x)) < ε).

An important theorem follows.

Theorem A.3.1. If the sequence of maps {fn} is uniformly convergent to the map f, then f is continuous.

A.4 Asymptotic Notations

Let n be an integer variable which tends to ∞, and let x be a continuous variable tending to some limit. Also, let φ(n) or φ(x) be a positive function and f(n) or f(x) any function. Then

i) f = O(φ) means that |f| < Aφ for some constant A and all values of n and x;

ii) f = o(φ) means that f/φ → 0;

iii) f ∼ φ means that f/φ → 1.

Note that f = o(φ) implies f = O(φ).

A.5 Types of continuities

Definition A.3 (Uniform continuity). Let (X, dX ) and (Y, dY ) be two metric spaces, E ⊆ X and F ⊆ Y . A function f : E → F is uniformly continuous on the set E ⊂ X if for every ε > 0, there exists δ > 0 such that

dY (f(x), f(y)) < ε whenever x, y ∈ E and dX (x, y) < δ. In other words,

(f : E ⊆ (X, dX ) → F ⊆ (Y, dY ) uniformly continuous on E)

⇔ (∀ε > 0, ∃δ > 0, ∀x, y ∈ E, d_X(x, y) < δ ⇒ d_Y(f(x), f(y)) < ε).

Definition A.4 (Equicontinuous set). A set of functions F = {f} defined on a real interval I is said to be equicontinuous on I if, given any ε > 0, there exists a δ_ε > 0, independent of f ∈ F and of t, t̃ ∈ I, such that

‖f(t) − f(t̃)‖ < ε whenever |t − t̃| < δ_ε.

An interpretation of equicontinuity is that a sequence of functions is equicontinuous if all the functions are continuous and they have equal variation over a given neighbourhood. Equicontinuity of a sequence of functions has important consequences.

Theorem A.5.1. Let {fn} be an equicontinuous sequence of functions. If fn(x) → f(x) for every x ∈ X, then the function f is continuous.

Lemma A.5 (Ascoli). On a bounded interval I, let F = {f} be an infinite, uniformly bounded, equicontinuous set of functions. Then F contains a sequence {fn}, n = 1, 2,..., which is uniformly convergent on I.

Theorem A.6. Let C(X) be the space of continuous functions on the complete metric space n X with values in R . If a sequence {fn} in C(X) is bounded and equicontinuous, then it has a uniformly convergent subsequence.

A.6 Lipschitz function

Definition A.6.1 (Lipschitz function). A map f : U ⊂ R × R^n → R^n is Lipschitz in x if there exists a constant L such that for all (t, x_1) ∈ U and (t, x_2) ∈ U,

kf(t, x1) − f(t, x2)k ≤ Lkx1 − x2k.

In the case of a Lipschitz function as defined above, the constant L is independent of x_1 and x_2; it depends only on U. A weaker notion is that of a locally Lipschitz function.

Definition A.6.2 (Locally Lipschitz function). A map f : U ⊂ R × Rn → Rn is locally Lipschitz in x if, for all (t0, x0) ∈ U, there exists a neighborhood V ⊂ U of (t0, x0) and a real number L, such that for all (t, x1), (t, x2) ∈ V ,

kf(t, x1) − f(t, x2)k ≤ Lkx1 − x2k.

In other words, f is locally Lipschitz if the restriction of f to V is Lipschitz.

Thus, a locally Lipschitz function on U is Lipschitz on U precisely when the same Lipschitz constant L can be used everywhere on U. Another definition of a locally Lipschitz function is as follows.

Definition A.6.3. A function f : U ⊂ Rn+1 → Rn is locally Lipschitz continuous if for every compact set V ⊂ U, the number

L = sup_{(t,x)≠(t,y)∈V} ‖f(t, x) − f(t, y)‖ / ‖x − y‖

is finite, with L depending on V.

Property A.6.4. Let f(t, x) be a function. The following properties hold. Fund. Theory ODE Lecture Notes – J. Arino A.7. Gronwall’s lemma 91

i) f Lipschitz ⇒ f uniformly continuous in x.

ii) f uniformly continuous 6⇒ f Lipschitz.

iii) f(t, x) has continuous partial derivative ∂f/∂x on a bounded closed domain D ⇒ f is locally Lipschitz on D.

Proof. i) Suppose that f is Lipschitz, i.e., there exists L > 0 such that ‖f(t, x_1) − f(t, x_2)‖ ≤ L‖x_1 − x_2‖. Recall that f is uniformly continuous if for every ε > 0, there exists δ > 0 such that for all x_1, x_2, ‖x_1 − x_2‖ < δ implies that ‖f(t, x_1) − f(t, x_2)‖ < ε. So, given ε > 0, choose δ < ε/L. Then ‖x_1 − x_2‖ < δ implies that

‖f(t, x_1) − f(t, x_2)‖ ≤ L‖x_1 − x_2‖ < Lδ < ε.

Thus f is uniformly continuous (see Definition A.3).

ii) This is left as an exercise. Consider for example the function f defined by f(x) = 1/ln x on (0, 1/2], f(0) = 0.

iii) Notice that ∂f/∂x continuous on D implies that ‖∂f/∂x‖ is continuous on (the bounded closed domain) D, and thus ‖∂f/∂x‖ is bounded on D. Let

L = sup_{(t,x)∈D} ‖∂f(t, x)/∂x‖.

If (t, x_1), (t, x_2) ∈ D, by the mean-value theorem there exists η ∈ [x_1, x_2] such that f(t, x_2) − f(t, x_1) = (∂f/∂x)(t, η) (x_2 − x_1). As (t, η) ∈ D, it follows that ‖(∂f/∂x)(t, η)‖ ≤ L, and thus ‖f(t, x_2) − f(t, x_1)‖ ≤ L‖x_2 − x_1‖.
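Point iii) can be illustrated numerically. The sketch below takes f(x) = sin x on the bounded closed domain D = [0, 2π], estimates L = sup |f′| on a grid, and checks the Lipschitz inequality on sample pairs.

```python
import numpy as np

# f(x) = sin x on D = [0, 2*pi]: f'(x) = cos x, so L = sup|cos| = 1
xs = np.linspace(0.0, 2*np.pi, 2001)
L = np.abs(np.cos(xs)).max()

# check |f(x1) - f(x2)| <= L |x1 - x2| on adjacent grid points
x1, x2 = xs[:-1], xs[1:]
ratios = np.abs(np.sin(x1) - np.sin(x2)) / (x2 - x1)
print(L, ratios.max())     # every ratio is at most L
```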

A.7 Gronwall’s lemma

The name of Gronwall is associated to a certain number of inequalities. We give a few of them here. We prove the most simple one (as it is an easy proof to remember), as well as the most general one (Lemma A.9). In [12, p. 3], the lemma is stated as follows.

Lemma A.7 (Gronwall). If

i) g(t) is continuous on t0 ≤ t ≤ t1,

ii) for t0 ≤ t ≤ t1, g(t) satisfies the inequality

0 ≤ g(t) ≤ K + L ∫_{t_0}^{t} g(s) ds.

Then 0 ≤ g(t) ≤ K e^{L(t−t_0)}, for t_0 ≤ t ≤ t_1.

Proof. Let v(t) = ∫_{t_0}^{t} g(s) ds. Then v′(t) = g(t), which implies that the assumption on g can be written 0 ≤ v′(t) ≤ K + Lv(t). The right-hand inequality is a linear differential inequality, with integrating factor exp(−∫_{t_0}^{t} L ds) = e^{−L(t−t_0)}. Also, v(t_0) = 0. Hence

(d/dt) [ e^{−L(t−t_0)} v(t) ] ≤ K e^{−L(t−t_0)}

and therefore,

e^{−L(t−t_0)} v(t) ≤ (K/L) ( 1 − e^{−L(t−t_0)} ).

Thus Lv(t) ≤ K ( e^{L(t−t_0)} − 1 ), and g(t) ≤ K + Lv(t) ≤ K e^{L(t−t_0)}.
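The lemma is easy to test numerically. The sketch below uses g(t) = K e^{Lt/2}, which satisfies the integral inequality (a short computation shows K + L∫g = 2g − K ≥ g), and checks Gronwall's conclusion g(t) ≤ K e^{L(t−t_0)} on a grid.

```python
import numpy as np

K, L, t0 = 2.0, 0.5, 0.0
ts = np.linspace(t0, 4.0, 401)
g = K * np.exp(L * (ts - t0) / 2)      # satisfies the hypothesis of the lemma

# trapezoidal approximation of int_{t0}^{t} g(s) ds
integ = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(ts))))

bound = K * np.exp(L * (ts - t0))      # Gronwall's conclusion
print(np.all(g <= K + L * integ + 1e-9), np.all(g <= bound))
```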

In [4, p. 128-130], Gronwall’s inequality is stated as

Lemma A.8. Suppose that a < b and let g, K and L be nonnegative continuous functions defined on the interval [a, b]. Moreover, suppose that either K is a constant function, or K is differentiable on [a, b] with positive derivative K0. If, for all t ∈ [a, b]

g(t) ≤ K(t) + ∫_a^t L(s) g(s) ds,

then

g(t) ≤ K(t) exp( ∫_a^t L(s) ds ),

for all t ∈ [a, b].

Finally, the most general formulation is in [8, p. 286].

Lemma A.9. If ϕ and ψ are two nonnegative regulated functions on the interval [0, c], then for every nonnegative regulated function w in [0, c] satisfying the inequality

w(t) ≤ ϕ(t) + ∫_0^t ψ(s) w(s) ds,

we have, in [0, c],

w(t) ≤ ϕ(t) + ∫_0^t ϕ(s) ψ(s) exp( ∫_s^t ψ(ξ) dξ ) ds.    (A.1)

Before proving the result, let us recall that a function f from an interval I ⊂ R to a Banach space F is regulated if it admits at each point of I a left limit and a right limit. In particular, every continuous mapping from I ⊂ R to a Banach space is regulated, as are monotone maps from I to R; see, e.g., [8, Section 7.6].

Proof. Let y(t) = ∫_0^t ψ(s) w(s) ds; y is continuous, and since w(t) ≤ ϕ(t) + ∫_0^t ψ(s) w(s) ds, it follows that, except maybe at a denumerable number of points of [0, c], we have

y′(t) − ψ(t) y(t) ≤ ϕ(t) ψ(t)    (A.2)

from [8, Section 8.7]. Let z(t) = y(t) exp(−∫_0^t ψ(s) ds). Then (A.2) is equivalent to

z′(t) ≤ ϕ(t) ψ(t) exp( −∫_0^t ψ(s) ds ).

Using a mean-value type theorem (see, e.g., [8, Th. 8.5.3]) and using the fact that z(0) = 0, we get, for t ∈ [0, c],

z(t) ≤ ∫_0^t ϕ(s) ψ(s) exp( −∫_0^s ψ(ξ) dξ ) ds,

whence, by the definition of z,

y(t) ≤ ∫_0^t ϕ(s) ψ(s) exp( ∫_s^t ψ(ξ) dξ ) ds,

and inequality (A.1) now follows from the relation w(t) ≤ ϕ(t) + y(t).

A.8 Fixed point theorems

Definition A.10 (Contraction mapping). Let (X, d) be a metric space, and let S ⊂ X. A mapping f : S → S is a contraction on S if there exists K < 1 such that, for all x, y ∈ S,

d(f(x), f(y)) ≤ Kd(x, y)

Every contraction is uniformly continuous on X (from Property A.6.4, since a contraction is Lipschitz).

Theorem A.11 (Contraction mapping principle). Consider the complete metric space (X, d). Every contraction mapping f : X → X has one and only one fixed point, i.e., one and only one x ∈ X such that f(x) = x.

Proof. Existence. We use successive approximations. Let x_0 ∈ X. Define x_1 = f(x_0), x_2 = f(x_1), ..., x_n = f(x_{n−1}), ... This defines an infinite sequence of elements of X. As f is a contraction,

d(x2, x1) = d(f(x1), f(x0))

≤ Kd(x1, x0). Similarly,

d(x_3, x_2) = d(f(x_2), f(x_1)) ≤ K d(x_2, x_1) ≤ K² d(x_1, x_0).

Iterating,

d(x_{n+1}, x_n) ≤ K^n d(x_1, x_0).

Therefore,

d(x_{n+p}, x_n) ≤ d(x_{n+p}, x_{n+p−1}) + d(x_{n+p−1}, x_{n+p−2}) + ··· + d(x_{n+1}, x_n)
≤ (K^{p−1} + K^{p−2} + ··· + K + 1) K^n d(x_1, x_0)
≤ (K^n / (1 − K)) d(x_1, x_0).

Therefore d(xn+p, xn) tends to 0 as n → ∞, so {xn} is a Cauchy sequence. Since X is a complete space, it follows that {xn} admits a limit `. As limn→∞ xn = `, it follows from continuity of f that xn+1 = f(xn) tends to f(`). But xn+1 also converges to `, so f(`) = `, that is, ` is a fixed point of f. Uniqueness Suppose `1 and `2 are two fixed points. Then there must hold that

d(ℓ_1, ℓ_2) ≤ K d(ℓ_1, ℓ_2) < d(ℓ_1, ℓ_2)

if d(ℓ_1, ℓ_2) ≠ 0. Therefore d(ℓ_1, ℓ_2) = 0, and ℓ_1 = ℓ_2.

In the case that f : S ⊂ X → S, the theorem takes the form of Theorem A.12. Closedness of S is an implicit requirement, since the set S in the complete metric space X is closed if, and only if, S is complete.

Theorem A.12. Let S be a closed subset of the complete metric space (X, d). Every contraction mapping f : S → S has one and only one x ∈ S such that f(x) = x.

Theorem A.8.1. Consider a mapping f : X → X, where X is a complete metric space. Suppose that f is not necessarily a contraction, but that one of the iterates f^k of f is a contraction. Then f has a unique fixed point.
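The successive approximations used in the existence part are also a practical algorithm. A minimal sketch with the classical example f(x) = cos x, which maps [0, 1] into itself and is a contraction there since |f′(x)| = |sin x| ≤ sin 1 < 1:

```python
import math

# f(x) = cos x is a contraction on the closed set S = [0, 1],
# with contraction constant K = sin(1) < 1.
f = math.cos

x = 1.0
for _ in range(100):        # successive approximations x_{n+1} = f(x_n)
    x = f(x)

# x is (up to roundoff) the unique fixed point of cos
print(x, abs(f(x) - x))
```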

A.9 Jordan normal form

Theorem A.9.1. Every complex n × n matrix A is similar to a block diagonal matrix

J = diag(J_0, J_1, …, J_s)

where J_0 is a diagonal matrix with diagonal λ_1, …, λ_q, and, for i = 1, …, s, J_i is the upper bidiagonal block with λ_{q+i} repeated on the diagonal and 1 on the superdiagonal,

J_i = ( λ_{q+i} 1 ; λ_{q+i} 1 ; ⋱ ⋱ ; λ_{q+i} 1 ; λ_{q+i} ).

The λ_j, j = 1, …, q + s, are the characteristic roots of A, which need not all be distinct. If λ_j is a simple root, then it occurs in J_0; therefore, if all the roots are distinct, A is similar to the diagonal matrix

J = diag(λ_1, …, λ_n).

An algorithm to compute the Jordan normal form of an n × n matrix A [15]:

i) Compute the eigenvalues of A. Let λ1, . . . , λm be the distinct eigenvalues of A with multiplicities n1, . . . , nm, respectively.

ii) Compute n_1 linearly independent generalized eigenvectors of A associated with λ_1 as follows. Compute (A − λ_1 E_n)^i for i = 1, 2, … until the rank of (A − λ_1 E_n)^k is equal to the rank of (A − λ_1 E_n)^{k+1}. Find a generalized eigenvector of rank k, say u. Define u_i = (A − λ_1 E_n)^{k−i} u, for i = 1, …, k. If k = n_1, proceed to step 3. If k < n_1, find another linearly independent generalized eigenvector of rank k. If this is not possible, try k − 1, and so forth, until n_1 linearly independent generalized eigenvectors are determined. Note that if ρ(A − λ_1 E_n) = r, then there are in total (n − r) chains of generalized eigenvectors associated with λ_1.

iii) Repeat step 2 for λ2, . . . , λm.

iv) Let u_1, …, u_k, … be the new basis. Observe that

A u_1 = λ_1 u_1,
A u_2 = u_1 + λ_1 u_2,
⋮
A u_k = u_{k−1} + λ_1 u_k.

Thus, in the new basis, A has the block diagonal representation

J = diag(J_1, J_2, J_3, …),

each block J_i upper bidiagonal with an eigenvalue α_i on its diagonal and 1 on its superdiagonal,

where each chain of generalized eigenvectors generates a Jordan block whose order equals the length of the chain.

v) The similarity transformation which yields J = Q^{−1} A Q is given by Q = [u_1, …, u_k, …].
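The rank computations of step ii) can be carried out numerically. The sketch below (numpy, with a hypothetical 2 × 2 example having a double eigenvalue and a 1-dimensional eigenspace) builds one chain of generalized eigenvectors and recovers the Jordan block.

```python
import numpy as np

# eigenvalue 2 with algebraic multiplicity 2 but a 1-dimensional
# eigenspace, so there is a single chain of length k = 2
A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
lam = 2.0
M = A - lam * np.eye(2)

# ranks of (A - lam I)^i stabilize at i = 2
r1 = np.linalg.matrix_rank(M)          # 1
r2 = np.linalg.matrix_rank(M @ M)      # 0

# generalized eigenvector u of rank 2 and its chain u1 = (A - lam I) u
u = np.array([1.0, 0.0])
u1 = M @ u
Q = np.column_stack([u1, u])           # the new basis
J = np.linalg.inv(Q) @ A @ Q           # the Jordan form of A
print(r1, r2)
print(np.round(J, 10))
```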

A.10 Matrix exponentials

Let A ∈ M_n(K). The exponential of A is defined by

e^A = Σ_{n=0}^{∞} A^n / n!    (A.3)

We have e^A ∈ M_n(K). Also, ‖A^n/n!‖ ≤ |||A|||^n/n!, so that the series Σ A^n/n! is absolutely convergent in M_n(K). Thus e^A is well defined.

Property A.10.1. Let A, B ∈ Mn(K). Then

i) ‖e^A‖ ≤ e^{|||A|||}.

ii) If A and B commute (i.e., AB = BA), then e^{A+B} = e^A e^B.

iii) e^A = I + Σ_{n=1}^{∞} A^n/n!.

iv) e^0 = I.

v) e^A is invertible, with inverse e^{−A}.

vi) e^{At} is differentiable, and (d/dt) e^{At} = A e^{At}.

vii) If P and T are linear transformations on K^n, and S = P T P^{−1}, then e^S = P e^T P^{−1}.

Proof. v) For any matrix A, we have A(−A) = −AA = (−A)A, so A and −A commute. Therefore, e^A e^{−A} = e^{A−A} = e^0 = I. Hence, for any A, e^A is invertible with inverse e^{−A}.

There are several shortcuts to computing the exponential of a matrix A:

1) If A is nilpotent, that is, if there exists q ∈ N such that A^q = 0, then e^A = Σ_{k=0}^{q−1} A^k/k!. Nilpotent matrices have several interesting properties: A is nilpotent if and only if all its eigenvalues are zero, and a nilpotent matrix has zero determinant and zero trace.

2) If A is such that there exists q ∈ N such that Aq = A, then this can sometimes be exploited to simplify the computation of eA.

3) Any matrix A can be written in the form A = D + N, where D is diagonalizable, N is nilpotent, and D and N commute. Therefore, eA = eD+N = eDeN .

4) Other cases require the use of the Jordan normal form (explained below). Fund. Theory ODE Lecture Notes – J. Arino A.11. Matrix logarithms 97

Use of the Jordan form to compute the exponential of a matrix. Suppose that J = P^{−1}AP is the Jordan form of the matrix A. For a block diagonal matrix B = diag(B_1, …, B_s), we have, for k = 0, 1, …,

B^k = diag(B_1^k, …, B_s^k).

Therefore, for t ∈ R,

e^{Jt} = diag( e^{J_0 t}, …, e^{J_s t} ),

with

e^{J_0 t} = diag( e^{λ_1 t}, …, e^{λ_q t} ).

Now, since J_i = λ_{q+i} I_i + N_i, with N_i a nilpotent matrix, and since I_i and N_i commute, there holds e^{J_i t} = e^{λ_{q+i} t} e^{N_i t}.

For k ≥ n_i, N_i^k = 0, so

e^{N_i t} = ( 1  t  ⋯  t^{n_i−1}/(n_i−1)! ; 0  1  ⋯  t^{n_i−2}/(n_i−2)! ; ⋮ ; 0  0  ⋯  1 ),

an upper triangular matrix whose entry at distance j above the diagonal is t^j/j!.

A.11 Matrix logarithms

Theorem A.11.1. Suppose that M is a nonsingular n × n matrix. Then there is an n × n B matrix B (possibly complex), such that e = M. If, additionally, M ∈ Mn(R), then there B 2 is B ∈ Mn(R) such that e = M .

Let u ∈ C. We have, for |u| < 1,

1/(1 + u) = 1 − u + u² − u³ + ⋯

and, for |u| < 1,

ln(1 + u) = u − u²/2 + u³/3 − u⁴/4 + ⋯ = Σ_{k=1}^{∞} (−1)^{k+1} u^k/k.

For any z ∈ C,

exp(z) = 1 + z + z²/2! + z³/3! + ⋯ = Σ_{n=0}^{∞} z^n/n!.

Therefore, for |u| < 1, u ∈ C,

1 + u = exp(ln(1 + u)) = Σ_{n=0}^{∞} (1/n!) [ Σ_{k=1}^{∞} (−1)^{k+1} u^k/k ]^n.

Suppose that

B = (ln λ) I + Σ_{k=1}^{∞} ((−1)^{k+1}/k) (Z^k/λ^k),

where Z is the m × m matrix with 1 on the superdiagonal and 0 elsewhere. Since Z is nilpotent (Z^N = 0 for all N ≥ m), the above sum is finite. Observe that

exp(B) = exp((ln λ) I) exp( Σ_{k=1}^{∞} ((−1)^{k+1}/k) (Z/λ)^k )
= λ exp( ln( I + Z/λ ) )
= λ ( I + Z/λ )
= λI + Z = J.

We say ln J = B.

A.12 Spectral theorems

When studying systems of differential equations, it is very important to be able to compute the eigenvalues of a matrix, for instance in order to study the local asymptotic stability of an equilibrium point. This can be a very difficult problem, which often becomes intractable even for systems of low dimension. However, even when it is not possible to compute the eigenvalues explicitly, it is often possible to use spectrum localization theorems. We give here two of the most famous ones: the Routh-Hurwitz criterion and the Gershgorin theorem. Let A be an n × n matrix, and denote its elements by (a_ij). The set of all eigenvalues of A is called its spectrum, and is denoted Sp(A).

Theorem A.12.1 (Routh-Hurwitz). If n = 2, suppose that det A > 0 and trA < 0. Then A has only eigenvalues with negative real part.

Theorem A.12.2 (Gershgorin). Let

R_j = Σ_{i=1, i≠j}^{n} |a_ij|.

Let λ ∈ Sp(A). Then

λ ∈ ∪_j { z ∈ C : |z − a_jj| ≤ R_j }.

Gershgorin's theorem is extremely helpful in certain cases. Suppose that A is strictly diagonally dominant, i.e., |a_jj| > Σ_{i=1, i≠j}^{n} |a_ij| for all j. Then A has no eigenvalue with zero real part. Also, the number of eigenvalues with negative real part is equal to the number of negative entries on the diagonal of A, and similarly the number of eigenvalues with positive real part equals the number of positive entries on the diagonal of A.
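A quick numerical illustration of the theorem and of the diagonal dominance remark (numpy assumed; the matrix is an arbitrary strictly diagonally dominant example):

```python
import numpy as np

A = np.array([[-4.0, 1.0, 0.5],
              [0.5, 3.0, 1.0],
              [0.2, 0.3, 5.0]])

# column-sum radii R_j = sum_{i != j} |a_ij|, as in Theorem A.12.2
R = np.abs(A).sum(axis=0) - np.abs(np.diag(A))
centers = np.diag(A)
eigs = np.linalg.eigvals(A)

# every eigenvalue lies in the union of the Gershgorin discs
in_disc = [any(abs(l - c) <= r + 1e-9 for c, r in zip(centers, R)) for l in eigs]

# A is strictly diagonally dominant with one negative and two positive
# diagonal entries: one eigenvalue with negative real part, two positive
print(in_disc, sorted(l.real for l in eigs))
```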

Appendix B

Problem sheets

Contents Homework sheet 1 – 2003 ...... 103 Homework sheet 2 – 2003 ...... 113 Homework sheet 3 – 2003 ...... 117 Homework sheet 4 – 2003 ...... 125 Final examination – 2003 ...... 137 Homework sheet 1 – 2006 ...... 145


McMaster University – Math4G03/6G03 Fall 2003 Homework Sheet 1

Exercise 1.1 – We consider here the equation

x0(t) = −αx(t) + f(x(t)) (B.1) where α ∈ R is constant and f is continuous on R.

i) Show that x is a solution of (B.1) on R+ if, and only if,

x is continuous on R+ and
x(t) = e^{−αt} x(0) + e^{−αt} ∫_0^t e^{αs} f(x(s)) ds,  ∀t ∈ R+.    (B.2)

Suppose now that α > 0 and that f is such that

∃a, k ∈ R, a > 0, 0 < k < α : ∀u ∈ R, |u| ≤ a ⇒ |f(u)| ≤ k|u| (B.3)

Part I. Suppose that there exists a solution x of (B.1), defined on R+ and satisfying the inequality

|x(t)| ≤ a,  t ∈ R+.    (B.4)

i) Prove the inequality

|x(t)| ≤ |x(0)| e^{−(α−k)t},  t ∈ R+.

[Hint: Use Gronwall's lemma with the function g(t) = e^{αt}|x(t)|.]

ii) Deduce that x admits the limit 0 as t → ∞.

Part II.

i) Show that any solution of (B.1) on R+ that satisfies |x(0)| < a, satisfies (B.4). ii) Deduce from the preceding questions the two following properties.

a) Any solution x of (B.1) on R+ satisfying the condition |x(0)| < a, admits the limit 0 when t → ∞.

b) The function x ≡ 0 is the unique solution of (B.1) on R+ such that x(0) = 0. Part III. Application. Show that, for α > 1, all solutions of the equation

x′ = −αx + ln(1 + x²)

tend to zero when t → ∞. ◦

Exercise 1.2 – Let f : [0, +∞) → R, f ∈ C1, and a ∈ R. We consider the initial value problems x0(t) + ax(t) = f(t), t ≥ 0 (B.5) x(0) = 0 and x0(t) + ax(t) = f 0(t), t ≥ 0 (B.6) x(0) = 0 As these equations are linear, the initial value problems (B.5) and (B.6) admit unique solu- tions. We denote φ the solution to (B.5) and ψ the solution to (B.6). Find a necessary and sufficient condition on f such that φ0 = ψ. [Hint: Use a variation of constants formula]. ◦

Exercise 1.3 – Let f : Rn → Rn be continuous. Consider the differential equation x0(t) = f(x(t)) (B.7)

i) Let x be a solution of (B.7) defined on a bounded interval [0, α), with α > 0. Suppose that t ↦ f(x(t)) is bounded on [0, α).

a) Consider the sequence

z_{α,n} = x(α − 1/n),  n ∈ N*

Show that (z_{α,n})_{n∈N*} is a Cauchy sequence.

b) Deduce that there exists x_α ∈ Rⁿ such that

‖x(t) − x_α‖ ≤ M_α |t − α|

with M_α = sup_{t∈[0,α)} ‖f(x(t))‖.

c) Show that x admits a finite limit when t → α, t < α.

ii) Show that there exists an extension of x that is a solution of (B.7) on the interval [0, α].

◦

McMaster University – Math4G03/6G03 Fall 2003 Homework Sheet 1 – Solutions

Solution – Exercise 1 – 1) Let us first show that if x is a solution of (B.1), then (B.2) holds. If x is a solution of (B.1) on R₊, then x is differentiable on R₊, and so x is continuous on R₊. Furthermore,

x′(s) = −αx(s) + f(x(s)),  for all s ∈ R₊
⇔ e^{αs} x′(s) = −α e^{αs} x(s) + e^{αs} f(x(s))
⇒ ∫₀ᵗ e^{αs} x′(s) ds = −α ∫₀ᵗ e^{αs} x(s) ds + ∫₀ᵗ e^{αs} f(x(s)) ds

Integrating the left-hand side by parts,

[e^{αs} x(s)]₀ᵗ − α ∫₀ᵗ e^{αs} x(s) ds = −α ∫₀ᵗ e^{αs} x(s) ds + ∫₀ᵗ e^{αs} f(x(s)) ds
⇒ e^{αt} x(t) − x(0) = ∫₀ᵗ e^{αs} f(x(s)) ds,  for all t ∈ R₊
⇒ x(t) = e^{−αt} x(0) + e^{−αt} ∫₀ᵗ e^{αs} f(x(s)) ds,  for all t ∈ R₊

Let us now prove the converse, i.e., that if a function x satisfies (B.2), then it is a solution to the IVP (B.1). Since x and f are continuous on R₊, t ↦ e^{αt} f(x(t)) is continuous on R₊. This implies that

t ↦ ∫₀ᵗ e^{αs} f(x(s)) ds

is differentiable on R₊, and, differentiating the expression for x(t) as given in (B.2),

x′(t) = −α e^{−αt} x(0) − α e^{−αt} ∫₀ᵗ e^{αs} f(x(s)) ds + e^{−αt} e^{αt} f(x(t))
     = −α [e^{−αt} x(0) + e^{−αt} ∫₀ᵗ e^{αs} f(x(s)) ds] + f(x(t))
     = −αx(t) + f(x(t))

which implies that x is a solution to (B.1).

Part I. We now assume that (B.3) is also satisfied, and that there exists a solution x on R₊ satisfying (B.4). 1) If x is a solution of (B.1), then

x(t) = e^{−αt} x(0) + e^{−αt} ∫₀ᵗ e^{αs} f(x(s)) ds

This implies that

|x(t)| ≤ e^{−αt} |x(0)| + e^{−αt} ∫₀ᵗ e^{αs} |f(x(s))| ds

Since |x(s)| ≤ a by (B.4), condition (B.3) gives |f(x(s))| ≤ k|x(s)|; multiplying both sides by e^{αt},

e^{αt} |x(t)| ≤ |x(0)| + ∫₀ᵗ k e^{αs} |x(s)| ds

We use Gronwall's Lemma (Lemma A.2) with g(t) = e^{αt}|x(t)|, K(t) = |x(0)| and L(t) = k. Thus

e^{αt} |x(t)| ≤ |x(0)| exp(∫₀ᵗ k ds) = |x(0)| e^{kt}

and so finally, for all t ∈ R₊, we have

|x(t)| ≤ |x(0)| e^{t(k−α)} = |x(0)| e^{−(α−k)t}

2) From (B.3), 0 < k < α, hence α − k > 0, which implies, together with the result of the previous question, that lim_{t→∞} x(t) = 0.

Part II. 1) Let us suppose that x is a solution of (B.1) such that |x(0)| < a. Let A = {t ∈ R₊ : |x(t)| ≤ a}. Let us show that A = [0, +∞). First of all, notice that |x(0)| < a and x continuous on R₊ imply that there exists t₀ ∈ R₊ − {0} such that, for all t ∈ [0, t₀], |x(t)| ≤ a. Indeed, suppose this were not the case. Then, for all ε > 0, there exists t_ε ∈ [0, ε] such that |x(t_ε)| > a. This means that for all n ∈ N − {0}, there exists u_n ∈ [0, 1/n] such that |x(u_n)| > a. As u_n → 0 when n → ∞ and x is continuous, |x(0)| ≥ a (strict inequalities become loose in the limit). This is a contradiction with |x(0)| < a. Thus [0, t₀] ⊂ A.

Let us now show that for all t₁ ∈ A, [0, t₁] ⊂ A, i.e., A is an interval. First, if t₁ ≤ t₀, then [0, t₁] ⊂ [0, t₀] ⊂ A. Secondly, in the case t₁ > t₀, suppose that [0, t₁] ⊄ A. Then there exists η ∈ [0, t₁] such that η ∉ A; more precisely, there exists η ∈ (t₀, t₁) with η ∉ A, since [0, t₀] ⊂ A and t₁ ∈ A. Let β be the smallest such η, that is,

β = inf{t ∈ (t₀, t₁); |x(t)| > a}

On [0, β), |x| ≤ a, so by continuity |x(β)| ≤ a; and since β is the infimum of points where |x| > a, continuity also gives |x(β)| ≥ a. Hence |x(β)| = a. But the estimate of Part I, 1), which only uses the bound |x(s)| ≤ a for s ∈ [0, β], gives |x(β)| ≤ |x(0)| e^{−(α−k)β} ≤ |x(0)| < a, a contradiction.

Hence A is an interval. We now want to show that it is unbounded. Assume it is bounded, and let c = sup(A) < ∞. Since x is continuous and A = {t ≥ 0; |x(t)| ≤ a}, the set A is closed, so c ∈ A and A = [0, c]. Therefore, for all t ∈ [0, c], |x(t)| ≤ a, so, by the estimate of Part I, 1) (which only uses |x| ≤ a up to time t),

|x(t)| ≤ |x(0)| e^{−(α−k)t} ≤ |x(0)| < a

and in particular |x(c)| < a. Since x is continuous on R₊, there then exists t > c such that |x(t)| ≤ a, and thus there exists t > c such that t ∈ A, which is a contradiction. Therefore, A = [0, ∞), and we can conclude that ∀t ∈ R₊, |x(t)| ≤ a.

Another proof, contributed by Guihong Fan, proceeds by contradiction, using the fact that (B.1) is an autonomous scalar equation. Notice that equation (B.1) can be written in the form x′ = g(x), with g(u) = −αu + f(u). Since this mapping is continuous, we can apply Theorem 1.1.8 on the monotonicity of the solutions to an autonomous scalar differential equation. Assume that x(t) is a solution of (B.1) on R₊ that satisfies |x(0)| < a, but that (B.4) is violated. Then, since the solution x(t) is monotone, there exists t₀ ∈ R₊ such that one of the following holds.

i) x(t₀) = a and x′(t₀) > 0,

ii) x(t₀) = −a and x′(t₀) < 0.

Let us treat case i). From (B.3), it follows that |f(x(t₀))| = |f(a)| ≤ ka. Therefore, using (B.1),

x′(t₀) = −αx(t₀) + f(x(t₀)) = −αa + f(a) ≤ −αa + ka = −(α − k)a < 0

since α > k. This is a contradiction with x′(t₀) > 0. Case ii) is treated similarly, and thus it follows that (B.4) holds for all t ∈ R₊.

2 – a) If |x(0)| < a, then we have just proved that for all t ∈ R₊, |x(t)| ≤ a. From Part I, 1), this implies that |x(t)| ≤ |x(0)| e^{−(α−k)t}. Therefore, since α > k, lim_{t→∞} x(t) = 0.

2 – b) To show that x ≡ 0 is the only solution of (B.1) such that x(0) = 0, we first show that x ≡ 0 is a solution of (B.1). Condition (B.3) applied to 0 states that |0| < a implies |f(0)| ≤ k|0| = 0, so f(0) = 0 and the constant function x ≡ 0 indeed satisfies (B.1).

Uniqueness: let φ be a solution of (B.1) such that φ(0) = 0. This implies that |φ(0)| < a, and as a consequence, it follows from Part I, 1) that for all t ∈ R₊, |φ(t)| ≤ |φ(0)| e^{−(α−k)t} = 0, hence for all t ∈ R₊, φ(t) = 0.

Part III. All solutions of the nonlinear equation x′ = −αx + ln(1 + x²) tend to zero as t → ∞, when α > 1. Indeed, let f(u) = ln(1 + u²). We have

|f′(u)| = 2|u| / (1 + u²) ≤ 1

since (|u| − 1)² = u² + 1 − 2|u| ≥ 0 (and hence 2|u|/(1 + u²) ≤ 1). Then, by the mean value theorem, |f(u) − f(0)| ≤ |u − 0| implies that |f(u)| ≤ |u|, for all u ∈ R. We thus have k = 1 < α; the hypotheses of the exercise are satisfied and all solutions of the equation tend to zero. ◦
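The decay in Part III is easy to observe numerically. The sketch below (explicit Euler, with an assumed step size, horizon and α = 2) integrates x′ = −αx + ln(1 + x²) from several initial conditions; since |ln(1 + u²)| ≤ |u| for all u, the estimate of Part I applies with k = 1 < α and every run should shrink toward 0.

```python
import math

def simulate(x0, alpha=2.0, dt=1e-3, T=20.0):
    """Explicit Euler for x' = -alpha*x + ln(1 + x^2); returns x(T)."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-alpha * x + math.log(1.0 + x * x))
    return x

for x0 in (-3.0, -0.5, 0.5, 3.0):
    print(x0, "->", simulate(x0))
```

The theoretical bound |x(T)| ≤ |x(0)| e^{−(α−1)T} predicts values of order 10⁻⁸ at T = 20, which the discretization reproduces up to its own error.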

Solution – Exercise 2 – Using a variation of constants formula, we obtain

φ(t) = e^{−at} c₁ + ∫₀ᵗ e^{−a(t−s)} f(s) ds,  t ≥ 0

and

ψ(t) = e^{−at} c₂ + ∫₀ᵗ e^{−a(t−s)} f′(s) ds,  t ≥ 0

Let us show this for the solution of (B.5); the solution of (B.6) is obtained using exactly the same method. The solution to a linear differential equation consists of the general solution to the homogeneous equation together with a particular solution to the nonhomogeneous equation. Here, the homogeneous equation is

x′ = −ax

and basic integration yields the general solution x(t) = c₁ e^{−at}. To obtain a particular solution to the nonhomogeneous equation (B.5), we use a variation of constants formula: assume that the constant in the solution x(t) = c e^{−at} is a function of time, hence

x(t) = c(t) e^{−at}

Taking the derivative of this expression, we obtain

x′ = c′ e^{−at} − ac e^{−at}

Substituting both this expression and x = c e^{−at} into (B.5), we get

c′ e^{−at} − ac e^{−at} + ac e^{−at} = f(t),  t ≥ 0

and hence c′ = e^{at} f(t). Integrating both sides of this expression gives

c(t) = ∫₀ᵗ e^{as} f(s) ds

so that a particular solution to (B.5) is given by

x(t) = c(t) e^{−at} = (∫₀ᵗ e^{as} f(s) ds) e^{−at} = ∫₀ᵗ e^{−a(t−s)} f(s) ds

Summing the general solution to the homogeneous equation with this last expression gives the desired result. Using the initial conditions yields

φ(0) = c1 = 0

ψ(0) = c2 = 0

Hence the system under consideration is

φ(t) = ∫₀ᵗ e^{−a(t−s)} f(s) ds,  t ≥ 0
ψ(t) = ∫₀ᵗ e^{−a(t−s)} f′(s) ds,  t ≥ 0

Recall that if g(t) = ∫_{t₀}ᵗ h(s, t) ds, then for all t₀,

g′(t) = h(t, t) + ∫_{t₀}ᵗ (∂h/∂t)(s, t) ds

This implies that

φ′(t) = f(t) − a ∫₀ᵗ e^{−a(t−s)} f(s) ds

It follows that

φ′(t) = ψ(t) ⇔ f(t) − a ∫₀ᵗ e^{−a(t−s)} f(s) ds = ∫₀ᵗ e^{−a(t−s)} f′(s) ds
        ⇔ f(t) − a ∫₀ᵗ e^{−a(t−s)} f(s) ds = [e^{−a(t−s)} f(s)]_{s=0}^{s=t} − a ∫₀ᵗ e^{−a(t−s)} f(s) ds
        ⇔ f(t) = [e^{−a(t−s)} f(s)]_{s=0}^{s=t}
        ⇔ f(t) = f(t) − e^{−at} f(0)
        ⇔ f(0) = 0

Thus φ′ = ψ if, and only if, f(0) = 0. ◦
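The criterion f(0) = 0 can also be checked numerically: when f(0) ≠ 0, the difference φ′(t) − ψ(t) should equal e^{−at} f(0). The sketch below (trapezoid quadrature and a central difference; the coefficient a, the test function f(s) = cos s and the step sizes are assumptions) recovers exactly that residual.

```python
import math

A = 1.0                     # the coefficient a (assumed value)

def f(s):                   # test function with f(0) = 1 != 0
    return math.cos(s)

def fprime(s):
    return -math.sin(s)

def conv(g, t, n=4000):
    """Trapezoid rule for the convolution integral_0^t e^{-A(t-s)} g(s) ds."""
    h = t / n
    total = 0.5 * (math.exp(-A * t) * g(0.0) + g(t))
    for i in range(1, n):
        s = i * h
        total += math.exp(-A * (t - s)) * g(s)
    return h * total

t = 2.0
psi = conv(fprime, t)
dphi = (conv(f, t + 1e-5) - conv(f, t - 1e-5)) / 2e-5   # numerical phi'(t)
print(dphi - psi, "vs", math.exp(-A * t) * f(0.0))       # both close to e^{-at} f(0)
```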

Solution – Exercise 3 – 1 – a) For a vector-valued function, there is no mean value theorem with an equal sign. But the following holds (see, e.g., [3, p. 44], [9, p. 209] or Rudin, p. 113).

Theorem B.1.3. Let f : [a, b] → F be a continuous mapping, with F a Banach space. Suppose that f admits a right derivative f′ᵣ(x) for all x ∈ (a, b), and that ‖f′ᵣ(x)‖ ≤ k, where k ≥ 0 is a constant. Then ‖f(b) − f(a)‖ ≤ k(b − a) and, more generally, for all x₁, x₂ ∈ [a, b],

‖f(x₂) − f(x₁)‖ ≤ k|x₂ − x₁|

Let us denote M_α = sup_{t∈[0,α)} ‖f(x(t))‖, and let n, p > N, where N is sufficiently large that α − 1/(n+p) ∈ [0, α) and α − 1/n ∈ [0, α). Applying Theorem B.1.3 to x, whose derivative x′(t) = f(x(t)) satisfies ‖x′(t)‖ ≤ M_α, we obtain

‖x(α − 1/(n+p)) − x(α − 1/n)‖ ≤ M_α |1/n − 1/(n+p)|

i.e.,

‖z_{α,n+p} − z_{α,n}‖ ≤ M_α |1/n − 1/(n+p)|

So the sequence (z_{α,n})_{n∈N*} is a Cauchy sequence.

1 – b) For all t ∈ [0, α),

‖x(t) − x(α − 1/n)‖ = ‖∫_{α−1/n}ᵗ f(x(s)) ds‖ ≤ |∫_{α−1/n}ᵗ ‖f(x(s))‖ ds|

so that

‖x(t) − z_{α,n}‖ ≤ M_α |t − α + 1/n|   (B.8)

Since (z_{α,n})_{n∈N*} is a Cauchy sequence in the complete space Rⁿ, there exists x_α = lim_{n→∞} z_{α,n}. Thus taking n → ∞ in (B.8) gives

‖x(t) − x_α‖ ≤ M_α |t − α|

1 – c) According to 1 – b), we have

lim_{t→α, t<α} ‖x(t) − x_α‖ = 0

Hence

lim_{t→α, t<α} x(t) = x_α

2) Let

z(t) = x(t) if t ∈ [0, α),  z(α) = x_α

Let us show that z is a solution of (B.7) on [0, α]; for this, z must be continuous on [0, α] and satisfy the integral equation

z(t) = z(t₀) + ∫_{t₀}ᵗ f(z(s)) ds

for all t ∈ [0, α], with an arbitrary t₀ ∈ [0, α]. We know by construction that z is continuous (since lim_{t→α, t<α} x(t) = x_α).

Let t ∈ [0, α). Then

z(t) = x(t) = x(t₀) + ∫_{t₀}ᵗ f(x(s)) ds

because x is a solution. For t = α,

z(α) = x_α = lim_{t→α, t<α} x(t)
     = lim_{t→α, t<α} [x(t₀) + ∫_{t₀}ᵗ f(x(s)) ds]
     = lim_{t→α, t<α} [z(t₀) + ∫_{t₀}ᵗ f(z(s)) ds]

since for t < α, we have z(t) = x(t). Furthermore, t ↦ f(z(t)) is bounded on [0, α), which implies that

∫_{t₀}^α f(z(s)) ds = lim_{t→α} ∫_{t₀}ᵗ f(z(s)) ds

So

z(α) = z(t₀) + ∫_{t₀}^α f(z(s)) ds

So z is a solution to (B.7) on [0, α]. ◦


McMaster University – Math4G03/6G03 Fall 2003 Homework Sheet 2

Exercise 2.1 – Consider the system

(x₁; x₂)′ = (a, b; c, d) (x₁; x₂)

where a, b, c, d ∈ R are constants such that ad − bc = 0. Discuss all possible behaviours of the solutions and sketch the corresponding trajectories. ◦

Exercise 2.2 – Let A be a constant n × n matrix.

i) Show that |e^A| ≤ e^{|A|}.

ii) Show that det e^A = e^{tr A}.

iii) How should α be chosen so that lim_{t→∞} e^{−αt} e^{At} = 0? ◦

Exercise 2.3 – Let X(t) be a fundamental matrix for the system x′ = A(t)x, where A(t) is an n × n matrix with continuous entries on R. What conditions on A(t) and C, where C is a constant matrix, guarantee that CX(t) is a fundamental matrix? ◦

Exercise 2.4 – Consider the system

x′ = A(t)x   (B.9)

where A(t) is a continuous n × n matrix on R, and A(t + ω) = A(t), ω > 0.

i) Show that P(ω), the set of ω-periodic solutions of (B.9), is a vector space.

ii) Let f be a continuous n × 1 matrix function on R with f(t + ω) = f(t). Show that, for the system

x′ = A(t)x + f(t)   (B.10)

the following conditions are equivalent.

a) System (B.10) has a unique ω-periodic solution, b) [X−1(ω) − X−1(0)] is nonsingular, c) dim P(ω) = 0.

◦

McMaster University – Math4G03/6G03 Fall 2003 Homework Sheet 2 – Solutions

Solution – Exercise 1 – The characteristic polynomial of the matrix

A = (a, b; c, d)

is (a − λ)(d − λ) − bc = λ² − (a + d)λ + ad − bc = λ² − (a + d)λ, since ad − bc = 0. Thus A has the eigenvalues 0 and a + d. When a + d ≠ 0, A is diagonalizable, and in coordinates (x̃₁, x̃₂) adapted to a basis of eigenvectors (x̃₁ along ker A), solutions are of the form

x̃₁ = c₁
x̃₂ = c₂ e^{(a+d)t}

with c₁, c₂ ∈ R, and there are three possibilities.

• If a + d < 0, then all points of the x̃₁ axis (the line ker A) are equilibria, and every trajectory converges to one of them along a line parallel to the x̃₂ axis.

• If a + d = 0, then 0 is a double eigenvalue. Either A = 0 and every point is an equilibrium, or A ≠ 0 is nilpotent (A² = 0, by Cayley–Hamilton, since the characteristic polynomial is λ²); in the latter case x(t) = x₀ + tAx₀, so the points of ker A are equilibria and every other solution drifts linearly in the direction of the image of A.

• If a + d > 0, then the x̃₁ axis consists of equilibria and any solution that does not start on it diverges to ±∞ along a line parallel to the x̃₂ axis. ◦

Solution – Exercise 2 – 1) We have e^A = Σ_{k=0}^∞ A^k/k!. Taking the norm,

‖e^A‖ = ‖Σ_{k=0}^∞ A^k/k!‖

whence, by the triangle inequality and the submultiplicativity of the norm,

‖e^A‖ ≤ Σ_{k=0}^∞ ‖A^k‖/k! ≤ Σ_{k=0}^∞ ‖A‖^k/k! = e^{‖A‖}

2) Let J be the Jordan form of A, i.e., there exists P nonsingular such that P⁻¹AP = J, where J has the form diag(B_j)_{j=1,…,p}, with B_j the Jordan block corresponding to the eigenvalue λ_j of multiplicity m_j. Then, since A and J are similar,

det e^A = det e^J = det e^{B₁} ⋯ det e^{Bₚ} = e^{λ₁m₁} ⋯ e^{λₚmₚ} = exp(Σ_{k=1}^p λ_k m_k) = e^{tr A}
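The identity det e^A = e^{tr A} is easy to confirm numerically. The sketch below sums the exponential series directly — `expm_taylor` is an assumed helper, adequate for matrices of moderate norm — on a random 4 × 4 matrix.

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by direct Taylor summation (fine for small norms)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
print(np.linalg.det(expm_taylor(A)), "vs", np.exp(np.trace(A)))
```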

3) We can write

lim_{t→∞} e^{−αt} e^{At} = lim_{t→∞} e^{(A−αI)t}

Let Sp(A) be the spectrum of A, i.e., the set of eigenvalues of A. If λ ∈ Sp(A), then λ − α ∈ Sp(A − αI). We have lim_{t→∞} e^{(A−αI)t} = 0 if, and only if, Re(μ) < 0 for all μ ∈ Sp(A − αI), i.e., Re(λ − α) < 0 for all λ ∈ Sp(A), i.e., Re(λ) < α for all λ ∈ Sp(A). Hence, choosing α greater than the largest real part of the eigenvalues of A ensures that lim_{t→∞} e^{(A−αI)t} = 0. ◦

Solution – Exercise 3 – We have

(CX(t))′ = CX′(t) = CA(t)X(t)

For CX(t) to be a fundamental matrix for x′ = A(t)x requires that (CX(t))′ = A(t)(CX(t)), i.e., CA(t)X(t) = A(t)CX(t); since X(t) is nonsingular, this holds if, and only if, C and A(t) commute. Also, a fundamental matrix must be nonsingular; as X(t) is nonsingular, C must be nonsingular for CX(t) to be nonsingular. So, to conclude, if X(t) is a fundamental matrix for the system x′ = A(t)x, then CX(t) is a fundamental matrix for x′ = A(t)x if C is nonsingular and commutes with A(t) for all t. ◦

Solution – Exercise 4 – 1) Let x₁, x₂ ∈ P(ω) and α₁, α₂ ∈ R. The function α₁x₁ + α₂x₂ is again ω-periodic, and

(α₁x₁ + α₂x₂)′ = α₁x₁′ + α₂x₂′
              = α₁A(t)x₁ + α₂A(t)x₂
              = A(t)(α₁x₁ + α₂x₂)

and therefore, α₁x₁ + α₂x₂ ∈ P(ω), and P(ω) is a vector space.

2) There were of course several approaches to this problem. The simplest one requires the use of Theorem B.2.4, stated and proved later.

c) ⇒ b) Let V be the nonsingular matrix such that X(t + ω) = X(t)V, which we know to exist from Theorem 2.4.2. Then X⁻¹(t + ω) = V⁻¹X⁻¹(t), and X⁻¹(t + ω) − X⁻¹(t) = (V⁻¹ − I)X⁻¹(t). Suppose that dim P(ω) = 0. Then, by Theorem B.2.4, 1 is not an eigenvalue of V. This implies that 1 is not an eigenvalue of V⁻¹ either; in turn, 0 is not an eigenvalue of V⁻¹ − I. This means that V⁻¹ − I is nonsingular, and since X(t) is nonsingular, (V⁻¹ − I)X⁻¹(t) is nonsingular. Thus we conclude that if dim P(ω) = 0, then X⁻¹(t + ω) − X⁻¹(t) is nonsingular.

b) ⇒ c) The previous argument works the other way as well.

c) ⇒ a) Suppose dim P(ω) = 0 and that x₁, x₂ are two ω-periodic solutions of (B.10). Then x₁′ = A(t)x₁ + f(t) and x₂′ = A(t)x₂ + f(t). Therefore,

(x₁ − x₂)′ = A(t)x₁ + f(t) − A(t)x₂ − f(t) = A(t)(x₁ − x₂)

which implies that x₁ − x₂ is an ω-periodic solution of (B.9); if x₁ ≠ x₂, it is nontrivial, so dim P(ω) ≠ 0, a contradiction. Thus the ω-periodic solution of (B.10) is unique.

a) ⇒ c) Let x be the unique ω-periodic solution of (B.10), and assume that dim P(ω) ≠ 0, i.e., there exists y, a nontrivial ω-periodic solution of (B.9). Then

(x + y)′ = A(t)x + f(t) + A(t)y = A(t)(x + y) + f(t)

and so x + y is an ω-periodic solution of (B.10) distinct from x, which is a contradiction since x is the unique ω-periodic solution of (B.10). ◦

Theorem B.2.4. Consider the system

x′ = A(t)x

where A(t) has continuous entries on R and is such that A(t + ω) = A(t) for some ω > 0. Let X(t) be a fundamental matrix and V the nonsingular matrix such that X(t + ω) = X(t)V. If the system admits a nontrivial ω-periodic solution, then 1 is an eigenvalue of V.

Proof. Let x be a nontrivial ω-periodic solution. For some constant vector c ≠ 0, we have x(t) = X(t)c; also, x(t + ω) = X(t + ω)c. As x is periodic of period ω, x(t) = x(t + ω), so that, using the previously obtained forms,

X(t)c = X(t + ω)c
X(t)c = X(t)Vc
c = Vc

Hence c is an eigenvector of V with corresponding eigenvalue (Floquet multiplier) λ = 1.

McMaster University – Math4G03/6G03 Fall 2003 Homework Sheet 3

Exercise 3.1 – Compute the solution of the differential equation

x′(t) = x(t) − y(t) − t²
y′(t) = x(t) + 3y(t) + 2t   (B.11)

◦

Exercise 3.2 – Consider the initial value problem

x′(t) = A(t)x(t),  x(t₀) = x₀   (B.12)

We have seen that the solution of this initial value problem is given by

x(t) = R(t, t₀)x₀

where R(t, t₀) is the resolvent matrix of the system. Suppose that we are in the case where the following condition holds:

∀t, s ∈ I, A(t)A(s) = A(s)A(t)   (B.13)

with I ⊂ R.

i) Show that M(t) = exp(∫_{t₀}ᵗ A(s) ds) is a solution of the matrix initial value problem

M′(t) = A(t)M(t)

M(t₀) = Iₙ

where Iₙ is the n × n identity matrix. [Hint: Use the formal definition of a derivative, i.e., M′(t) = lim_{h→0} (M(t + h) − M(t))/h.]

ii) Deduce that, when (B.13) holds,

R(t, t₀) = exp(∫_{t₀}ᵗ A(s) ds)

iii) Deduce the following theorem.

Theorem B.3.5. Let U, V be constant matrices that commute, and suppose that A(t) = f(t)U + g(t)V for scalar functions f, g. Then

R(t, t₀) = exp(∫_{t₀}ᵗ f(s) ds U) exp(∫_{t₀}ᵗ g(s) ds V)   (B.14)

Exercise 3.3 – Find the resolvent matrix associated to the matrix

A(t) = (a(t), −b(t); b(t), a(t))   (B.15)

where a, b are continuous functions on R. ◦

Exercise 3.4 – Consider the linear system

x′ = (1/t)x + ty
y′ = y   (B.16)

with initial condition x(t₀) = x₀, y(t₀) = y₀.

i) Solve the initial value problem (B.16).

ii) Deduce the formula for the principal solution matrix R(t, t₀).

iii) Show that in this case,

R(t, t₀) ≠ exp(∫_{t₀}ᵗ A(s) ds)

with

A(t) = (1/t, t; 0, 1)

◦

McMaster University – 4G03/6G03 Fall 2003 Solutions – Homework Sheet 3

Solution – Exercise 3.1 – Let ξ(t) = (x(t), y(t))ᵀ,

A = (1, −1; 1, 3)

and B(t) = (−t², 2t)ᵀ. The system (B.11) can then be written as

ξ′ = Aξ + B(t)

This is a nonhomogeneous linear system, so we use the variation of constants formula,

ξ(t) = e^{(t−t₀)A} ξ₀ + ∫_{t₀}ᵗ e^{(t−s)A} B(s) ds

where ξ₀ = ξ(t₀). Let us assume for simplicity that t₀ = 0. The characteristic polynomial of A is (λ − 2)², so 2 is a double eigenvalue, with associated eigenspace

ker(A − 2I) = {(x, y); (−1, −1; 1, 1)(x, y)ᵀ = 0} = {(x, y); x + y = 0}

Thus dim ker(A − 2I) = 1 ≠ 2, and A is not diagonalisable. Let us compute the Jordan canonical form of A. There exists P nonsingular such that

P⁻¹AP = (2, α; 0, 2)

where α is a constant that has to be determined; here

P⁻¹AP = (2, 1; 0, 2),  P = (1, −1; −1, 0),  P⁻¹ = (0, −1; −1, −1)

Then e^{At} = P e^{(P⁻¹AP)t} P⁻¹, since e^{(P⁻¹AP)t} = e^{P⁻¹(At)P} = P⁻¹e^{At}P. We have P⁻¹AP = 2I + N, where

N = (0, 1; 0, 0)

Therefore, e^{(P⁻¹AP)t} = e^{(2It+Nt)} = e^{2t} e^{Nt}. Now, N is nilpotent (N² = 0), so e^{Nt} = Σ_{n=0}^∞ (tⁿ/n!) Nⁿ = I + Nt. As a consequence,

e^{(P⁻¹AP)t} = e^{2t}(I + Nt) = e^{2t}[(1, 0; 0, 1) + (0, t; 0, 0)] = (e^{2t}, t e^{2t}; 0, e^{2t})

Thus

e^{At} = P (e^{2t}, t e^{2t}; 0, e^{2t}) P⁻¹ = ((1 − t)e^{2t}, −t e^{2t}; t e^{2t}, (1 + t)e^{2t}) = e^{2t} (1 − t, −t; t, 1 + t)
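The closed form just obtained can be checked against a direct series evaluation of e^{At}; in the sketch below, `expm_taylor` is an assumed stand-in for a library matrix exponential, and the time t is arbitrary.

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by direct Taylor summation (fine for small norms)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

A = np.array([[1.0, -1.0], [1.0, 3.0]])
t = 0.7
closed = np.exp(2 * t) * np.array([[1 - t, -t], [t, 1 + t]])
print(np.max(np.abs(expm_taylor(A * t) - closed)))
```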

We still have to compute ∫₀ᵗ e^{−As} B(s) ds. We have

e^{−As} B(s) = e^{−2s} (1 + s, s; −s, 1 − s)(−s²; 2s) = e^{−2s} (−s³ − s² + 2s²; s³ + 2s − 2s²) = e^{−2s} (s² − s³; s³ − 2s² + 2s)

Let I₁(t) = ∫₀ᵗ e^{−2s} s ds, I₂(t) = ∫₀ᵗ e^{−2s} s² ds and I₃(t) = ∫₀ᵗ e^{−2s} s³ ds. Then

∫₀ᵗ e^{−As} B(s) ds = (I₂(t) − I₃(t); I₃(t) − 2I₂(t) + 2I₁(t))

Evaluating the integrals, we obtain

∫₀ᵗ e^{−As} B(s) ds = (e^{−2t}(½t³ + ¼t² + ¼t + ⅛) − ⅛; ⅜ − e^{−2t}(½t³ − ¼t² + ¾t + ⅜))

As a conclusion,

ξ(t) = e^{2t} (1 − t, −t; t, 1 + t) [ξ₀ + ∫₀ᵗ e^{−As} B(s) ds]

i.e., expanding,

x(t) = (1 − t)e^{2t} x₀ − t e^{2t} y₀ + ¾t² + ½t + ⅛ − (¼t + ⅛)e^{2t}
y(t) = t e^{2t} x₀ + (1 + t)e^{2t} y₀ − ¼t² − t − ⅜ + (¼t + ⅜)e^{2t}

is the solution of (B.11) going through (x₀, y₀) at time 0. ◦

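The solution of (B.11) can be validated by numerical integration. The sketch below (classical RK4; the initial condition, horizon and step count are assumptions) compares the integrated state with the closed-form expression reproduced inside `closed`.

```python
import math

def rhs(t, x, y):
    # system (B.11): x' = x - y - t^2, y' = x + 3y + 2t
    return x - y - t * t, x + 3 * y + 2 * t

def rk4(x, y, T, n=2000):
    """Classical fourth-order Runge-Kutta from t = 0 to t = T."""
    t, h = 0.0, T / n
    for _ in range(n):
        k1 = rhs(t, x, y)
        k2 = rhs(t + h / 2, x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = rhs(t + h / 2, x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = rhs(t + h, x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return x, y

def closed(t, x0, y0):
    """Closed-form solution of (B.11) with xi(0) = (x0, y0)."""
    e = math.exp(2 * t)
    x = (1 - t) * e * x0 - t * e * y0 + 0.75 * t**2 + 0.5 * t + 0.125 - (0.25 * t + 0.125) * e
    y = t * e * x0 + (1 + t) * e * y0 - 0.25 * t**2 - t - 0.375 + (0.25 * t + 0.375) * e
    return x, y

x0, y0, T = 1.0, -1.0, 1.0
print(rk4(x0, y0, T), "vs", closed(T, x0, y0))
```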
Solution – Exercise 3.2 – 1) We have

(1/h)(M(t + h) − M(t)) = (1/h)[exp(∫_{t₀}^{t+h} A(s) ds) − exp(∫_{t₀}ᵗ A(s) ds)]

Since, for all s₁, s₂ ∈ I, A(s₁)A(s₂) = A(s₂)A(s₁), we have

(∫_{t₀}^{s₁} A(u) du)(∫_{t₀}^{s₂} A(u) du) = (∫_{t₀}^{s₂} A(u) du)(∫_{t₀}^{s₁} A(u) du)

so exponentials of such integrals multiply as in the scalar case. It follows that

(1/h)(M(t + h) − M(t)) = (1/h)[exp(∫_t^{t+h} A(s) ds) exp(∫_{t₀}ᵗ A(s) ds) − exp(∫_{t₀}ᵗ A(s) ds)]
                      = (1/h)[exp(∫_t^{t+h} A(s) ds) − I] M(t)

Since A is continuous, ∫_t^{t+h} A(s) ds = hA(t) + o(h) as h → 0, so (1/h)[exp(∫_t^{t+h} A(s) ds) − I] → A(t), and thus

lim_{h→0} (1/h)(M(t + h) − M(t)) = A(t)M(t)

therefore M′(t) = A(t)M(t); moreover M(t₀) = exp(0) = Iₙ, hence the result.

2) The solution of the matrix initial value problem M′ = A(t)M, M(t₀) = Iₙ is unique, and R(·, t₀) also solves it; hence R(t, t₀) = M(t) = exp(∫_{t₀}ᵗ A(s) ds).

3) If U and V commute and A(t) = Uf(t) + Vg(t), then

A(s)A(t) = (Uf(s) + Vg(s))(Uf(t) + Vg(t))
         = U²f(s)f(t) + UVf(s)g(t) + VUg(s)f(t) + V²g(s)g(t)
         = U²f(t)f(s) + VUg(t)f(s) + UVf(t)g(s) + V²g(t)g(s)
         = (Uf(t) + Vg(t))Uf(s) + (Uf(t) + Vg(t))Vg(s)
         = (Uf(t) + Vg(t))(Uf(s) + Vg(s))

that is, A(s) and A(t) commute for all t, s. Then

R(t, t₀) = exp(∫_{t₀}ᵗ A(s) ds) = exp(∫_{t₀}ᵗ [Uf(s) + Vg(s)] ds)
        = exp(∫_{t₀}ᵗ f(s) ds U + ∫_{t₀}ᵗ g(s) ds V)
        = exp(∫_{t₀}ᵗ f(s) ds U) exp(∫_{t₀}ᵗ g(s) ds V)

◦

Solution – Exercise 3.3 – Writing

A(t) = a(t)(1, 0; 0, 1) + b(t)(0, −1; 1, 0) = a(t)I + b(t)V

it is obvious that Theorem B.3.5 can be used here, since I commutes with all matrices. Thus,

R(t, t₀) = exp(∫_{t₀}ᵗ a(s) ds I) exp(∫_{t₀}ᵗ b(s) ds V)

Let α(t) = ∫_{t₀}ᵗ a(s) ds and β(t) = ∫_{t₀}ᵗ b(s) ds. Then R(t, t₀) = e^{α(t)I} e^{β(t)V} = e^{α(t)} I e^{β(t)V}. Now notice that V² = −I, V³ = −V, V⁴ = I, etc., so that we can write

Vⁿ = (−1)ᵖ I if n = 2p,  Vⁿ = (−1)ᵖ V if n = 2p + 1.

This implies that

e^{β(t)V} = Σ_{n=0}^∞ (1/n!) β(t)ⁿ Vⁿ
         = (Σ_{p=0}^∞ ((−1)ᵖ/(2p)!) β(t)^{2p}) I + (Σ_{p=0}^∞ ((−1)ᵖ/(2p+1)!) β(t)^{2p+1}) V
         = cos(β(t)) I + sin(β(t)) V

Thus

R(t, t₀) = e^{α(t)} (cos(β(t)) I + sin(β(t)) V) = (e^{α(t)} cos(β(t)), −e^{α(t)} sin(β(t)); e^{α(t)} sin(β(t)), e^{α(t)} cos(β(t)))

◦
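For constant coefficients, the rotation-dilation formula can be compared with a direct evaluation of e^{At}. In the sketch below the values of a, b, t are assumptions, and `expm_taylor` is an assumed stand-in for a library matrix exponential.

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by direct Taylor summation (fine for small norms)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

a, b, t = 0.3, 1.2, 2.0                   # constant a(t), b(t); t0 = 0
A = np.array([[a, -b], [b, a]])
alpha, beta = a * t, b * t                # alpha(t) = int a, beta(t) = int b
R = np.exp(alpha) * np.array([[np.cos(beta), -np.sin(beta)],
                              [np.sin(beta),  np.cos(beta)]])
print(np.max(np.abs(expm_taylor(A * t) - R)))
```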

Solution – Exercise 3.4 – 1) Notice that the y′ equation in (B.16) does not involve x. Therefore, we can solve it directly, giving y(t) = Ceᵗ, with C ∈ R. Substituting this into the equation for x′, we have

x′ = (1/t)x(t) + tCeᵗ

To solve this nonhomogeneous first-order scalar equation, we start by solving the homogeneous part, x′ = x/t. This equation is separable, giving the solution x(t) = Kt, for K ∈ R. Now we use a variation of constants approach to find a particular solution to the nonhomogeneous problem. We use the ansatz x(t) = K(t)t, which, when differentiated and substituted into the nonhomogeneous equation, gives K′(t) = Ceᵗ; hence K(t) = Ceᵗ works, giving the particular solution x(t) = Cteᵗ and the general solution x(t) = Kt + Cteᵗ.

Let t₀ ≠ 0 (to avoid problems with 1/t), and suppose x(t₀) = x₀, y(t₀) = y₀. Then x₀ = Kt₀ + Ct₀e^{t₀} and y₀ = Ce^{t₀}. It follows that K = x₀/t₀ − y₀ and C = e^{−t₀}y₀, and the solution to the equation going through the point (x₀, y₀) at time t₀ is given by

ξ(t) = (x(t); y(t)) = ((x₀/t₀ − y₀)t + e^{−t₀}y₀ t eᵗ; e^{−t₀}y₀ eᵗ) = ((x₀/t₀)t + y₀ t(e^{t−t₀} − 1); y₀ e^{t−t₀})

2) The solution to the IVP

ξ′ = A(t)ξ,  ξ(t₀) = ξ₀

is given by ξ(t) = R(t, t₀)ξ₀. Thus, the resolvent matrix (or principal solution matrix) for (B.16) is given by

R(t, t₀) = (t/t₀, t(e^{t−t₀} − 1); 0, e^{t−t₀})

3) First of all, notice that A(t) and A(s) do not commute. Let us compute B(t) = ∫_{t₀}ᵗ A(s) ds:

B(t) = (∫_{t₀}ᵗ (1/s) ds, ∫_{t₀}ᵗ s ds; 0, ∫_{t₀}ᵗ ds) = (ln(t/t₀), ½(t² − t₀²); 0, t − t₀)

which, for convenience, we denote

B(t) = (α, β; 0, γ)

Eigenvalues of B(t) are α and γ. As B(t₀) = 0 and e^{B(t₀)} = I, we are concerned with t ≠ t₀; if α ≠ γ, then B(t) is diagonalizable, i.e., there exists P nonsingular such that

P⁻¹B(t)P = D = (α, 0; 0, γ)

Then

e^{B(t)} = P (e^α, 0; 0, e^γ) P⁻¹

We find

P = (1, β; 0, γ − α),  P⁻¹ = (1/(γ − α)) (γ − α, −β; 0, 1)

Thus, after a few computations,

e^{B(t)} = (e^α, β(e^γ − e^α)/(γ − α); 0, e^γ) = (t/t₀, Δ; 0, e^{t−t₀})

The element Δ in this matrix is the only one different from the corresponding element of R(t, t₀). We have

Δ = [(t² − t₀²)/(2(t − t₀ − ln(t/t₀)))] (e^{t−t₀} − t/t₀) ≠ t(e^{t−t₀} − 1)

◦
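The discrepancy is easy to exhibit numerically: with assumed values t₀ = 1 and t = 2, the resolvent from part 2) and exp(B(t)) agree on the diagonal (both matrices are upper triangular) but differ in the (1, 2) entry, as the computation of Δ predicts. `expm_taylor` is an assumed stand-in for a library matrix exponential.

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by direct Taylor summation (fine for small norms)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

t0, t = 1.0, 2.0
R = np.array([[t / t0, t * (np.exp(t - t0) - 1)],      # resolvent from part 2)
              [0.0, np.exp(t - t0)]])
B = np.array([[np.log(t / t0), (t**2 - t0**2) / 2],    # B(t) = int_{t0}^t A(s) ds
              [0.0, t - t0]])
E = expm_taylor(B)
print(R[0, 1], "vs", E[0, 1])   # the (1,2) entries disagree
```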

McMaster University – Math4G03/6G03 Fall 2003 Homework Sheet 4

Exercise 3.1 – A differential equation of the form

x′ = f(t, x(t), x(t − ω))   (B.17)

for ω > 0, is called a delay differential equation (also a differential difference equation, or an equation with deviating argument), and ω is called the delay. The basic initial value problem for (B.17) takes the form

x′ = f(t, x(t), x(t − ω)),  x(t) = φ₀(t), t₀ − ω ≤ t ≤ t₀   (B.18)

i) Use the method of steps to construct the solution to (B.18) on the interval [t₀, t₀ + ω], that is, find how to construct the solution to the non-delayed problem

x′ = f(t, x(t), φ₀(t − ω)),  x(t₀) = φ₀(t₀), t₀ ≤ t ≤ t₀ + ω   (B.19)

ii) Discuss existence and uniqueness of the solution on the interval [t0, t0 + ω], depending on the nature of φ0 and f.

iii) Suppose that φ₀ ∈ C⁰([t₀ − ω, t₀]). Discuss the regularity of the solution to (B.18) on the interval [t₀ + kω, t₀ + (k + 1)ω], k ∈ N. ◦

Exercise 3.2 – Consider the delay initial value problem

x′(t) = ax(t − ω),  x(t) = C, t ∈ [t₀ − ω, t₀]   (B.20)

with a, C ∈ R, ω ∈ R₊*. Using the ideas of the previous exercise, find the solution to (B.20) on the interval [t₀ + kω, t₀ + (k + 1)ω], k ∈ N. ◦

Exercise 3.3 – Compute Aᵢⁿ and e^{tAᵢ} for the following matrices.

A₁ = (0, 1; 1, 0),  A₂ = (1, 1; 0, 1),  A₃ = (0, 1, −sin θ; −1, 0, cos θ; −sin θ, cos θ, 0)

◦

Exercise 3.4 – Compute e^{tA} for the matrix

A = (1, 1, 0; −1, 0, −1; 0, −1, 1)

◦

Exercise 3.5 – Let A ∈ Mₙ(R) be a matrix (independent of t), ‖·‖ be a norm on Rⁿ and |||·||| the associated operator norm on Mₙ(R).

i) a) Show that for all t ∈ R and all k ∈ N*, there exists C_k(t) ≥ 0 such that

|||e^{(t/k)A} − (I + (t/k)A)||| ≤ C_k(t)/k²

with lim_{k→∞} C_k(t) = (t²/2)|||A²|||.

b) Show that for all t ∈ R and all k ∈ N*,

|||I + (t/k)A||| ≤ e^{(|t|/k)|||A|||}

c) Deduce that

e^{tA} = lim_{k→∞} (I + (t/k)A)ᵏ

ii) Suppose now that A is symmetric and that its eigenvalues are > −α, with α > 0.

a) Show by induction that, for k ≥ 0,

(αI + A)^{−(k+1)} = ∫₀^∞ e^{−t(αI+A)} (tᵏ/k!) dt

b) Deduce that for all u > 0 and all k,

|||(I + uA)^{−k}||| ≤ M if, and only if, |||e^{−tA}||| ≤ M, ∀t > 0

c) Show that

∀t > 0, e^{−tA} ≥ 0 ⇔ ∃λ₀, ∀λ > λ₀, (λI + A)^{−1} ≥ 0

where by convention, for B = (b_{ij}) ∈ Mₙ(R), writing that B ≥ 0 means that b_{ij} ≥ 0 for all i, j = 1, …, n.

iii) Do the results of part ii) hold true if A is a nonsymmetric matrix? ◦
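Part i) c) is the classical exponential limit; a small numerical experiment (the matrix, time and values of k are assumptions) illustrates the O(1/k) convergence that the bound of part i) a) predicts. `expm_taylor` is an assumed stand-in for a library matrix exponential.

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by direct Taylor summation (fine for small norms)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 1.0
target = expm_taylor(A * t)
errs = [np.max(np.abs(np.linalg.matrix_power(np.eye(2) + (t / k) * A, k) - target))
        for k in (10, 100, 1000)]
print(errs)   # errors shrink roughly like 1/k
```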

McMaster University – Math4G03/6G03 Fall 2003 Homework Sheet 4 – Solutions

Solution – Exercise 1 – 1) The proposed method consists in considering (B.18) as a nondelayed IVP on the interval [t₀, t₀ + ω]. Indeed, on this interval, we can consider (B.19). That the latter is a nondelayed problem is obvious if we rewrite the differential equation as

x′(t) = g(t, x(t))   (B.21)

with g(t, x(t)) = f(t, x(t), φ₀(t − ω)), which is well defined on the interval [t₀, t₀ + ω] since for t ∈ [t₀, t₀ + ω], t − ω ∈ [t₀ − ω, t₀], on which the function φ₀ is defined. We can then use the integral form to construct the solution on the interval [t₀, t₀ + ω],

x(t) = x(t₀) + ∫_{t₀}ᵗ g(s, x(s)) ds = φ₀(t₀) + ∫_{t₀}ᵗ f(s, x(s), φ₀(s − ω)) ds

2) Obviously, the discussion to make is on the nature of the function f. As problem (B.19) is an ordinary differential equation initial value problem, existence and uniqueness of solutions on the interval [t₀, t₀ + ω] follow the usual scheme. To discuss the required properties of f and φ₀, it is best to use (B.21). Recall that a vector field has to be continuous both in t and in x for solutions to exist. Thus, to have existence of solutions to equation (B.21), g must be continuous in t and x. This implies that f(t, x, φ₀(t − ω)) must be continuous in t and x; thus φ₀ has to be continuous on [t₀ − ω, t₀]. Now, for uniqueness of solutions to (B.21), we need g to be Lipschitz in x, i.e., we require the same property of f with respect to its second argument. Note that no Lipschitz requirement is placed on φ₀ or on the way f depends on its delayed argument.

3) Every successive integration raises the regularity of the solution: x is C¹ on [t₀, t₀ + ω], C² on [t₀ + ω, t₀ + 2ω], etc. Hence, x is Cⁿ on [t₀ + (n − 1)ω, t₀ + nω]. ◦

Solution – Exercise 2 – We proceed as previously explained. We assume for simplicity that t₀ = 0. To find the solution on the interval [0, ω], note that there t − ω ∈ [−ω, 0], so x(t − ω) = C, and we consider the nondelayed IVP

x₁′ = aC,  x₁(0) = C

The solution to this IVP is straightforward, x₁(t) = C + aCt = C(1 + at), defined on the interval [0, ω]. To integrate on the second interval, where x(t − ω) = x₁(t − ω) = C(1 + a(t − ω)), we consider the IVP

x₂′ = aC(1 + a(t − ω)),  x₂(ω) = x₁(ω) = C + aCω

Hence we find the solution to the differential equation to be, on the interval [ω, 2ω],

x₂(t) = C(1 + aω) + aC[(t − ω) + ½a(t − ω)²] = C(1 + at + ½a²(t − ω)²)

Iterating this process one more time with the IVP

x₃′ = a x₂(t − ω) = aC(1 + a(t − ω) + ½a²(t − 2ω)²),  x₃(2ω) = x₂(2ω) = C(1 + 2aω + ½a²ω²)

we find, on the interval [2ω, 3ω], the solution

x₃(t) = C(1 + at + ½a²(t − ω)² + ⅙a³(t − 2ω)³)

We develop the intuition that the solution at step n (i.e., on the interval [(n − 1)ω, nω]) must take the form

xₙ(t) = C Σ_{k=0}^{n} aᵏ (t − (k − 1)ω)ᵏ / k!   (B.22)

This can be proved by induction (we will not do it here). ◦

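Formula (B.22) can be checked against a direct discretization of the delay equation. The sketch below integrates (B.20) with a stored history (Euler with an assumed step size; the values a, ω, C and the evaluation point are also assumptions) and compares the result with the sum at a point of the third interval.

```python
import math

def x_steps(t, a, omega, C, dt=1e-4):
    """Euler for x'(t) = a x(t - omega), with x = C on [-omega, 0]."""
    n = int(round(t / dt))
    lag = int(round(omega / dt))
    hist = [C] * (lag + 1) + [0.0] * n     # hist[j] ~ x((j - lag) * dt)
    for i in range(lag, lag + n):
        hist[i + 1] = hist[i] + dt * a * hist[i - lag]
    return hist[lag + n]

def x_formula(t, a, omega, C):
    """Formula (B.22), valid on the interval [(n-1)*omega, n*omega]."""
    n = max(1, math.ceil(t / omega))
    return C * sum(a**k * (t - (k - 1) * omega)**k / math.factorial(k)
                   for k in range(n + 1))

print(x_steps(2.5, 0.5, 1.0, 1.0), "vs", x_formula(2.5, 0.5, 1.0, 1.0))
```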
Solution – Exercise 3 – For matrix A₁, we have A₁² = I, A₁³ = A₁, etc. Hence, A₁^{2n} = I and A₁^{2n+1} = A₁, which implies

e^{A₁t} = Σ_{n=0}^∞ (t^{2n}/(2n)!) A₁^{2n} + Σ_{n=0}^∞ (t^{2n+1}/(2n+1)!) A₁^{2n+1}
       = (Σ_{n=0}^∞ t^{2n}/(2n)!) I + (Σ_{n=0}^∞ t^{2n+1}/(2n+1)!) A₁
       = cosh t I + sinh t A₁
       = (cosh t, sinh t; sinh t, cosh t)

For matrix A₂, remark that it can be written as A₂ = I + N, where

N = (0, 1; 0, 0)

is a nilpotent matrix. Thus A₂² = (I + N)² = I + 2N + N² = I + 2N, A₂³ = (I + 2N)(I + N) = I + 2N + N + 2N² = I + 3N, and it is easily shown by induction that (I + N)ⁿ = I + nN. It follows that

e^{A₂t} = (Σ_{n=0}^∞ tⁿ/n!) I + (Σ_{n=0}^∞ n tⁿ/n!) N = eᵗ I + t eᵗ N = eᵗ (1, t; 0, 1)

Finally, for matrix A₃, we have that

A₃² = (−cos²θ, −½ sin 2θ, cos θ; −½ sin 2θ, −sin²θ, sin θ; −cos θ, −sin θ, 1)

and A₃³ = 0, i.e., A₃ is nilpotent of index 3. Therefore, e^{A₃t} = I + tA₃ + (t²/2)A₃². ◦

Solution – Exercise 4 – The Jordan canonical form of A is

J = (0, 0, 0; 0, 1, 1; 0, 0, 1)

Now, to compute e^{Jt}, remark that J is block diagonal, with blocks (0) and (1, 1; 0, 1); the exponential of the latter block was computed in Exercise 3 (matrix A₂). Hence,

e^{Jt} = (1, 0, 0; 0, eᵗ, teᵗ; 0, 0, eᵗ)

We have J = P⁻¹AP, where

P = (−1, −1, 2; 1, 0, −1; 1, 1, −1),  P⁻¹ = (1, 1, 1; 0, −1, 1; 1, 0, 1)

are the matrices of change of basis that transform A to its Jordan canonical form. Then A = PJP⁻¹, and e^{At} = P e^{Jt} P⁻¹, i.e.,

e^{At} = ((2 − t)eᵗ − 1, eᵗ − 1, (1 − t)eᵗ − 1; 1 − eᵗ, 1, 1 − eᵗ; 1 + (t − 1)eᵗ, 1 − eᵗ, 1 + teᵗ)

◦
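The 3 × 3 exponential of Exercise 4 can be verified entrywise; the sketch below compares it with a direct Taylor summation of e^{At} at an arbitrary time (`expm_taylor` is an assumed stand-in for a library routine).

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by direct Taylor summation (fine for small norms)."""
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

A = np.array([[1.0, 1.0, 0.0], [-1.0, 0.0, -1.0], [0.0, -1.0, 1.0]])
t = 0.9
e = np.exp(t)
closed = np.array([[(2 - t) * e - 1, e - 1, (1 - t) * e - 1],
                   [1 - e, 1.0, 1 - e],
                   [1 + (t - 1) * e, 1 - e, 1 + t * e]])
print(np.max(np.abs(expm_taylor(A * t) - closed)))
```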

Solution – Exercise 5 – This exercise was far from trivial.

1–a) Consider the map t ↦ e^{At} from R to Mₙ(R). We have (e^{At})′ = Ae^{At} and (e^{At})^{(k)} = Aᵏe^{At}, where u^{(k)} denotes the kth derivative of u. A Taylor expansion with remainder gives

e^{(t/k)A} = I + (t/k)A + (t²/(2k²))A² + (t²/k²)ε(t/k),  where ε(u) → 0 as u → 0

Thus

e^{(t/k)A} − I − (t/k)A = (t²/(2k²))A² + (t²/k²)ε(t/k)

Therefore, taking the norm |||·||| of this expression,

|||e^{(t/k)A} − (I + (t/k)A)||| ≤ (1/k²)[(t²/2)|||A²||| + t²|||ε(t/k)|||] = C_k(t)/k²

We have lim_{k→∞} C_k(t) = (t²/2)|||A²|||. Indeed, t²|||ε(t/k)||| ≤ S_k, where S_k = Σ_{j=3}^∞ (|t|ʲ/(k^{j−2}j!)) |||A|||ʲ. This series is uniformly convergent, which implies that we can change the order in the following limit,

lim_{k→∞} S_k = Σ_{j=3}^∞ lim_{k→∞} (|t|ʲ/(k^{j−2}j!)) |||A|||ʲ = 0

1–b) We have already seen (Exercise 2, Assignment 2) that |||e^A||| ≤ e^{|||A|||}. Therefore,

|||e^{(t/k)A}||| ≤ e^{(|t|/k)|||A|||}

But

e^{(|t|/k)|||A|||} = Σ_{j=0}^∞ (1/j!)(|t|/k)ʲ|||A|||ʲ = 1 + (|t|/k)|||A||| + (|t|²/(2k²))|||A|||² + ⋯

which, since (|t|²/(2k²))|||A|||² + ⋯ ≥ 0, implies that

e^{(|t|/k)|||A|||} ≥ 1 + (|t|/k)|||A||| = |||I||| + |||(t/k)A||| ≥ |||I + (t/k)A|||

(since $|||I||| = \sup_{\|v\|\le 1}\|Iv\| = \sup_{\|v\|\le 1}\|v\| = 1$).

1–c) We skip the scalar case, and consider the case $n \ge 2$. We have

$$e^{At} - \left(I + \frac{t}{k}A\right)^k = \left(e^{\frac{t}{k}A}\right)^k - \left(I + \frac{t}{k}A\right)^k = \left(e^{\frac{t}{k}A} - \left(I + \frac{t}{k}A\right)\right)\sum_{j=0}^{k-1}\left(e^{\frac{t}{k}A}\right)^{k-1-j}\left(I + \frac{t}{k}A\right)^j$$
since for two matrices $E, F \in M_n(\mathbb{R})$ (or $M_n(\mathbb{C})$) that commute, $E^k - F^k = (E - F)\sum_{j=0}^{k-1}E^{k-1-j}F^j$. Therefore,
$$\left|\left|\left|e^{At} - \left(I + \frac{t}{k}A\right)^k\right|\right|\right| \le \left|\left|\left|e^{\frac{t}{k}A} - \left(I + \frac{t}{k}A\right)\right|\right|\right|\,\sum_{j=0}^{k-1}\left|\left|\left|e^{\frac{t}{k}A}\right|\right|\right|^{k-1-j}\left|\left|\left|I + \frac{t}{k}A\right|\right|\right|^j \qquad (B.23)$$

Now, we have $\left|\left|\left|\left(e^{\frac{t}{k}A}\right)^{k-1-j}\right|\right|\right| \le e^{\frac{|t|(k-1-j)}{k}|||A|||}$. Also,

$$\left|\left|\left|\left(I + \frac{t}{k}A\right)^j\right|\right|\right| \le \left|\left|\left|I + \frac{t}{k}A\right|\right|\right|^j \le \left(e^{\frac{|t|}{k}|||A|||}\right)^j = e^{\frac{j|t|}{k}|||A|||}$$
where the last inequality results from question 1–b). Therefore,
$$\left|\left|\left|e^{At} - \left(I + \frac{t}{k}A\right)^k\right|\right|\right| \le \frac{1}{k^2}C_k(t)\sum_{j=0}^{k-1}e^{\frac{|t|(k-1-j)}{k}|||A|||}\,e^{\frac{j|t|}{k}|||A|||} = \frac{1}{k^2}C_k(t)\sum_{j=0}^{k-1}e^{\frac{(k-1)|t|}{k}|||A|||} = \frac{1}{k^2}C_k(t)\,k\,e^{\frac{(k-1)|t|}{k}|||A|||} = \frac{1}{k}C_k(t)\,e^{\frac{(k-1)|t|}{k}|||A|||}$$
We thus have
$$\left|\left|\left|e^{At} - \left(I + \frac{t}{k}A\right)^k\right|\right|\right| \le \frac{1}{k}C_k(t)\,e^{\frac{(k-1)|t|}{k}|||A|||} = \frac{1}{k}D_k(t)$$
As $k \to \infty$, $C_k(t) \to \frac{t^2}{2}|||A^2|||$ and $e^{\frac{(k-1)|t|}{k}|||A|||} \to e^{|t|\,|||A|||}$. Therefore, $\lim_{k\to\infty}D_k(t) = \frac{t^2}{2}|||A^2|||\,e^{|t|\,|||A|||}$ is finite, so $\frac{1}{k}D_k(t) \to 0$, which in turn implies that $\lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^k = e^{At}$.

2–a) We now suppose that $A$ is a symmetric matrix. Recall that any symmetric matrix is diagonalizable, with real eigenvalues. Furthermore, there exists a matrix $P$ such that $P^{-1} = P^T$ and such that $P^T A P = \mathrm{diag}(\lambda_i)$, with $\lambda_i \in \mathrm{Sp}(A)$. We assume that the eigenvalues $\lambda_i$, $i = 1, \ldots, n$, are such that $\lambda_i > -\alpha$, for $\alpha > 0$. We want to show by induction that the following holds:
$$\forall k \ge 0, \quad (\alpha I + A)^{-(k+1)} = \int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^k}{k!}\,dt \qquad (B.24)$$

Suppose that $k = 0$. Equation (B.24) reads
$$(\alpha I + A)^{-1} = \int_0^{\infty} e^{-(\alpha I + A)t}\,dt$$
The matrix $\alpha I + A$ is nonsingular. Indeed, suppose that $\det(\alpha I + A) = 0$. This is equivalent to $\det(-\alpha I - A) = 0$, which implies that $-\alpha$ is an eigenvalue of $A$, a contradiction with the hypothesis on the localization of the spectrum.

Now remark that if $B$ is a nonsingular matrix, the equality $\frac{d}{dt}e^{Bt} = Be^{Bt}$ implies that $B^{-1}\frac{d}{dt}e^{Bt} = e^{Bt}$. Since $\alpha I + A$ is nonsingular, we thus have
$$\frac{d}{dt}e^{-(\alpha I + A)t} = -(\alpha I + A)e^{-(\alpha I + A)t}$$
and therefore
$$e^{-(\alpha I + A)t} = -(\alpha I + A)^{-1}\frac{d}{dt}e^{-(\alpha I + A)t}$$
and so, integrating,
$$\int_0^{\infty} e^{-(\alpha I + A)t}\,dt = -(\alpha I + A)^{-1}\int_0^{\infty}\frac{d}{dt}e^{-(\alpha I + A)t}\,dt = -(\alpha I + A)^{-1}\left[e^{-(\alpha I + A)t}\right]_0^{\infty} = -(\alpha I + A)^{-1}\left[\lim_{t\to\infty}e^{-(\alpha I + A)t} - I\right] \qquad (B.25)$$
Now,
$$e^{-(\alpha I + A)t} = P\begin{pmatrix}e^{-(\alpha+\lambda_1)t} & & 0\\ & \ddots & \\ 0 & & e^{-(\alpha+\lambda_n)t}\end{pmatrix}P^{-1}$$

Since for all $i = 1, \ldots, n$, $\lambda_i > -\alpha$, it follows that $\lim_{t\to\infty}e^{-(\alpha+\lambda_i)t} = 0$, which in turn implies that $\lim_{t\to\infty}e^{-(\alpha I + A)t} = 0$. Using this in (B.25) gives (B.24) for $k = 0$.

Now assume (B.24) holds for $k = j - 1$, i.e.,
$$(\alpha I + A)^{-j} = \int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt$$
Then, integrating by parts,
$$\int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^j}{j!}\,dt = -(\alpha I + A)^{-1}\int_0^{\infty}\frac{d}{dt}\!\left(e^{-(\alpha I + A)t}\right)\frac{t^j}{j!}\,dt = -(\alpha I + A)^{-1}\left(\left[e^{-(\alpha I + A)t}\,\frac{t^j}{j!}\right]_0^{\infty} - \int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt\right)$$

As we did in the $k = 0$ case, we now use the bound on the eigenvalues to get rid of the boundary term:
$$\frac{t^j}{j!}e^{-(\alpha I + A)t} = P\begin{pmatrix}\frac{t^j}{j!}e^{-(\alpha+\lambda_1)t} & & 0\\ & \ddots & \\ 0 & & \frac{t^j}{j!}e^{-(\alpha+\lambda_n)t}\end{pmatrix}P^{-1} \longrightarrow 0 \quad \text{as } t \to \infty$$
Therefore,
$$\int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^j}{j!}\,dt = (\alpha I + A)^{-1}\int_0^{\infty} e^{-(\alpha I + A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt = (\alpha I + A)^{-1}(\alpha I + A)^{-j} = (\alpha I + A)^{-(j+1)}$$
from which we deduce that (B.24) holds for all $k \ge 0$.
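Identity (B.24) lends itself to a quick numerical check (an illustration only; the symmetric matrix and the value of $\alpha$ below are arbitrary choices).

```python
# Check (B.24): (alpha I + A)^{-(k+1)} = int_0^inf e^{-(alpha I + A)t} t^k / k! dt
import numpy as np
from math import factorial
from scipy.linalg import expm
from scipy.integrate import quad

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
A = (S + S.T) / 2                                   # symmetric matrix
alpha = 1.0 + np.abs(np.linalg.eigvalsh(A)).max()   # all eigenvalues of A exceed -alpha
I = np.eye(3)
k = 2

lhs = np.linalg.matrix_power(np.linalg.inv(alpha * I + A), k + 1)

def entry(i, j):
    # entrywise quadrature of the matrix-valued integrand
    f = lambda t: expm(-(alpha * I + A) * t)[i, j] * t**k / factorial(k)
    return quad(f, 0, np.inf)[0]

rhs = np.array([[entry(i, j) for j in range(3)] for i in range(3)])
err = np.max(np.abs(lhs - rhs))
print(err)
```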

2–b) Let us begin with the implication $\left(\forall u > 0,\ |||(I + uA)^{-k}||| \le M\right) \Rightarrow \left(\forall t > 0,\ |||e^{-At}||| \le M\right)$. We know from 1–c) that $e^{At} = \lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^k$. Thus
$$e^{-At} = \left(\lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^k\right)^{-1} = \lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^{-k} \qquad (B.26)$$
Let $u = t/k$ with $k \in \mathbb{N}^*$ and $t > 0$. Then
$$\forall t > 0,\ \forall k \in \mathbb{N}^*,\quad \left|\left|\left|\left(I + \frac{t}{k}A\right)^{-k}\right|\right|\right| \le M$$
$$\Rightarrow\quad \forall t > 0,\quad \lim_{k\to\infty}\left|\left|\left|\left(I + \frac{t}{k}A\right)^{-k}\right|\right|\right| \le M$$
$$\Rightarrow\quad \forall t > 0,\quad \left|\left|\left|\lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^{-k}\right|\right|\right| \le M$$
which, using (B.26), implies that
$$\forall t > 0,\quad |||e^{-At}||| \le M$$

We now treat the reverse implication, $\left(\forall t > 0,\ |||e^{-At}||| \le M\right) \Rightarrow \left(\forall u > 0,\ |||(I + uA)^{-k}||| \le M\right)$. We have
$$(I + uA)^{-k} = \left[u\left(\frac{1}{u}I + A\right)\right]^{-k} = u^{-k}\left[\frac{1}{u}I + A\right]^{-k}$$
Suppose that $-\alpha > -1/u$, i.e., $0 < u < 1/\alpha$. Then, from 2–a), it follows that
$$(I + uA)^{-k} = u^{-k}\int_0^{\infty} e^{-(\frac{1}{u}I + A)t}\,\frac{t^{k-1}}{(k-1)!}\,dt$$
which, when taking the norm, gives
$$|||(I + uA)^{-k}||| \le u^{-k}\int_0^{\infty} |||e^{-At}|||\,e^{-t/u}\,\frac{t^{k-1}}{(k-1)!}\,dt \le \frac{Mu^{-k}}{(k-1)!}\int_0^{\infty} e^{-t/u}\,t^{k-1}\,dt$$
Let $\chi = t/u$; then $dt = u\,d\chi$, and
$$|||(I + uA)^{-k}||| \le \frac{Mu^{-k}}{(k-1)!}\int_0^{\infty} e^{-\chi}\,u^{k-1}\chi^{k-1}\,u\,d\chi = \frac{M}{(k-1)!}\int_0^{\infty} e^{-\chi}\chi^{k-1}\,d\chi$$

The latter integral is $\Gamma(k)$, the Gamma function. It is well known¹ that for $k \in \mathbb{N}$, $\Gamma(k) = (k-1)!$. Thus,
$$|||(I + uA)^{-k}||| \le M$$
2–c) Let us begin with the forward implication (⇒). To apply 2–a) with $k = 0$ (with $\lambda$ in place of $\alpha$), it suffices that the eigenvalues of $A$ be greater than $-\lambda$. Take $\lambda_0 = \alpha$. Then this holds for all $\lambda > \alpha = \lambda_0$, and so
$$(\lambda I + A)^{-1} = \int_0^{\infty} e^{-(\lambda I + A)t}\,dt = \int_0^{\infty} e^{-\lambda t}e^{-At}\,dt \ge 0$$
since $e^{-\lambda t} > 0$ for real $\lambda$ and $e^{-At} \ge 0$ by hypothesis.

Now for the reverse implication (⇐). The existence of $\lambda_0 \in \mathbb{R}$ such that $(\lambda I + A)^{-1} \ge 0$ for all $\lambda > \lambda_0$ implies that

$$\forall \lambda > \lambda_0,\ \forall k \in \mathbb{N}^*,\quad (\lambda I + A)^{-k} \ge 0$$
for $\lambda$ sufficiently large. Taking $\lambda = k/t$, the previous expression can be written
$$\forall t > 0,\ \forall k \ge k_0,\quad \left(\frac{k}{t}I + A\right)^{-k} \ge 0$$
where $k_0$ is sufficiently large. This implies that
$$\forall t > 0,\ \forall k \ge k_0,\quad \left(\frac{k}{t}\right)^{-k}\left(I + \frac{t}{k}A\right)^{-k} \ge 0$$
As $(k/t)^{-k} > 0$,
$$\forall t > 0,\ \forall k \ge k_0,\quad \left(I + \frac{t}{k}A\right)^{-k} \ge 0$$
so
$$\forall t > 0,\quad \lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^{-k} \ge 0$$
which finally implies that
$$\forall t > 0,\quad e^{-At} \ge 0$$
3) The results of the previous part hold. However, in the case of a nonsymmetric matrix, we need to ask for the real parts of the eigenvalues to be greater than $-\alpha$. ◦

¹See, e.g., M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions. Dover, 1965.
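The two limits used above, $(I + \frac{t}{k}A)^k \to e^{At}$ from 1–c) and $(I + \frac{t}{k}A)^{-k} \to e^{-At}$ from (B.26), are easy to observe numerically; the matrix below is an arbitrary example, not taken from the exercise.

```python
# Euler-type approximations of the matrix exponential (illustration of 1-c) and (B.26)).
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0.,  1.],
              [-2., -3.]])
I = np.eye(2)
t = 1.5

errs_fwd, errs_bwd = [], []
for k in (10, 100, 1000):
    M = I + (t / k) * A
    errs_fwd.append(np.linalg.norm(np.linalg.matrix_power(M, k) - expm(A * t)))
    errs_bwd.append(np.linalg.norm(np.linalg.matrix_power(np.linalg.inv(M), k) - expm(-A * t)))

print(errs_fwd)  # decreases roughly like 1/k
print(errs_bwd)
```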


MATH 4G03/6G03 – Julien Arino
McMaster University – Final Examination – December 2003
Duration of examination: 72 hours

This examination paper includes 4 pages and 3 questions. You are responsible for ensuring that your copy of the paper is complete.

Detailed Instructions

You have 72 hours, from the time you pick up and sign for this examination sheet, to complete this examination. You are to work on this examination by yourself. Any hint of collaborative work will be considered as evidence of academic dishonesty. You are not to have any outside contact concerning this subject, except with me. You can use any document that you find useful. If using theorems from outside sources, give the bibliographic reference, and show clearly how your problem fits the conditions of application of the theorem. When citing theorems from the lecture notes, refer to them by the number they have in the last version of the notes, as posted on the website on the first day of the examination period. Pay attention to the form of your answers: as this is a take-home examination, you are expected to hand back a very legible document. Show your calculations, but try to be concise. This examination consists of 1 independent question and 2 problems. In questions or problems that have multiple parts, you are always allowed to consider as proved the results of a previous part, even if you have not actually done that part.

Foreword to the correction. This examination was long, but established the results in a very guided way. Exercise 1 was almost trivial. Both of the problems dealt with Sturm theory. This comes as an illustration of the richness of behaviors that can be observed in differential equations: simple equations such as (B.29) can have very complex behaviors. Concerning the difficulty of the problems, it was not excessive. Problem 2 is a shortened and simplified version of a review problem for the CAPES, a French competitive examination for hiring high school teachers. The original problem comprised 23 questions, to be written by candidates in 5 hours. Problem 3 introduced the Wronskian, which we did not have time to cover during class. It also established further results of Sturm type.

Exercise 5.1 – Consider the mapping $A : t \mapsto A(t)$, continuous from $\mathbb{R}$ to $M_n(\mathbb{R})$, periodic of period $\omega$ and such that $A(t)A(s) = A(s)A(t)$ for all $t, s \in \mathbb{R}$. Consider the equation
$$x'(t) = A(t)x \qquad (B.27)$$

Let $R(t, t_0)$ be the resolvent of (B.27).

i) Show that R(t + ω, t0 + ω) = R(t, t0) for all t, t0 ∈ R. ii) Let u be an eigenvector of R(ω, 0), associated to the eigenvalue λ. Show that the solution of (B.27) taking the value u for t0 = 0 is such that

x(t + ω) = λx(t), ∀t ∈ R. (B.28)

iii) Conversely, show that if x is a nontrivial solution of (B.27) such that (B.28) holds, then λ is an eigenvalue of R(ω, 0).

Problem 5.2 – The aim of this problem is to study some properties of the solutions of the differential equation
$$x'' + q(t)x = 0, \qquad (B.29)$$
where $q$ is a continuous function from $\mathbb{R}$ to $\mathbb{R}$.

i) Show that for t0, x0, y0 ∈ R, there exists a unique solution of (B.29) such that

$$x(t_0) = x_0, \qquad x'(t_0) = y_0$$

Preliminary results : convex functions. A function f : I ⊂ R → R is convex if, for all x, y ∈ I and all λ ∈ [0, 1],

f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y).

Before proceeding with the study of the solutions of (B.29), we establish a few useful results on convex functions.

ii) Let f be a function defined on R, convex and nonnegative. Suppose that f has two zeros t1, t2 and that t1 < t2. Show that f is zero on the interval [t1, t2].

Let c ∈ R and f be a convex function that is bounded from above on the interval [c, +∞). It can then be shown that f is decreasing on [c, +∞). Using this fact, show the following.

iii) Every convex function that is bounded from above on $\mathbb{R}$ is constant.

Part I.

iv) Let a, b ∈ R, a < b. We assume that (B.29) has a solution x, zero at a and at b and positive on (a, b). Show that

$$\int_a^b |q(t)|\,dt > \frac{4}{b-a}.$$

v) We suppose that $\int_0^{\infty} |q(t)|\,dt$ converges. Let $x$ be a bounded solution of (B.29). Determine the behaviour of $x'$ as $t \to \infty$.

vi) We suppose that $q \in C^1$ and is positive and increasing on $\mathbb{R}_+$. Show that all solutions of (B.29) are bounded on $\mathbb{R}_+$.

Part II. We suppose in this part that $q$ is nonpositive and is not the zero function.

vii) Let $x$ be a solution of (B.29). Show that $x^2$ is a convex function.
viii) Show that if $x$ is a solution of (B.29) that has two distinct zeros, then $x \equiv 0$.
ix) Show that if $x$ is a bounded solution of (B.29), then $x \equiv 0$.

Part III.
x) Let $x, y$ be two solutions of (B.29). Show that the function $xy' - x'y$ is constant.

xi) Let x1 and x2 be the solutions of (B.29) that satisfy

$$x_1(0) = 1, \quad x_1'(0) = 0,$$
$$x_2(0) = 0, \quad x_2'(0) = 1.$$

Show that $(x_1, x_2)$ is a basis of the vector space $S$ over $\mathbb{R}$ of the solutions of (B.29). What is the value of $x_1x_2' - x_1'x_2$? Can the functions $x_1$ and $x_2$ have a common zero? Justify your answer.
xii) Discuss the results of question xi) in the context of linear systems, i.e., transform (B.29) into a system of first-order differential equations and express question xi) and its answer in this context.
xiii) Show that if $q$ is an even function, then the function $x_1$ is even and the function $x_2$ is odd.

Problem 5.3 – The aim of this problem is to show some elementary properties of the Wronskian of a system of solutions, and to use them to study a second-order differential equation. Consider the $n$th order ordinary differential equation

$$x^{(n)} = a_0(t)x + a_1(t)x' + \cdots + a_{n-1}(t)x^{(n-1)} \qquad (B.30)$$

where $x^{(k)}$ denotes the $k$th derivative of $x$, $\frac{d^k x}{dt^k}$.

i) Find the matrix $A(t)$ such that this system can be written as the first-order linear system $y' = A(t)y$.

Part I: Wronskian. Let $f_1, \ldots, f_n$ be $n$ functions from $\mathbb{R}$ into $\mathbb{R}$ that are $n-1$ times differentiable. We define $W(f_1, \ldots, f_n)$, the Wronskian of $f_1, \ldots, f_n$, by
$$W(f_1, \ldots, f_n)(t) = \det\begin{pmatrix} f_1(t) & \cdots & f_n(t)\\ f_1'(t) & \cdots & f_n'(t)\\ \vdots & & \vdots\\ f_1^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t)\end{pmatrix}$$

If $f_1, \ldots, f_n$ are linearly dependent, then $W(f_1, \ldots, f_n) = 0$. The converse is false. Remember that the set of solutions of (B.30) forms a vector space $S$ of dimension $n$.
ii) Using the equivalent linear system $y' = A(t)y$, show that every system of $n$ solutions of (B.30) whose Wronskian is nonzero at a time $\tau$ constitutes a basis of $S$.

iii) Using the linear system $y' = A(t)y$, show that for every set of $n$ solutions,
$$W(t) = W(s)\exp\left(\int_s^t a_{n-1}(u)\,du\right).$$

Part II : a theorem of Sturm Let us now consider the second-order differential equation

$$a_2(t)x'' + a_1(t)x' + a_0(t)x = 0 \qquad (B.31)$$

The objective here is to show the following theorem of Sturm.

Theorem B.5.6 (Sturm). Let f1, f2 be two independent solutions of (B.31). Between two consecutive zeros of f1, there is exactly one zero of f2.

We suppose f1, f2 are two independent solutions of (B.31).

iv) Let u and v be two consecutive zeros of f1. Using the Wronskian W (f1, f2), show that u and v cannot be zeros of f2.

v) Deduce the theorem. [Hint: consider the function $f_1/f_2$.]

Part III: another theorem of Sturm. Let us assume that we confine ourselves to segments of $\mathbb{R}$ where $a_0(t) \ne 0$.
vi) Let
$$x(t) = u(t)\exp\left(-\frac{1}{2}\int_0^t \frac{a_1(s)}{a_2(s)}\,ds\right).$$
Show that (B.31) becomes $u'' + q(t)u = 0$. We want to show the following theorem of Sturm.

Theorem B.5.7 (Sturm). Let

$$x'' + q(t)x = 0, \quad q(t) \le 0 \qquad (B.32)$$
in an interval $(t_1, t_2)$. Every solution of (B.32) that is not identically zero has at most one zero in the interval $[t_1, t_2]$.

vii) Let $\varphi$ be a solution of (B.32) on $(t_1, t_2)$, and $v$ be a zero of $\varphi$ in this interval. Discuss the properties of $\varphi$. [Hint: Use of Problem 2, Part II is possible, but not strictly necessary.]

viii) Let $u < v$ be another zero of $\varphi$ in the interval $(t_1, t_2)$. Discuss the properties of $\varphi$. Conclude.

Solution – Exercise 1 – 1) We have

$$R(t+\omega, t_0+\omega) = \exp\left(\int_{t_0+\omega}^{t+\omega} A(s)\,ds\right) = \exp\left(\int_{t_0}^{t} A(s+\omega)\,ds\right) = \exp\left(\int_{t_0}^{t} A(s)\,ds\right) = R(t, t_0)$$

2) Let u be an eigenvector associated to the eigenvalue λ, i.e.,

R(ω, 0)u = λu

Let $x$ be the solution of (B.27) such that $x(t_0) = x(0) = u$ (here $t_0 = 0$). As $x$ is a solution of (B.27), we have that, for all $t$,
$$x(t) = R(t, t_0)u = R(t, 0)u.$$
Therefore,

$$x(t+\omega) = R(t+\omega, 0)u = R(t+\omega, \omega)R(\omega, 0)u = R(t+\omega, \omega)\lambda u = R(t, 0)\lambda u = \lambda R(t, 0)u = \lambda x(t)$$
(using i), $R(t+\omega, \omega) = R(t, 0)$), and hence (B.28).

3) Let $x$ be a nonzero solution of (B.27) such that, for all $t \in \mathbb{R}$,

x(t + ω) = λx(t)

This is true in particular for $t = 0$, hence $x(\omega) = \lambda x(0)$. As $x \not\equiv 0$, uniqueness of solutions implies that $v := x(0) \in \mathbb{R}^n - \{0\}$. Therefore,
$$x(\omega) = R(\omega, 0)v = \lambda v$$
and as a consequence, $\lambda$ is an eigenvalue of $R(\omega, 0)$, with eigenvector $v$. ◦
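A numerical illustration of (B.28), with an assumed commuting family $A(t) = (1+\cos t)B$, $B$ symmetric, and $\omega = 2\pi$ (so that $R(\omega, 0) = e^{\omega B}$):

```python
# Check (B.28): if R(omega,0)u = lam*u, the solution through u satisfies x(t+omega) = lam*x(t).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

B = np.array([[0.0, 0.5],
              [0.5, 0.0]])        # symmetric, so real eigenvalues/eigenvectors
omega = 2 * np.pi

def f(t, x):
    # A(t) = (1 + cos t) B commutes with itself at different times
    return (1 + np.cos(t)) * (B @ x)

# int_0^omega (1 + cos s) ds = omega, so the monodromy matrix is R(omega,0) = exp(omega*B)
R = expm(omega * B)
lam, V = np.linalg.eigh(R)        # R is symmetric here
u = V[:, 0]                       # eigenvector of R(omega,0), eigenvalue lam[0]

t1 = 1.0
x_t = solve_ivp(f, (0, t1), u, rtol=1e-10, atol=1e-12).y[:, -1]
x_tw = solve_ivp(f, (0, t1 + omega), u, rtol=1e-10, atol=1e-12).y[:, -1]

err = np.linalg.norm(x_tw - lam[0] * x_t)
print(err)
```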

Solution – Problem 2 – This problem concerns what are called Sturm-theory-type results, that is, results dealing with the qualitative behavior of solutions of second-order differential equations.

1) This is a standard application of the Cauchy–Lipschitz existence–uniqueness theorem. To see this, transform the second-order equation into a system of first-order equations by setting $y = x'$. Then, differentiating $y$, we obtain $y' + qx = 0$. Therefore, (B.29) is equivalent to the system of first-order equations
$$x' = y$$
$$y' = -qx$$
This is a linear system, hence it satisfies a Lipschitz condition, and we can apply the Cauchy–Lipschitz theorem.

2) Since $f$ is nonnegative and convex, we have that, for all $\lambda \in [0, 1]$,

$$0 \le f((1-\lambda)t_1 + \lambda t_2) \le (1-\lambda)f(t_1) + \lambda f(t_2)$$

But we have supposed that f(t1) = f(t2) = 0. Hence we have that for all λ ∈ [0, 1],

$$f((1-\lambda)t_1 + \lambda t_2) = 0$$
and so $f$ is zero on $[t_1, t_2]$.

3) 4) 5) 6) 7) 8) 9) 10) 11) 12) 13) ◦

Solution – Problem 3 – This problem was also about Sturm results. But it also introduced the notion of Wronskian, which is a very general tool, intimately linked to the notion of resolvent matrix.

1) We let $y_1 = x$, $y_2 = x'$, \ldots, $y_n = x^{(n-1)}$. As a consequence, $y_1' = y_2$, $y_2' = y_3$, \ldots, $y_{n-1}' = y_n$, and $y_n' = a_0(t)y_1 + a_1(t)y_2 + \cdots + a_{n-1}(t)y_n$. Written in matrix form, this is equivalent to $y' = A(t)y$, with $y = (y_1, \ldots, y_n)^T$ and
$$A(t) = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ 0 & \cdots & & 0 & 1\\ a_0(t) & a_1(t) & \cdots & & a_{n-1}(t)\end{pmatrix} \qquad (B.33)$$

2) We know that the system is equivalent to y0 = A(t)y, with A(t) given by (B.33). To every basis (φ1, . . . , φn) of the vector space of solutions of

$$y' = A(t)y \qquad (B.34)$$
there corresponds a basis $(\varphi_1, \ldots, \varphi_n)$ of the space of solutions of (B.30), where $\varphi_i$ is the first coordinate of the vector $\phi_i$ for every $i$. The converse is also true. We know that a system $(\phi_1, \ldots, \phi_n)$ of solutions of (B.34) is a basis if $\det(\phi_1, \ldots, \phi_n) \ne 0$, and it suffices for this that $\det(\phi_1, \ldots, \phi_n)$ be nonzero at one point. Since we have $\det(\phi_1, \ldots, \phi_n) = W(\varphi_1, \ldots, \varphi_n)$, the result follows.

3) This is a direct application of Liouville's theorem, which states that if $R(t, s)$ is the resolvent of $A(t)$, then
$$\det R(t, s) = \exp\left(\int_s^t \mathrm{tr}\,A(u)\,du\right)$$
And if a system of coordinates is fixed, for every fundamental matrix $\hat\Phi$,
$$\det\hat\Phi(t) = \det\hat\Phi(s)\exp\left(\int_s^t \mathrm{tr}\,A(u)\,du\right)$$
Indeed, from Liouville's theorem,
$$\det\left(\Phi(t)\Phi^{-1}(s)\right) = \exp\left(\int_s^t \mathrm{tr}\,A(u)\,du\right)$$
which implies that
$$\det\Phi(t) = \det\Phi(s)\exp\left(\int_s^t \mathrm{tr}\,A(u)\,du\right)$$
Now, note that $\det\Phi(t) = W(t)$ and $\det\Phi(s) = W(s)$, and that $\mathrm{tr}\,A(u) = a_{n-1}(u)$ for the matrix (B.33). This implies the result. Note that for a system of solutions of (B.30), $W(\varphi) \ne 0$ iff the $\varphi$ are linearly independent (i.e., we have the converse implication).

4) 5) 6) 7) ◦

University of Manitoba – Math 8430 Fall 2006 Homework Sheet 1

Periodic solutions of differential equations

In this problem, we will study the solutions of some differential equations, and in particular their periodic solutions. Let $T > 0$ be a real number, $P$ be the vector space of real-valued, continuous, and $T$-periodic functions defined on $\mathbb{R}$, and let $a \in P$. Define
$$A = \int_0^T a(t)\,dt, \qquad g(t) = \exp\left(\int_0^t a(u)\,du\right),$$
and endow $P$ with the norm $\|x\| = \sup_{t\in\mathbb{R}}|x(t)|$.

First part

1. For what value(s) of A does the differential equation

$$x'(t) = a(t)x(t) \qquad (E1)$$
admit nontrivial $T$-periodic solutions?

We now let b ∈ P , and consider the differential equation

$$x'(t) = a(t)x(t) + b(t). \qquad (E2)$$

2.a. Describe the set of maximal solutions to (E2) and the intervals of definition of these solutions.
2.b. Describe the set of maximal solutions to (E2) that are $T$-periodic, first assuming $A \ne 0$, then $A = 0$.

Second part

In this part, we let $H$ be a real-valued $C^1$ function defined on $\mathbb{R}^2$, and consider the differential equation
$$x'(t) = a(t)x(t) + H(x(t), t). \qquad (E3)$$
3. Check that a function $x$ is a solution to (E3) if and only if it satisfies the condition
$$x(t) = g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right).$$
4. Suppose that $H$ is $T$-periodic with respect to its second argument, and that $A \ne 0$. Show that, for all functions $x \in P$, the formula
$$(U_H x)(t) = \frac{e^A}{1-e^A}\,g(t)\int_t^{t+T} g(s)^{-1}H(x(s), s)\,ds$$
defines a function $U_H x \in P$, and that $x$ is a solution to (E3) if and only if $U_H x = x$.

In the rest of the problem, we let $F$ be a real-valued $C^1$ function defined on $\mathbb{R}^2$, $T$-periodic with respect to its second argument; for all $\varepsilon > 0$, define $H_\varepsilon = \varepsilon F$ and $U_\varepsilon = U_{H_\varepsilon}$, so that the differential equation (E3) is written
$$x'(t) = a(t)x(t) + \varepsilon F(x(t), t). \qquad (E4)$$

Assume that $A \ne 0$. For all $r > 0$, we denote by $B_r$ the closed ball with centre $0$ and radius $r$ in the normed space $P$. We want to show the following assertion: for all $r > 0$, there exists $\varepsilon_1 > 0$ such that, for all $\varepsilon \le \varepsilon_1$, the differential equation (E4) has a unique solution $x \in B_r$, which we will denote $x_\varepsilon$.

We denote by $\alpha_r$ (resp. $\beta_r$) the upper bound of the set of values $|F(v, s)|$ (resp. $|\frac{\partial F}{\partial v}(v, s)|$), where $v \in [-r, r]$ and $s \in [0, T]$.
5.a. Find a real $\varepsilon_0 > 0$ such that, for all $\varepsilon \le \varepsilon_0$, $U_\varepsilon(B_r) \subset B_r$.
5.b. Find a real $\varepsilon_1 \le \varepsilon_0$ such that, for all $\varepsilon \le \varepsilon_1$, the restriction of $U_\varepsilon$ to $B_r$ is a contraction of $B_r$.
5.c. Conclude.
6. Study the behavior of the function $x_\varepsilon$ when $\varepsilon \to 0$, the number $r$ being fixed.
7. We now suppose that the function $a$ is a constant $k \ne 0$ and that the function $F$ takes the form $F(v, s) = f(v)$. Determine the solution $x_\varepsilon$ of (E4).
8. We now consider $T = 1$, $k = -1$ and $f(v) = v^2$, so that (E4) takes the form
$$x'(t) = -x(t) + \varepsilon x(t)^2. \qquad (E5)$$

8.a. Give possible values of $\varepsilon_0$ and $\varepsilon_1$.
8.b. Determine the solution $x_\varepsilon$ of (E5).
8.c. Let $\alpha \in \mathbb{R}$. Show that there exists a unique maximal solution $\varphi_\alpha$ of (E5) such that $\varphi_\alpha(0) = \alpha$. Determine this solution precisely, and graph several of these solutions.

Third part

Here, we consider the differential equation

$$x'(t) = kx(t) + \varepsilon f(x(t)), \qquad (E6)$$
where $k < 0$, $f$ is $C^1$ and zero at zero. We let
$$\lambda = \sup_{u\in[-1,1]}|f'(u)|,$$
and assume that $\varepsilon\lambda < -k$. We propose to show the following result: if $x$ is a maximal solution of (E6) such that $|x(0)| < 1$, then it is defined on $[0, \infty)$ and, for all $t \ge 0$,
$$|x(t)| \le |x(0)|e^{(k+\varepsilon\lambda)t}.$$

9. In this question, we suppose that the set of t such that |x(t)| > 1 is non-empty, and we denote its lower bound by θ. Show that, for all t ∈ [0, θ],

$$|x(t)| \le |x(0)|e^{(k+\varepsilon\lambda)t}.$$

10. Conclude. N.B. This result expresses the stability and the asymptotic stability of the trivial solution of (E6).


University of Manitoba – Math 8430 Fall 2006 Homework Sheet 1 – Solutions

1. The equation (E1) is a separable equation, so we write

$$x'(t) = a(t)x(t) \iff \frac{x'(t)}{x(t)} = a(t) \iff \ln|x(t)| = \int_0^t a(s)\,ds + C \iff |x(t)| = \exp\left(\int_0^t a(s)\,ds + C\right) \iff x(t) = K\exp\left(\int_0^t a(s)\,ds\right),$$
where it was assumed that integration starts at $t_0 = 0$, and where the sign of $x(t)$ is absorbed into $K \in \mathbb{R}$. Since $x(0) = K$, the general solution to (E1) is thus
$$x(t) = x(0)\exp\left(\int_0^t a(s)\,ds\right). \qquad (B.35)$$
A nontrivial solution (B.35) is $T$-periodic if it satisfies $x(t+T) = x(t)$ for all $t \ge 0$. In particular (for simplicity), there must hold that $x(T) = x(0)$. This leads to

$$x(T) = x(0) \iff x(0)\exp\left(\int_0^T a(s)\,ds\right) = x(0) \iff \exp\left(\int_0^T a(s)\,ds\right) = 1 \iff \int_0^T a(s)\,ds = 0 \iff A = 0.$$
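An assumed example: with $a(t) = \cos t$ and $T = 2\pi$ we have $A = 0$, and indeed every solution $x(t) = x(0)e^{\sin t}$ is $T$-periodic.

```python
# A = int_0^{2pi} cos s ds = 0, so (E1) with a(t) = cos t has T-periodic solutions (B.35).
import numpy as np

T = 2 * np.pi
x0 = 3.0

def x(t):
    # explicit solution: x(t) = x(0) * exp(int_0^t cos s ds) = x(0) * exp(sin t)
    return x0 * np.exp(np.sin(t))

ts = np.linspace(0.0, 3.0, 7)
diff = np.max(np.abs(x(ts + T) - x(ts)))
print(diff)  # zero up to rounding
```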

2.a. We know that the general solution to the homogeneous equation (E1) associated to (E2) is given by (B.35). To find the general solution to (E2), we need a particular solution to (E2), or to use integrating factors or a variation of constants approach. We do the latter, since we already have the solution (B.35) to (E1). Returning to the solution with undetermined value for K, we consider the ansatz

$$\varphi(t) = K(t)\exp\left(\int_0^t a(s)\,ds\right) = K(t)g(t),$$
where the second equality uses the definition of $g(t)$. We have

$$\varphi'(t) = K'(t)g(t) + K(t)g'(t) = K'(t)g(t) + K(t)a(t)g(t).$$

The function $\varphi$ is a solution to (E2) if and only if it satisfies (E2); therefore, $\varphi$ is a solution if and only if

$$\varphi'(t) = a(t)\varphi(t) + b(t) \iff K'(t)g(t) + K(t)a(t)g(t) = a(t)K(t)g(t) + b(t) \iff K'(t)g(t) = b(t) \iff K'(t) = \frac{b(t)}{g(t)} \quad (g(t) \ne 0) \iff K(t) = \int_0^t \frac{b(s)}{g(s)}\,ds + C.$$

Note that the remark that $g(t) \ne 0$ is made "for form": as it is defined, $g(t) > 0$ for all $t \ge 0$. We conclude that the general solution to (E2) is given by
$$x(t) = \left(\int_0^t \frac{b(s)}{g(s)}\,ds + C\right)\exp\left(\int_0^t a(s)\,ds\right).$$

Since it will be useful to have information in terms of $x(0)$ (as in question 1.), we note that $C = x(0)$. Thus, the solution to (E2) through $x(0)$ is given by

$$x(t) = \left(\int_0^t \frac{b(s)}{g(s)}\,ds + x(0)\right)\exp\left(\int_0^t a(s)\,ds\right). \qquad (B.36)$$
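Formula (B.36) can be cross-checked against direct numerical integration; the functions $a$, $b$ and the data below are arbitrary illustration choices.

```python
# Compare (B.36) with a direct numerical solution of x' = a(t) x + b(t).
import numpy as np
from scipy.integrate import solve_ivp, quad

a, b = np.cos, np.sin
g = lambda t: np.exp(np.sin(t))        # g(t) = exp(int_0^t a(u) du)
x0, t1 = 2.0, 1.7

# Formula (B.36)
x_formula = (quad(lambda s: b(s) / g(s), 0, t1)[0] + x0) * g(t1)

# Direct integration
x_num = solve_ivp(lambda t, x: a(t) * x + b(t), (0, t1), [x0],
                  rtol=1e-10, atol=1e-12).y[0, -1]

print(x_formula, x_num)  # the two values agree
```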

With integrating factors, we would have done as follows: write the equation (E2) as

$$x'(t) - a(t)x(t) = b(t).$$

The integrating factor is then

$$\mu(t) = \exp\left(-\int a(t)\,dt\right),$$
and the general solution to (E2) is given by

$$x(t) = \frac{1}{\mu(t)}\left(\int_0^t \mu(s)b(s)\,ds + C\right)$$

Maximal solutions are solutions that are the restriction of no other solution. Since (E2) is a linear scalar equation with continuous coefficients, every maximal solution is defined on all of $\mathbb{R}$.

2.b. Solutions to (E2) are T -periodic if x(T ) = x(0); therefore, a T -periodic solution satisfies

$$x(T) = x(0) \iff \left(\int_0^T \frac{b(s)}{g(s)}\,ds + x(0)\right)\exp\left(\int_0^T a(s)\,ds\right) = x(0) \iff \int_0^T \frac{b(s)}{g(s)}\,ds + x(0) = x(0)e^{-A} \iff \int_0^T \frac{b(s)}{g(s)}\,ds = \left(e^{-A} - 1\right)x(0)$$

Second part

3. Before proceeding, note that

$$g'(t) = \frac{d}{dt}\exp\left(\int_0^t a(s)\,ds\right) = a(t)\exp\left(\int_0^t a(s)\,ds\right) = a(t)g(t).$$

We differentiate $x(t) = g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right)$. This gives
$$x'(t) = g'(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right) + g(t)\frac{H(x(t), t)}{g(t)} = a(t)g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right) + H(x(t), t) = a(t)x(t) + H(x(t), t),$$
and thus $x(t) = g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right)$ is a solution to (E3).

4. Let $x \in P$. Then $U_H x \in P$ if and only if $U_H x$ is $T$-periodic. We have
$$(U_H x)(t+T) = \frac{e^A}{1-e^A}\,g(t+T)\int_{t+T}^{t+2T} g(s)^{-1}H(x(s), s)\,ds.$$

Remark that
$$g(t+T) = \exp\left(\int_0^{t+T} a(s)\,ds\right) = \exp\left(\int_0^{t} a(s)\,ds + \int_t^{t+T} a(s)\,ds\right) = g(t)\exp\left(\int_t^{t+T} a(s)\,ds\right) = e^A g(t),$$
since $a(t)$ is $T$-periodic. Therefore, substituting $s \mapsto s+T$ in the integral,
$$(U_H x)(t+T) = \frac{e^A}{1-e^A}\,e^A g(t)\int_{t+T}^{t+2T} g(s)^{-1}H(x(s), s)\,ds = \frac{e^A}{1-e^A}\,e^A g(t)\int_t^{t+T} g(s+T)^{-1}H(x(s+T), s+T)\,ds.$$
Now $g(s+T) = e^A g(s)$ by the computation above, while $H(x(s+T), s+T) = H(x(s), s)$ since $H$ is $T$-periodic in its second argument and $x \in P$ (so $x(s+T) = x(s)$). So, finally,
$$(U_H x)(t+T) = \frac{e^A}{1-e^A}\,e^A g(t)\,e^{-A}\int_t^{t+T} g(s)^{-1}H(x(s), s)\,ds = (U_H x)(t).$$
Therefore, $U_H x \in P$ for $x \in P$.

Suppose that $x(t) = (U_H x)(t)$. Then,

$$x'(t) = \frac{e^A}{1-e^A}\left[g'(t)\int_t^{t+T}\frac{H(x(s), s)}{g(s)}\,ds + g(t)\left(\frac{H(x(t+T), t+T)}{g(t+T)} - \frac{H(x(t), t)}{g(t)}\right)\right]$$
$$= \frac{e^A}{1-e^A}\left[a(t)g(t)\int_t^{t+T}\frac{H(x(s), s)}{g(s)}\,ds + g(t)\frac{H(x(t), t)}{e^A g(t)} - g(t)\frac{H(x(t), t)}{g(t)}\right]$$
$$= a(t)x(t) + \frac{e^A}{1-e^A}\,\frac{(1-e^A)H(x(t), t)}{e^A} = a(t)x(t) + H(x(t), t).$$

5.a. We seek $\varepsilon_0 > 0$ such that for all $\varepsilon \le \varepsilon_0$, $\|x\| \le r \Rightarrow \|U_\varepsilon x\| \le r$. Therefore, we compute $\|U_\varepsilon x\|$. We have, letting $H(x, s) = \varepsilon F(x, s)$,
$$\|U_\varepsilon x\| = \sup_{t\in\mathbb{R}}|(U_\varepsilon x)(t)| = \sup_{t\in\mathbb{R}}\left|\frac{e^A}{1-e^A}\,g(t)\int_t^{t+T}g(s)^{-1}\varepsilon F(x(s), s)\,ds\right| \le \varepsilon\,\frac{e^A}{|1-e^A|}\sup_{t\in\mathbb{R}}|g(t)|\int_t^{t+T}\left|\frac{F(x(s), s)}{g(s)}\right|ds.$$

Note that we keep the absolute value in $|1-e^A|$, since $A$ could be negative, leading to a negative value for $1-e^A$. Let $\|g^{-1}\| = \sup_{t\in\mathbb{R}}|g^{-1}(t)|$. We then have
$$\|U_\varepsilon x\| \le \varepsilon\,\frac{e^A}{|1-e^A|}\,\|g\|\|g^{-1}\|\sup_{t\in\mathbb{R}}\int_t^{t+T}|F(x(s), s)|\,ds \le \varepsilon\,\frac{e^A}{|1-e^A|}\,\|g\|\|g^{-1}\|\int_t^{t+T}\alpha_r\,ds,$$
since $x(s) \in [-r, r]$, and so
$$\|U_\varepsilon x\| \le \varepsilon\,\frac{e^A}{|1-e^A|}\,\|g\|\|g^{-1}\|\,\alpha_r T.$$

Letting
$$\varepsilon_0 = \left(\frac{e^A}{|1-e^A|}\,\|g\|\|g^{-1}\|\,\alpha_r T\right)^{-1} r,$$
we see that if $\varepsilon \le \varepsilon_0$, then $\|U_\varepsilon x\| \le r$.

5.b. For the restriction of $U_\varepsilon$ to $B_r$ to be a contraction, we must have the inequality obtained above, as well as, for $x, y \in B_r$, $d(U_\varepsilon x, U_\varepsilon y) < d(x, y)$. In terms of the induced norm, this means that $\|U_\varepsilon x - U_\varepsilon y\| < \|x - y\|$. Therefore, letting $x, y \in P$ be such that $\|x\| \le r$ and $\|y\| \le r$, we compute
$$\|U_\varepsilon x - U_\varepsilon y\| = \sup_{t\in\mathbb{R}}|(U_\varepsilon x)(t) - (U_\varepsilon y)(t)| \le \varepsilon\,\frac{e^A}{|1-e^A|}\sup_{t\in\mathbb{R}}|g(t)|\int_t^{t+T}\left|\frac{F(x(s), s) - F(y(s), s)}{g(s)}\right|ds.$$
For $s \in [0, T]$ and $x(s), y(s) \in [-r, r]$, the mean value theorem gives
$$|F(x(s), s) - F(y(s), s)| \le \beta_r|x(s) - y(s)| \le \beta_r\|x - y\|,$$
so that
$$\|U_\varepsilon x - U_\varepsilon y\| \le \varepsilon\,\frac{e^A}{|1-e^A|}\,\|g\|\|g^{-1}\|\,\beta_r T\,\|x - y\|.$$
Letting
$$\varepsilon_1 = \min\left(\varepsilon_0,\ \left(\frac{e^A}{|1-e^A|}\,\|g\|\|g^{-1}\|\,\beta_r T\right)^{-1}\right),$$
we see that for $\varepsilon < \varepsilon_1$, the restriction of $U_\varepsilon$ to $B_r$ is a contraction.

5.c. We use the contraction mapping principle:

• for ε ≤ ε1, Uε is a contraction of Br,

• P is complete (it is closed in C(R, R) ∩ B(R) endowed with the supremum norm) and Br is closed in P .

Therefore, we conclude that for a given r > 0, for all ε ≤ ε1, there exists a unique solution xε of (E4) in Br.

6. This is the contraction mapping theorem with a parameter:

$$\|x_\varepsilon - x_{\varepsilon'}\| = \|U_\varepsilon x_\varepsilon - U_{\varepsilon'}x_{\varepsilon'}\| \le \underbrace{\|U_\varepsilon x_\varepsilon - U_\varepsilon x_{\varepsilon'}\|}_{\le K\|x_\varepsilon - x_{\varepsilon'}\|} + \|U_\varepsilon x_{\varepsilon'} - U_{\varepsilon'}x_{\varepsilon'}\|.$$

But we have
$$\|U_\varepsilon x_{\varepsilon'} - U_{\varepsilon'}x_{\varepsilon'}\| \le |\varepsilon - \varepsilon'|\,\frac{e^A}{|1-e^A|}\sup_{t\in\mathbb{R}}\int_t^{t+T}\left|g(t)g(s)^{-1}F(x_{\varepsilon'}(s), s)\right|ds \le |\varepsilon - \varepsilon'|\underbrace{\frac{e^A}{|1-e^A|}\,\|g\|\|g^{-1}\|\,T\alpha_r}_{=K'}.$$

Thus, we have $\|x_\varepsilon - x_{\varepsilon'}\| \le \frac{|\varepsilon - \varepsilon'|K'}{1-K}$, and therefore $\varepsilon \in \mathbb{R} \mapsto x_\varepsilon \in P$ is continuous; it follows that $\lim_{\varepsilon\to 0}x_\varepsilon = x_0$. But the only periodic solution of (E1) when $A \ne 0$ is the zero solution. Therefore, $x_\varepsilon \to 0$ when $\varepsilon \to 0$.

7. Let $x_0(t) = c_0$ be a constant function; then $g(t) = e^{kt}$ and $A = kT$. We have
$$U_\varepsilon x_0(t) = \frac{\varepsilon e^{kT}}{1-e^{kT}}\,e^{kt}\int_t^{t+T}e^{-ks}f(c_0)\,ds = \frac{\varepsilon e^{kT}}{1-e^{kT}}\,f(c_0)\,e^{kt}\left[-\frac{1}{k}e^{-ks}\right]_t^{t+T} = \frac{\varepsilon e^{kT}}{1-e^{kT}}\,\frac{f(c_0)}{k}\left(1 - e^{-kT}\right) = -\frac{\varepsilon}{k}f(c_0).$$
The constant function $x_\varepsilon(t) = c_0$ is a solution (where $c_0$ is the unique solution of the equation $\varepsilon f(x) + kx = 0$ for $\varepsilon$ sufficiently small).

Note: letting $h(x) = -\frac{\varepsilon}{k}f(x)$, it follows that $h'(x) = -\frac{\varepsilon}{k}f'(x)$. Thus, for $r > 0$ given, there exists $\varepsilon_0 > 0$ such that $\varepsilon \le \varepsilon_0$ implies $h([-r, r]) \subset [-r, r]$, and there exists $\varepsilon_1 \le \varepsilon_0$ such that $\sup_{x\in[-r,r]}|h'(x)| < 1$, so the fixed point theorem can be applied easily.

8.a. Using the formulas obtained in 5.a. and 5.b. with $\alpha_r = r^2$, $\beta_r = 2r$, $g(t) = e^{-t}$, $A = -T$, we find
$$\varepsilon_0 = \frac{r(1-e^{-T})e^T e^{-T}}{r^2 T} = \frac{1-e^{-T}}{rT}, \qquad \varepsilon_1 = \frac{1-e^{-T}}{2rT}.$$

8.b. The zero function is clearly a 1-periodic solution of (E5). By uniqueness of the fixed point, $x_\varepsilon = 0$ is the only solution of (E5) in $B_r$.

8.c. The vector field $-x + \varepsilon x^2$ is $C^1$, and therefore existence and uniqueness of a maximal solution $\varphi_\alpha$ is a direct consequence of the Cauchy–Lipschitz theorem. We solve the equation $x' = -x + \varepsilon x^2$ without the constraint of periodicity. There are two constant solutions, $x(t) = 0$ and $x(t) = 1/\varepsilon$. By uniqueness, any other solution never takes the values $0$ and $1/\varepsilon$. We have:
$$\frac{x'}{x(1-\varepsilon x)} = \frac{d}{dt}\ln\left|\frac{x}{1-\varepsilon x}\right| = -1 \;\Rightarrow\; \frac{x}{1-\varepsilon x} = \lambda e^{-t} \;\Rightarrow\; x(t) = \frac{\lambda e^{-t}}{1+\varepsilon\lambda e^{-t}}, \quad \lambda \in \mathbb{R}^*.$$
The condition $x(0) = \alpha$ gives $\lambda = \frac{\alpha}{1-\varepsilon\alpha}$ if $\alpha \ne 0$ and $\alpha \ne 1/\varepsilon$, and as a consequence, letting $\beta = \frac{1}{\alpha} - \varepsilon$, we obtain
$$\varphi_\alpha(t) = \frac{1}{\beta e^t + \varepsilon}.$$

• If $\beta \ge 0$ then $\varphi_\alpha$ is defined on $\mathbb{R}$.
• If $\beta < 0$, we let $t_0 = \ln\left(-\frac{\varepsilon}{\beta}\right)$.
  – If $\alpha > 0$ then $-\frac{\varepsilon}{\beta} > 1$, that is, $t_0 > 0$, and $\varphi_\alpha$ is defined on $]-\infty, t_0[$.
  – If $\alpha < 0$ then $-\frac{\varepsilon}{\beta} < 1$, that is, $t_0 < 0$, and $\varphi_\alpha$ is defined on $]t_0, +\infty[$.
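The closed form $\varphi_\alpha(t) = 1/(\beta e^t + \varepsilon)$ can be checked against a direct integration of (E5); the data below are arbitrary, with $\beta > 0$ so that the solution is global.

```python
# Check phi_alpha(t) = 1/(beta e^t + eps), beta = 1/alpha - eps, against (E5).
import numpy as np
from scipy.integrate import solve_ivp

eps, alpha = 1.0, 0.5
beta = 1 / alpha - eps                  # = 1 > 0: solution defined on all of R

phi = lambda t: 1 / (beta * np.exp(t) + eps)

sol = solve_ivp(lambda t, x: -x + eps * x**2, (0, 2.0), [alpha],
                rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0, 2, 5)
err = np.max(np.abs(sol.sol(ts)[0] - phi(ts)))
print(err)
```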

Here are a few representative solutions obtained when setting ε = 1.

[Figure: representative solution curves of (E5) for ε = 1, showing the branches α > 1/ε and α < 0; axes x ∈ [−3, 3], y ∈ [−3, 3].]

Third part

9. $\theta > 0$ since $|x(0)| < 1$, and we have $x(s) \in [-1, 1]$ for all $s \in [0, \theta]$, by definition of $\theta$. Since $f(0) = 0$ and $|f'|$ is bounded by $\lambda$ on $[-1, 1]$, the mean value inequality gives $|f(u)| = |f(u) - f(0)| \le \lambda|u|$ for all $u \in [-1, 1]$, that is, $|f(x(s))| \le \lambda|x(s)|$. Let $t \in [0, \theta]$. From the integral formulation of question 3.,
$$|e^{-kt}x(t)| = \left|x(0) + \varepsilon\int_0^t e^{-ks}f(x(s))\,ds\right| \le |x(0)| + \varepsilon\lambda\int_0^t e^{-ks}|x(s)|\,ds,$$
from which $|e^{-kt}x(t)| \le |x(0)|e^{\varepsilon\lambda t}$ (using Gronwall's lemma with $\varphi(t) = e^{-kt}|x(t)|$, $\eta = |x(0)|$, $\zeta = \varepsilon\lambda$), giving the inequality.

10. Since $k + \varepsilon\lambda < 0$, it follows that $|x(0)|e^{(k+\varepsilon\lambda)t} \le |x(0)| < 1$. Letting $E = \{t > 0 \mid |x(t)| > 1\}$, which is assumed non-empty, then $\theta = \inf E > 0$ (by continuity, since $|x(0)| < 1$ there exists $\eta > 0$ such that $|x(t)| < 1$ on $[0, \eta]$, and thus $\theta \ge \eta > 0$). Since $\lim_{t\to\theta^-}|x(t)| \le 1$ and $\lim_{t\to\theta^+}|x(t)| \ge 1$, it follows that $|x(\theta)| = 1$. On $[0, \theta]$, $|x(t)| \le |x(0)|e^{(k+\varepsilon\lambda)t}$ and, taking the limit, $|x(\theta)| < 1$, which is impossible.

First conclusion: $E = \emptyset$ and, if $J$ is the interval of definition of $x$, then $\forall t \in J \cap [0, +\infty)$, $|x(t)| \le 1$. If $J$ admits a finite upper bound $b$, then $x'$ is bounded in a neighborhood of $b$. Thus $x$ admits a limit at $b$. The same is therefore true for $x'$. We then know that $x$ can be extended beyond $b$, contradicting the maximality of $J$.

Final conclusion: $J \cap [0, +\infty) = [0, +\infty)$, $x$ is defined on $[0, +\infty)$ and the proof of question 9. holds true for all $t \ge 0$, i.e.,

$$\forall t \in [0, +\infty), \quad |x(t)| \le |x(0)|e^{(k+\varepsilon\lambda)t}.$$

N.B. This result expresses the stability and the asymptotic stability of the trivial solution of (E6).
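A numerical illustration of this stability estimate, with assumed data $k = -1$, $f(v) = \sin v$ (so $f(0) = 0$ and $\lambda = 1$ on $[-1, 1]$) and $\varepsilon = 0.5$, so that $\varepsilon\lambda < -k$:

```python
# Check |x(t)| <= |x(0)| e^{(k + eps*lam) t} along a trajectory of (E6).
import numpy as np
from scipy.integrate import solve_ivp

k, eps, lam = -1.0, 0.5, 1.0      # eps*lam = 0.5 < -k = 1
x0 = 0.9                          # |x(0)| < 1

sol = solve_ivp(lambda t, x: k * x + eps * np.sin(x), (0, 5.0), [x0],
                rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0, 5, 50)
bound = abs(x0) * np.exp((k + eps * lam) * ts)
ok = bool(np.all(np.abs(sol.sol(ts)[0]) <= bound + 1e-9))
print(ok)
```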

This subject was the Première composition de mathématiques of the entrance examination to the École Polytechnique in France, for students in the MP (Mathématiques–Physique) track, in 2004. Students, in their second year of post-secondary studies, have 4 hours to write this première composition.

The original subject comprised another question, originally question 3, which was omitted from this homework sheet. To be complete, this question is included here:

2'. We suppose here that $T = 2\pi$ and that the function $a$ is a constant $k$.
2'.a. Assuming $k \ne 0$, express the Fourier coefficients $\hat{x}(n)$, $n \in \mathbb{Z}$, of a solution $x$ of (E2) belonging to $P$, as a function of $k$ and the Fourier coefficients of $b$. What is the mode of convergence of the Fourier series of $x$?
2'.b. What happens when $k = 0$?

2'.a. If $k \ne 0$ then $A = 2\pi k \ne 0$, and from 2., there exists a unique $2\pi$-periodic solution. Since the mapping $x \mapsto \hat{x}(n)$ is linear, and from the relation $\widehat{x'}(n) = in\,\hat{x}(n)$, we have
$$\widehat{x'}(n) = k\hat{x}(n) + \hat{b}(n) \;\Rightarrow\; \hat{x}(n) = \frac{\hat{b}(n)}{in - k}.$$

Since $x$ is $C^1$, we know that the Fourier series of $x$ is normally convergent. Since $\hat{b}(n) \to 0$, we can also say that $\hat{x}(n) = o\!\left(\frac{1}{n}\right)$.
2'.b. Applying here again the result of 2.b.:

• If $\hat{b}(0) = 0$ then all solutions are $2\pi$-periodic. In this case, solutions satisfy $\hat{x}(n) = \hat{b}(n)/(in)$ for $n \in \mathbb{Z}$ nonzero, and $\hat{x}(0)$ varies with the solution under consideration.

• If $\hat{b}(0) \ne 0$ then no solution is periodic.