
NOTES ON SOLVING LINEAR SYSTEMS OF DIFFERENTIAL EQUATIONS

LANCE D. DRAGER

Version Time-stamp: "2011-03-31 18:11:43 drager".

1. Introduction

A problem that comes up in a lot of different fields of mathematics and engineering is solving a system of linear constant coefficient differential equations. Such a system looks like

\[
\begin{aligned}
x_1'(t) &= a_{11}x_1(t) + a_{12}x_2(t) + \cdots + a_{1n}x_n(t) \\
x_2'(t) &= a_{21}x_1(t) + a_{22}x_2(t) + \cdots + a_{2n}x_n(t) \\
&\;\;\vdots \\
x_n'(t) &= a_{n1}x_1(t) + a_{n2}x_2(t) + \cdots + a_{nn}x_n(t),
\end{aligned}
\tag{1.1}
\]

where the $a_{ij}$'s are constants. This is a system of $n$ differential equations for the $n$ unknown functions $x_1(t), \dots, x_n(t)$. It's important to note that the equations are coupled, meaning that the expression for the derivative $x_i'(t)$ contains not only $x_i(t)$, but (possibly) all the rest of the unknown functions. It's unclear how to proceed using the methods we've learned for scalar differential equations.

Of course, to find a specific solution of (1.1), we need to specify initial conditions for the unknown functions at some value $t_0$ of $t$,

\[
\begin{aligned}
x_1(t_0) &= c_1 \\
x_2(t_0) &= c_2 \\
&\;\;\vdots \\
x_n(t_0) &= c_n,
\end{aligned}
\tag{1.2}
\]

where $c_1, c_2, \dots, c_n$ are constants.

It's pretty clear that linear algebra is going to help here. We can put our unknown functions into a vector-valued function

\[
x(t) =
\begin{bmatrix}
x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t)
\end{bmatrix},
\]

and our constant coefficients into an $n \times n$ matrix $A = [a_{ij}]$. Recall that to differentiate a vector-valued function, we differentiate each component, so

\[
x'(t) =
\begin{bmatrix}
x_1'(t) \\ x_2'(t) \\ \vdots \\ x_n'(t)
\end{bmatrix},
\]

so we can rewrite (1.1) more compactly in vector-matrix form as

\[
x'(t) = Ax(t).
\tag{1.3}
\]

If we put our initial values into a vector

\[
c =
\begin{bmatrix}
c_1 \\ c_2 \\ \vdots \\ c_n
\end{bmatrix},
\]

we can rewrite the initial conditions (1.2) as

\[
x(t_0) = c.
\tag{1.4}
\]

Thus, the matrix form of our problem is

\[
x'(t) = Ax(t), \qquad x(t_0) = c.
\tag{1.5}
\]

A problem of this form is called an initial value problem (IVP). For information on the matrix manipulations used in these notes, see the Appendix.
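To make the matrix form (1.5) concrete, here is a small numerical sketch, assuming NumPy and SciPy are available. The $2 \times 2$ matrix, initial vector, and time interval below are made up for illustration; they are not taken from the notes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A hypothetical 2x2 instance of the IVP (1.5): x'(t) = A x(t), x(t0) = c.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # the constant coefficient matrix [a_ij]
c = np.array([1.0, 0.0])       # initial condition x(t0) = c
t0, t1 = 0.0, 1.0

# The coupled scalar equations (1.1) are exactly the rows of the product A x(t).
def rhs(t, x):
    return A @ x

sol = solve_ivp(rhs, (t0, t1), c, rtol=1e-10, atol=1e-12)
x_final = sol.y[:, -1]         # numerical approximation to x(t1)
```

Here the whole coupled system is handed to the solver as a single vector equation, which is precisely the point of the vector-matrix notation.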
Eigenvalues and eigenvectors are going to be important to our solution methods. Of course, even real matrices can have complex, nonreal eigenvalues. To take care of this problem, we'll work with complex matrices and complex solutions to the differential equation from the start. In many (but not all) applications, one is only interested in real solutions, so we'll indicate as we go along what happens when our matrix $A$ and our initial conditions $c$ are real.

The equation $x'(t) = Ax(t)$ is a homogeneous equation. An inhomogeneous equation would be one of the form

\[
x'(t) = Ax(t) + f(t),
\]

where $f(t)$ is a given vector-valued function. We'll discuss homogeneous systems to begin with, and show how to solve inhomogeneous systems at the end of these notes.

1.1. Notation. The symbol $\mathbb{R}$ will denote the real numbers and $\mathbb{C}$ will denote the complex numbers. We will denote by $\mathrm{Mat}_{m \times n}(\mathbb{C})$ the space of $m \times n$ matrices with entries in $\mathbb{C}$, and $\mathrm{Mat}_{m \times n}(\mathbb{R})$ will denote the space of $m \times n$ matrices with entries in $\mathbb{R}$. We use $\mathbb{R}^n$ as a synonym for $\mathrm{Mat}_{n \times 1}(\mathbb{R})$, the space of column vectors with $n$ entries. Similarly, $\mathbb{C}^n$ is a synonym for $\mathrm{Mat}_{n \times 1}(\mathbb{C})$.

2. The Matrix Exponential

In this section, we'll first consider the existence and uniqueness question for our system of equations. We'll then consider the fundamental matrix for our system and show how this solves the problem. In the last subsection, we'll view this fundamental matrix as a matrix exponential function.

2.1. Initial Value Problem and Existence and Uniqueness. The main problem we're interested in is the initial value problem

\[
x'(t) = Ax(t), \qquad x(t_0) = c,
\tag{2.1}
\]

where $A \in \mathrm{Mat}_{n \times n}(\mathbb{C})$, $c \in \mathbb{C}^n$, and we're solving for a function $x(t)$ with values in $\mathbb{C}^n$, defined on some interval in $\mathbb{R}$ containing $t_0$. We state the following existence and uniqueness theorem without proof.

Theorem 2.1 (Existence and Uniqueness Theorem). Let $A$ be an $n \times n$ complex matrix, let $c \in \mathbb{C}^n$ and let $t_0 \in \mathbb{R}$.
(1) There is a differentiable function $x \colon \mathbb{R} \to \mathbb{C}^n : t \mapsto x(t)$ such that

\[
x'(t) = Ax(t) \quad \text{for all } t \in \mathbb{R}, \qquad x(t_0) = c.
\]

(2) If $J \subseteq \mathbb{R}$ is an open interval in $\mathbb{R}$ that contains $t_0$ and $y \colon J \to \mathbb{C}^n$ is a differentiable function such that

\[
y'(t) = Ay(t) \quad \text{for all } t \in J, \qquad y(t_0) = c,
\]

then $y(t) = x(t)$ for all $t \in J$.

In view of this Theorem, we may as well consider solutions defined on all of $\mathbb{R}$. For brevity, we'll summarize by saying that solutions of the initial value problem are unique.

It will turn out to be useful to consider initial value problems for matrix-valued functions. To distinguish the cases, we'll usually write $X(t)$ for our unknown function with values in $\mathrm{Mat}_{n \times n}(\mathbb{C})$.

Theorem 2.2. Suppose that $A \in \mathrm{Mat}_{n \times n}(\mathbb{C})$ and that $t_0 \in \mathbb{R}$. Let $C$ be a fixed $n \times n$ complex matrix. Then there is a function $X \colon \mathbb{R} \to \mathrm{Mat}_{n \times n}(\mathbb{C}) : t \mapsto X(t)$ such that

\[
X'(t) = AX(t), \quad t \in \mathbb{R}, \qquad X(t_0) = C.
\tag{2.2}
\]

This solution is unique in the sense of Theorem 2.1.

Proof. If we write $X(t)$ in terms of its columns as

\[
X(t) = [\,x_1(t) \mid x_2(t) \mid \cdots \mid x_n(t)\,],
\]

so each $x_i(t)$ is a vector-valued function, then

\[
X'(t) = [\,x_1'(t) \mid x_2'(t) \mid \cdots \mid x_n'(t)\,], \qquad
AX(t) = [\,Ax_1(t) \mid Ax_2(t) \mid \cdots \mid Ax_n(t)\,].
\]

Thus, the matrix differential equation $X'(t) = AX(t)$ is equivalent to the $n$ vector differential equations

\[
\begin{aligned}
x_1'(t) &= Ax_1(t) \\
x_2'(t) &= Ax_2(t) \\
&\;\;\vdots \\
x_n'(t) &= Ax_n(t).
\end{aligned}
\]

If we write the initial matrix $C$ in terms of its columns as $C = [\,c_1 \mid c_2 \mid \cdots \mid c_n\,]$, then the initial condition $X(t_0) = C$ is equivalent to the vector equations

\[
x_1(t_0) = c_1, \quad x_2(t_0) = c_2, \quad \dots, \quad x_n(t_0) = c_n.
\]

Since each of the initial value problems

\[
x_j'(t) = Ax_j(t), \qquad x_j(t_0) = c_j, \qquad j = 1, 2, \dots, n
\]

has a unique solution, we conclude that the matrix initial value problem (2.2) has a unique solution. □

2.2. The Fundamental Matrix and Its Properties. It turns out we only need to solve one matrix initial value problem in order to solve them all.

Definition 2.3. Let $A$ be a complex $n \times n$ matrix.
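The column-by-column reduction in the proof of Theorem 2.2 can be checked numerically. The sketch below uses SciPy's matrix exponential to produce the solution $X(t) = e^{tA}C$ of the matrix IVP with $t_0 = 0$; the notes have not yet introduced that formula, so treat it as an assumption here, and the matrices $A$ and $C$ are made up for the demo.

```python
import numpy as np
from scipy.linalg import expm

# Made-up data for the matrix IVP X'(t) = A X(t), X(0) = C.
A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
C = np.array([[1.0, 3.0],
              [2.0, 4.0]])
t = 0.7

# Assumed solution of the matrix IVP at time t (X(t) = e^{tA} C).
X_t = expm(t * A) @ C

# Column j of X(t) solves the vector IVP x_j'(t) = A x_j(t), x_j(0) = C[:, j],
# exactly as in the proof: the matrix problem splits into n vector problems.
columns = [expm(t * A) @ C[:, j] for j in range(2)]
```

The columns of `X_t` and the separately computed vector solutions agree, which is the content of the equivalence used in the proof.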
The unique $n \times n$ matrix-valued function $X(t)$ that solves the matrix initial value problem

\[
X'(t) = AX(t), \qquad X(0) = I
\tag{2.3}
\]

will be denoted by $\Phi_A(t)$, in order to indicate the dependence on $A$. In other words, $\Phi_A(t)$ is the unique function so that

\[
\Phi_A'(t) = A\Phi_A(t), \qquad \Phi_A(0) = I.
\tag{2.4}
\]

The function $\Phi_A(t)$ is called the Fundamental Matrix of (2.3).

We'll have a much more intuitive notation for $\Phi_A(t)$ in a bit, but we need to do some work first. First, let's show that $\Phi_A(t)$ solves the initial value problems we've discussed so far.

Theorem 2.4. Let $A$ be an $n \times n$ complex matrix.

(1) Let $c \in \mathbb{C}^n$. The solution of the initial value problem

\[
x'(t) = Ax(t), \qquad x(t_0) = c,
\tag{2.5}
\]

is

\[
x(t) = \Phi_A(t - t_0)c.
\tag{2.6}
\]

(2) Let $C \in \mathrm{Mat}_{n \times n}(\mathbb{C})$. The solution $X(t)$ of the matrix initial value problem

\[
X'(t) = AX(t), \qquad X(t_0) = C,
\tag{2.7}
\]

is

\[
X(t) = \Phi_A(t - t_0)C.
\tag{2.8}
\]

Proof. Consider the matrix-valued function $\Psi(t) = \Phi_A(t - t_0)$. We then have

\[
\Psi'(t) = \frac{d}{dt}\Phi_A(t - t_0)
= \Phi_A'(t - t_0)\,\frac{d}{dt}(t - t_0)
= A\Phi_A(t - t_0)
= A\Psi(t).
\]

We also have $\Psi(t_0) = \Phi_A(t_0 - t_0) = \Phi_A(0) = I$.

For the first part of the proof, suppose $c$ is a constant vector, and let $y(t) = \Psi(t)c$. Then

\[
y'(t) = \Psi'(t)c = A\Psi(t)c = Ay(t),
\]

and $y(t_0) = \Psi(t_0)c = Ic = c$. Thus, $y(t)$ is the unique solution of the initial value problem (2.5). The proof of the second part is very similar. □

Exercise 2.5. Show that

\[
\Phi_0(t) = I \quad \text{for all } t,
\]

where $0$ is the $n \times n$ zero matrix.

In the rest of this subsection, we're going to derive some properties of $\Phi_A(t)$. The pattern of proof is the same in each case: we show two functions satisfy the same initial value problem, so they must be the same. Here's a simple example to start.

Theorem 2.6. Let $A$ be an $n \times n$ real matrix. Then $\Phi_A(t)$ is real. The solutions of the initial value problems (2.5) and (2.7) are real if the initial data, $c$ or $C$, is real.

Recall that the conjugate of a complex number $z$ is denoted by $\bar{z}$.
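The defining conditions (2.4), the shift formula (2.6), and the realness claim of Theorem 2.6 can all be sketched numerically. As before, this assumes $\Phi_A(t) = e^{tA}$ (computed with SciPy's `expm`), an identification the notes make precise later; the matrix $A$, the shift $t_0$, and the vector $c$ are made up for the demo, and the derivative in (2.4) is approximated by a central difference.

```python
import numpy as np
from scipy.linalg import expm

# A made-up real 2x2 matrix.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def Phi(t):
    # Candidate fundamental matrix, assuming Phi_A(t) = e^{tA}.
    return expm(t * A)

# Central-difference approximation to Phi_A'(t) at a sample time,
# for checking the differential equation in (2.4).
t, h = 0.5, 1e-6
dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)

# Theorem 2.4(1): x(t) = Phi_A(t - t0) c satisfies the initial condition x(t0) = c.
t0 = 0.3
c = np.array([2.0, -1.0])
x_at_t0 = Phi(t0 - t0) @ c
```

The same function also illustrates Exercise 2.5, since the zero matrix gives $\Phi_0(t) = I$ for every $t$.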