
5-28-2012

The Exponential of a Matrix

The solution to the exponential growth equation

$$\frac{dx}{dt} = kx$$

is given by $x = c_0 e^{kt}$. It is natural to ask whether you can solve a constant coefficient linear system

$$\vec{x}' = A\vec{x}$$

in a similar way. If a solution to the system is to have the same form as the growth equation solution, it should look like

$$\vec{x} = e^{At}\vec{x}_0.$$

The first thing I need to do is to make sense of the matrix exponential $e^{At}$. The Taylor series for $e^z$ is

$$e^z = \sum_{n=0}^{\infty} \frac{z^n}{n!}.$$

It converges absolutely for all $z$. If $A$ is an $n \times n$ matrix with real entries, define

$$e^{At} = \sum_{n=0}^{\infty} \frac{t^n A^n}{n!}.$$

The powers $A^n$ make sense, since $A$ is a square matrix. It is possible to show that this series converges for all $t$ and every matrix $A$. Differentiating the series term-by-term,

$$\frac{d}{dt} e^{At} = \sum_{n=1}^{\infty} \frac{n t^{n-1} A^n}{n!} = \sum_{n=1}^{\infty} \frac{t^{n-1} A^n}{(n-1)!} = A \sum_{n=1}^{\infty} \frac{t^{n-1} A^{n-1}}{(n-1)!} = A \sum_{m=0}^{\infty} \frac{t^m A^m}{m!} = A e^{At}.$$

This shows that $e^{At}$ solves the differential equation $\vec{x}' = A\vec{x}$. The initial condition vector $\vec{x}(0) = \vec{x}_0$ yields the particular solution

$$\vec{x} = e^{At} \vec{x}_0.$$

This works, because $e^{0 \cdot A} = I$ (by setting $t = 0$ in the power series).

Another familiar property of ordinary exponentials holds for the matrix exponential: If $A$ and $B$ commute (that is, $AB = BA$), then

$$e^A e^B = e^{A+B}.$$

You can prove this by multiplying the power series for the exponentials on the left. ($e^A$ is just $e^{At}$ with $t = 1$.)
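The series definition translates directly into code. Below is a minimal numpy sketch (numpy is assumed to be available; the 30-term truncation and the helper name `expm_series` are my own choices, adequate for small matrices and modest values of $t$). It checks the partial sums against the exact exponential of a diagonal matrix:

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate e^{At} by the partial sum of sum_n t^n A^n / n!."""
    n = A.shape[0]
    total = np.zeros((n, n))
    term = np.eye(n)                      # the n = 0 term: t^0 A^0 / 0! = I
    for k in range(terms):
        total = total + term
        term = term @ (t * A) / (k + 1)   # next term: multiply by tA/(k + 1)
    return total

# For a diagonal matrix the answer is known exactly: diag(e^{2t}, e^{3t}).
A = np.array([[2.0, 0.0], [0.0, 3.0]])
approx = expm_series(A, 1.0)
exact = np.diag([np.exp(2.0), np.exp(3.0)])
```

Production routines use more sophisticated schemes (scaling-and-squaring with Padé approximants) rather than the raw series, but the partial sums illustrate the definition.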

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}.$$

Compute the successive powers of $A$:

$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \quad A^2 = \begin{pmatrix} 4 & 0 \\ 0 & 9 \end{pmatrix}, \quad \ldots, \quad A^n = \begin{pmatrix} 2^n & 0 \\ 0 & 3^n \end{pmatrix}.$$

Therefore,

$$e^{At} = \sum_{n=0}^{\infty} \frac{t^n}{n!} \begin{pmatrix} 2^n & 0 \\ 0 & 3^n \end{pmatrix} = \begin{pmatrix} \sum_{n=0}^{\infty} \frac{(2t)^n}{n!} & 0 \\ 0 & \sum_{n=0}^{\infty} \frac{(3t)^n}{n!} \end{pmatrix} = \begin{pmatrix} e^{2t} & 0 \\ 0 & e^{3t} \end{pmatrix}.$$

You can compute the exponential of an arbitrary diagonal matrix in the same way:

$$A = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}, \quad e^{At} = \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix}.$$

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}.$$

Compute the successive powers of $A$:

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \quad A^2 = \begin{pmatrix} 1 & 4 \\ 0 & 1 \end{pmatrix}, \quad A^3 = \begin{pmatrix} 1 & 6 \\ 0 & 1 \end{pmatrix}, \quad \ldots, \quad A^n = \begin{pmatrix} 1 & 2n \\ 0 & 1 \end{pmatrix}.$$

Hence,

$$e^{At} = \sum_{n=0}^{\infty} \frac{t^n}{n!} \begin{pmatrix} 1 & 2n \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} \sum_{n=0}^{\infty} \frac{t^n}{n!} & \sum_{n=0}^{\infty} \frac{2n t^n}{n!} \\ 0 & \sum_{n=0}^{\infty} \frac{t^n}{n!} \end{pmatrix} = \begin{pmatrix} e^t & 2te^t \\ 0 & e^t \end{pmatrix}.$$

Here's where the last equality came from:

$$\sum_{n=0}^{\infty} \frac{t^n}{n!} = e^t,$$

$$\sum_{n=0}^{\infty} \frac{2n t^n}{n!} = 2t \sum_{n=1}^{\infty} \frac{t^{n-1}}{(n-1)!} = 2t \sum_{m=0}^{\infty} \frac{t^m}{m!} = 2te^t.$$

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 3 & -10 \\ 1 & -4 \end{pmatrix}.$$

If you compute powers of $A$ as in the last two examples, there is no evident pattern. Therefore, it would be difficult to compute the exponential using the power series. Instead, set up the system whose coefficient matrix is $A$:

$$x' = 3x - 10y,$$

$$y' = x - 4y.$$

The solution is

$$x = c_1 e^t + c_2 e^{-2t}, \quad y = \frac{1}{5} c_1 e^t + \frac{1}{2} c_2 e^{-2t}.$$

Next, note that if $B$ is a $2 \times 2$ matrix,

$$B \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \text{first column of } B \quad \text{and} \quad B \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \text{second column of } B.$$

In particular, this is true for $e^{At}$. Now

$$\vec{x} = e^{At} \vec{x}_0$$

is the solution satisfying $\vec{x}(0) = \vec{x}_0$, but

$$\vec{x} = \begin{pmatrix} c_1 e^t + c_2 e^{-2t} \\ \frac{1}{5} c_1 e^t + \frac{1}{2} c_2 e^{-2t} \end{pmatrix}.$$

Set $\vec{x}(0) = (1, 0)$ to get the first column of $e^{At}$:

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} c_1 + c_2 \\ \frac{1}{5} c_1 + \frac{1}{2} c_2 \end{pmatrix}.$$

Hence, $c_1 = \dfrac{5}{3}$, $c_2 = -\dfrac{2}{3}$. So

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \frac{5}{3} e^t - \frac{2}{3} e^{-2t} \\ \frac{1}{3} e^t - \frac{1}{3} e^{-2t} \end{pmatrix}.$$

Set $\vec{x}(0) = (0, 1)$ to get the second column of $e^{At}$:

$$\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} c_1 + c_2 \\ \frac{1}{5} c_1 + \frac{1}{2} c_2 \end{pmatrix}.$$

Therefore, $c_1 = -\dfrac{10}{3}$, $c_2 = \dfrac{10}{3}$. Hence,

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -\frac{10}{3} e^t + \frac{10}{3} e^{-2t} \\ -\frac{2}{3} e^t + \frac{5}{3} e^{-2t} \end{pmatrix}.$$

Therefore,

$$e^{At} = \begin{pmatrix} \frac{5}{3} e^t - \frac{2}{3} e^{-2t} & -\frac{10}{3} e^t + \frac{10}{3} e^{-2t} \\ \frac{1}{3} e^t - \frac{1}{3} e^{-2t} & -\frac{2}{3} e^t + \frac{5}{3} e^{-2t} \end{pmatrix}.$$

I found $e^{At}$, but I had to solve a system of differential equations in order to do it.
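The column-by-column idea can also be carried out numerically: integrate $\vec{x}' = A\vec{x}$ once for each standard basis initial condition and assemble the results as the columns of $e^{At}$. A sketch using a hand-rolled classical RK4 integrator (numpy assumed; the step size $h = 10^{-3}$ is an arbitrary choice):

```python
import numpy as np

A = np.array([[3.0, -10.0], [1.0, -4.0]])

def solve_to(A, x0, t, h=1e-3):
    """Integrate x' = Ax from x(0) = x0 to time t with classical RK4."""
    x = x0.astype(float)
    for _ in range(int(round(t / h))):
        k1 = A @ x
        k2 = A @ (x + h / 2 * k1)
        k3 = A @ (x + h / 2 * k2)
        k4 = A @ (x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

t = 1.0
# Each standard basis initial condition produces one column of e^{At}.
cols = [solve_to(A, e, t) for e in np.eye(2)]
expAt = np.column_stack(cols)

# The closed form computed above, evaluated at the same t
e1, e2 = np.exp(t), np.exp(-2 * t)
closed = np.array([[5/3 * e1 - 2/3 * e2,  -10/3 * e1 + 10/3 * e2],
                   [1/3 * e1 - 1/3 * e2,   -2/3 * e1 + 5/3 * e2]])
```

Each basis vector recovers one column, exactly as in the hand computation above.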

In some cases, it's possible to use linear algebra to compute the exponential of a matrix. An $n \times n$ matrix $A$ is diagonalizable if it has $n$ independent eigenvectors. (This is true, for example, if $A$ has $n$ distinct eigenvalues.) Suppose $A$ is diagonalizable with independent eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$ and corresponding eigenvalues $\lambda_1, \ldots, \lambda_n$. Let $S$ be the matrix whose columns are the eigenvectors:

$$S = \begin{pmatrix} \uparrow & \uparrow & & \uparrow \\ \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \\ \downarrow & \downarrow & & \downarrow \end{pmatrix}.$$

Then

$$S^{-1} A S = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} = D.$$

As I observed above,

$$e^{Dt} = \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix}.$$

On the other hand, since $(S^{-1} A S)^n = S^{-1} A^n S$,

$$e^{Dt} = \sum_{n=0}^{\infty} \frac{t^n (S^{-1} A S)^n}{n!} = S^{-1} \left( \sum_{n=0}^{\infty} \frac{t^n A^n}{n!} \right) S = S^{-1} e^{At} S.$$

Hence,

$$e^{At} = S \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix} S^{-1}.$$

I can use this approach to compute $e^{At}$ in case $A$ is diagonalizable.

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 3 & 5 \\ 1 & -1 \end{pmatrix}.$$

The eigenvalues are $\lambda = 4$ and $\lambda = -2$. Since there are two different eigenvalues and $A$ is a $2 \times 2$ matrix, $A$ is diagonalizable. The corresponding eigenvectors are $(5, 1)$ and $(-1, 1)$. Thus,

$$S = \begin{pmatrix} 5 & -1 \\ 1 & 1 \end{pmatrix}, \quad S^{-1} = \frac{1}{6} \begin{pmatrix} 1 & 1 \\ -1 & 5 \end{pmatrix}.$$

Hence,

$$e^{At} = \begin{pmatrix} 5 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} e^{4t} & 0 \\ 0 & e^{-2t} \end{pmatrix} \cdot \frac{1}{6} \begin{pmatrix} 1 & 1 \\ -1 & 5 \end{pmatrix} = \frac{1}{6} \begin{pmatrix} 5e^{4t} + e^{-2t} & 5e^{4t} - 5e^{-2t} \\ e^{4t} - e^{-2t} & e^{4t} + 5e^{-2t} \end{pmatrix}.$$
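The recipe $e^{At} = S e^{Dt} S^{-1}$ is easy to automate. A numpy sketch for this example (the sample point $t = 0.5$ is arbitrary; `np.linalg.eig` returns an eigenvector matrix whose columns are scaled differently from the hand computation, but the scaling cancels in $S e^{Dt} S^{-1}$):

```python
import numpy as np

A = np.array([[3.0, 5.0], [1.0, -1.0]])
t = 0.5

# Columns of S are eigenvectors; the eigenvalues here are 4 and -2.
evals, S = np.linalg.eig(A)
expAt = S @ np.diag(np.exp(evals * t)) @ np.linalg.inv(S)

# The closed form computed above, evaluated at the same t
e4, e2 = np.exp(4 * t), np.exp(-2 * t)
closed = (1 / 6) * np.array([[5 * e4 + e2, 5 * e4 - 5 * e2],
                             [e4 - e2,     e4 + 5 * e2]])
```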

Example. Compute $e^{At}$ if

$$A = \begin{pmatrix} 5 & -6 & -6 \\ -1 & 4 & 2 \\ 3 & -6 & -4 \end{pmatrix}.$$

The eigenvalues are $\lambda = 1$ and $\lambda = 2$ (double). The corresponding eigenvectors are $(3, -1, 3)$ for $\lambda = 1$, and $(2, 1, 0)$ and $(2, 0, 1)$ for $\lambda = 2$. Since I have 3 independent eigenvectors, the matrix is diagonalizable. I have

$$S = \begin{pmatrix} 3 & 2 & 2 \\ -1 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}, \quad S^{-1} = \begin{pmatrix} -1 & 2 & 2 \\ -1 & 3 & 2 \\ 3 & -6 & -5 \end{pmatrix}.$$

From this, it follows that

$$e^{At} = \begin{pmatrix} -3e^t + 4e^{2t} & 6e^t - 6e^{2t} & 6e^t - 6e^{2t} \\ e^t - e^{2t} & -2e^t + 3e^{2t} & -2e^t + 2e^{2t} \\ -3e^t + 3e^{2t} & 6e^t - 6e^{2t} & 6e^t - 5e^{2t} \end{pmatrix}.$$

Here's a quick check on the computation: If you set $t = 0$ in the right side, you get

$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

This checks, since $e^{A \cdot 0} = I$. Note that this check isn't foolproof: just because you get $I$ by setting $t = 0$ doesn't mean your answer is right. However, if you don't get $I$, your answer is surely wrong!

How do you compute $e^{At}$ if $A$ is not diagonalizable? I'll describe an iterative algorithm for computing $e^{At}$ that only requires that one know the eigenvalues of $A$. There are various algorithms for computing the matrix exponential; this one, which is due to Williamson [1], seems to me to be the easiest for hand computation. (Note that finding the eigenvalues of a matrix is, in general, a difficult problem: Any method for finding $e^{At}$ will have to deal with it.)

Let $A$ be an $n \times n$ matrix. Let $\{\lambda_1, \lambda_2, \ldots, \lambda_n\}$ be a list of the eigenvalues, with multiple eigenvalues repeated according to their multiplicity. Let

$$a_1 = e^{\lambda_1 t},$$

$$a_k = e^{\lambda_k t} \star a_{k-1}(t) = \int_0^t e^{\lambda_k (t-u)} a_{k-1}(u) \, du, \quad k = 2, \ldots, n,$$

$$B_1 = I,$$

$$B_k = (A - \lambda_{k-1} I) B_{k-1}, \quad k = 2, \ldots, n.$$

Then

$$e^{At} = a_1 B_1 + a_2 B_2 + \cdots + a_n B_n.$$

To prove this, I'll show that the expression on the right satisfies the differential equation $\vec{x}' = A\vec{x}$. To do this, I'll need two facts about the characteristic polynomial $p(x)$.

1. $(x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_n) = \pm p(x)$.

2. (Cayley-Hamilton Theorem) $p(A) = 0$.

Observe that if $p(x)$ is the characteristic polynomial, then using the first fact and the definition of the $B$'s,

$$p(x) = \pm(x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_n),$$

$$p(A) = \pm(A - \lambda_1 I)(A - \lambda_2 I) \cdots (A - \lambda_n I)$$

$$= \pm I(A - \lambda_1 I)(A - \lambda_2 I) \cdots (A - \lambda_n I)$$

$$= \pm B_1 (A - \lambda_1 I)(A - \lambda_2 I) \cdots (A - \lambda_n I)$$

$$= \pm B_2 (A - \lambda_2 I) \cdots (A - \lambda_n I)$$

$$\vdots$$

$$= \pm B_n (A - \lambda_n I).$$

By the Cayley-Hamilton Theorem,

$$\pm B_n (A - \lambda_n I) = 0. \quad (*)$$

I will use this fact in the proof below.

Example. I’ll illustrate the Cayley-Hamilton theorem with the matrix

$$A = \begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix}.$$

The characteristic polynomial is $(2 - \lambda)(1 - \lambda) - 6 = \lambda^2 - 3\lambda - 4$. The Cayley-Hamilton theorem asserts that if you plug $A$ into $\lambda^2 - 3\lambda - 4$, you'll get the zero matrix. First,

$$A^2 = \begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix} \begin{pmatrix} 2 & 3 \\ 2 & 1 \end{pmatrix} = \begin{pmatrix} 10 & 9 \\ 6 & 7 \end{pmatrix}.$$

Therefore,

$$A^2 - 3A - 4I = \begin{pmatrix} 10 & 9 \\ 6 & 7 \end{pmatrix} - \begin{pmatrix} 6 & 9 \\ 6 & 3 \end{pmatrix} - \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.$$
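The same verification takes a few lines numerically (a numpy sketch):

```python
import numpy as np

A = np.array([[2, 3], [2, 1]])
I = np.eye(2)

# p(λ) = λ² − 3λ − 4; Cayley-Hamilton says p(A) is the zero matrix.
pA = A @ A - 3 * A - 4 * I
```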

Proof of the algorithm. First,

$$a_k = \int_0^t e^{\lambda_k (t-u)} a_{k-1}(u) \, du = e^{\lambda_k t} \int_0^t e^{-\lambda_k u} a_{k-1}(u) \, du.$$

Recall that the Fundamental Theorem of Calculus says that

$$\frac{d}{dt} \int_0^t f(u) \, du = f(t).$$

Applying this and the Product Rule, I can differentiate ak to obtain

$$a_k' = \lambda_k e^{\lambda_k t} \int_0^t e^{-\lambda_k u} a_{k-1}(u) \, du + e^{\lambda_k t} e^{-\lambda_k t} a_{k-1}(t),$$

$$a_k' = \lambda_k a_k + a_{k-1}.$$

Therefore,

$$(a_1 B_1 + a_2 B_2 + \cdots + a_n B_n)' = \lambda_1 a_1 B_1 + (\lambda_2 a_2 B_2 + a_1 B_2) + (\lambda_3 a_3 B_3 + a_2 B_3) + \cdots + (\lambda_n a_n B_n + a_{n-1} B_n).$$

Expand the $a_{i-1} B_i$ terms using

$$a_{i-1} B_i = a_{i-1} (A - \lambda_{i-1} I) B_{i-1} = a_{i-1} A B_{i-1} - \lambda_{i-1} a_{i-1} B_{i-1}.$$

Making this substitution and telescoping the sum, I have

$$\lambda_1 a_1 B_1 + (\lambda_2 a_2 B_2 + a_1 A B_1 - \lambda_1 a_1 B_1) + (\lambda_3 a_3 B_3 + a_2 A B_2 - \lambda_2 a_2 B_2) + \cdots + (\lambda_n a_n B_n + a_{n-1} A B_{n-1} - \lambda_{n-1} a_{n-1} B_{n-1})$$

$$= \lambda_n a_n B_n + A(a_1 B_1 + a_2 B_2 + \cdots + a_{n-1} B_{n-1})$$

$$= \lambda_n a_n B_n - A a_n B_n + A(a_1 B_1 + a_2 B_2 + \cdots + a_n B_n)$$

$$= -a_n (A - \lambda_n I) B_n + A(a_1 B_1 + a_2 B_2 + \cdots + a_n B_n)$$

$$= -a_n \cdot 0 + A(a_1 B_1 + a_2 B_2 + \cdots + a_n B_n)$$

$$= A(a_1 B_1 + a_2 B_2 + \cdots + a_n B_n).$$

(The result $(*)$ proved above was used in the next-to-last equality.) Combining the results above, I've shown that

$$(a_1 B_1 + a_2 B_2 + \cdots + a_n B_n)' = A(a_1 B_1 + a_2 B_2 + \cdots + a_n B_n).$$

This shows that $M = a_1 B_1 + a_2 B_2 + \cdots + a_n B_n$ satisfies $M' = AM$. Using the power series expansion, I have $e^{-tA} A = A e^{-tA}$. So

$$(e^{-tA} M)' = -A e^{-tA} M + e^{-tA} A M = -e^{-tA} A M + e^{-tA} A M = 0.$$

(Remember that matrix multiplication is not commutative in general!) It follows that $e^{-tA} M$ is a constant matrix.

Set $t = 0$. Since $a_1(0) = 1$ and $a_2(0) = \cdots = a_n(0) = 0$, it follows that $M(0) = I$. In addition, $e^{-0 \cdot A} = I$. Therefore, $e^{-tA} M = I$, and hence $M = e^{At}$.

Example. Use the matrix exponential to solve

$$\vec{x}' = \begin{pmatrix} 3 & -1 \\ 1 & 1 \end{pmatrix} \vec{x}, \quad \vec{x}(0) = \begin{pmatrix} 3 \\ 4 \end{pmatrix}.$$

The characteristic polynomial is $(\lambda - 2)^2$. You can check that there is only one independent eigenvector, so I can't solve the system by diagonalizing. I could use generalized eigenvectors to solve the system, but I will use the matrix exponential to illustrate the algorithm.

First, list the eigenvalues: $\{2, 2\}$. Since $\lambda = 2$ is a double root, it is listed twice. Next, I'll compute the $a_k$'s:

$$a_1 = e^{2t},$$

$$a_2 = e^{2t} \star a_1(t) = \int_0^t e^{2(t-u)} e^{2u} \, du = \int_0^t e^{2t} \, du = t e^{2t}.$$

Here are the $B_k$'s:

$$B_1 = I, \quad B_2 = (A - 2I) B_1 = A - 2I = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}.$$

Therefore,

$$e^{At} = e^{2t} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + t e^{2t} \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} e^{2t} + t e^{2t} & -t e^{2t} \\ t e^{2t} & e^{2t} - t e^{2t} \end{pmatrix}.$$

(As a check, note that setting $t = 0$ produces the identity.)

The solution to the given initial value problem is

$$\vec{x} = \begin{pmatrix} e^{2t} + t e^{2t} & -t e^{2t} \\ t e^{2t} & e^{2t} - t e^{2t} \end{pmatrix} \begin{pmatrix} 3 \\ 4 \end{pmatrix}.$$

You can get the general solution by replacing $(3, 4)$ with $(c_1, c_2)$.
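Williamson's algorithm mechanizes nicely in a computer algebra system, since the $a_k$ are defined by convolution integrals. Here is a sketch using sympy (assumed available; the helper name `expm_williamson` is mine, not standard), reproducing the example above:

```python
import sympy as sp

t, u = sp.symbols('t u')

def expm_williamson(A, eigenvalues):
    """e^{At} = a1 B1 + a2 B2 + ... + an Bn (Williamson's algorithm)."""
    n = A.shape[0]
    I = sp.eye(n)
    a = sp.exp(eigenvalues[0] * t)   # a_1 = e^{λ1 t}
    B = I                            # B_1 = I
    total = a * B
    for k in range(1, n):
        # a_k: convolution of e^{λk t} with a_{k-1}
        a = sp.integrate(sp.exp(eigenvalues[k] * (t - u)) * a.subs(t, u),
                         (u, 0, t))
        # B_k = (A - λ_{k-1} I) B_{k-1}
        B = (A - eigenvalues[k - 1] * I) * B
        total = total + a * B
    return sp.simplify(total)

A = sp.Matrix([[3, -1], [1, 1]])
expAt = expm_williamson(A, [2, 2])
```

The symbolic integration does exactly the convolution computed by hand above, so `expAt` matches the matrix found in the example.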

Example. Find $e^{At}$ if

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ -1 & -1 & 2 \end{pmatrix}.$$

The eigenvalues are $\lambda = 1$ (double) and $\lambda = 2$ (the matrix is lower triangular, so the eigenvalues are the diagonal entries).

First, I'll compute the $a_k$'s. I have $a_1 = e^t$, and

$$a_2 = \int_0^t e^{t-u} e^u \, du = \int_0^t e^t \, du = t e^t,$$

$$a_3 = \int_0^t e^{2(t-u)} u e^u \, du = -t e^t - e^t + e^{2t}.$$

Next, I'll compute the $B_k$'s. $B_1 = I$, and

$$B_2 = A - I = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ -1 & -1 & 1 \end{pmatrix},$$

$$B_3 = (A - I) B_2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -2 & -1 & 1 \end{pmatrix}.$$

Therefore,

$$e^{At} = \begin{pmatrix} e^t & 0 & 0 \\ t e^t & e^t & 0 \\ t e^t + 2e^t - 2e^{2t} & e^t - e^{2t} & e^{2t} \end{pmatrix}.$$
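A quick numerical test of this answer (a numpy sketch; the sample point and difference step are arbitrary choices) checks both $e^{A \cdot 0} = I$ and the defining property $\frac{d}{dt} e^{At} = A e^{At}$ via a central difference:

```python
import numpy as np

A = np.array([[ 1.0,  0.0, 0.0],
              [ 1.0,  1.0, 0.0],
              [-1.0, -1.0, 2.0]])

def expAt(t):
    """The closed form found above."""
    et, e2t = np.exp(t), np.exp(2 * t)
    return np.array([[et,                         0.0,      0.0],
                     [t * et,                     et,       0.0],
                     [t * et + 2 * et - 2 * e2t,  et - e2t, e2t]])

t, h = 0.3, 1e-6
deriv = (expAt(t + h) - expAt(t - h)) / (2 * h)   # central difference
```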

Example. Use the matrix exponential to solve

$$\vec{x}' = \begin{pmatrix} 2 & -5 \\ 2 & -4 \end{pmatrix} \vec{x}.$$

This example will demonstrate how the algorithm for $e^{At}$ works when the eigenvalues are complex.

The characteristic polynomial is $\lambda^2 + 2\lambda + 2$. The eigenvalues are $\lambda = -1 \pm i$. I will list them as $\{-1 + i, -1 - i\}$.

First, I'll compute the $a_k$'s. $a_1 = e^{(-1+i)t}$, and

$$a_2 = \int_0^t e^{(-1+i)(t-u)} e^{(-1-i)u} \, du = e^{(-1+i)t} \int_0^t e^{-2iu} \, du = \frac{i}{2} e^{(-1+i)t} \left( e^{-2it} - 1 \right) = \frac{i}{2} \left( e^{(-1-i)t} - e^{(-1+i)t} \right).$$

Next, I'll compute the $B_k$'s. $B_1 = I$, and

$$B_2 = A - (-1 + i)I = \begin{pmatrix} 3 - i & -5 \\ 2 & -3 - i \end{pmatrix}.$$

Therefore,

$$e^{At} = e^{(-1+i)t} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \frac{i}{2} \left( e^{(-1-i)t} - e^{(-1+i)t} \right) \begin{pmatrix} 3 - i & -5 \\ 2 & -3 - i \end{pmatrix}.$$

I want a real solution, so I'll use Euler's Formula to simplify:

$$e^{(-1+i)t} = e^{-t} (\cos t + i \sin t),$$

$$e^{(-1-i)t} - e^{(-1+i)t} = e^{-t} (\cos t - i \sin t) - e^{-t} (\cos t + i \sin t) = -2i e^{-t} \sin t,$$

$$\frac{i}{2} \left( e^{(-1-i)t} - e^{(-1+i)t} \right) = e^{-t} \sin t.$$

Plugging these into the expression for $e^{At}$ above, I have

$$e^{At} = e^{-t} (\cos t + i \sin t) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + e^{-t} \sin t \begin{pmatrix} 3 - i & -5 \\ 2 & -3 - i \end{pmatrix} = e^{-t} \begin{pmatrix} \cos t + 3 \sin t & -5 \sin t \\ 2 \sin t & \cos t - 3 \sin t \end{pmatrix}.$$

Notice that all the $i$'s have dropped out! This reflects the obvious fact that the exponential of a real matrix must be a real matrix.

Finally, the general solution to the original system is

$$\begin{pmatrix} x \\ y \end{pmatrix} = e^{-t} \begin{pmatrix} \cos t + 3 \sin t & -5 \sin t \\ 2 \sin t & \cos t - 3 \sin t \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$
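Carrying out the same algorithm in complex floating-point arithmetic confirms that the imaginary parts cancel to rounding error (a numpy sketch; the sample point $t = 0.9$ is arbitrary):

```python
import numpy as np

A = np.array([[2.0, -5.0], [2.0, -4.0]])
t = 0.9

# The algorithm's ingredients, computed with complex scalars
lam = -1 + 1j
a1 = np.exp(lam * t)
a2 = (1j / 2) * (np.exp((-1 - 1j) * t) - np.exp((-1 + 1j) * t))
B2 = A - lam * np.eye(2)
expAt = a1 * np.eye(2) + a2 * B2

# The real closed form found above
closed = np.exp(-t) * np.array(
    [[np.cos(t) + 3 * np.sin(t), -5 * np.sin(t)],
     [2 * np.sin(t),             np.cos(t) - 3 * np.sin(t)]])
```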

Example. I'll compare the matrix exponential and the eigenvector solution methods by solving the following system both ways:

$$\vec{x}' = \begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix} \vec{x}.$$

The characteristic polynomial is $\lambda^2 - 4\lambda + 5$. The eigenvalues are $\lambda = 2 \pm i$. Consider $\lambda = 2 + i$:

$$A - (2 + i)I = \begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix}.$$

As this is an eigenvector matrix, it must be singular, and hence the rows must be multiples of one another. So ignore the second row. I want a vector $(a, b)$ such that $(-i)a + (-1)b = 0$. To get such a vector, switch the $-i$ and $-1$ and negate one of them: $a = 1$, $b = -i$. Thus, $(1, -i)$ is an eigenvector. The corresponding solution is

$$e^{(2+i)t} \begin{pmatrix} 1 \\ -i \end{pmatrix} = e^{2t} \begin{pmatrix} \cos t + i \sin t \\ \sin t - i \cos t \end{pmatrix}.$$

Take the real and imaginary parts:

$$\operatorname{re}\left( e^{(2+i)t} \begin{pmatrix} 1 \\ -i \end{pmatrix} \right) = e^{2t} \begin{pmatrix} \cos t \\ \sin t \end{pmatrix},$$

$$\operatorname{im}\left( e^{(2+i)t} \begin{pmatrix} 1 \\ -i \end{pmatrix} \right) = e^{2t} \begin{pmatrix} \sin t \\ -\cos t \end{pmatrix}.$$

The solution is

$$\vec{x} = e^{2t} \left( c_1 \begin{pmatrix} \cos t \\ \sin t \end{pmatrix} + c_2 \begin{pmatrix} \sin t \\ -\cos t \end{pmatrix} \right).$$

Now I'll solve the equation using the exponential. The eigenvalues are $\{2 + i, 2 - i\}$. Compute the $a_k$'s: $a_1 = e^{(2+i)t}$, and

$$a_2 = e^{(2-i)t} \star e^{(2+i)t} = \int_0^t e^{(2-i)(t-u)} e^{(2+i)u} \, du = e^{(2-i)t} \int_0^t e^{2iu} \, du = -\frac{i}{2} e^{(2-i)t} \left( e^{2it} - 1 \right) = -\frac{i}{2} e^{2t} \left( e^{it} - e^{-it} \right) = e^{2t} \sin t.$$

(Here and below, I'm cheating a little in the comparison by not showing all the algebra involved in the simplification. You need to use Euler's Formula to eliminate the complex exponentials.)

Next, compute the $B_k$'s. $B_1 = I$, and

$$B_2 = A - (2 + i)I = \begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix}.$$

Therefore,

$$e^{At} = e^{(2+i)t} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + e^{2t} \sin t \begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix} = e^{2t} \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}.$$

The solution is

$$\vec{x} = e^{2t} \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$

Taking into account some of the algebra I didn't show for the matrix exponential, I think the eigenvector approach is easier.

Example. Solve the system

$$\vec{x}' = \begin{pmatrix} 5 & -8 \\ 2 & -3 \end{pmatrix} \vec{x}.$$

For comparison, I'll do this first using the generalized eigenvector method, then using the matrix exponential.

The characteristic polynomial is $\lambda^2 - 2\lambda + 1$. The eigenvalue is $\lambda = 1$ (double).

$$A - I = \begin{pmatrix} 4 & -8 \\ 2 & -4 \end{pmatrix}.$$

Ignore the first row, and divide the second row by 2, obtaining the vector $(1, -2)$. I want $(a, b)$ such that $(1)a + (-2)b = 0$. Swap the 1 and $-2$ and negate the $-2$: I get $(a, b) = (2, 1)$. This is an eigenvector for $\lambda = 1$.

Since I only have one eigenvector, I need a generalized eigenvector. This means I need $(a', b')$ such that

$$\begin{pmatrix} 4 & -8 \\ 2 & -4 \end{pmatrix} \begin{pmatrix} a' \\ b' \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}.$$

Row reduce:

$$\begin{pmatrix} 4 & -8 & 2 \\ 2 & -4 & 1 \end{pmatrix} \to \begin{pmatrix} 1 & -2 & \frac{1}{2} \\ 0 & 0 & 0 \end{pmatrix}.$$

This means that $a' = 2b' + \dfrac{1}{2}$. Setting $b' = 0$ yields $a' = \dfrac{1}{2}$. The generalized eigenvector is $\left( \dfrac{1}{2}, 0 \right)$.

The solution is

$$\vec{x} = c_1 e^t \begin{pmatrix} 2 \\ 1 \end{pmatrix} + c_2 \left( t e^t \begin{pmatrix} 2 \\ 1 \end{pmatrix} + e^t \begin{pmatrix} \frac{1}{2} \\ 0 \end{pmatrix} \right).$$

Next, I'll solve the system using the matrix exponential. The eigenvalues are $\{1, 1\}$. First, I'll compute the $a_k$'s: $a_1 = e^t$, and

$$a_2 = e^t \star e^t = \int_0^t e^{t-u} e^u \, du = \int_0^t e^t \, du = t e^t.$$

Next, compute the $B_k$'s. $B_1 = I$, and

$$B_2 = A - I = \begin{pmatrix} 4 & -8 \\ 2 & -4 \end{pmatrix}.$$

Therefore,

$$e^{At} = e^t \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + t e^t \begin{pmatrix} 4 & -8 \\ 2 & -4 \end{pmatrix} = \begin{pmatrix} e^t + 4t e^t & -8t e^t \\ 2t e^t & e^t - 4t e^t \end{pmatrix}.$$

The solution is

$$\vec{x} = \begin{pmatrix} e^t + 4t e^t & -8t e^t \\ 2t e^t & e^t - 4t e^t \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}.$$

In this case, finding the solution using the matrix exponential may be a little bit easier.

[1] Richard Williamson, Introduction to differential equations. Englewood Cliffs, NJ: Prentice-Hall, 1986.

© 2012 by Bruce Ikenaga