
IOSR Journal of Mathematics (IOSR-JM) e-ISSN: 2278-5728, p-ISSN: 2319-765X. Volume 12, Issue 1 Ver. IV (Jan. - Feb. 2016), PP 72-86 www.iosrjournals.org

The Exact Methods to Compute the Matrix Exponential

Mohammed Abdullah Salman1, V. C. Borkar2 1(Department of Mathematics, Yeshwant Mahavidyalaya, Swami Ramanand Teerth Marathwada University, Nanded, India). 2(Department of Mathematics & Statistics, Yeshwant Mahavidyalaya, Swami Ramanand Teerth Marathwada University, Nanded, India).

Abstract: In this paper we present several kinds of methods that allow us to compute the exponential matrix $e^{tA}$ exactly. Some of these methods, such as those based on eigenvalues and Laplace transforms, are well known and are mentioned here for completeness. Other methods, not well represented in the literature, avoid the calculation of eigenvectors and provide general formulas applicable to any matrix. Keywords: Exponential matrix, functions of a matrix, Lagrange-Sylvester interpolation, Putzer spectral formula, commuting matrix, non-commuting matrix.

I. Introduction
The exponential matrix is a very useful tool for solving linear systems of first order. It provides a closed formula for the solutions, with whose help the controllability and observability of a linear system can be analyzed [1]. There are several methods for calculating the matrix exponential, not all of them computationally efficient [7,8,9,10]. However, from a theoretical point of view it is important to know the properties of this matrix function. Formulas involving the calculation of eigenvectors and generalized Laplace transforms have been used in a large number of textbooks, and for this reason the aim of this work is to provide alternative methods that are not well known but didactically friendly. There are other interesting methods [4] that are not included in our list of cases because of the practicality of their implementation. We develop eight cases, or methods, to calculate the matrix exponential. We provide examples of how to apply the lesser-known methods in specific cases, and for the best-known cases we cite the respective bibliography.

II. Definitions And Results
The exponential of an $n \times n$ complex matrix $A$, denoted by $e^{tA}$, is defined by
\[ \Phi(t) = e^{At} = \sum_{k=0}^{\infty} \frac{(At)^k}{k!} = I + At + \frac{(At)^2}{2!} + \cdots + \frac{(At)^{n-1}}{(n-1)!} + \cdots \]
To establish the convergence of this series, we first define the Frobenius norm of a matrix of size $m \times n$ as follows:
\[ \|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2 \right)^{1/2} \]
If $A(:,j)$ denotes the $j$-th column of $A$, and $A(i,:)$ the $i$-th row, it is easy to see that the following is satisfied:

\[ \|A\|_F^2 = \sum_{j=1}^{n} \|A(:,j)\|_2^2 = \sum_{i=1}^{m} \|A(i,:)\|_2^2 \]
We will use this norm for convenience, because in finite dimension all norms are equivalent. An important property is how the Frobenius norm bounds a matrix product. Given matrices $A$ of size $m \times p$ and $B$ of size $p \times n$, the product $C = AB$ has entries $C_{ij} = A(i,:)B(:,j)$. If $A$ has complex entries, the conjugate is applied to the row. Recall the Cauchy-Schwarz inequality

\[ |C_{ij}|^2 \le \|A(i,:)\|_2^2 \, \|B(:,j)\|_2^2; \] then we have:

\[ \|AB\|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} |C_{ij}|^2 \le \sum_{i=1}^{m} \sum_{j=1}^{n} \|A(i,:)\|_2^2 \, \|B(:,j)\|_2^2 = \left( \sum_{i=1}^{m} \|A(i,:)\|_2^2 \right) \left( \sum_{j=1}^{n} \|B(:,j)\|_2^2 \right) = \|A\|_F^2 \, \|B\|_F^2. \]
DOI: 10.9790/5728-12147286 www.iosrjournals.org 72 | Page
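The submultiplicativity inequality above is easy to check numerically. The following sketch (the helper names `frob` and `matmul` are our own, not from the paper) evaluates both sides for a small example:

```python
import math

def frob(M):
    # Frobenius norm: square root of the sum of squared absolute entries
    return math.sqrt(sum(abs(x) ** 2 for row in M for x in row))

def matmul(A, B):
    # plain triple-loop matrix product
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)] for i in range(n)]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [1.0, 1.0]]
lhs = frob(matmul(A, B))   # ||AB||_F
rhs = frob(A) * frob(B)    # ||A||_F ||B||_F
print(lhs <= rhs)          # prints True
```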

Applying this inequality to a square matrix $A$, it is easy to deduce:
\[ \|A^n\|_F \le \|A\|_F^n, \quad \text{for all } n = 1, 2, 3, \ldots \]
Formally we must examine the convergence of the following limit:
\[ \lim_{n \to \infty} \sum_{k=0}^{n} \frac{A^k}{k!} \]
It is sufficient to observe that it satisfies:
\[ \sum_{k=0}^{n} \left\| \frac{A^k}{k!} \right\|_F \le \sum_{k=0}^{n} \frac{\|A\|_F^k}{k!} \le e^{\|A\|_F}, \]
and thus it is demonstrated that $e^A$ is well defined for any matrix with constant entries. It is useful to remember how the matrix exponential behaves under differentiation:
\[ \Phi'(t) = \frac{d}{dt} \left( \sum_{k=0}^{\infty} \frac{t^k A^k}{k!} \right) \]

d  tA t 2 A 2 t 3 A 3 t 4 A 4 t 5 A 5    I       ......    dt  1! 2! 3! 4! 5!  2 tA 2 3t 2 A 3 kt k 1 A k  A    ......   ...... 2! 3! k !  tA t 2 A 2   A  I    ......     11 2!   Ae tA  e tA A. Using induction method and covenant condition  (t )   (t ) follows the formula: d k ( K ) tA k tA tA k  (2.1)  ( t )  e  A e  e A ,k  Z o dt k tA Note that the formula for the first derivative implies that the function x (t )  e x o is solution of initial value problem of the following first order system:  x (t )  Ax , x (0 )  x o . Two results known of that we used below are the following theorems. Theorem (2.1) Schur Triangularization Theorem. For any square matrix A n  n , there is an U such that A  UTU  1 is upper triangular. In addition, the entries in a T are the eigenvalues of A . Theorem (2.2 ((Cayley Hamilton theorem) Let a square matrix and  ( )  A   I its characteristic then  ( A )  0 .

Now we will discuss in detail several methods to calculate or compute the matrix exponential.

III. Diagonal Matrix
Given a diagonal $n \times n$ matrix $D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$, it is clear that $D^k = \mathrm{diag}(\lambda_1^k, \lambda_2^k, \ldots, \lambda_n^k)$ holds for all $k \in \mathbb{Z}^+$; then we have
\[ e^{tD} = \sum_{k=0}^{\infty} \frac{t^k}{k!} D^k = \mathrm{diag}\left( \sum_{k=0}^{\infty} \frac{t^k \lambda_1^k}{k!}, \ldots, \sum_{k=0}^{\infty} \frac{t^k \lambda_n^k}{k!} \right) = \mathrm{diag}\left( e^{\lambda_1 t}, \ldots, e^{\lambda_n t} \right), \]


which is also a diagonal matrix. Now, in the case that $A$ is a diagonalizable matrix, it is known that there exists an invertible matrix $P$ formed by the eigenvectors of $A$ and a diagonal matrix $D$ formed by the eigenvalues of $A$ such that $A = PDP^{-1}$. Now it is easy to verify the identity $A^k = PD^kP^{-1}$ for all $k \in \mathbb{Z}^+$. Then we have:
\[ e^{tA} = \sum_{k=0}^{\infty} \frac{t^k}{k!} A^k = \sum_{k=0}^{\infty} \frac{t^k}{k!} \left( PD^kP^{-1} \right) = P \left( \sum_{k=0}^{\infty} \frac{t^k}{k!} D^k \right) P^{-1} = P e^{tD} P^{-1}. \]
Accordingly, it is trivial to find the exponential of a diagonalizable matrix, provided that we first find all the eigenvalues of $A$ with their corresponding eigenvectors. This case is well known.
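A minimal sketch of the diagonalizable case, using a small example whose eigendecomposition is known in advance (the example matrix and helper names are our own, hypothetical choices):

```python
import math

def matmul(A, B):
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)] for i in range(n)]

# A = [[2,1],[1,2]] has eigenpairs (3, [1,1]) and (1, [1,-1]); hence A = P D P^{-1} with
P = [[1.0, 1.0], [1.0, -1.0]]
Pinv = [[0.5, 0.5], [0.5, -0.5]]  # inverse of P, computed by hand

def expm_diagonalizable(P, eigs, Pinv, t):
    # e^{tA} = P e^{tD} P^{-1}, with e^{tD} = diag(e^{eigs[i] t})
    etD = [[math.exp(eigs[i] * t) if i == j else 0.0 for j in range(len(eigs))]
           for i in range(len(eigs))]
    return matmul(matmul(P, etD), Pinv)

E = expm_diagonalizable(P, [3.0, 1.0], Pinv, 0.5)
# closed form: E[0][0] = (e^{1.5} + e^{0.5}) / 2 and E[0][1] = (e^{1.5} - e^{0.5}) / 2
```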

IV. Not Diagonalizable Matrix
Suppose $A$ is not a diagonalizable matrix, that is, it is not possible to find $n$ linearly independent eigenvectors of the matrix $A$. In this case we can use the Jordan form of $A$. Suppose $J$ is the Jordan form of $A$,

A j  1 with the transition matrix. Then e  Pe P Where j  diag ( j  , j  ,....., j  )  diag ( j   j   .....  j  ) 1 1 2 2 1 k 1 1 2 2 1 Then J j  j  j  e  (e 1 1  e 2 2  ......  e k k ) Thus, the problem is to find the matrix exponential of a Jordan block where the Jordan block has the form k J k ( )   k  N k  M k and in general N as ones on the k  th upper diagonal and is the null matrix if k  n the dimension of the matrix. By using the above expression we have

\[ e^{J_k(\lambda)} = e^{\lambda I_k + N_k} = e^{\lambda} \sum_{j=0}^{k-1} \frac{N_k^j}{j!}. \]
This can be written
\[ e^{J_k(\lambda)} = e^{\lambda} \left( I + N_k + \frac{N_k^2}{2!} + \cdots + \frac{N_k^{k-1}}{(k-1)!} \right). \]
Finally, we get the exponential matrix as $e^{A} = Pe^{J}P^{-1}$; then
\[ e^{tA} = P \, \mathrm{diag}\big( e^{tJ_{k_1}(\lambda_1)}, \ldots, e^{tJ_{k_m}(\lambda_m)} \big) P^{-1}. \]
This case is also well known. One way to demonstrate this formula is to consider the system of first order $x' = Jx$ with initial condition $x(0) = x_0$. On the one hand, we know that the solution of this system is given by $x(t) = e^{tJ}x_0$. Furthermore, this system is easy to solve starting from the last equation, which is decoupled; then every linear first-order equation is solved one by one, via the method of integrating factors.
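Since the exponential of a Jordan block has the explicit entries $e^{\lambda t}\, t^{j-i}/(j-i)!$ on and above the diagonal, it can be tabulated directly; a sketch (our own helper, assuming only the formula above):

```python
import math

def jordan_block_exp(lam, k, t):
    # e^{t J_k(lam)}: entry (i, j) equals e^{lam t} * t^{j-i} / (j-i)! for j >= i, else 0
    return [[math.exp(lam * t) * t ** (j - i) / math.factorial(j - i) if j >= i else 0.0
             for j in range(k)] for i in range(k)]

E = jordan_block_exp(2.0, 3, 0.5)
# first row: e^1 * (1, 0.5, 0.125); strictly lower triangle: zeros
```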

V. Triangular Matrix:
Let $S$ be an upper triangular matrix (a similar development applies to a lower triangular one) and write it as the sum of a diagonal matrix and a nilpotent matrix: $S = D + N$. Recall that a matrix $N$ is called nilpotent if there is a positive integer $r$ such that $N^r = 0$. The smallest positive integer for which this equality holds is called the index of nilpotency of the matrix. Assuming the known property of exponential matrices, see [1],
\[ e^{A+B} = e^A e^B, \quad \text{if } AB = BA, \]
we can use the formula
\[ e^{tS} = e^{t(D+N)} = e^{tD} e^{tN} \]
to calculate $e^{tS}$, provided that $D$ and $N$ commute (this holds, for instance, when $D$ is a scalar matrix $\lambda I$, as in a Jordan block; for a general triangular matrix the splitting must be checked).


We know how to calculate the matrix exponential of a diagonal matrix, so we discuss how to get $e^{tN}$. It is sufficient to note that when $N$ is nilpotent, the series of this matrix becomes finite, since the number of terms to be added is bounded by the index of nilpotency of $N$. This index is in turn bounded by the degree of the minimal polynomial (remember that a nilpotent matrix has all eigenvalues equal to zero). We can generalize this method to any matrix. To do this, simply apply the Schur triangularization theorem, $A = USU^{-1}$, where $U$ is a unitary matrix and $S$ is upper triangular. This yields
\[ e^{tA} = U e^{tS} U^{-1}, \]
and here we know how to proceed.
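One caution worth checking numerically: the splitting $e^{t(D+N)} = e^{tD}e^{tN}$ requires $DN = ND$, and it fails for a generic triangular split. A small sketch of our own illustrating this:

```python
import math

def matmul(A, B):
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)] for i in range(n)]

# Triangular split S = D + N with D = diag(1, 2), N = [[0,1],[0,0]]: here DN != ND.
# The true exponential (eigenvalues 1 and 2) is e^S = [[e, e^2 - e], [0, e^2]],
# while the naive product e^D e^N gives a different (0,1) entry:
eD = [[math.exp(1.0), 0.0], [0.0, math.exp(2.0)]]
eN = [[1.0, 1.0], [0.0, 1.0]]   # N is nilpotent, so e^N = I + N exactly
naive = matmul(eD, eN)
true_01 = math.exp(2.0) - math.exp(1.0)
print(naive[0][1], true_01)     # e versus e^2 - e: the splitting needs DN = ND
```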

VI. Putzer's Spectral Formula:
In [5], Putzer describes two methods to calculate $e^{tA}$. These are based on the fact that $e^{tA}$ is a polynomial in $A$ whose coefficients are scalar functions of $t$ that can be found recursively by solving a simple system of linear differential equations of the first order. We show only the second method, because it is easier to understand and implement.

Theorem (6.1)

Given a matrix $A$ of size $n \times n$, suppose we know all its eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, not necessarily distinct, but listed in a specific arbitrary order. Then it holds that
\[ e^{tA} = r_1(t)P_0 + r_2(t)P_1 + \cdots + r_n(t)P_{n-1}, \]
where
\[ P_0 = I \ \text{(the identity matrix)}, \qquad P_k = \prod_{j=1}^{k} (A - \lambda_j I), \quad k = 1, 2, \ldots, n-1, \]
and $r_1(t), r_2(t), \ldots, r_n(t)$ are the solutions of the differential system
\[ r_1' = \lambda_1 r_1, \quad r_1(0) = 1; \qquad r_k' = \lambda_k r_k + r_{k-1}, \quad r_k(0) = 0, \quad k = 2, \ldots, n. \]

Proof: put n 1  (t )  r (t ) P ,  k  1 k k  0  and define ro (t )  0 . Given rk 1   k  1 rk 1  rk , compute

n 1 n 1  (t )    (t )  r (t )P   r (t )P n  k 1 k  n k 1 k k  0 k  0

 n 1 n 1   n  2     r P  r P     r P   r P   k  1 k  1 k  k k  n k 1 k n n n 1  k  0 k  0   k  0  since the first term we can extract  n rn Pn 1 and the second term can be described as n 1 n 1 n 1 r P  r P  r P ,  k k  k k  k 1 k 1 k  0 k 1 k  0 the right side of equality It simplifies to n  2 (   ) P  P r .   k  1 n k k 1  k 1 k  0


Now, since it is true that $P_{k+1} = (A - \lambda_{k+1} I) P_k$, the expression in brackets reduces to $(A - \lambda_n I) P_k$. Furthermore,
\[ \sum_{k=0}^{n-2} (A - \lambda_n I) P_k r_{k+1} = \sum_{k=0}^{n-1} (A - \lambda_n I) P_k r_{k+1} - (A - \lambda_n I) P_{n-1} r_n \]

\[ = (A - \lambda_n I)\Phi - P_n r_n. \]
But by the Cayley-Hamilton theorem $P_n = 0$. So we obtain $\Phi' - \lambda_n \Phi = (A - \lambda_n I)\Phi$, which implies $\Phi' = A\Phi$. Finally, as $\Phi(0) = r_1(0)P_0 = I$, it follows that $\Phi(t) = e^{tA}$ by uniqueness of solutions. ∎
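Putzer's method can be sketched end to end: build the matrices $P_k$, solve the triangular $r$-system, and assemble the sum. In this sketch (our own; the paper solves the $r$-system in closed form, here it is integrated numerically with a classic RK4 step) we test the method on the $3 \times 3$ matrix with eigenvalues 2, 4, 4 used later in Example 8.2:

```python
import math

def matmul(A, B):
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * x for x in row] for row in A]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def expm_series(A, t=1.0, terms=40):
    # reference value: truncated Taylor series of e^{tA}
    tA = mat_scale(t, A)
    total, term = identity(len(A)), identity(len(A))
    for k in range(1, terms):
        term = mat_scale(1.0 / k, matmul(term, tA))
        total = mat_add(total, term)
    return total

def putzer(A, eigs, t, steps=400):
    # e^{tA} = sum_k r_{k+1}(t) P_k, with P_0 = I and P_k = P_{k-1} (A - eigs[k-1] I);
    # the triangular system r_1' = l_1 r_1, r_k' = l_k r_k + r_{k-1} is integrated by RK4
    n = len(A)
    P = [identity(n)]
    for lam in eigs[:-1]:
        P.append(matmul(P[-1], mat_add(A, mat_scale(-lam, identity(n)))))
    def f(r):
        return [eigs[k] * r[k] + (r[k - 1] if k else 0.0) for k in range(n)]
    r, h = [1.0] + [0.0] * (n - 1), t / steps
    for _ in range(steps):
        k1 = f(r)
        k2 = f([ri + 0.5 * h * k for ri, k in zip(r, k1)])
        k3 = f([ri + 0.5 * h * k for ri, k in zip(r, k2)])
        k4 = f([ri + h * k for ri, k in zip(r, k3)])
        r = [ri + h * (a + 2 * b + 2 * c + d) / 6
             for ri, a, b, c, d in zip(r, k1, k2, k3, k4)]
    E = [[0.0] * n for _ in range(n)]
    for k in range(n):
        E = mat_add(E, mat_scale(r[k], P[k]))
    return E

A = [[1.0, -1.0, 2.0], [1.0, 3.0, 2.0], [-1.0, -1.0, 6.0]]  # eigenvalues 2, 4, 4
E = putzer(A, [2.0, 4.0, 4.0], 0.3)
S = expm_series(A, 0.3)
```

Note that repeated eigenvalues pose no difficulty for Putzer's construction, in contrast to diagonalization.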

As an application of this method, we find formulas for the matrix exponential of a $2 \times 2$ matrix $A$ in the form
\[ e^{tA} = r_1(t) I + r_2(t)(A - \lambda_1 I), \]
with eigenvalues $\lambda_1, \lambda_2$. According to the nature of the eigenvalues we have three cases to study.
a) Real and distinct eigenvalues: We must solve the system of equations

\[ r_1' = \lambda_1 r_1, \quad r_1(0) = 1; \qquad r_2' = \lambda_2 r_2 + r_1, \quad r_2(0) = 0. \]
Solving the first equation, which is always decoupled, we get $r_1(t) = e^{\lambda_1 t}$. For the second, using the method of the integrating factor, we obtain
\[ r_2(t) = \frac{e^{\lambda_1 t} - e^{\lambda_2 t}}{\lambda_1 - \lambda_2}. \]
Consequently, we achieve the following formula:
\[ e^{tA} = e^{\lambda_1 t} I + \frac{e^{\lambda_1 t} - e^{\lambda_2 t}}{\lambda_1 - \lambda_2} \, (A - \lambda_1 I). \]

b) Real and equal eigenvalues: In this case, solving the system of equations we get $r_1(t) = e^{\lambda_1 t}$ and $r_2(t) = te^{\lambda_1 t}$. So we get the formula
\[ e^{tA} = e^{\lambda_1 t} I + te^{\lambda_1 t} (A - \lambda_1 I). \]
c) Complex eigenvalues: In the case $A \in \mathbb{C}^{2 \times 2}$ with distinct eigenvalues $\lambda_1, \lambda_2$, there is no problem in using the same formula as in the case of real and distinct eigenvalues. But if $A$ has real entries, its complex eigenvalues are conjugates, say $\lambda_1 = a + ib$, $\lambda_2 = a - ib$, with $b \ne 0$. In this case, solving the system of equations we get

\[ r_1(t) = e^{\lambda_1 t} = e^{at} \big( \cos(bt) + i\sin(bt) \big), \]

 t  t 1 2 e  e at sin( bt ) r2 (t )   e  1   2 b

Since $x(t) = e^{tA}x_0$ is the solution of the system $x' = Ax$ with initial condition $x(0) = x_0$, where $x_0$ and $A$ have real entries, we obviously seek a real solution. So the real part of the spectral formula is to be considered the real solution, that is,
\[ x(t) = \mathrm{Re}\, x(t) = \mathrm{Re}\big[ r_1(t) I + r_2(t)(A - \lambda_1 I) \big] x_0 = e^{tA} x_0, \]
and therefore one concludes that
\[ e^{tA} = e^{at}\cos(bt)\, I + \frac{e^{at}\sin(bt)}{b} \, (A - aI). \]

Specific Cases of Apostol:
In [2], Apostol shows how to obtain explicit formulas for the matrix exponential $e^{tA}$ in the following cases:
a) all eigenvalues of $A$ are equal;
b) all eigenvalues of $A$ are distinct;
c) $A$ has only two distinct eigenvalues, one of them of algebraic multiplicity one.
While these cases do not cover all possible alternatives for the eigenvalues of a matrix, these formulas stand out for their simplicity and because they help us to find all possible formulas for exponentials of matrices of size less than or equal to $3 \times 3$. It should be noted that Putzer's spectral formula also helps us to deduce these formulas, but the way obtained by Apostol is more direct.
Theorem (7.1): If $A$ is an $n \times n$ matrix with all its eigenvalues equal to $\lambda$, then we have
\[ e^{tA} = e^{\lambda t} \sum_{k=0}^{n-1} \frac{t^k}{k!} (A - \lambda I)^k. \]
Proof: As the matrices $\lambda t I$ and $t(A - \lambda I)$ commute, we write

\[ e^{tA} = e^{\lambda t I} e^{t(A - \lambda I)} = \big( e^{\lambda t} I \big) \sum_{k=0}^{\infty} \frac{t^k}{k!} (A - \lambda I)^k. \]
Then the Cayley-Hamilton theorem implies $(A - \lambda I)^k = 0$ for $k \ge n$, and so the theorem is proven.
Theorem (7.2):

If $A$ is a matrix with $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, then we have

n 1 tA t  e  e k L ( A ) ,  k k  0 where L k ( A ) are the Lagrange interpolation coefficients given by

\[ L_k(A) = \prod_{\substack{j=1 \\ j \ne k}}^{n} \frac{A - \lambda_j I}{\lambda_k - \lambda_j}, \quad \text{for } k = 1, 2, 3, \ldots, n. \]
Proof: Although this theorem is a special case of the Lagrange-Sylvester interpolation formula, we give a direct proof. We define the following matrix function of the scalar variable $t$:
\[ F(t) = \sum_{k=1}^{n} e^{\lambda_k t} L_k(A). \]
To prove $F(t) = e^{tA}$ we will show that $F$ satisfies the differential equation $F'(t) = AF(t)$ with initial condition $F(0) = I$. In fact, we observe that it satisfies
\[ AF(t) - F'(t) = \sum_{k=1}^{n} e^{\lambda_k t} (A - \lambda_k I) L_k(A). \]
By the Cayley-Hamilton theorem we have $(A - \lambda_k I) L_k(A) = 0$ for each $k$, and so $F$ satisfies the differential equation. In addition,
\[ F(0) = \sum_{k=1}^{n} L_k(A) = I. \]
Finally, the claim follows by uniqueness of solutions.
Theorem (7.3): Let $A$ be an $n \times n$ matrix $(n \ge 3)$ with two distinct eigenvalues $\lambda$ and $\mu$, where $\lambda$ has multiplicity $n-1$ and $\mu$ has multiplicity $1$. Then it holds that

n  2 t k  e  t e  t n  2 t k  e tA  e  t ( A   I ) k   (    ) k ( A   I ) n 1 .   n 1 n 1   k ! k ! k  0  (    ) (    ) k  0  Proof: In scalar version, for fixed t , the expansion e x of is centered on  t is


 e  t e x   ( x   t ) k . k  0 k! Now we evaluated tA and conveniently we partition this series  e  t  t k e x   (tA   tI ) k  e  t  ( A   I ) k k  0 k! k  0 k! n  2 t k  t k  e  t  ( A   I ) k  e  t  ( A   I ) k k  0 k! k  n 1 k! n  2 t k  t n 1 i  e  t  ( A   I ) k  e  t  ( A   I ) n 1 i . k! (n - 1  i)! k  0 i  0 Now we rewrite the second term of the last expression. As observe A   I  A   I  (    )I , using the Cayley-Hamilton theorem, we have equality 0  ( A   I ) n  1 ( A   I )  ( A   I ) n  (    )( A   I ) n  1 , that is ( A   I ) n  (    )( A   I ) n  1 . By induction, it follows

\[ (A - \lambda I)^{n+i} = (\mu - \lambda)^{i+1} (A - \lambda I)^{n-1}, \]
and hence
\[ (A - \lambda I)^{n-1+i} = (\mu - \lambda)^{i} (A - \lambda I)^{n-1}, \quad i \ge 0. \]
By replacing this relationship in the second sum we obtain
\[ e^{\lambda t} \left[ \sum_{i=0}^{\infty} \frac{t^{n-1+i}}{(n-1+i)!} (\mu - \lambda)^{i} \right] (A - \lambda I)^{n-1} \]

\[ = \frac{1}{(\mu - \lambda)^{n-1}} \left[ e^{\lambda t} \sum_{k=n-1}^{\infty} \frac{t^k}{k!} (\mu - \lambda)^k \right] (A - \lambda I)^{n-1} \]

\[ = \frac{1}{(\mu - \lambda)^{n-1}} \left[ e^{\mu t} - e^{\lambda t} \sum_{k=0}^{n-2} \frac{t^k}{k!} (\mu - \lambda)^k \right] (A - \lambda I)^{n-1}, \]
which completes the proof. As a conclusion, we list the formulas for the matrix exponential of any matrix $A$ of size $3 \times 3$ according to the multiplicity of its eigenvalues:
a) For an eigenvalue $\lambda$ with algebraic multiplicity three we have

tA  t  1 2 2  e  e  I  t( A   I )  t ( A   I )  .  2  b-For different eigenvalues  1 , 2 , 3 , have

tA  t ( A   I )( A   I )  t ( A   I )( A   I )  t ( A   I )( A   I ) e  e 1 2 3  e 2 1 3  e 3 1 2 .

(  1   2 )(  1   3 ) (  2   1 )(  2   3 ) (  3   1 )(  3   2 ) c- For different eigenvalues  1 , 2 with  1 has algebraic multiplicity two have

\[ e^{tA} = e^{\lambda_1 t} \big[ I + t(A - \lambda_1 I) \big] + \left[ \frac{e^{\lambda_2 t} - e^{\lambda_1 t}}{(\lambda_2 - \lambda_1)^2} - \frac{te^{\lambda_1 t}}{\lambda_2 - \lambda_1} \right] (A - \lambda_1 I)^2. \]

Interpolation of Lagrange-Sylvester and the Algorithm of Gantmacher:


The next method, illustrated in Gantmacher [3], not only helps us to calculate the matrix exponential, but also to evaluate any analytic matrix function. First we mention the case where the eigenvalues of the matrix are distinct, then the general case where there are multiplicities.
Theorem (8.1):

If $f(A)$ is an analytic function of an $n \times n$ matrix $A$ and the eigenvalues of $A$ are distinct, then $f(A)$ can be decomposed as
\[ f(A) = \sum_{i=1}^{n} f(\lambda_i)\, z(\lambda_i), \]
where $\lambda_i$, $i = 1, 2, \ldots, n$, are the eigenvalues of $A$ and $z(\lambda_i)$ is the matrix given by
\[ z(\lambda_i) = \prod_{\substack{k=1 \\ k \ne i}}^{n} \frac{A - \lambda_k I}{\lambda_i - \lambda_k}. \]
Proof: By the Cayley-Hamilton theorem, $f(A)$ can be reduced to a polynomial of degree $n-1$, let us say

n 1 n  2 f ( A )  c n 1 A  c n  2 A  ......  c o I . This expression polynomial can be factored via interpolation of Lagrange which n n j f ( A )  p ( A   I ) .  i  k i 1 k 1 k  i

To calculate $p_j$, we multiply both sides of the equality on the right by the $j$-th eigenvector $v_j$ corresponding to $\lambda_j$. This procedure gives us

\[ f(A)v_j = \sum_{i=1}^{n} p_i \prod_{\substack{k=1 \\ k \ne i}}^{n} (A - \lambda_k I)v_j = p_j \prod_{\substack{k=1 \\ k \ne j}}^{n} (Av_j - \lambda_k v_j) = p_j \prod_{\substack{k=1 \\ k \ne j}}^{n} (\lambda_j - \lambda_k)\, v_j, \]
where the second equality is obtained by considering $(A - \lambda_j I)v_j = 0$. When the eigenvalues of $A$ are distinct, we have $f(A)v_j = f(\lambda_j)v_j$, and by comparison the equality

f (  j ) p  . j n (    )  j k k 1 k  j Therefore, all together it gives us n n A   I f ( A )  f (  ) k .  i  i 1 k 1  i   k k  i When the eigenvalues are not distinct, algorithm of Gantmacher, see [3], can be used to expand the analytical function f ( A ) as

( k 1 ) m m j f (  j ) f ( A )  Z ,   jk j 1 k 1 ( k  j )! where m is the number of distinct eigenvalues, , m j is the multiplicity of the j-th eigenvalue.

Here $f^{(k)}(\lambda_j)$ is the $k$-th derivative with respect to $\lambda$ evaluated at the $j$-th eigenvalue, and the $Z_{jk}$ are the constituent matrices, which, once found, are fixed for any $f$. We illustrate the use of this formula with an example.
Example 8.2: Let us find $e^{tA}$ for the matrix


 1  1 2    A   1 3 2  ,     1  1 6 

 1  2 with multiplicity m 1  1 and  2  4 with multiplicity m 2  2 . Then we have ( k 1 ) 2 m j f (  j ) f ( A )  Z   jk j 1 k 1 ( k  j )!

\[ = \sum_{k=1}^{1} \frac{f^{(k-1)}(\lambda_1)}{(k-1)!} Z_{1k} + \sum_{k=1}^{2} \frac{f^{(k-1)}(\lambda_2)}{(k-1)!} Z_{2k} = f(\lambda_1) Z_{11} + f(\lambda_2) Z_{21} + f'(\lambda_2) Z_{22}. \]
Recall that this expansion is valid for any analytic function. In our situation, we are interested in the matrix version of $f(\lambda) = e^{t\lambda}$. Here are the coefficients to be considered:

f (  )  e 2 t , f (  )  e 4 t , f (  )  e 4 t 1 2 2 To find the constituent matrices using known polynomial functions. As the matrix is of size 3 x 3, the Cayley - Hamilton theorem states that e tA must be have a polynomial function of I , A , and A 2 then will use the following criteria.  If f ( A )  I ,the scalar version is f (  )  I . The coefficients are  f (  1 )  1, f (  2 )  1 , f (  2 )  0 . We obtain the equation I  Z 11  Z 12 .  If f ( A )  A , the scalar version is f (  )   . The coefficients are  f (  1 )  2, f (  2 )  4 , f (  2 )  1 . We obtain the equation A  2 Z 11  4 Z 21  Z 22 .  If f ( A )  A 2 , the scalar version is f (  )   2 . The coefficients are  2 f (  1 )  4, f (  2 )  16 , f (  2 )  8 . We obtain the equation A  4 Z 11  16 Z 21  8 Z 22 . Consequently, we must solve the formal matrix system

 1 1 0  Z 11   I      

 2 4 1  Z 21    A  ,

    2  4 16 8 Z   22   A  Then the solution is

 1 2   4 I  2 A  A   4   Z 11   16  8 1  I    1     1  Z   12 8  1 A   3 I  2 A  A 2  12       4 4     2  Z 16  12 2    22    A   1 2   4 I  3 A  A   2  Now, Put it all together we see that


\[ f(A) = e^{tA} = e^{2t} Z_{11} + e^{4t} Z_{21} + te^{4t} Z_{22} \]
\[ = e^{2t} \begin{pmatrix} \tfrac{3}{2} & \tfrac{1}{2} & -1 \\ -\tfrac{3}{2} & -\tfrac{1}{2} & 1 \\ 0 & 0 & 0 \end{pmatrix} + e^{4t} \begin{pmatrix} -\tfrac{1}{2} & -\tfrac{1}{2} & 1 \\ \tfrac{3}{2} & \tfrac{3}{2} & -1 \\ 0 & 0 & 1 \end{pmatrix} + te^{4t} \begin{pmatrix} 0 & 0 & 0 \\ -2 & -2 & 4 \\ -1 & -1 & 2 \end{pmatrix} \]
\[ = \begin{pmatrix} \tfrac{1}{2}(3e^{2t} - e^{4t}) & \tfrac{1}{2}(e^{2t} - e^{4t}) & e^{4t} - e^{2t} \\ \tfrac{3}{2}(e^{4t} - e^{2t}) - 2te^{4t} & \tfrac{1}{2}(3e^{4t} - e^{2t}) - 2te^{4t} & e^{2t} - (1 - 4t)e^{4t} \\ -te^{4t} & -te^{4t} & (1 + 2t)e^{4t} \end{pmatrix}. \]
This is the desired exponential matrix. It should be emphasized that, once the constituent matrices are found, we can easily find $\cos(A)$, $\sin(A)$, and the matrix versions of other analytic functions.
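Example 8.2 can be checked mechanically: build the constituent matrices from the rows of the inverse coefficient matrix and compare the resulting $e^{tA}$ with a truncated Taylor series. A sketch with our own helper functions:

```python
import math

def matmul(A, B):
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * x for x in row] for row in A]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def expm_series(A, t=1.0, terms=40):
    # reference value: truncated Taylor series of e^{tA}
    tA = mat_scale(t, A)
    total, term = identity(len(A)), identity(len(A))
    for k in range(1, terms):
        term = mat_scale(1.0 / k, matmul(term, tA))
        total = mat_add(total, term)
    return total

A = [[1.0, -1.0, 2.0], [1.0, 3.0, 2.0], [-1.0, -1.0, 6.0]]
A2 = matmul(A, A)
I3 = identity(3)

def combo(c0, c1, c2):
    # c0*I + c1*A + c2*A^2
    return mat_add(mat_add(mat_scale(c0, I3), mat_scale(c1, A)), mat_scale(c2, A2))

# rows of (1/4)[[16,-8,1],[-12,8,-1],[16,-12,2]] applied to (I, A, A^2):
Z11 = combo(4.0, -2.0, 0.25)
Z21 = combo(-3.0, 2.0, -0.25)
Z22 = combo(4.0, -3.0, 0.5)

t = 0.3
E = mat_add(mat_add(mat_scale(math.exp(2 * t), Z11), mat_scale(math.exp(4 * t), Z21)),
            mat_scale(t * math.exp(4 * t), Z22))
S = expm_series(A, t)
```

The first equation of the system, $Z_{11} + Z_{21} = I$, also gives an immediate consistency check on the computed constituent matrices.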

Use of Fundamental Solutions of a Linear Differential Equation with Constant Coefficients:
The following method avoids asking whether the matrix is diagonalizable, and its only prerequisites are the Cayley-Hamilton theorem and knowing how to solve linear homogeneous scalar differential equations of order $n$ with constant coefficients, like
\[ x^{(n)} + c_{n-1}x^{(n-1)} + c_{n-2}x^{(n-2)} + \cdots + c_1 x' + c_0 x = 0. \]
Note that the method is not easy to implement if the roots of the characteristic equation associated with the above equation are difficult to obtain algebraically. The following theorems help us to understand how this method works. The first theorem guarantees the existence and uniqueness of a solution of an initial value problem for a matrix differential equation, while the second provides a method for constructing the matrix exponential from solutions of initial value problems for scalar differential equations.

Theorem 9.1: Let $A$ be an $n \times n$ constant matrix with characteristic polynomial
\[ p(\lambda) = \det(\lambda I - A) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0. \]
Then $\Phi(t) = e^{tA}$ is the only solution of the matrix differential equation of order $n$ given by
\[ \Phi^{(n)} + c_{n-1}\Phi^{(n-1)} + c_{n-2}\Phi^{(n-2)} + \cdots + c_1\Phi' + c_0\Phi = 0 \quad (9.1) \]
with initial conditions
\[ \Phi(0) = I, \quad \Phi'(0) = A, \quad \Phi''(0) = A^2, \quad \ldots, \quad \Phi^{(n-1)}(0) = A^{n-1}. \quad (9.2) \]
Proof:

First we prove the uniqueness. Suppose $\Phi_1(t)$ and $\Phi_2(t)$ are two solutions of (9.1) satisfying the initial conditions given in (9.2). Define $\Psi(t) = \Phi_1(t) - \Phi_2(t)$. Because of linearity, this function satisfies (9.1) but with initial conditions
\[ \Psi(0) = \Psi'(0) = \cdots = \Psi^{(n-1)}(0) = 0. \]
This means that each entry of $\Psi(t)$ satisfies the following initial value problem:

( n ) n 1 n  2  x ( t )  c n 1 x ( t )  c n  2 x ( t )  ......  c 1 x ( t )  c o x( t )  0 x( 0 )  x ( 0 )  x ( 0 )  ....  x ( n 1 )  o where it is obvious that the only solution is x( t  0 ) for all t , the trivial. Thus we have  ( t )  0 for all , and obtain uniqueness. Now we prove the existence confirming that  ( t )  e tA satisfies the initial value problem (9.1) - (9.2). Either the constant matrix with characteristic polynomial p (  ) disclosed in hypothesis. Recall the formula for k -th derivative of the exponential (see Equation (2.1))


\[ \Phi^{(k)}(t) = A^k e^{tA}, \quad k = 0, 1, 2, 3, \ldots \]
Substituting on the left side of Equation (9.1), we obtain
\[ \left( A^n + c_{n-1}A^{n-1} + c_{n-2}A^{n-2} + \cdots + c_1A + c_0I \right) e^{tA} = p(A)e^{tA} = 0, \]
supported by the Cayley-Hamilton theorem. Finally, from the formula for the $k$-th derivative we deduce
\[ \Phi(0) = I, \quad \Phi'(0) = A, \quad \Phi''(0) = A^2, \quad \ldots, \quad \Phi^{(n-1)}(0) = A^{n-1}, \]
so $\Phi(t)$ satisfies the initial conditions. This completes the proof.
Theorem 9.2: Let $A$ be an $n \times n$ constant matrix with characteristic polynomial
\[ p(\lambda) = \det(\lambda I - A) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0. \]
Then we have
\[ e^{tA} = x_1(t)I + x_2(t)A + x_3(t)A^2 + \cdots + x_n(t)A^{n-1}, \]
where $x_k(t)$, $1 \le k \le n$, are the solutions of the scalar differential equation of order $n$ given by

( n ) ( n 1 ) x  c n 1 x  ......  c o x  0 (9.3) and satisfying the initial conditions x 1 ( 0 )  1 , x 2 ( 0 )  0 ,   , x n ( 0 )  0   x 1 ( 0 )  0 , x 2 ( 0 )  1,   , x n ( 0 )  0    (9.4)

  

( n 1 ) ( n 1 ) x 1 ( 0 )  0 , ...... , x n  1 Proof: 2 n 1 Define  ( t )  x 1 ( t )I  x 2 ( t ) A  x 3 ( t ) A  ......  x n ( t ) A , where x k ( t ) solutions of initial value problems mentioned in the theorem. First we show that satisfies equation (9.1). Indeed, we have ( n ) n 1 n  2   ( t )  c n 1  ( t )  c n  2  ( t )  ......  c 1  ( t )  c o  ( t )

( n ) ( n 1 ) ( 1 )  ( x 1  c n 1 x 1  ......  c 1 x 1  c o x 1 )I 

( n ) ( n 1 ) ( 1 )  ( x 2  c n 1 x 2  ......  c 1 x 2  c o x 2 ) A 

( n ) ( n 1 ) ( 1 )  ( x  c x  ......  c x  c x ) A 2  3 n 1 3 1 3 o 3  ...... 

( n ) ( n 1 ) ( 1 ) n 1  ( x n  c n 1 x n  ......  c 1 x n  c o x n ) A  0 I  OA  OA 2  ......  oA n 1 Now we show that satisfies Initial condition which given in (9.2):

n 1  ( 0 )  x 1 ( 0 )1  x 2 ( 0 ) A      x n ( 0 ) A  I    n 1  ( 0 )  x 1 ( 0 )I  x 2 ( 0 ) A      x n ( 0 ) A  A      

( n 1 ) ( n 1 ) ( n  2 ) ( n 1 ) n 1 n 1  ( 0 )  x 1 ( 0 )I  x 2 ( 0 ) A  ......  x n ( 0 ) A  A Then, of uniqueness, is satisfied tA 2 n 1 e   ( t )  x 1 ( t )I  x 2 ( t ) A  x 3 ( t ) A  ......  x n ( t ) A . for all t .Let us give this example to illustrate this method. Example 9.3: We want to find the matrix exponential e tA


 1  1 2    A   1 3 2  ,   1  1 6    For this we calculate that characteristic polynomial of A p (  )  det(  I  A )   3  10  2  32   32 . We assume, that the Theorem (9.2) is satisfied tA 2 e  x 1 ( t )I  x 2 ( t ) A  x 3 ( t ) A (9.5) The characteristic polynomial produces the following scalar differential equation with constant coefficients x   10 x   32 x   32 x  0

then the general solution is found based on the characteristic equation m 3  10 m 2  32 m  32  ( m  2 )( m  4 ) 2  0 thus, the general solution takes the form 2 t 4 t 4 t x( t )   1 e   2 e   3 te    To find x 1 ( t ) we use the initial conditions x( 0 )  1, x ( 0 )  0 , x ( 0 )  0 . by using this information we obtain 2 t 4 t 4 t x 1 ( t )  4 e  3e  4 te .    To find x 2 ( t ) we use the initial conditions x( 0 )  0 , x ( 0 )  1, x ( 0 )  0 . by using this information we obtain 2 t 4 t 4 t x 2 ( t )   2 e  2 e  3te .

- To find $x_3(t)$ we use the initial conditions $x(0) = 0$, $x'(0) = 0$, $x''(0) = 1$; using this information we obtain $x_3(t) = \tfrac{1}{4}e^{2t} - \tfrac{1}{4}e^{4t} + \tfrac{1}{2}te^{4t}$.
Finally, we replace these functions in formula (9.5) and get
\[ e^{tA} = \big( 4e^{2t} - 3e^{4t} + 4te^{4t} \big) I + \big( -2e^{2t} + 2e^{4t} - 3te^{4t} \big) A + \big( \tfrac{1}{4}e^{2t} - \tfrac{1}{4}e^{4t} + \tfrac{1}{2}te^{4t} \big) A^2 \]

\[ = \begin{pmatrix} \tfrac{1}{2}(3e^{2t} - e^{4t}) & \tfrac{1}{2}(e^{2t} - e^{4t}) & e^{4t} - e^{2t} \\ \tfrac{3}{2}(e^{4t} - e^{2t}) - 2te^{4t} & \tfrac{1}{2}(3e^{4t} - e^{2t}) - 2te^{4t} & e^{2t} - (1 - 4t)e^{4t} \\ -te^{4t} & -te^{4t} & (1 + 2t)e^{4t} \end{pmatrix}. \]
It can be verified that the matrix $A$ is not diagonalizable, but this is not relevant for the calculation. Can we avoid some calculations in the above example? The following theorem sets the stage.
Theorem (9.4): Let $A$ be an $n \times n$ constant matrix with characteristic polynomial
\[ p(\lambda) = \det(\lambda I - A) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0; \]
then we get the solution
\[ e^{tA} = x_1(t)I + x_2(t)A + x_3(t)A^2 + \cdots + x_n(t)A^{n-1}, \]
where


\[ \begin{pmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \\ \vdots \\ x_n(t) \end{pmatrix} = B_0^{-1} \begin{pmatrix} \phi_1(t) \\ \phi_2(t) \\ \phi_3(t) \\ \vdots \\ \phi_n(t) \end{pmatrix}, \quad (9.6) \]
where $B_0$ is the evaluation at $t = 0$ of the matrix
\[ B(t) = \begin{pmatrix} \phi_1(t) & \phi_1'(t) & \cdots & \phi_1^{(n-1)}(t) \\ \phi_2(t) & \phi_2'(t) & \cdots & \phi_2^{(n-1)}(t) \\ \vdots & \vdots & & \vdots \\ \phi_n(t) & \phi_n'(t) & \cdots & \phi_n^{(n-1)}(t) \end{pmatrix}, \]
provided that $S = \{\phi_1(t), \phi_2(t), \ldots, \phi_n(t)\}$ is a fundamental set of solutions of the homogeneous linear differential equation
\[ x^{(n)} + c_{n-1}x^{(n-1)} + \cdots + c_0x = 0. \]
Proof: Note that
\[ p(\lambda) = \lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0 = 0 \]
is the characteristic equation of the differential equation
\[ x^{(n)} + c_{n-1}x^{(n-1)} + \cdots + c_1x' + c_0x = 0 \quad (9.7) \]
Due to Theorem (9.2) we have
\[ e^{tA} = x_1(t)I + x_2(t)A + x_3(t)A^2 + \cdots + x_n(t)A^{n-1}, \]
where $x_k(t)$ is the solution of (9.7) with initial conditions (9.4), for $k = 1, 2, 3, \ldots, n$. We note that the set $\{x_1(t), x_2(t), \ldots, x_n(t)\}$ is also a fundamental set of solutions of (9.3), since the Wronskian

$W(x_1(t), x_2(t), \ldots, x_n(t))$ takes the value $1$ at $t = 0$. Furthermore, using the known uniqueness theorem for solutions of the initial value problem (9.3)-(9.4), we have
\[ \phi_k(t) = \phi_k(0)x_1(t) + \phi_k'(0)x_2(t) + \phi_k''(0)x_3(t) + \cdots + \phi_k^{(n-1)}(0)x_n(t) \]
for each $k$, from which we deduce

\[ \begin{pmatrix} \phi_1(t) \\ \phi_2(t) \\ \vdots \\ \phi_n(t) \end{pmatrix} = B_0 \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}. \]

As $B_0$ is invertible, this is equivalent to (9.6), which completes the proof. Consider the matrix of Example (9.3) to appreciate the simplification of the calculations in the tabulation of $e^{tA}$. Recall that the characteristic polynomial of $A$ is
\[ p(\lambda) = \det(\lambda I - A) = \lambda^3 - 10\lambda^2 + 32\lambda - 32, \]
with which we associate the linear differential equation
\[ x''' - 10x'' + 32x' - 32x = 0. \]


Since the characteristic equation is $(m-2)(m-4)^2 = 0$, we obtain $\{e^{2t}, e^{4t}, te^{4t}\}$ as a fundamental set of solutions. With these we form the matrix
\[ B(t) = \begin{pmatrix} e^{2t} & 2e^{2t} & 4e^{2t} \\ e^{4t} & 4e^{4t} & 16e^{4t} \\ te^{4t} & e^{4t}(1+4t) & 8e^{4t}(1+2t) \end{pmatrix}, \]
from which it is calculated that
\[ B_0 = \begin{pmatrix} 1 & 2 & 4 \\ 1 & 4 & 16 \\ 0 & 1 & 8 \end{pmatrix} \quad \text{with inverse} \quad B_0^{-1} = \frac{1}{4} \begin{pmatrix} 16 & -12 & 16 \\ -8 & 8 & -12 \\ 1 & -1 & 2 \end{pmatrix}. \]
Then we get $e^{tA} = x_1(t)I + x_2(t)A + x_3(t)A^2$, where by Theorem 9.4 we must have
\[ \begin{pmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{pmatrix} = B_0^{-1} \begin{pmatrix} e^{2t} \\ e^{4t} \\ te^{4t} \end{pmatrix} = \begin{pmatrix} 4e^{2t} - 3e^{4t} + 4te^{4t} \\ -2e^{2t} + 2e^{4t} - 3te^{4t} \\ \tfrac{1}{4}e^{2t} - \tfrac{1}{4}e^{4t} + \tfrac{1}{2}te^{4t} \end{pmatrix}. \]
By replacing these functions in the formula for $e^{tA}$, the exponential matrix of Example 9.3 is recovered.
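The whole tabulation above can be replayed in a few lines (the helpers are our own; $B_0^{-1}$ is entered as computed in the text) and checked against a truncated Taylor series:

```python
import math

def matmul(A, B):
    n, p, m = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * x for x in row] for row in A]

def identity(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def expm_series(A, t=1.0, terms=40):
    # reference value: truncated Taylor series of e^{tA}
    tA = mat_scale(t, A)
    total, term = identity(len(A)), identity(len(A))
    for k in range(1, terms):
        term = mat_scale(1.0 / k, matmul(term, tA))
        total = mat_add(total, term)
    return total

A = [[1.0, -1.0, 2.0], [1.0, 3.0, 2.0], [-1.0, -1.0, 6.0]]
B0inv = [[4.0, -3.0, 4.0], [-2.0, 2.0, -3.0], [0.25, -0.25, 0.5]]  # (1/4)[[16,-12,16],[-8,8,-12],[1,-1,2]]

t = 0.4
phi = [math.exp(2 * t), math.exp(4 * t), t * math.exp(4 * t)]        # fundamental solutions at t
x = [sum(B0inv[i][j] * phi[j] for j in range(3)) for i in range(3)]  # x_1, x_2, x_3
E = mat_add(mat_add(mat_scale(x[0], identity(3)), mat_scale(x[1], A)),
            mat_scale(x[2], matmul(A, A)))
S = expm_series(A, t)
```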

Laplace Transform:
The Laplace transform method is usually used to solve initial value problems for scalar linear equations of order $n$ with constant coefficients. This method generalizes to solve the initial value problem $x' = Ax$, $x(0) = x_0$. Applying the Laplace transform we obtain
\[ sX(s) - x(0) = AX(s), \]
which implies $(sI - A)X(s) = x(0)$, so that we get $X(s) = (sI - A)^{-1}x(0)$. After taking the inverse Laplace transform we obtain
\[ x(t) = \mathcal{L}^{-1}\big[ (sI - A)^{-1} \big] x(0). \]
Thus, by uniqueness, one achieves
\[ e^{tA} = \mathcal{L}^{-1}\big[ (sI - A)^{-1} \big]. \]
In engineering applications, the exponential matrix is called the state transition matrix. For examples and interesting properties of the exponential matrix, see [1].
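As a small worked instance of this method (our own illustration, not from the paper), take the nilpotent matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$:

```latex
(sI - A)^{-1}
  = \begin{pmatrix} s & -1 \\ 0 & s \end{pmatrix}^{-1}
  = \begin{pmatrix} 1/s & 1/s^2 \\ 0 & 1/s \end{pmatrix},
\qquad
e^{tA}
  = \mathcal{L}^{-1}\!\left[ (sI - A)^{-1} \right]
  = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix},
```

using $\mathcal{L}^{-1}[1/s] = 1$ and $\mathcal{L}^{-1}[1/s^2] = t$; this agrees with the series $e^{tA} = I + tA$, since $A^2 = 0$.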

VII. Conclusion
We have illustrated several methods to calculate the matrix exponential of a square matrix. Most of them do not use the calculation of (generalized) eigenvectors of the matrix, which has been the standard way of tackling the problem in several kinds of texts at the introductory level. While these methods can be applied to any matrix, all of them are ineffective when dealing with large matrices or inexact entries (truncated decimals or numbers accompanied by some error). This is where it is preferable to use the computer; but, as the work of Moler and Van Loan shows, there is no single method that is uniformly suitable for numerical implementation.

References
[1] P. J. Antsaklis & A. N. Michel; A Linear Systems Primer. Birkhauser, Boston (2007).
[2] T. M. Apostol; Some Explicit Formulas for the Exponential Matrix $e^{tA}$. The American Mathematical Monthly (1969), 76, 3:289-292.
[3] F. R. Gantmacher; The Theory of Matrices, Volume I. Chelsea, New York (1959).
[4] S.-H. Hou & W.-K. Pang; On the matrix exponential function. International Journal of Mathematical Education in Science and Technology (2006), 37, 1:65-70.
[5] E. J. Putzer; Avoiding the Jordan Canonical Form in the Discussion of Linear Systems with Constant Coefficients. The American Mathematical Monthly (1966), 73, 1:2-7.
[6] Nicholas J. Higham and Awad H. Al-Mohy; Computing Matrix Functions. Manchester Institute for Mathematical Sciences, The University of Manchester (2010), ISSN 1749-9097, pp. 1-47.
[7] R. A. Horn and C. R. Johnson; Matrix Analysis. Cambridge University Press (1985), ISBN 0-521-30586-2, xiii+561 pp.
[8] R. Bellman; Introduction to Matrix Analysis, 2nd ed. McGraw-Hill, New York (1960); reprinted by SIAM, Philadelphia (1995).


[9] Syed Muhammad Ghufran; The Computation of Matrix Functions in Particular, the Matrix Exponential. Master Thesis, University of Birmingham, England (2009), 169 pp.
[10] Nathelie Smalls; The Exponential of Matrices. Master Thesis, University of Georgia, Georgia (2007), 49 pp.
