

Linear Algebra and its Applications 350 (2002) 171–184 www.elsevier.com/locate/laa

Some explicit formulas for the polynomial decomposition of the exponential and applications

R. Ben Taher a,∗, M. Rachidi b

a Département de Mathématiques et Informatique, Faculté des Sciences, Université Moulay Ismail, B.P. 4010, Beni M'hamed, Méknés, Morocco
b Département de Mathématiques et Informatique, Faculté des Sciences, Université Mohammed V, B.P. 1014, Rabat, Morocco

Received 21 December 2000; accepted 17 December 2001

Submitted by R.A. Brualdi

Abstract

This paper is devoted to the study of some formulas for the polynomial decomposition of the exponential of a matrix A. More precisely, we suppose that the minimal polynomial M_A(X) of A is known and has degree m. Then e^{tA} is given in terms of P_0(A), ..., P_{m−1}(A), where the P_j(A) are polynomials in A of degree less than m, and of some explicit analytic functions. Examples and applications are given. In particular, the two cases m = 5 and m = 6 are considered. © 2002 Elsevier Science Inc. All rights reserved.

AMS classification: Primary 15A99; 40A05. Secondary 40A25; 45M05; 15A18

Keywords: Algebra of matrices; Linear recurrence relations; Power of a matrix; Exponential of a matrix

1. Introduction

The role of the exponential of a matrix is important in many fields of mathematics and physics. Therefore, many theoretical and numerical methods have been developed for the computation of the powers A^n (n ≥ r ≥ 2) and of exp(tA), for every square matrix A ∈ GL(r; C) (see [1–4,7,8,12,14,16,17], for example).

∗ Corresponding author. E-mail addresses: [email protected] (R. Ben Taher), [email protected] (M. Rachidi).

For every A ∈ GL(r; C) the Cayley–Hamilton Theorem allows us to have the basic decomposition exp(tA) = Σ_{k=0}^{r−1} f_k(t) A^k, where f_0, ..., f_{r−1} are some analytic functions that depend on the characteristic polynomial. Using the minimal polynomial M_A(X) of A, we have exp(tA) = Σ_{k=0}^{m−1} f_k(t) A^k, where m = deg(M_A(X)) and f_0, ..., f_{m−1} are some analytic functions (see [1–4,12,15]). More precisely, from the knowledge of the eigenvalues of A, Cheng–Yau's method may be used for computing f_0, ..., f_{m−1}, by solving some systems of linear equations (see [4]). The method developed in [2] is based on the properties of linear recurrence relations (see [5,6,9–11,13–15], for example). The results of [2] give rise to explicit expressions of the analytic functions f_0, ..., f_{r−1} in terms of the coefficients of the characteristic polynomial of A. The preceding process is still valid for every polynomial R(X) satisfying R(A) = Θ_r (see [2]).

In this paper we study some properties and formulas for the basic decomposition e^{tA} = Σ_{k=0}^{m−1} f_k(t) A^k, by considering the Jordan normal form of A. We investigate the expression exp(tA) = Σ_{k=0}^{m−1} f_k(t) P_k(A), where the P_k(X) (0 ≤ k ≤ m−1) are polynomials such that degree(P_k(X)) ≤ m−1 and the f_k (0 ≤ k ≤ m−1) are some analytic functions. Our approach allows us to give more explicit formulas for e^{tA}.

This paper is organized as follows. In Section 2 we study the basic polynomial decomposition of e^{tA}. Section 3 is devoted to the general method for computing the polynomial decomposition of e^{tA}. In Section 4 we apply our approach to some classical special cases. In Section 5 we give examples and applications. Finally, Section 6 is devoted to the close relation between the method of [2] and the results of the preceding sections.

2. Jordan normal form and basic decomposition of e^{tA}

Let A ∈ GL(r; C) and P_A(X) = X^r − a_0 X^{r−1} − ··· − a_{r−1} be its characteristic polynomial. The Cayley–Hamilton Theorem shows that A^n = a_0 A^{n−1} + ··· + a_{r−1} A^{n−r} for every n ≥ r. Suppose that there exists a polynomial R_A(X) = X^s − b_0 X^{s−1} − ··· − b_{s−1} (2 ≤ s ≤ r) such that R_A(A) = Θ_r, where Θ_r is the zero r × r matrix. Then we have A^n = b_0 A^{n−1} + ··· + b_{s−1} A^{n−s} for every n ≥ s. This holds, for example, if R_A(X) = M_A(X) = X^m − b_0 X^{m−1} − ··· − b_{m−1} is the minimal polynomial of A. Therefore, the exponential of A is given under the following form:

e^{tA} = f_0(t) I_r + f_1(t) A + ··· + f_{s−1}(t) A^{s−1},   (1)

where I_r denotes the r × r identity matrix and f_0, f_1, ..., f_{s−1} are some analytic functions. In the sequel we are interested in R_A(X) = M_A(X).

Note that for every A ∈ GL(r, C), with M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j}, there exists M ∈ GL(r, C) such that A = M B M^{−1}, where B is in Jordan normal form. Since e^{tB} = M^{−1} e^{tA} M, we may assume without loss of generality that A is in Jordan canonical form. This will greatly reduce the complexity of our discussions.

Suppose that M_A(X) = (X − λ)^m. Then e^{tA} = (U_0 + U_1 t + ··· + U_{m−1} t^{m−1}) e^{tλ}, where U_0, U_1, ..., U_{m−1} are elements of GL(r; C). This implies that U_0 = I_r and A^n = λ^n U_0 + Σ_{j=1}^{n} n(n−1)···(n−j+1) λ^{n−j} U_j for 0 ≤ n ≤ m−1. For n ≥ m, we have

A^n = λ^n U_0 + Σ_{j=1}^{m−1} n(n−1)···(n−j+1) λ^{n−j} U_j.

Therefore, a straightforward computation allows us to have the following lemma.

Lemma 1. Let A ∈ GL(r; C) such that M_A(X) = (X − λ)^m. Then we have e^{tA} = Σ_{j=0}^{m−1} f_j(t) A^j, where

f_j(t) = (e^{λt}/j!) Σ_{n=j}^{m−1} ((−1)^{n−j}/(n−j)!) λ^{n−j} t^n.   (2)
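Formula (2) lends itself to a direct numerical check. The following short Python sketch (not part of the original paper; it assumes NumPy and SciPy, and all helper names are illustrative) builds a single Jordan block with eigenvalue lam, evaluates the coefficients f_j(t) of (2), and compares Σ_j f_j(t) A^j with a reference matrix exponential.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

def jordan_block(lam, m):
    """Elementary Jordan block J_m(lam): lam on the diagonal, 1 just above it."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), 1)

def f(j, t, lam, m):
    """f_j(t) of formula (2)."""
    s = sum((-1) ** (n - j) * lam ** (n - j) * t ** n / factorial(n - j)
            for n in range(j, m))
    return np.exp(lam * t) / factorial(j) * s

m, lam, t = 4, 0.7, 1.3
A = jordan_block(lam, m)
rhs = sum(f(j, t, lam, m) * np.linalg.matrix_power(A, j) for j in range(m))
print(np.allclose(expm(t * A), rhs))   # expected: True
```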

For every family of square matrices B_1, ..., B_p, not necessarily of the same size, the direct sum B = B_1 ⊕ ··· ⊕ B_p means the square matrix B = diag(B_1, ..., B_p), and we refer to B_j as the jth diagonal block of this direct sum. Let A ∈ GL(r; C) and λ_1, ..., λ_s be its distinct characteristic roots. Suppose that M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j} (m = m_1 + ··· + m_s) is the minimal polynomial of A and consider A in Jordan canonical form. Therefore, we have A = M_1(λ_1) ⊕ M_2(λ_2) ⊕ ··· ⊕ M_s(λ_s). Each jth block M_j(λ_j) (1 ≤ j ≤ s) of A can be written as follows: M_j(λ_j) = J_{j1}(λ_j) ⊕ ··· ⊕ J_{jk}(λ_j); the J_{jp}(λ_j) are canonical Jordan blocks such that J_{j1}(λ_j) is of order m_j and the other blocks are of order at most m_j.

Theorem 1. Let A ∈ GL(r; C) and λ_1, ..., λ_s be its distinct characteristic roots. Suppose that M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j} (m = m_1 + ··· + m_s) is the minimal polynomial of A, that A is in Jordan canonical form and that A = M_1(λ_1) ⊕ M_2(λ_2) ⊕ ··· ⊕ M_s(λ_s). Then we have

e^{tA} = ⊕_{i=1}^{s} ( Σ_{j=0}^{m_i−1} f_{ij}(t) M_i(λ_i)^j ),   (3)

where

f_{ij}(t) = (e^{λ_i t}/j!) Σ_{n=j}^{m_i−1} ((−1)^{n−j}/(n−j)!) λ_i^{n−j} t^n.   (4)

Recall that for e^{tA} the coefficients in expression (1), and therefore in (3), are unique (see [4, Proposition 2]).

Let A ∈ GL(r, C). A straightforward application of the formula of Theorem 1 in several cases allows us to obtain an explicitly described expression for e^{tA}.

Example. We illustrate Theorem 1 when m = 5 and M_A(X) = (X − λ_1)^3 (X − λ_2)^2, where λ_1 ≠ λ_2. Assuming that A is in Jordan canonical form, we have A = M_1(λ_1) ⊕ M_2(λ_2). Therefore, expressions (3) and (4) show that

e^{tA} = e^{λ_1 t} φ_1(t, A) ⊕ e^{λ_2 t} φ_2(t, A),

where

φ_1(t, A) = (1 − λ_1 t + (1/2) λ_1^2 t^2) I_{r_1} + (t − λ_1 t^2) M_1(λ_1) + (t^2/2) M_1(λ_1)^2

and

φ_2(t, A) = (1 − λ_2 t) I_{r_2} + t M_2(λ_2).
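As a sanity check of this example, the sketch below (a numerical illustration only, under the assumption that M_1(λ_1) and M_2(λ_2) are single Jordan blocks of sizes 3 and 2; NumPy and SciPy are assumed) compares the right-hand side above with a reference matrix exponential.

```python
import numpy as np
from scipy.linalg import expm, block_diag

def jordan_block(lam, p):
    return lam * np.eye(p) + np.diag(np.ones(p - 1), 1)

lam1, lam2, t = 0.5, -1.2, 0.9
M1, M2 = jordan_block(lam1, 3), jordan_block(lam2, 2)
A = block_diag(M1, M2)

phi1 = ((1 - lam1*t + 0.5*lam1**2*t**2) * np.eye(3)
        + (t - lam1*t**2) * M1
        + 0.5*t**2 * (M1 @ M1))
phi2 = (1 - lam2*t) * np.eye(2) + t * M2

rhs = block_diag(np.exp(lam1*t) * phi1, np.exp(lam2*t) * phi2)
print(np.allclose(expm(t * A), rhs))   # expected: True
```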

3. The polynomial decomposition of e^{tA}

Let {A_0, A_1, ..., A_{p−1}} (p = m or p = r) be a set of matrices generating the same subspace as I_r, A, ..., A^{p−1}. Therefore, there exist some analytic functions f_0, f_1, ..., f_{p−1} such that we have the following decomposition:

e^{tA} = f_0(t) A_0 + f_1(t) A_1 + ··· + f_{p−1}(t) A_{p−1}.   (5)

For matrices similar to A, the analytic functions f_0, f_1, ..., f_{p−1} are the same.

Lemma 2. Let A, A_0, A_1, ..., A_{p−1} be in GL(r; C). Then for every nonsingular matrix M ∈ GL(r; C) we have e^{tA} = f_0(t) A_0 + f_1(t) A_1 + ··· + f_{p−1}(t) A_{p−1} if and only if e^{tB} = f_0(t) B_0 + f_1(t) B_1 + ··· + f_{p−1}(t) B_{p−1}, where B = M^{−1} A M and B_k = M^{−1} A_k M (0 ≤ k ≤ p−1). In particular, for A_k = P_k(A), where the P_k(X) are polynomials, we have the polynomial decomposition

e^{tA} = f_0(t) P_0(A) + f_1(t) P_1(A) + ··· + f_{p−1}(t) P_{p−1}(A).   (6)
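Lemma 2 states a similarity invariance of the coefficient functions. The following small sketch (an illustration only; the 2 × 2 example and the closed forms of f_0, f_1, which correspond to the simple-eigenvalue case treated in Section 4, are ours) shows the same f_0, f_1 working before and after conjugation by a nonsingular M.

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2, t = 1.0, -2.0, 0.4
A, I = np.diag([lam1, lam2]), np.eye(2)
# e^{tA} = f0(t) I + f1(t) A when the eigenvalues lam1, lam2 are simple
f0 = (lam1*np.exp(lam2*t) - lam2*np.exp(lam1*t)) / (lam1 - lam2)
f1 = (np.exp(lam1*t) - np.exp(lam2*t)) / (lam1 - lam2)
print(np.allclose(expm(t*A), f0*I + f1*A))            # expected: True

M = np.array([[1.0, 2.0], [0.0, 1.0]])                # any nonsingular matrix
Minv = np.linalg.inv(M)
B, B0, B1 = Minv @ A @ M, Minv @ I @ M, Minv @ A @ M  # B_k = M^{-1} A_k M
print(np.allclose(expm(t*B), f0*B0 + f1*B1))          # expected: True, same f0, f1
```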

The decomposition (5) is usually associated with the eigenvalues λ_1, ..., λ_s of A. In the sequel, we study the polynomial decomposition (5) of e^{tA}, by assuming that A is in Jordan canonical form. Therefore, the minimal polynomial M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j} = X^m − b_0 X^{m−1} − ··· − b_{m−1}, where λ_i ≠ λ_j for i ≠ j, will play a central role.

Starting from the identity C(n, j)/n! = 1/(j!(n−j)!), we derive from Lemma 1 the following well-known result (see [1,4], for example).

Proposition 1. Let A ∈ GL(r, C) such that M_A(X) = (X − λ)^m, where m ≥ 2. Then we have

e^{tA} = e^{tλ} Σ_{k=0}^{m−1} (t^k/k!) (A − λ I_r)^k.

Suppose that M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j}, where λ_1, ..., λ_s are the distinct eigenvalues of A and m = m_1 + ··· + m_s, by assuming that A is in Jordan canonical form. Then A = M_1(λ_1) ⊕ ··· ⊕ M_s(λ_s), where M_j(λ_j) is of order r_j × r_j (1 ≤ j ≤ s) with r = r_1 + ··· + r_s. For 1 ≤ j ≤ s, we consider the following r × r matrix:

(M_j(λ_j) − λ_j)_A = Θ_{r_1} ⊕ ··· ⊕ Θ_{r_{j−1}} ⊕ (M_j(λ_j) − λ_j I_{r_j}) ⊕ Θ_{r_{j+1}} ⊕ ··· ⊕ Θ_{r_s}.

Recall that Θ_{r_j} is the zero square r_j × r_j matrix and I_{r_j} is the r_j × r_j identity matrix. Therefore, for every k ≥ 0, we have

(M_j(λ_j) − λ_j)_A^k = Θ_{r_1} ⊕ ··· ⊕ Θ_{r_{j−1}} ⊕ (M_j(λ_j) − λ_j I_{r_j})^k ⊕ Θ_{r_{j+1}} ⊕ ··· ⊕ Θ_{r_s}.

Consider now the elementary Jordan block of order p, namely the p × p matrix J_p(λ) (λ ∈ C) with λ on the main diagonal, 1 on the first superdiagonal and 0 elsewhere. For λ = 0, we set S_p = J_p(0). It is clear that S_p^k = Θ_p for every k ≥ p. For 1 ≤ k ≤ p − 1, we can verify easily that

S_p^k is the p × p matrix whose only nonzero entries are 1s on the kth superdiagonal; in particular, the 1 in the first row is located in the (k + 1)th column.   (7)

Let A ∈ GL(r, C) whose minimal polynomial is M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j}. For every j (1 ≤ j ≤ s), we have (M_j(λ_j) − λ_j I_{r_j}) = ⊕_{blocks, p_j ≤ m_j} (J_{p_j}(λ_j) − λ_j I_{p_j}), where ⊕_{blocks, p_j ≤ m_j} means the direct sum over all elementary Jordan blocks of M_j(λ_j). Therefore, for every k ≥ 0, we have (M_j(λ_j) − λ_j I_{r_j})^k = ⊕_{blocks, p_j ≤ m_j} (J_{p_j}(λ_j) − λ_j I_{p_j})^k. In particular, for every k ≥ m_j, we have (M_j(λ_j) − λ_j I_{r_j})^k = Θ_{r_j}.

Since (A − λ_j I_r) = ⊕_{d=1}^{s} (M_d(λ_d) − λ_j I_{r_d}), we have (A − λ_j I_r)^k = ⊕_{d=1}^{s} (M_d(λ_d) − λ_j I_{r_d})^k for every k ≥ 0. In particular, for k = m_j we derive from (7) that

(A − λ_j I_r)^{m_j} = ⊕_{i=1}^{j−1} (M_i(λ_i) − λ_j I_{r_i})^{m_j} ⊕ Θ_{r_j} ⊕ ⊕_{i=j+1}^{s} (M_i(λ_i) − λ_j I_{r_i})^{m_j}.

Therefore, we have the following identity:

∏_{d=1, d≠j}^{s} (A − λ_d I_r)^{m_d} = (⊕_{1≤i≤j−1} Θ_{r_i}) ⊕ ( ∏_{d=1, d≠j}^{s} (M_j(λ_j) − λ_d I_{r_j})^{m_d} ) ⊕ (⊕_{j+1≤i≤s} Θ_{r_i}).   (8)
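Identity (8) expresses the fact that the product ∏_{d≠j} (A − λ_d I_r)^{m_d} annihilates every diagonal block of A except the jth one. The sketch below (a numerical illustration with two single Jordan blocks; NumPy and SciPy are assumed) checks this for j = 1, where the only factor is d = 2.

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, p):
    return lam * np.eye(p) + np.diag(np.ones(p - 1), 1)

lam1, lam2, m1, m2 = 2.0, -1.0, 3, 2
M1, M2 = jordan_block(lam1, m1), jordan_block(lam2, m2)
A, r = block_diag(M1, M2), m1 + m2

P = np.linalg.matrix_power(A - lam2*np.eye(r), m2)           # product over d != 1
expected = block_diag(np.linalg.matrix_power(M1 - lam2*np.eye(m1), m2),
                      np.zeros((m2, m2)))                    # block 2 is annihilated
print(np.allclose(P, expected))   # expected: True
```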

Let λ_d ≠ λ_j be two eigenvalues of A and set J_{m_j}(λ_j − λ_d) = J_{m_j}(λ_j) − λ_d I_{m_j}. The same computation as in [4] allows us to establish that [J_{m_j}(λ_j − λ_d)]^k = (c_{n,p}^{(k)}), where c_{n,p}^{(k)} = 0 if p ≤ n − 1 or k < p − n, and

c_{n,p}^{(k)} = C(k, p − n) (λ_j − λ_d)^{k−p+n}   for p ≥ n,

where C(k, p − n) denotes the binomial coefficient. Thus, we have obtained the following lemma.

Lemma 3. Let A ∈ GL(r, C) whose minimal polynomial is M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j}, where λ_i ≠ λ_j for i ≠ j. Then

∏_{d=1, d≠j}^{s} (J_{m_j}(λ_j) − λ_d I_{m_j})^{m_d}

is the upper triangular Toeplitz matrix whose entry in position (n, n + i) equals a_{i,j} for 0 ≤ i ≤ m_j − 1 (that is, a_{0,j} on the main diagonal, a_{1,j} on the first superdiagonal, and so on), where a_{i,j} = 0 for i > m_1 + ··· + m_{j−1} + m_{j+1} + ··· + m_s and

a_{i,j} = Σ_{h_1+···+h_{j−1}+h_{j+1}+···+h_s = i, h_d ≤ m_d} ∏_{d=1, d≠j}^{s} C(m_d, h_d) (λ_j − λ_d)^{m_d − h_d}.   (9)
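The numbers a_{i,j} of (9) are simply the coefficients of x^i in ∏_{d≠j} (x + λ_j − λ_d)^{m_d}, since J_{m_j}(λ_j) − λ_d I_{m_j} = (λ_j − λ_d) I_{m_j} + S_{m_j}. The sketch below (illustrative Python, assuming NumPy; indices are 0-based) compares the combinatorial sum (9) with that polynomial expansion.

```python
import numpy as np
from math import comb
from itertools import product
from functools import reduce

lams, ms, j = [1.0, -2.0, 3.0], [3, 2, 2], 0     # eigenvalues, multiplicities, block index (0-based)
others = [d for d in range(len(lams)) if d != j]

def a_sum(i):
    """a_{i,j} computed by the combinatorial sum (9)."""
    total = 0.0
    for hs in product(*(range(ms[d] + 1) for d in others)):
        if sum(hs) == i:
            total += np.prod([comb(ms[d], h) * (lams[j] - lams[d]) ** (ms[d] - h)
                              for d, h in zip(others, hs)])
    return total

# The same numbers are the coefficients of x^i in prod_{d != j} (x + lam_j - lam_d)^{m_d}.
poly = reduce(np.polynomial.polynomial.polymul,
              [np.polynomial.polynomial.polypow([lams[j] - lams[d], 1.0], ms[d]) for d in others])
print(all(np.isclose(a_sum(i), poly[i]) for i in range(len(poly))))   # expected: True
```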

Since

(M_j(λ_j) − λ_j)_A^k = (⊕_{1≤i≤j−1} Θ_{r_i}) ⊕ (M_j(λ_j) − λ_j I_{r_j})^k ⊕ (⊕_{j+1≤i≤s} Θ_{r_i}),

the preceding discussion allows us to derive the following fundamental theorem.

Theorem 2. Let A ∈ GL(r, C) whose minimal polynomial is M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j}, where λ_i ≠ λ_j for i ≠ j. Then

(M_j(λ_j) − λ_j)_A^k = ( ∏_{d=1, d≠j}^{s} (A − λ_d I_r)^{m_d}/(λ_j − λ_d)^{m_d} ) Σ_{i=0}^{m_j−k−1} α_{i,j} (A − λ_j I_r)^{k+i},

where the α_{i,j} are defined by α_{0,j} = 1 and

α_{i,j} = −(1/a_{0,j}) ( a_{1,j} α_{i−1,j} + a_{2,j} α_{i−2,j} + ··· + a_{i−1,j} α_{1,j} + a_{i,j} α_{0,j} ),   (10)

where the a_{i,j} are given by (9).
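Recursion (10) simply inverts, modulo (A − λ_j I_r)^{m_j}, the Toeplitz expansion of Lemma 3. A minimal sketch of it in Python (our own helper, with 0-based indices; the list a holds a_{0,j}, a_{1,j}, ...) reads as follows and is reused in the check of Theorem 3 below.

```python
def alphas(a, m_j):
    """Recursion (10): alpha_0 = 1 and, for i >= 1,
    alpha_i = -(a_1*alpha_{i-1} + a_2*alpha_{i-2} + ... + a_i*alpha_0) / a_0."""
    get = lambda l: a[l] if l < len(a) else 0.0
    alpha = [1.0]
    for i in range(1, m_j):
        alpha.append(-sum(get(l) * alpha[i - l] for l in range(1, i + 1)) / a[0])
    return alpha
```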

Since by assumption A = M_1(λ_1) ⊕ ··· ⊕ M_s(λ_s), where M_j(λ_j) is of order r_j × r_j (1 ≤ j ≤ s) with r = r_1 + ··· + r_s, Proposition 1 shows that we have

e^{tA} = ( e^{tλ_1} Σ_{k=0}^{m_1−1} (t^k/k!) (M_1(λ_1) − λ_1 I_{r_1})^k ) ⊕ ··· ⊕ ( e^{tλ_s} Σ_{k=0}^{m_s−1} (t^k/k!) (M_s(λ_s) − λ_s I_{r_s})^k ).

Therefore,

e^{tA} = Σ_{j=1}^{s} e^{tλ_j} Σ_{k=0}^{m_j−1} (t^k/k!) (M_j(λ_j) − λ_j)_A^k.

Hence, Theorem 2 implies the following general result.

Theorem 3. Let A ∈ GL(r, C) such that M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j}, where λ_1, ..., λ_s are distinct. Suppose that A is in Jordan normal form. Then

e^{tA} = Σ_{j=1}^{s} Φ_j(A, t) ∏_{d=1, d≠j}^{s} (A − λ_d I_r)^{m_d}/(λ_j − λ_d)^{m_d},   (11)

with

Φ_j(A, t) = e^{tλ_j} Σ_{k=0}^{m_j−1} ( t^k/k! + α_{1,j} t^{k−1}/(k−1)! + ··· + α_{k−1,j} t/1! + α_{k,j} ) (A − λ_j I_r)^k,

where the α_{i,j} are given by (9) and (10).
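Formula (11) can be verified numerically. The sketch below (not from the paper; it assumes NumPy and SciPy, computes the a_{i,j} by polynomial expansion as in Lemma 3 and the α_{i,j} by recursion (10), and uses a concrete Jordan form with minimal polynomial (X − λ_1)^3 (X − λ_2)^2) compares the right-hand side of (11) with a reference matrix exponential.

```python
import numpy as np
from functools import reduce
from math import factorial
from scipy.linalg import expm, block_diag

def jordan_block(lam, p):
    return lam * np.eye(p) + np.diag(np.ones(p - 1), 1)

def a_coeffs(lams, ms, j):
    """a_{i,j} of Lemma 3: coefficients of prod_{d != j} (x + lam_j - lam_d)^{m_d}."""
    return reduce(np.polynomial.polynomial.polymul,
                  [np.polynomial.polynomial.polypow([lams[j] - lams[d], 1.0], ms[d])
                   for d in range(len(lams)) if d != j])

def alphas(a, m_j):
    """Recursion (10)."""
    get = lambda l: a[l] if l < len(a) else 0.0
    alpha = [1.0]
    for i in range(1, m_j):
        alpha.append(-sum(get(l) * alpha[i - l] for l in range(1, i + 1)) / a[0])
    return alpha

lams, ms, t = [0.5, -1.0], [3, 2], 0.8
A = block_diag(*[jordan_block(l, m) for l, m in zip(lams, ms)])
r = A.shape[0]
I = np.eye(r)

E = np.zeros((r, r))
for j in range(len(lams)):
    alpha = alphas(a_coeffs(lams, ms, j), ms[j])
    # Phi_j(A, t) = e^{lam_j t} sum_k [ sum_{i<=k} alpha_i t^{k-i}/(k-i)! ] (A - lam_j I)^k
    Phi = sum(np.exp(lams[j]*t)
              * sum(alpha[i] * t**(k - i) / factorial(k - i) for i in range(k + 1))
              * np.linalg.matrix_power(A - lams[j]*I, k)
              for k in range(ms[j]))
    # prod_{d != j} (A - lam_d I)^{m_d} / (lam_j - lam_d)^{m_d}
    P = I.copy()
    for d in range(len(lams)):
        if d != j:
            P = P @ np.linalg.matrix_power(A - lams[d]*I, ms[d]) / (lams[j] - lams[d])**ms[d]
    E = E + Phi @ P
print(np.allclose(E, expm(t*A)))   # expected: True
```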

In Theorem 3 of [4], Cheng and Yau have established that

e^{tA} = Σ_{j=1}^{s} ( Σ_{k=0}^{m_j−1} f_{j,k}(t) (A − λ_j I_r)^k ) ∏_{d=1, d≠j}^{s} (A − λ_d I_r)^{m_d},

where the family of functions f_{j,k}(t) (1 ≤ j ≤ s, 0 ≤ k ≤ m_j − 1) satisfies a system of linear equations (see [4, Eqs. (14a)_k of Theorem 3]). A straightforward computation using Theorems 2 and 3 allows us to obtain the explicit expressions of the f_{j,k} as follows.

Corollary 1. Let A ∈ GL(r, C) such that M_A(X) = ∏_{j=1}^{s} (X − λ_j)^{m_j}, where λ_1, ..., λ_s are distinct. Suppose that A is in Jordan normal form. Then

e^{tA} = Σ_{j=1}^{s} ( Σ_{k=0}^{m_j−1} f_{j,k}(t) (A − λ_j I_r)^k ) ∏_{d=1, d≠j}^{s} (A − λ_d I_r)^{m_d},

with

f_{j,k}(t) = ( e^{λ_j t} / ∏_{d=1, d≠j}^{s} (λ_j − λ_d)^{m_d} ) Σ_{i=0}^{k} α_{i,j} t^{k−i}/(k−i)!,

where the α_{i,j} are given by (9) and (10).

Remark. For every fixed j, Eq. (10) shows that the numbers αi,j satisfy a linear recurrence relation of order r. Therefore, a combinatorial relation can be derived from results of [10,13].

4. Special cases

We are interested here in the polynomial decomposition of e^{tA} for some classical cases in the literature. This will allow us to illustrate our results given in Section 3.

Let A ∈ GL(r, C) and suppose that M_A(X) = ∏_{j=1}^{s} (X − λ_j). In other words, the eigenvalues λ_j (1 ≤ j ≤ s) are simple. Therefore, we have m_j − 1 = 0 for every j (1 ≤ j ≤ s). Hence, Theorems 2 and 3 allow us to derive easily the following result.

Corollary 2. Let A ∈ GL(r, C) and suppose that M_A(X) = ∏_{j=1}^{s} (X − λ_j). Then we have

e^{tA} = Σ_{j=1}^{s} e^{tλ_j} ∏_{d=1, d≠j}^{s} (A − λ_d I_r)/(λ_j − λ_d).
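Corollary 2 is the classical Lagrange interpolation form of e^{tA}. The following sketch (illustrative only; NumPy and SciPy assumed) checks it on a matrix with three simple eigenvalues obtained by conjugating a diagonal matrix.

```python
import numpy as np
from scipy.linalg import expm

lams, t = [1.0, -2.0, 0.5], 0.7
M = np.array([[1., 2., 0.], [0., 1., 3.], [1., 0., 1.]])     # any nonsingular matrix
A = M @ np.diag(lams) @ np.linalg.inv(M)                     # simple eigenvalues lams
I, E = np.eye(3), np.zeros((3, 3))
for j, lj in enumerate(lams):
    P = I.copy()
    for d, ld in enumerate(lams):
        if d != j:
            P = P @ (A - ld*I) / (lj - ld)
    E = E + np.exp(lj*t) * P
print(np.allclose(E, expm(t*A)))   # expected: True
```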

Corollary 2, which represents a Lagrange interpolation formula, has been established in [4] by solving a system of linear equations (see [4, Eqs. (14a)_k of Theorem 3]).

Suppose next that M_A(X) = (X − λ_1)^{m_1} (X − λ_2)^{m_2}. Then, for 0 ≤ i ≤ m_2 and 0 ≤ i ≤ m_1 respectively, we have

a_{i,1} = C(m_2, i) (λ_1 − λ_2)^{m_2−i},   a_{i,2} = C(m_1, i) (λ_2 − λ_1)^{m_1−i},

and a_{i,1} = a_{i,2} = 0 for i > m_2 and i > m_1, respectively. Therefore, we have

α_{i,1} = (−1)^i m_2(m_2+1)···(m_2+i−1) / ( i! (λ_1 − λ_2)^i )

and

α_{i,2} = (−1)^i m_1(m_1+1)···(m_1+i−1) / ( i! (λ_2 − λ_1)^i ).

Thus, applying Theorems 2 and 3, we derive the following corollary.

Corollary 3. Let A ∈ GL(r, C) and suppose that M_A(X) = (X − λ_1)^{m_1} (X − λ_2)^{m_2}. Then

e^{tA} = e^{λ_1 t} ( (A − λ_2 I_r)^{m_2}/(λ_1 − λ_2)^{m_2} ) Σ_{k=0}^{m_1−1} f_{1,k}(t) (A − λ_1 I_r)^k
+ e^{λ_2 t} ( (A − λ_1 I_r)^{m_1}/(λ_2 − λ_1)^{m_1} ) Σ_{k=0}^{m_2−1} f_{2,k}(t) (A − λ_2 I_r)^k,

where

f_{1,k}(t) = t^k/k! + Σ_{i=1}^{k} (−1)^i ( m_2(m_2+1)···(m_2+i−1) / ( i! (λ_1 − λ_2)^i ) ) t^{k−i}/(k−i)!

and

f_{2,k}(t) = t^k/k! + Σ_{i=1}^{k} (−1)^i ( m_1(m_1+1)···(m_1+i−1) / ( i! (λ_2 − λ_1)^i ) ) t^{k−i}/(k−i)!.
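A numerical check of Corollary 3 (an illustration only, with the concrete Jordan form J_3(λ_1) ⊕ J_2(λ_2); NumPy and SciPy are assumed and the helper names are ours) can be carried out as follows.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm, block_diag

def jordan_block(lam, p):
    return lam * np.eye(p) + np.diag(np.ones(p - 1), 1)

def rising(x, i):
    """x (x+1) ... (x+i-1), with the empty product equal to 1."""
    out = 1.0
    for q in range(i):
        out *= x + q
    return out

lam1, lam2, m1, m2, t = 0.3, -1.1, 3, 2, 0.9
A = block_diag(jordan_block(lam1, m1), jordan_block(lam2, m2))
I = np.eye(m1 + m2)

def f(k, lam_a, lam_b, m_b):
    """f_{j,k}(t) of Corollary 3 with (lam_a, lam_b) = (lam_j, other eigenvalue)."""
    return sum((-1)**i * rising(m_b, i) / (factorial(i) * (lam_a - lam_b)**i)
               * t**(k - i) / factorial(k - i) for i in range(k + 1))

E = (np.exp(lam1*t) * np.linalg.matrix_power(A - lam2*I, m2) / (lam1 - lam2)**m2
     @ sum(f(k, lam1, lam2, m2) * np.linalg.matrix_power(A - lam1*I, k) for k in range(m1))
     + np.exp(lam2*t) * np.linalg.matrix_power(A - lam1*I, m1) / (lam2 - lam1)**m1
     @ sum(f(k, lam2, lam1, m1) * np.linalg.matrix_power(A - lam2*I, k) for k in range(m2)))
print(np.allclose(E, expm(t*A)))   # expected: True
```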

Corollary 3 has been considered in [4]; however, the family of functions f_{j,k}(t) (j = 1, 2 and 0 ≤ k ≤ m_1 − 1 or 0 ≤ k ≤ m_2 − 1) is not given explicitly there. More precisely, the f_{j,k}(t) satisfy a system of linear equations (see Eqs. (18a) of [4, Theorem 4]).

From Theorems 2 and 3 we can derive the following corollary.

Corollary 4. Let A ∈ GL(r, C) and suppose that M_A(X) = (X − λ)^{m_λ} ∏_{j=1}^{s} (X − λ_j). Then we have

e^{tA} = e^{λt} Σ_{k=0}^{m_λ−1} f_{λ,k}(t) ( ∏_{i=1}^{s} (A − λ_i I_r)/(λ − λ_i) ) (A − λ I_r)^k
+ Σ_{d=1}^{s} e^{λ_d t} ( ∏_{i=1, i≠d}^{s} (A − λ_i I_r)/(λ_d − λ_i) ) (A − λ I_r)^{m_λ}/(λ_d − λ)^{m_λ},   (12)

with

f_{λ,k}(t) = t^k/k! + Σ_{i=1}^{k} (−1)^i ( Σ_{(d_1,...,d_i) ∈ Δ_{i,s}} ∏_{q=1}^{i} 1/(λ − λ_{d_q}) ) t^{k−i}/(k−i)!,   (13)

where Δ_{i,s} = {(d_1, ..., d_i); 1 ≤ d_1 ≤ ··· ≤ d_i ≤ s}.

From (12) and (13), a straightforward computation allows us to derive

e^{tA} = e^{λt} Σ_{d=0}^{m_λ−1} (t^d/d!) (A − λ I_r)^d + Σ_{d=1}^{s} f_d(t) ( ∏_{i=1, i≠d}^{s} (A − λ_i I_r) ) (A − λ I_r)^{m_λ},   (14)

with

f_d(t) = ( 1 / ( (λ_d − λ)^{m_λ} ∏_{i=1, i≠d}^{s} (λ_d − λ_i) ) ) ( e^{λ_d t} − e^{λt} Σ_{j=0}^{m_λ−1} ((λ_d − λ)^j/j!) t^j ).   (15)
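Formulas (14) and (15) can also be checked directly. The sketch below (illustrative Python, assuming NumPy and SciPy) uses a Jordan form consisting of one block J_3(λ) together with two simple eigenvalues.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm, block_diag

def jordan_block(l, p):
    return l * np.eye(p) + np.diag(np.ones(p - 1), 1)

lam, m_lam, simple, t = 0.4, 3, [1.5, -2.0], 0.6
A = block_diag(jordan_block(lam, m_lam), np.diag(simple))
r = m_lam + len(simple)
I = np.eye(r)

# first term of (14)
E = np.exp(lam*t) * sum(t**d / factorial(d) * np.linalg.matrix_power(A - lam*I, d)
                        for d in range(m_lam))
# second term of (14), with f_d(t) as in (15)
for d, ld in enumerate(simple):
    fd = (np.exp(ld*t) - np.exp(lam*t) * sum((ld - lam)**j * t**j / factorial(j)
                                             for j in range(m_lam)))
    fd /= (ld - lam)**m_lam * np.prod([ld - li for i, li in enumerate(simple) if i != d])
    P = I.copy()
    for i, li in enumerate(simple):
        if i != d:
            P = P @ (A - li*I)
    E = E + fd * P @ np.linalg.matrix_power(A - lam*I, m_lam)
print(np.allclose(E, expm(t*A)))   # expected: True
```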

Expression (14) is identical to Cheng–Yau's formula (20)–(20a) of [4, Theorem 5].

5. Examples

In this section we apply our results of Sections 3 and 4 to the case of square matrices. For m = 3, 4 the results of [4] can be easily recovered.

For illustrative purposes, we will examine one instance of a minimal polynomial when m = 6; the remaining cases may be obtained in a similar way. For example, suppose that M_A(X) = (X − λ)^2 (X − λ_1)^3 (X − λ_2), where λ, λ_1 and λ_2 are distinct. Therefore, Theorem 3 and Corollary 1 allow us to derive

e^{tA} = e^{tλ_2} (A − λ_1 I_r)^3 (A − λ I_r)^2 / ( (λ_2 − λ)^2 (λ_2 − λ_1)^3 )
+ e^{tλ_1} φ_1(t, A) (A − λ I_r)^2 (A − λ_2 I_r) / ( (λ_1 − λ)^2 (λ_1 − λ_2) )
+ e^{tλ} φ_0(t, A) (A − λ_1 I_r)^3 (A − λ_2 I_r) / ( (λ − λ_1)^3 (λ − λ_2) ),

where

φ_0(t, A) = I_r + ( t − 3/(λ − λ_1) − 1/(λ − λ_2) ) (A − λ I_r)

and

φ_1(t, A) = I_r + g_1(t) (A − λ_1 I_r) + g_2(t) (A − λ_1 I_r)^2,

with

g_1(t) = t − 1/(λ_1 − λ_2) − 2/(λ_1 − λ)

and

g_2(t) = t^2/2 − ( 1/(λ_1 − λ_2) + 2/(λ_1 − λ) ) t + 3/(λ_1 − λ)^2 + 1/(λ_1 − λ_2)^2 + 2/( (λ_1 − λ)(λ_1 − λ_2) ).
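The m = 6 example above can be confirmed numerically; the following sketch (an illustration with the concrete Jordan form J_2(λ) ⊕ J_3(λ_1) ⊕ J_1(λ_2); NumPy and SciPy are assumed) evaluates the three terms and compares them with a reference matrix exponential.

```python
import numpy as np
from scipy.linalg import expm, block_diag

def jordan_block(l, p):
    return l * np.eye(p) + np.diag(np.ones(p - 1), 1)

lam, lam1, lam2, t = 0.2, 1.4, -0.7, 0.5
A = block_diag(jordan_block(lam, 2), jordan_block(lam1, 3), jordan_block(lam2, 1))
I, mp = np.eye(6), np.linalg.matrix_power

phi0 = I + (t - 3/(lam - lam1) - 1/(lam - lam2)) * (A - lam*I)
g1 = t - 1/(lam1 - lam2) - 2/(lam1 - lam)
g2 = (t**2/2 - (1/(lam1 - lam2) + 2/(lam1 - lam))*t
      + 3/(lam1 - lam)**2 + 1/(lam1 - lam2)**2 + 2/((lam1 - lam)*(lam1 - lam2)))
phi1 = I + g1*(A - lam1*I) + g2*mp(A - lam1*I, 2)

E = (np.exp(lam2*t) * mp(A - lam1*I, 3) @ mp(A - lam*I, 2)
     / ((lam2 - lam)**2 * (lam2 - lam1)**3)
     + np.exp(lam1*t) * phi1 @ mp(A - lam*I, 2) @ (A - lam2*I)
     / ((lam1 - lam)**2 * (lam1 - lam2))
     + np.exp(lam*t) * phi0 @ mp(A - lam1*I, 3) @ (A - lam2*I)
     / ((lam - lam1)**3 * (lam - lam2)))
print(np.allclose(E, expm(t*A)))   # expected: True
```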

6. Relation to the results in [2]

Let A ∈ GL(r, C) and R_A(X) = X^s − a_0 X^{s−1} − ··· − a_{s−1} be a polynomial such that R_A(A) = Θ_r. It was established in [2] that e^{tA} = Σ_{k=0}^{s−1} f_k(t) A^k, where

f_k(t) = t^k/k! + Σ_{n=s}^{+∞} ρ_k(n) t^n/n!   (16)

are some analytic functions such that

ρ_k(n) = Σ_{j=0}^{k} a_{s−k+j−1} ρ(n − j, s),   (17)

where

ρ(n, s) = Σ_{k_0+2k_1+···+sk_{s−1} = n−s} ( (k_0 + ··· + k_{s−1})! / (k_0! k_1! ··· k_{s−1}!) ) a_0^{k_0} a_1^{k_1} ··· a_{s−1}^{k_{s−1}},   (18)

with ρ(s, s) = 1 and ρ(n, s) = 0 for n ≤ s − 1 (see [2,14]).

In [2, Section 4], the connection between the ρ(n, s) and the roots of the polynomial R_A(X) has been studied. More precisely, for reasons of simplicity, the roots of R_A(X) are supposed there to be simple (see [2]). In this section we study this connection in the general case.

For R_A(X) = M_A(X) = X^m − a_0 X^{m−1} − ··· − a_{m−1}, the minimal polynomial of A, we see that the class of analytic functions f_k defined by (16) is expressed implicitly in terms of the coefficients a_j (0 ≤ j ≤ m − 1).

Suppose that M_A(X) = (X − λ)^m. Expression (2) of Lemma 1 shows that

f_k(t) = (e^{λt}/k!) Σ_{n=k}^{m−1} ((−1)^{n−k}/(n−k)!) λ^{n−k} t^n.   (19)

From (16) and (19) we derive the following expression:

ρ_k(n) = λ^{n−k} Σ_{j=k}^{m−1} (−1)^{j−k} C(n, j) C(j, k).

In [2,14], it was established that A^n = Σ_{k=0}^{m−1} ρ_k(n) A^k for every n ≥ m, where the ρ_k(n) are given by (17). Therefore, for every n ≥ m, we have

A^n = Σ_{k=0}^{m−1} ( λ^{n−k} Σ_{j=k}^{m−1} (−1)^{j−k} C(n, j) C(j, k) ) A^k.
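For a single eigenvalue this expression for ρ_k(n) can be tested directly against the matrix powers of a Jordan block, as in the sketch below (illustrative Python, assuming NumPy).

```python
import numpy as np
from math import comb

lam, m, n = 1.5, 4, 7
A = lam * np.eye(m) + np.diag(np.ones(m - 1), 1)   # Jordan block, M_A(X) = (X - lam)^m

def rho(k):
    """rho_k(n) = lam^{n-k} sum_{j=k}^{m-1} (-1)^{j-k} C(n, j) C(j, k)."""
    return lam ** (n - k) * sum((-1) ** (j - k) * comb(n, j) * comb(j, k)
                                for j in range(k, m))

rhs = sum(rho(k) * np.linalg.matrix_power(A, k) for k in range(m))
print(np.allclose(np.linalg.matrix_power(A, n), rhs))   # expected: True
```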

Since ρ_0(n) = a_{m−1} ρ(n, m) (see [13,14]), we derive that

ρ_0(n) = λ^n Σ_{j=0}^{m−1} (−1)^j C(n, j)   for every n ≥ m.

Hence, we have the following proposition.

Proposition 2. For every n ≥ m, we have

ρ(n, m) = λ^{n−m} Σ_{j=0}^{m−1} (−1)^{m+j+1} C(n, j).

Let A ∈ GL(r, C) whose minimal polynomial is M_A(X) = ∏_{1≤j≤s} (X − λ_j)^{m_j}. We set m = m_1 + ··· + m_s. Theorem 3 implies that

f_0(t) = Σ_{k=1}^{s} e^{λ_k t} φ_k(t) ∏_{j=1, j≠k}^{s} (−1)^{m_j} λ_j^{m_j} / (λ_k − λ_j)^{m_j},

where

φ_k(t) = Σ_{p=0}^{m_k−1} ( Σ_{i=0}^{p} α_{i,k} t^{p−i}/(p−i)! ) (−λ_k)^p,

with the α_{i,k} given by (9) and (10). Therefore, a straightforward computation implies that

f_0^{(n)}(0) = Σ_{k=1}^{s} ( Σ_{p=0}^{m_k−1} ω_k(p, n) (−λ_k)^p ) ∏_{j=1, j≠k}^{s} (−1)^{m_j} λ_j^{m_j} / (λ_k − λ_j)^{m_j},

where ω_k(p, n) = Σ_{j=0}^{p} C(n, j) α_{p−j,k} λ_k^{n−j}. Since f_0^{(n)}(0) = a_{m−1} ρ(n, m) for n ≥ m, we derive the following.

Proposition 3. For every n ≥ m, we have

ρ(n, m) = Σ_{k=1}^{s} ( Σ_{p=0}^{m_k−1} ω_k(p, n) (−λ_k)^p ) / ( (−1)^{m_k−1} λ_k^{m_k} ∏_{j=1, j≠k}^{s} (λ_k − λ_j)^{m_j} ),

with ω_k(p, n) = Σ_{j=0}^{p} C(n, j) α_{p−j,k} λ_k^{n−j}.
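Proposition 2 can be checked numerically by computing ρ(n, m) from its combinatorial definition (18), with the coefficients a_i of M_A(X) = (X − λ)^m, and comparing with the closed form above; Proposition 3 can be treated analogously. The enumeration below is crude but sufficient for small n (illustrative Python, assuming NumPy).

```python
import numpy as np
from math import comb, factorial
from itertools import product

lam, m, n = 2.0, 4, 9
# X^m - a_0 X^{m-1} - ... - a_{m-1} = (X - lam)^m  gives  a_i = -C(m, i+1) (-lam)^{i+1}
a = [-comb(m, i + 1) * (-lam) ** (i + 1) for i in range(m)]

def rho(n, m, a):
    """rho(n, m) of formula (18), by brute-force enumeration of (k_0, ..., k_{m-1})."""
    if n < m:
        return 0.0
    total = 0.0
    for ks in product(*(range(n) for _ in range(m))):
        if sum((i + 1) * k for i, k in enumerate(ks)) == n - m:
            total += (factorial(sum(ks)) / np.prod([factorial(k) for k in ks])
                      * np.prod([a[i] ** k for i, k in enumerate(ks)]))
    return total

closed_form = lam ** (n - m) * sum((-1) ** (m + j + 1) * comb(n, j) for j in range(m))
print(np.isclose(rho(n, m, a), closed_form))   # expected: True
```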

Acknowledgements

The authors would like to express their sincere gratitude to the referee for several useful and valuable suggestions that improved the presentation of this paper. They also thank Professors M. Mouline and R. Ouzilou for their encouragement and helpful discussions.

References

[1] T.M. Apostol, Some explicit formulas for the matrix exponential, Amer. Math. Monthly (1969) 284–292.
[2] R. Ben Taher, M. Rachidi, Linear recurrence relations in the algebra of matrices and applications, Linear Algebra Appl. 330 (1–3) (2001) 15–24.
[3] D.S. Bernstein, W. So, Some explicit formulas for the matrix exponential, IEEE Trans. Autom. Control 38 (8) (1993) 1228–1231.
[4] H.W. Cheng, S.S.-T. Yau, More explicit formulas for the matrix exponential, Linear Algebra Appl. 262 (1997) 131–163.
[5] F. Dubeau, W. Motta, M. Rachidi, O. Saeki, On weighted r-generalized Fibonacci sequences, Fibonacci Quart. 35 (1997) 102–110.
[6] L.E. Fuller, Solutions for general recurrence relations, Fibonacci Quart. 19 (1981) 64–69.
[7] F.R. Gantmacher, Theory of Matrices, Chelsea Publishing Company, New York, 1959.
[8] F.R. Gantmacher, Application of the Theory of Matrices, Interscience Publishers, New York, 1959.
[9] W.G. Kelly, A.C. Peterson, Difference Equations: An Introduction with Applications, Academic Press, San Diego, 1991.

[10] C. Levesque, On m-th order linear recurrences, Fibonacci Quart. 23 (4) (1985) 290–295.
[11] B. Liu, A matrix method to solve linear recurrences with constant coefficients, Fibonacci Quart. 30 (1) (1992) 2–8.
[12] C. Moler, C. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Rev. 20 (4) (1978) 801–836.
[13] M. Mouline, M. Rachidi, Application of Markov chains properties to r-generalized Fibonacci sequences, Fibonacci Quart. 37 (1999) 34–38.
[14] M. Mouline, M. Rachidi, Suites de Fibonacci généralisées, théorème de Cayley–Hamilton et chaînes de Markov, Rendiconti del Seminario Matematico di Messina, Serie II, Tomo XIX (4) (1996/97) 107–115.
[15] G.N. Philippou, On the k-th order linear recurrence and some probability applications, in: G.E. Bergum, A.N. Philippou, A.F. Horadam (Eds.), Applications of Fibonacci Numbers, vol. 1, Kluwer Academic Publishers, Dordrecht, 1988.
[16] E.U. Stickel, A splitting method for the calculation of the matrix exponential, Analysis 14 (1994) 103–112.
[17] R.C. Thompson, Special cases of a matrix exponential formula, Linear Algebra Appl. 107 (1988) 183–192.