Global Journal of Pure and Applied Mathematics. ISSN 0973-1768, Volume 15, Number 5 (2019), pp. 693-710. © Research India Publications, http://www.ripublication.com/gjpam.htm

Explicit Determinant, Inverse, Minimum Polynomial and Eigenvector of a Doubly Population Projection Matrix

Wiwat Wanicharpichat Department of Mathematics, Faculty of Science, and Research Center for Academic Excellence in Mathematics, Naresuan University, Phitsanulok 65000, Thailand. Email: [email protected]

Abstract. In this paper, we use the Schur complement of a matrix to compute explicit formulas for the determinant and the inverse of a class of matrices called doubly population projection matrices (DPPMs). The class of DPPMs is a generalization of the Leslie matrix, the doubly companion matrix, and the doubly Leslie matrix. This paper also gives explicit formulas for the minimum polynomial and an eigenvector of a DPPM.
Keywords: Schur complement, Companion matrix, Leslie matrix, Population projection matrix, Doubly population projection matrix.

1. INTRODUCTION

For a given field F, the set of all polynomials in x over F is denoted by F[x]. For a positive integer n, let Mn(F) be the set of all n × n matrices over F. The set of all vectors, or n × 1 matrices, over F is denoted by Fn. A nonzero vector v ∈ Fn is called an eigenvector of A ∈ Mn(F) corresponding to a scalar λ ∈ F if Av = λv, and the scalar λ is an eigenvalue of the matrix A. The set of eigenvalues of A is called the spectrum of A and is denoted by σ(A). In the most common case, in which F = C, the complex numbers, Mn(C) is abbreviated to Mn.

A matrix model is used for populations whose individuals are grouped by certain conditions, such as age class or life stage. The matrix model that classifies a population by age is the Leslie matrix. Leslie matrix models are used in ecology to model the changes in a population of an organism over a given period of time. These models describe the growth of female human and animal populations. A Leslie matrix arises in a discrete, age-dependent model for population growth [4, 5, 6]. It is a matrix of the form

\[
L = \begin{pmatrix}
a_1 & a_2 & a_3 & \cdots & a_{n-1} & a_n \\
s_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & s_2 & 0 & \cdots & 0 & 0 \\
0 & 0 & s_3 & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 & 0 \\
0 & 0 & \cdots & 0 & s_{n-1} & 0
\end{pmatrix}, \tag{1.1}
\]

where $a_j \geq 0$, $j = 1, 2, \ldots, n$, and $0 < s_j \leq 1$, $j = 1, 2, \ldots, n - 1$.
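As a concrete illustration of (1.1), the sketch below builds a small Leslie matrix in NumPy and projects an age distribution one time step forward; all parameter values are invented for the example.

```python
import numpy as np

# Illustrative 4-age-class Leslie matrix as in (1.1): fecundities a_j in
# the first row, survival rates s_j on the subdiagonal (made-up values).
a = [0.0, 1.2, 1.5, 0.3]            # a_1, ..., a_4
s = [0.6, 0.8, 0.7]                 # s_1, s_2, s_3
L = np.zeros((4, 4))
L[0, :] = a
L[np.arange(1, 4), np.arange(3)] = s   # place s_j at positions (j+1, j)

x = np.array([100.0, 50.0, 30.0, 10.0])   # current age distribution
x_next = L @ x                            # population one time step later
assert np.allclose(x_next, [108.0, 60.0, 40.0, 21.0])
```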

The matrix model that classifies a population by life stage is the population projection matrix (PPM) [9, p. 48]. This matrix is a modification of the Leslie matrix in which classification by age is replaced by classification by life stage. The entries of a population projection matrix consist of the rates of reproduction, survival, and growth in each class. There are therefore reproduction parameters (reproductive output), the probability of surviving and growing into the next stage, and the probability of surviving and remaining in the same stage. It is a matrix of the form

\[
P = \begin{pmatrix}
a_1 & a_2 & a_3 & \cdots & a_{n-1} & a_n \\
s_1 & p_2 & 0 & \cdots & 0 & 0 \\
0 & s_2 & p_3 & \ddots & \vdots & \vdots \\
0 & 0 & s_3 & \ddots & 0 & \vdots \\
\vdots & \vdots & \ddots & \ddots & p_{n-1} & 0 \\
0 & 0 & \cdots & 0 & s_{n-1} & p_n
\end{pmatrix}, \tag{1.2}
\]

where si > 0, i = 1, 2, . . . , n − 1; ai ∈ R, i = 2, . . . , n and pj ∈ R, j = 2, . . . , n.
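A matching sketch for (1.2): the diagonal now carries the stage-retention rates $p_j$ and the subdiagonal the growth rates $s_j$; all values are again invented.

```python
import numpy as np

# Illustrative 4-stage PPM as in (1.2); a_j, s_j, p_j are made-up values.
a = [0.0, 1.1, 1.4, 0.9]            # a_1, ..., a_4 (first row)
s = [0.3, 0.4, 0.5]                 # s_1, s_2, s_3 (subdiagonal)
p = [0.0, 0.6, 0.5, 0.4]            # p_1 is a placeholder; p_2, ..., p_4
P = np.diag(p) + np.diag(s, -1)
P[0, :] = a                          # the (1,1) entry is a_1, not p_1

assert np.isclose(P[1, 1], 0.6)      # stage-retention rate p_2
assert np.isclose(P[1, 0], 0.3)      # growth rate s_1
```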

Doubly companion matrices $C \in M_n$ were first introduced by Butcher and Chartier [3, pp. 274–276]; they are given by
\[
C = \begin{pmatrix}
-a_1 & -a_2 & -a_3 & \cdots & -a_{n-1} & -a_n - b_n \\
1 & 0 & 0 & \cdots & 0 & -b_{n-1} \\
0 & 1 & 0 & \cdots & 0 & -b_{n-2} \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \ddots & 0 & -b_2 \\
0 & 0 & 0 & \cdots & 1 & -b_1
\end{pmatrix}. \tag{1.3}
\]

An $n \times n$ matrix $C$ with $n > 1$ is called a doubly companion matrix if its entries $c_{ij}$ satisfy $c_{ij} = 1$ on the first subdiagonal of $C$, while $c_{ij} = 0$ for all other entries with $i \neq 1$ and $j \neq n$.

The author [16] defined a doubly Leslie matrix (DLM), analogous to the doubly companion matrix, by replacing the subdiagonal entries of the doubly companion matrix $C$ by $s_j$, $j = 1, 2, \ldots, n-1$, respectively, and denoted it by $L$. That is, a doubly Leslie matrix is defined as follows:
\[
L = \begin{pmatrix}
-a_1 & -a_2 & -a_3 & \cdots & -a_{n-1} & -a_n - b_n \\
s_1 & 0 & 0 & \cdots & 0 & -b_{n-1} \\
0 & s_2 & 0 & \cdots & 0 & -b_{n-2} \\
0 & 0 & s_3 & \ddots & \vdots & \vdots \\
\vdots & \vdots & \ddots & \ddots & 0 & -b_2 \\
0 & 0 & \cdots & 0 & s_{n-1} & -b_1
\end{pmatrix}, \tag{1.4}
\]
where $a_j, b_j \in \mathbb{R}$, $j = 1, 2, \ldots, n$. As in the Leslie matrix, we restrict to $s_j > 0$, $j = 1, 2, \ldots, n - 1$. Explicit formulas for the determinant, inverse, minimal polynomial, and eigenvectors of the doubly Leslie matrix were presented in [16].

Now, we define a doubly population projection matrix (DPPM) model, analogous to the doubly Leslie matrix $L$ in (1.4), by replacing the zero entries on the main diagonal by $-p_k$, $k = 2, 3, \ldots, n-1$, respectively, and denote it by $P$. A doubly population projection matrix is defined to be a matrix of the form
\[
P = \begin{pmatrix}
-a_1 & -a_2 & -a_3 & \cdots & -a_{n-1} & -a_n - b_n \\
s_1 & -p_2 & 0 & \cdots & 0 & -b_{n-1} \\
0 & s_2 & -p_3 & \ddots & \vdots & -b_{n-2} \\
0 & 0 & s_3 & \ddots & 0 & \vdots \\
\vdots & \vdots & \ddots & \ddots & -p_{n-1} & -b_2 \\
0 & 0 & \cdots & 0 & s_{n-1} & -b_1
\end{pmatrix}, \tag{1.5}
\]

where $a_j, b_j \in \mathbb{R}$, $j = 2, \ldots, n$; $-a_1, -b_1 \geq 0$; $s_j > 0$, $j = 1, 2, \ldots, n - 1$; and $p_k \in \mathbb{R}$, $k = 2, \ldots, n - 1$.
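A small constructor may make the sign pattern of (1.5) concrete; the function name and parameter layout below are our own, and the test values are invented.

```python
import numpy as np

def dppm(a, b, s, p):
    """Build the n x n doubly population projection matrix of (1.5).

    a = (a_1..a_n), b = (b_1..b_n), s = (s_1..s_{n-1}), p = (p_2..p_{n-1}).
    """
    n = len(a)
    P = np.zeros((n, n))
    P[0, :-1] = -np.asarray(a[:-1])        # first row: -a_1, ..., -a_{n-1}
    P[0, -1] = -a[-1] - b[-1]              # corner entry: -a_n - b_n
    for i in range(n - 1):                 # subdiagonal: s_1, ..., s_{n-1}
        P[i + 1, i] = s[i]
    for k in range(2, n):                  # diagonal: -p_2, ..., -p_{n-1}
        P[k - 1, k - 1] = -p[k - 2]
    P[1:, -1] = -np.asarray(b[-2::-1])     # last column: -b_{n-1}, ..., -b_1
    return P

P = dppm(a=[1, 2, 3, 4], b=[5, 6, 7, 8], s=[0.5, 0.6, 0.7], p=[0.2, 0.3])
assert np.allclose(P[0], [-1, -2, -3, -12])    # -a_1, -a_2, -a_3, -a_4 - b_4
assert np.allclose(P[1], [0.5, -0.2, 0.0, -7]) # s_1, -p_2, 0, -b_3
```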

The doubly population projection matrix is a generalization of the population projection matrix, the doubly Leslie matrix, the Leslie matrix, the doubly companion matrix, and the companion matrix.

For convenience, we can write the matrix P in a partitioned form as

\[
P = \begin{pmatrix} -p^T & -a_n - b_n \\ \Delta & -q \end{pmatrix}_{(n,n)}, \quad \text{where} \quad
p = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{pmatrix}, \quad
q = \begin{pmatrix} b_{n-1} \\ b_{n-2} \\ \vdots \\ b_1 \end{pmatrix}, \tag{1.6}
\]

and
\[
\Delta = \begin{pmatrix}
s_1 & -p_2 & 0 & \cdots & 0 \\
0 & s_2 & -p_3 & \ddots & \vdots \\
0 & 0 & s_3 & \ddots & 0 \\
\vdots & \vdots & \ddots & \ddots & -p_{n-1} \\
0 & 0 & \cdots & 0 & s_{n-1}
\end{pmatrix} \tag{1.7}
\]
is an upper-bidiagonal matrix of order $n - 1$.

We recall some well-known results that will be used in the sequel.

Definition 1.1. [7, Definition 1.3.1]. A matrix $B \in M_n$ is said to be similar to a matrix $A \in M_n$ if there exists a nonsingular matrix $S \in M_n$ such that $B = S^{-1}AS$.

Definition 1.2. [11, p. 644]. A matrix $A \in M_n$ for which the characteristic polynomial $c_A(t)$ equals the minimum polynomial $m_A(t)$ is said to be a nonderogatory matrix.

Theorem 1.3. [7, Theorem 1.4.8]. Let $A, B \in M_n$. If $x \in \mathbb{C}^n$ is an eigenvector corresponding to $\lambda \in \sigma(B)$ and if $B$ is similar to $A$ via $S$, then $Sx$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$.

Theorem 1.4. [7, Theorem 3.3.15]. A matrix $A \in M_n$ is similar to the companion matrix of its characteristic polynomial if and only if the minimal and characteristic polynomials of $A$ are identical.

Solomon [15, Theorem 2] asserted that the companion matrix
\[
C = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-c_0 & -c_1 & -c_2 & \cdots & -c_{n-1}
\end{pmatrix}
\]
of $f(t) = t^n + c_{n-1}t^{n-1} + \cdots + c_1 t + c_0$ over a ring with unity is similar to its transpose
\[
C^T = \begin{pmatrix}
0 & 0 & \cdots & 0 & -c_0 \\
1 & 0 & \cdots & 0 & -c_1 \\
0 & 1 & \cdots & 0 & -c_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -c_{n-1}
\end{pmatrix}
\]
via the invertible matrix
\[
R = \begin{pmatrix}
c_1 & c_2 & \cdots & c_{n-1} & 1 \\
c_2 & c_3 & \cdots & 1 & 0 \\
\vdots & \vdots & \iddots & \iddots & \vdots \\
c_{n-1} & 1 & \cdots & 0 & 0 \\
1 & 0 & \cdots & 0 & 0
\end{pmatrix}, \tag{1.8}
\]
that is, $RCR^{-1} = C^T$. Equivalently, we obtain
\[
R^{-1}C^T R = C. \tag{1.9}
\]

For example, if $n = 3$ then
\[
R^{-1}C^T R =
\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & -c_2 \\ 1 & -c_2 & c_2^2 - c_1 \end{pmatrix}
\begin{pmatrix} 0 & 0 & -c_0 \\ 1 & 0 & -c_1 \\ 0 & 1 & -c_2 \end{pmatrix}
\begin{pmatrix} c_1 & c_2 & 1 \\ c_2 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}
= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -c_0 & -c_1 & -c_2 \end{pmatrix} = C.
\]
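Solomon's similarity is easy to check numerically; the sketch below verifies $RCR^{-1} = C^T$ for $n = 3$ with arbitrary, invented coefficients.

```python
import numpy as np

# Check R C R^{-1} = C^T for n = 3 with arbitrary coefficients c_0, c_1, c_2.
c0, c1, c2 = 2.0, -3.0, 5.0
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-c0, -c1, -c2]])     # companion matrix of t^3 + c2 t^2 + c1 t + c0
R = np.array([[c1, c2, 1.0],        # the Hankel-type matrix of (1.8)
              [c2, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
assert np.allclose(R @ C @ np.linalg.inv(R), C.T)
```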

In the present paper we give explicit formulas for the determinant, inverse, minimum polynomial, and eigenvectors of the doubly population projection matrix and of some related matrices.

2. SOME PROPERTIES OF SCHUR COMPLEMENT

In this section, we are going to establish a special form of Schur complement for computing the determinant and the inverse of a doubly population projection matrix. To do this, we begin by partitioning a square matrix M of order n as follows.

Let $M$ be a matrix partitioned into four blocks
\[
M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}, \tag{2.1}
\]
where the submatrix $C$ is assumed to be square and nonsingular. Brezinski [2, p. 232] asserted that the Schur complement of $C$ in $M$, denoted by $(M/C)$ and defined by
\[
(M/C) = B - AC^{-1}D, \tag{2.2}
\]
is related to Gaussian elimination via
\[
M = \begin{pmatrix} I & AC^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} 0 & (M/C) \\ C & D \end{pmatrix}. \tag{2.3}
\]

Suppose that $B$ and $C$ are $k \times k$ and $(n-k) \times (n-k)$ matrices, respectively, where $k < n$, and that $C$ is nonsingular. As in [10, p. 39], we have the following theorem.

Theorem 2.1 (Schur's formula). Let $M$ be a square matrix of order $n$ partitioned as
\[
M = \begin{pmatrix} A & B \\ C & D \end{pmatrix},
\]
where $B$ and $C$ are $k \times k$ and $(n-k) \times (n-k)$ matrices, respectively, with $k < n$. If $C$ is nonsingular, then
\[
\det M = (-1)^{(n+1)k} \det C \det(M/C). \tag{2.4}
\]

Proof. From (2.3),
\[
M = \begin{pmatrix} I & AC^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} 0 & (M/C) \\ C & D \end{pmatrix}.
\]
Taking the determinant of both sides, we obtain
\[
\det M = \det \begin{pmatrix} I & AC^{-1} \\ 0 & I \end{pmatrix} \det \begin{pmatrix} 0 & (M/C) \\ C & D \end{pmatrix}.
\]
Since $\det \begin{pmatrix} I & AC^{-1} \\ 0 & I \end{pmatrix} = 1$, it follows that
\[
\det M = \det \begin{pmatrix} 0 & (M/C) \\ C & D \end{pmatrix}.
\]
By Laplace's theorem, expanding $\det \begin{pmatrix} 0 & (M/C) \\ C & D \end{pmatrix}$ along the first $k$ rows, i.e. rows $\{1, 2, \ldots, k\}$, gives
\[
\det \begin{pmatrix} 0 & (M/C) \\ C & D \end{pmatrix} = (-1)^{(n+1)k} \det C \det(M/C).
\]
Therefore, $\det M = (-1)^{(n+1)k} \det C \det(M/C)$.

This completes the proof.
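Theorem 2.1 can be sanity-checked numerically on a random matrix with this (somewhat unusual) partition, in which the top-right block $B$ is $k \times k$ and the bottom-left block $C$ is the one inverted; the seed and sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
M = rng.standard_normal((n, n))
A, B = M[:k, :n - k], M[:k, n - k:]    # top blocks: k x (n-k) and k x k
C, D = M[k:, :n - k], M[k:, n - k:]    # bottom blocks

schur = B - A @ np.linalg.inv(C) @ D   # (M/C) as in (2.2)
lhs = np.linalg.det(M)
rhs = (-1) ** ((n + 1) * k) * np.linalg.det(C) * np.linalg.det(schur)
assert np.isclose(lhs, rhs)            # Schur's formula (2.4)
```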

The following useful formula presents the inverse of a matrix in terms of Schur complements; analogously to [18, p. 19], we obtain the following.

Theorem 2.2. Let $M$ be a matrix partitioned as in (2.1), and suppose both $M$ and $C$ are nonsingular. Then $(M/C)$ is nonsingular and
\[
M^{-1} = \begin{pmatrix}
-C^{-1}D\,(M/C)^{-1} & C^{-1} + C^{-1}D\,(M/C)^{-1}AC^{-1} \\
(M/C)^{-1} & -(M/C)^{-1}AC^{-1}
\end{pmatrix}. \tag{2.5}
\]

Proof. The Schur complement $(M/C)$ is nonsingular by virtue of (2.4). Under the given hypotheses, from (2.3) we have
\[
M = \begin{pmatrix} I & AC^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} 0 & (M/C) \\ C & D \end{pmatrix}
= \begin{pmatrix} I & AC^{-1} \\ 0 & I \end{pmatrix} \begin{pmatrix} 0 & (M/C) \\ C & 0 \end{pmatrix} \begin{pmatrix} I & C^{-1}D \\ 0 & I \end{pmatrix}.
\]
Taking the inverse of both sides yields
\begin{align*}
M^{-1} &= \begin{pmatrix} I & C^{-1}D \\ 0 & I \end{pmatrix}^{-1} \begin{pmatrix} 0 & (M/C) \\ C & 0 \end{pmatrix}^{-1} \begin{pmatrix} I & AC^{-1} \\ 0 & I \end{pmatrix}^{-1} \\
&= \begin{pmatrix} I & -C^{-1}D \\ 0 & I \end{pmatrix} \begin{pmatrix} 0 & C^{-1} \\ (M/C)^{-1} & 0 \end{pmatrix} \begin{pmatrix} I & -AC^{-1} \\ 0 & I \end{pmatrix} \\
&= \begin{pmatrix} -C^{-1}D\,(M/C)^{-1} & C^{-1} \\ (M/C)^{-1} & 0 \end{pmatrix} \begin{pmatrix} I & -AC^{-1} \\ 0 & I \end{pmatrix} \\
&= \begin{pmatrix} -C^{-1}D\,(M/C)^{-1} & C^{-1} + C^{-1}D\,(M/C)^{-1}AC^{-1} \\ (M/C)^{-1} & -(M/C)^{-1}AC^{-1} \end{pmatrix},
\end{align*}
from which the identity (2.5) follows. This completes the proof.
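The block inverse (2.5) can be checked the same way on a random matrix; `np.block` assembles the four blocks, and the product with $M$ should be the identity. Sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2
M = rng.standard_normal((n, n))
A, B = M[:k, :n - k], M[:k, n - k:]
C, D = M[k:, :n - k], M[k:, n - k:]

Ci = np.linalg.inv(C)
Si = np.linalg.inv(B - A @ Ci @ D)     # (M/C)^{-1}
Minv = np.block([[-Ci @ D @ Si, Ci + Ci @ D @ Si @ A @ Ci],
                 [Si,           -Si @ A @ Ci]])
assert np.allclose(Minv @ M, np.eye(n))   # identity (2.5)
```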

3. DETERMINANT AND INVERSE FORMULA OF DPPM

In this section we give an explicit formula for the determinant and the inverse of a doubly population projection matrix and of some related matrices.

As in [8, pp. 126–127], we obtain the inverse of the bidiagonal matrix $\Delta$ defined in (1.7) as follows.

Lemma 3.1. If
\[
\Delta = \begin{pmatrix}
s_1 & -p_2 & 0 & \cdots & 0 \\
0 & s_2 & -p_3 & \ddots & \vdots \\
0 & 0 & s_3 & \ddots & 0 \\
\vdots & \vdots & \ddots & \ddots & -p_{n-1} \\
0 & 0 & \cdots & 0 & s_{n-1}
\end{pmatrix}, \qquad s_i \neq 0, \; i = 1, 2, \ldots, n-1,
\]
is an upper-bidiagonal matrix, then $\Delta^{-1} = [v_{ij}]_{(n-1,n-1)}$, where
\[
v_{ij} = \begin{cases}
0, & i > j, \\[2pt]
\dfrac{1}{s_i}, & i = j, \\[6pt]
\dfrac{\prod_{k=i+1}^{j} p_k}{\prod_{k=i}^{j} s_k}, & i < j.
\end{cases}
\]
That is,
\[
\Delta^{-1} = \begin{pmatrix}
\frac{1}{s_1} & \frac{p_2}{s_1 s_2} & \frac{p_2 p_3}{s_1 s_2 s_3} & \cdots & \frac{p_2 p_3 \cdots p_{n-1}}{s_1 s_2 \cdots s_{n-1}} \\
0 & \frac{1}{s_2} & \frac{p_3}{s_2 s_3} & \cdots & \frac{p_3 \cdots p_{n-1}}{s_2 s_3 \cdots s_{n-1}} \\
0 & 0 & \frac{1}{s_3} & \ddots & \vdots \\
\vdots & \vdots & \ddots & \ddots & \frac{p_{n-1}}{s_{n-2} s_{n-1}} \\
0 & 0 & \cdots & 0 & \frac{1}{s_{n-1}}
\end{pmatrix}. \tag{3.1}
\]
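A quick numerical check of Lemma 3.1 for $n = 5$ (so $\Delta$ is $4 \times 4$), with illustrative $s$ and $p$ values:

```python
import numpy as np

s = [2.0, 3.0, 4.0, 5.0]               # s_1, ..., s_4
p = [0.5, 0.6, 0.7]                    # p_2, p_3, p_4
Delta = np.diag(s) + np.diag([-x for x in p], 1)

# v_ij = (prod of p_{i+1}..p_j) / (prod of s_i..s_j) for i <= j (1-based),
# translated to 0-based list indices below.
V = np.zeros((4, 4))
for i in range(4):
    for j in range(i, 4):
        V[i, j] = np.prod(p[i:j]) / np.prod(s[i:j + 1])
assert np.allclose(V, np.linalg.inv(Delta))
```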

The subsequent theorem follows from Schur’s formula (2.4).

Theorem 3.2 (Determinant of a DPPM). If $P$ is a doubly population projection matrix with partitioned form
\[
P = \begin{pmatrix} -p^T & -a_n - b_n \\ \Delta & -q \end{pmatrix}_{(n,n)},
\]
where
\[
p = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_{n-1} \end{pmatrix}, \qquad
q = \begin{pmatrix} b_{n-1} \\ b_{n-2} \\ \vdots \\ b_1 \end{pmatrix},
\]
and $\Delta$ is the upper-bidiagonal matrix (1.7) with $s_i \neq 0$, $i = 1, 2, \ldots, n-1$, then
\[
\det P = (-1)^n \left( \prod_{i=1}^{n-1} s_i \right) \left[ (a_n + b_n) + \sum_{i=1}^{n-1} b_i \left( \sum_{j=1}^{n-i} a_j \frac{\prod_{k=j+1}^{n-i} p_k}{\prod_{k=j}^{n-i} s_k} \right) \right]. \tag{3.2}
\]

Proof. $\Delta$ is an upper-bidiagonal $(n-1) \times (n-1)$ submatrix of the matrix $P$. Applying Schur's formula (2.4) with $k = 1$, we obtain
\[
\det P = (-1)^{(n+1) \times 1} \det \Delta \det(P/\Delta). \tag{3.3}
\]
As in (2.2), the Schur complement of $\Delta$ in $P$, denoted by $(P/\Delta)$, is a $1 \times 1$ matrix, i.e. a scalar. We have
\begin{align*}
\det(P/\Delta) = (P/\Delta) &= (-a_n - b_n) - (-p^T)\,\Delta^{-1}(-q) \\
&= (-a_n - b_n) - \begin{pmatrix} -\dfrac{a_1}{s_1}, & -\dfrac{a_2}{s_2} - \dfrac{a_1 p_2}{s_1 s_2}, & \ldots, & -\dfrac{a_{n-1}}{s_{n-1}} - \dfrac{a_{n-2} p_{n-1}}{s_{n-2} s_{n-1}} - \cdots - \dfrac{a_1 p_2 p_3 \cdots p_{n-1}}{s_1 s_2 s_3 \cdots s_{n-1}} \end{pmatrix}
\begin{pmatrix} -b_{n-1} \\ -b_{n-2} \\ \vdots \\ -b_1 \end{pmatrix} \\
&= -(a_n + b_n) - \sum_{i=1}^{n-1} b_i \left( \sum_{j=1}^{n-i} a_j \frac{\prod_{k=j+1}^{n-i} p_k}{\prod_{k=j}^{n-i} s_k} \right). \tag{3.4}
\end{align*}
Since $\det \Delta = \prod_{i=1}^{n-1} s_i$, from (3.3) we obtain
\[
\det P = (-1)^{n+1} \left( \prod_{i=1}^{n-1} s_i \right) \det(P/\Delta)
= (-1)^n \left( \prod_{i=1}^{n-1} s_i \right) \left[ (a_n + b_n) + \sum_{i=1}^{n-1} b_i \left( \sum_{j=1}^{n-i} a_j \frac{\prod_{k=j+1}^{n-i} p_k}{\prod_{k=j}^{n-i} s_k} \right) \right].
\]
This completes the proof.
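The determinant formula (3.2) can be checked against NumPy's determinant on a concrete $4 \times 4$ DPPM; all parameter values below are invented, and the helper names are our own.

```python
import numpy as np

a = [1.0, 2.0, 3.0, 4.0]; b = [5.0, 6.0, 7.0, 8.0]
s = [0.5, 0.6, 0.7]; p = [0.2, 0.3]        # s_1..s_3 and p_2, p_3
P = np.array([[-a[0], -a[1], -a[2], -a[3] - b[3]],
              [s[0], -p[0],  0.0,  -b[2]],
              [0.0,   s[1], -p[1], -b[1]],
              [0.0,   0.0,   s[2], -b[0]]])

n = 4
def pprod(j, m):   # prod_{k=j+1}^{m} p_k in the paper's 1-based indexing
    return np.prod([p[k - 2] for k in range(j + 1, m + 1)])
def sprod(j, m):   # prod_{k=j}^{m} s_k
    return np.prod([s[k - 1] for k in range(j, m + 1)])

inner = sum(b[i - 1] * sum(a[j - 1] * pprod(j, n - i) / sprod(j, n - i)
                           for j in range(1, n - i + 1))
            for i in range(1, n))
det_P = (-1) ** n * np.prod(s) * ((a[-1] + b[-1]) + inner)
assert np.isclose(det_P, np.linalg.det(P))   # formula (3.2)
```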

Immediately, we have the following corollaries.

Corollary 3.3. Let $P$ be a population projection matrix (PPM) defined as in (1.2), partitioned as
\[
P = \begin{pmatrix} -p^T & -a_n \\ \Delta & -q \end{pmatrix}_{(n,n)},
\]
where $p = \begin{pmatrix} a_1 & a_2 & \ldots & a_{n-1} \end{pmatrix}^T$, $q = \begin{pmatrix} 0 & \ldots & 0 & p_n \end{pmatrix}^T$, and $\Delta$ is a bidiagonal matrix of order $n - 1$. Then
\[
\det P = (-1)^n \left( \prod_{i=1}^{n-1} s_i \right) \left[ a_n + p_n \sum_{j=1}^{n-1} a_j \frac{\prod_{k=j+1}^{n-1} p_k}{\prod_{k=j}^{n-1} s_k} \right].
\]

Corollary 3.4. Let $L$ be a doubly Leslie matrix, partitioned as
\[
L = \begin{pmatrix} -p^T & -a_n - b_n \\ \Lambda & -q \end{pmatrix}_{(n,n)},
\]
where $p = \begin{pmatrix} a_1 & a_2 & \ldots & a_{n-1} \end{pmatrix}^T$, $q = \begin{pmatrix} b_{n-1} & b_{n-2} & \ldots & b_1 \end{pmatrix}^T$, and $\Lambda = \operatorname{diag}(s_1, s_2, \ldots, s_{n-1})$, $s_j > 0$, $j = 1, 2, \ldots, n-1$, is a diagonal matrix of order $n - 1$. Then
\[
\det L = (-1)^n \left( \prod_{i=1}^{n-1} s_i \right) \left[ (a_n + b_n) + \sum_{i=1}^{n-1} \frac{a_i b_{n-i}}{s_i} \right].
\]

Corollary 3.5. Let $L$ be a Leslie matrix, partitioned as
\[
L = \begin{pmatrix} -p^T & -a_n \\ \Lambda & 0 \end{pmatrix}_{(n,n)},
\]
where $p = \begin{pmatrix} a_1 & a_2 & \ldots & a_{n-1} \end{pmatrix}^T$, $-a_j \geq 0$, $j = 1, 2, \ldots, n$, and

$\Lambda = \operatorname{diag}(s_1, s_2, \ldots, s_{n-1})$, $s_j > 0$, $j = 1, 2, \ldots, n-1$,

is a diagonal matrix of order $n - 1$. Then
\[
\det L = (-1)^n a_n \prod_{i=1}^{n-1} s_i.
\]

Next, we determine the inverse of the DPPM.

Theorem 3.6. Let $P = \begin{pmatrix} -p^T & -a_n - b_n \\ \Delta & -q \end{pmatrix}_{(n,n)}$ be a DPPM, where $p = \begin{pmatrix} a_1 & a_2 & \ldots & a_{n-1} \end{pmatrix}^T$, $q = \begin{pmatrix} b_{n-1} & b_{n-2} & \ldots & b_1 \end{pmatrix}^T$, and $\Delta$ is an upper-bidiagonal matrix of order $n - 1$. If $\det P \neq 0$, then
\[
P^{-1} = (P/\Delta)^{-1} \begin{pmatrix} \Delta^{-1}q & (P/\Delta)\,\Delta^{-1} + \Delta^{-1}q\,p^T\Delta^{-1} \\ 1 & p^T\Delta^{-1} \end{pmatrix}_{(n,n)},
\]
where
\[
(P/\Delta) = -(a_n + b_n) - \sum_{i=1}^{n-1} b_i \left( \sum_{j=1}^{n-i} a_j \frac{\prod_{k=j+1}^{n-i} p_k}{\prod_{k=j}^{n-i} s_k} \right),
\]
and $\Delta^{-1}$ has the form (3.1).

Proof. Applying identity (2.5) to the matrix $P$, we have
\begin{align*}
P^{-1} &= \begin{pmatrix} -\Delta^{-1}(-q)(P/\Delta)^{-1} & \Delta^{-1} + \Delta^{-1}(-q)(P/\Delta)^{-1}(-p^T)\Delta^{-1} \\ (P/\Delta)^{-1} & -(P/\Delta)^{-1}(-p^T)\Delta^{-1} \end{pmatrix}_{(n,n)} \\
&= \begin{pmatrix} \Delta^{-1}q\,(P/\Delta)^{-1} & \Delta^{-1} + \Delta^{-1}q\,(P/\Delta)^{-1}p^T\Delta^{-1} \\ (P/\Delta)^{-1} & (P/\Delta)^{-1}p^T\Delta^{-1} \end{pmatrix}_{(n,n)}.
\end{align*}
Since the Schur complement $(P/\Delta)$ of $\Delta$ in $P$ is a scalar,
\[
P^{-1} = (P/\Delta)^{-1} \begin{pmatrix} \Delta^{-1}q & (P/\Delta)\,\Delta^{-1} + \Delta^{-1}q\,p^T\Delta^{-1} \\ 1 & p^T\Delta^{-1} \end{pmatrix}_{(n,n)}.
\]
This completes the proof.
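Theorem 3.6 can likewise be verified numerically on a concrete $4 \times 4$ DPPM; the partition blocks are read straight off $P$ per (1.6), and the values are invented.

```python
import numpy as np

a = [1.0, 2.0, 3.0, 4.0]; b = [5.0, 6.0, 7.0, 8.0]
s = [0.5, 0.6, 0.7]; p = [0.2, 0.3]
P = np.array([[-a[0], -a[1], -a[2], -a[3] - b[3]],
              [s[0], -p[0],  0.0,  -b[2]],
              [0.0,   s[1], -p[1], -b[1]],
              [0.0,   0.0,   s[2], -b[0]]])

pv = np.array(a[:-1]).reshape(-1, 1)       # p = (a_1, a_2, a_3)^T
qv = np.array(b[-2::-1]).reshape(-1, 1)    # q = (b_3, b_2, b_1)^T
Delta = P[1:, :-1]                         # the upper-bidiagonal block
Di = np.linalg.inv(Delta)
S = ((-a[-1] - b[-1]) - (-pv.T) @ Di @ (-qv)).item()   # (P/Delta), a scalar

Pinv = (1.0 / S) * np.block([[Di @ qv, S * Di + Di @ qv @ pv.T @ Di],
                             [np.ones((1, 1)), pv.T @ Di]])
assert np.allclose(Pinv @ P, np.eye(4))    # Theorem 3.6
```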

Therefore, we have the following corollaries.

Corollary 3.7. Let $P$ be a PPM. If $\det P \neq 0$, then
\[
P^{-1} = (P/\Delta)^{-1} \begin{pmatrix} \Delta^{-1}q & (P/\Delta)\,\Delta^{-1} + \Delta^{-1}q\,p^T\Delta^{-1} \\ 1 & p^T\Delta^{-1} \end{pmatrix}_{(n,n)},
\]
where $p = \begin{pmatrix} a_1 & a_2 & \ldots & a_{n-1} \end{pmatrix}^T$, $q = \begin{pmatrix} 0 & \ldots & 0 & p_n \end{pmatrix}^T$,
\[
(P/\Delta) = -a_n - p_n \sum_{j=1}^{n-1} a_j \frac{\prod_{k=j+1}^{n-1} p_k}{\prod_{k=j}^{n-1} s_k},
\]
and $\Delta^{-1}$ is as in (3.1).

Corollary 3.8. Let $L$ be a doubly Leslie matrix. If $\det L \neq 0$, then
\[
L^{-1} = (L/\Lambda)^{-1} \begin{pmatrix} \Lambda^{-1}q & (L/\Lambda)\,\Lambda^{-1} + \Lambda^{-1}q\,p^T\Lambda^{-1} \\ 1 & p^T\Lambda^{-1} \end{pmatrix}_{(n,n)},
\]
where $p = \begin{pmatrix} a_1 & a_2 & \ldots & a_{n-1} \end{pmatrix}^T$, $q = \begin{pmatrix} b_{n-1} & b_{n-2} & \ldots & b_1 \end{pmatrix}^T$, $\Lambda = \operatorname{diag}(s_1, s_2, \ldots, s_{n-1})$ with $s_i > 0$, $i = 1, 2, \ldots, n-1$, $\Lambda^{-1} = \operatorname{diag}(1/s_1, 1/s_2, \ldots, 1/s_{n-1})$, and
\[
(L/\Lambda) = -\left( (a_n + b_n) + \sum_{i=1}^{n-1} \frac{a_i b_{n-i}}{s_i} \right).
\]

Corollary 3.9. Let $L$ be a Leslie matrix. If $\det L \neq 0$, then
\[
L^{-1} = (L/\Lambda)^{-1} \begin{pmatrix} 0 & (L/\Lambda)\,\Lambda^{-1} \\ 1 & p^T\Lambda^{-1} \end{pmatrix}_{(n,n)},
\]
where $p = \begin{pmatrix} a_1 & a_2 & \ldots & a_{n-1} \end{pmatrix}^T$, $\Lambda = \operatorname{diag}(s_1, s_2, \ldots, s_{n-1})$ with $s_i > 0$, $i = 1, 2, \ldots, n-1$, $\Lambda^{-1} = \operatorname{diag}(1/s_1, 1/s_2, \ldots, 1/s_{n-1})$, and $(L/\Lambda) = -a_n$.

4. MINIMUM POLYNOMIAL OF DPPM

This section derives an explicit determinantal formula for the coefficients of the characteristic polynomial $c_P(t)$ of the DPPM $P = \begin{pmatrix} -p^T & -a_n - b_n \\ \Delta & -q \end{pmatrix}_{(n,n)}$.

The characteristic polynomial of a square matrix P = [pij] of order n is the polynomial

\[
c_P(t) := \det(tI - P) = t^n + c_{n-1}t^{n-1} + \cdots + c_1 t + c_0. \tag{4.1}
\]

It is well known that the constant term of the characteristic polynomial is $c_0 = (-1)^n \det P$ and that $c_{n-1} = -\operatorname{tr}(P)$, where $\operatorname{tr}(P) = p_{11} + p_{22} + \cdots + p_{nn}$. Pennisi [13] asserted the following: let
\[
\det(p_{11}, p_{12}, \ldots, p_{nn}) = \begin{vmatrix}
p_{11} & p_{12} & \cdots & p_{1n} \\
p_{21} & p_{22} & \cdots & p_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
p_{n1} & p_{n2} & \cdots & p_{nn}
\end{vmatrix}
\]
be the determinant regarded as a function of the $n^2$ entries. Then
\[
c_r = (-1)^{n-r} \sum_{i_1 < i_2 < \cdots < i_r} \frac{\partial^r \det P}{\partial p_{i_1 i_1}\, \partial p_{i_2 i_2} \cdots \partial p_{i_r i_r}}, \tag{4.2}
\]
where $r = 1, 2, \ldots, n-1$; each partial derivative above is the principal minor of $P$ obtained by deleting rows and columns $i_1, \ldots, i_r$, so the sign factor is consistent with $c_0 = (-1)^n \det P$ and $c_{n-1} = -\operatorname{tr}(P)$. For example, let $n = 4$.

Let the characteristic polynomial of the matrix $P$ be
\[
c_P(t) := \det(tI - P) = t^4 + c_3 t^3 + c_2 t^2 + c_1 t + c_0.
\]
Then, noting that the diagonal entries of $P$ are $p_{11} = -a_1$, $p_{22} = -p_2$, $p_{33} = -p_3$, $p_{44} = -b_1$,
\begin{align*}
c_0 &= \det P, \\
c_1 &= -\sum_{i_1} \frac{\partial \det P}{\partial p_{i_1 i_1}}
     = \frac{\partial \det P}{\partial a_1} + \frac{\partial \det P}{\partial p_2} + \frac{\partial \det P}{\partial p_3} + \frac{\partial \det P}{\partial b_1}, \\
c_2 &= \sum_{i_1 < i_2} \frac{\partial^2 \det P}{\partial p_{i_1 i_1}\, \partial p_{i_2 i_2}}
     = \frac{\partial^2 \det P}{\partial a_1\, \partial p_2} + \frac{\partial^2 \det P}{\partial a_1\, \partial p_3} + \frac{\partial^2 \det P}{\partial a_1\, \partial b_1}
     + \frac{\partial^2 \det P}{\partial p_2\, \partial p_3} + \frac{\partial^2 \det P}{\partial p_2\, \partial b_1} + \frac{\partial^2 \det P}{\partial p_3\, \partial b_1}, \\
c_3 &= -\operatorname{tr}(P) = a_1 + p_2 + p_3 + b_1.
\end{align*}
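Since each partial derivative in (4.2) with respect to a diagonal entry is the complementary principal minor, the coefficients can be cross-checked numerically as signed sums of principal minors; the concrete DPPM below uses invented values.

```python
import numpy as np
from itertools import combinations

P = np.array([[-1.0, -2.0, -3.0, -12.0],
              [0.5, -0.2, 0.0, -7.0],
              [0.0, 0.6, -0.3, -6.0],
              [0.0, 0.0, 0.7, -5.0]])
coeffs = np.poly(P)                    # [1, c_3, c_2, c_1, c_0]

def principal_minor_sum(r):
    # Sum of all r x r principal minors of P; each equals an r-th order
    # partial derivative of det P with respect to the complementary
    # diagonal entries, as in (4.2).
    return sum(np.linalg.det(P[np.ix_(idx, idx)])
               for idx in combinations(range(4), r))

for r in range(4):                     # c_r = (-1)^{n-r} E_{n-r}, n = 4
    assert np.isclose(coeffs[4 - r],
                      (-1) ** (4 - r) * principal_minor_sum(4 - r))
```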

An unreduced upper-Hessenberg matrix is a square matrix containing no zeros on the first subdiagonal and zeros everywhere below it. Therefore the DPPM $P$ is an unreduced upper-Hessenberg matrix.

Peter and Zemke [14, p.592] asserted that unreduced Hessenberg matrices form a subclass of the class of non-derogatory matrices.

As Watkins notes [17, p. 145], an unreduced upper-Hessenberg matrix such as the DPPM $P$ is similar to a companion matrix, since the vectors $\{e_1, Pe_1, \ldots, P^{n-1}e_1\}$ (where $e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^n$) are linearly independent. We obtain the following theorem.

Theorem 4.1. The doubly population projection matrix $P$ is similar to a companion matrix. That is,
\[
K^{-1}PK = \begin{pmatrix}
0 & 0 & \cdots & 0 & -c_0 \\
1 & 0 & \cdots & 0 & -c_1 \\
0 & 1 & \cdots & 0 & -c_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -c_{n-1}
\end{pmatrix} =: C^T \in M_n(\mathbb{R}),
\]
for the nonsingular Krylov matrix of $P$ generated by $e_1$,
\[
K = \begin{pmatrix} e_1 & Pe_1 & P^2e_1 & \cdots & P^{n-1}e_1 \end{pmatrix}. \tag{4.3}
\]

Theorem 4.2. [7, pp. 146–147]. For any $n$th-degree polynomial
\[
p(t) = t^n + c_{n-1}t^{n-1} + c_{n-2}t^{n-2} + \cdots + c_1 t + c_0, \tag{4.4}
\]
there is a (companion) matrix $C \in M_n$ for which $p(t)$ is the minimum polynomial.

The DPPM $P$ is a non-derogatory matrix, so its minimum polynomial is equal to its characteristic polynomial. By Theorem 4.2, the minimum polynomial of $P$ is
\[
p(t) = m_P(t) = t^n + c_{n-1}t^{n-1} + c_{n-2}t^{n-2} + \cdots + c_1 t + c_0, \tag{4.5}
\]
where $c_0 = (-1)^n \det P$ and $c_r$ is given by (4.2) for $r = 1, 2, \ldots, n-1$.

Now, choose $K$ to be the Krylov matrix of $P$ generated by $e_1$. By straightforward computation, we have
\[
K^{-1}PK = C^T = \begin{pmatrix}
0 & 0 & \cdots & 0 & -c_0 \\
1 & 0 & \cdots & 0 & -c_1 \\
0 & 1 & \cdots & 0 & -c_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -c_{n-1}
\end{pmatrix}, \tag{4.6}
\]
where $c_0 = (-1)^n \det P$, the coefficients $c_r$, $r = 1, 2, \ldots, n-1$, are given by the determinantal formula (4.2), and $c_{n-1} = -\operatorname{tr}(P)$.
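The similarity (4.6) can be checked numerically: build $K$ column by column and compare $K^{-1}PK$ with the transposed companion matrix of the characteristic polynomial (same illustrative DPPM with invented values).

```python
import numpy as np

P = np.array([[-1.0, -2.0, -3.0, -12.0],
              [0.5, -0.2, 0.0, -7.0],
              [0.0, 0.6, -0.3, -6.0],
              [0.0, 0.0, 0.7, -5.0]])
e1 = np.array([1.0, 0.0, 0.0, 0.0])
K = np.column_stack([np.linalg.matrix_power(P, j) @ e1 for j in range(4)])

CT = np.linalg.inv(K) @ P @ K
c = np.poly(P)                     # [1, c_3, c_2, c_1, c_0]
expected = np.zeros((4, 4))
expected[1:, :3] = np.eye(3)       # ones on the subdiagonal
expected[:, 3] = -c[:0:-1]         # last column: -c_0, -c_1, -c_2, -c_3
assert np.allclose(CT, expected)   # identity (4.6)
```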

The eigenvalues of the matrix P are the roots of the equation

\[
\det(\lambda I_n - P) = \lambda^n + c_{n-1}\lambda^{n-1} + c_{n-2}\lambda^{n-2} + \cdots + c_1\lambda + c_0 = 0,
\]

where $c_0 = (-1)^n \det P$ and $c_r$ is given by (4.2), $r = 1, 2, \ldots, n-1$. Clearly, if $n$ is large, then this equation will be difficult to solve exactly.

5. EIGENVECTOR FORMULA OF DPPM

The eigenvector associated with an eigenvalue $\lambda$ is a nonzero vector $v$ which satisfies the equation $Pv = \lambda v$. Analogously to the eigenvector of a companion matrix in [1, pp. 630–631] and in [12, p. 6], we obtain the following theorem.

Theorem 5.1. Let $\lambda$ be an eigenvalue of a doubly population projection matrix $P$. Then $(KR)u$ is an eigenvector of $P$ corresponding to the eigenvalue $\lambda$, where
\[
u = \begin{pmatrix} 1 \\ \lambda \\ \vdots \\ \lambda^{n-2} \\ \lambda^{n-1} \end{pmatrix},
\]

and K is the Krylov matrix of P generated by e1 and R as in (1.8).

Proof. From Theorem 4.1, $P$ is similar to the companion-matrix transpose $C^T$ of (4.6), so they have the same eigenvalues. Let $\lambda$ be an eigenvalue of $P$; then $\lambda$ is also an eigenvalue of $C^T$, and hence of $C$. Since $\lambda$ is a root of the minimum polynomial $m_P(t)$, we have
\[
m_P(\lambda) = \lambda^n + c_{n-1}\lambda^{n-1} + c_{n-2}\lambda^{n-2} + \cdots + c_1\lambda + c_0 = 0,
\]
where $c_0 = (-1)^n \det P$ and $c_r$ is given by (4.2), $r = 1, 2, \ldots, n-1$. Therefore
\[
\lambda^n = -\left( c_{n-1}\lambda^{n-1} + c_{n-2}\lambda^{n-2} + \cdots + c_1\lambda + c_0 \right).
\]
Consider the vector $u = \begin{pmatrix} 1 & \lambda & \ldots & \lambda^{n-2} & \lambda^{n-1} \end{pmatrix}^T$. We must show that $u$ is an eigenvector of $C$ corresponding to the eigenvalue $\lambda$. We have
\[
Cu = \begin{pmatrix}
0 & 1 & \cdots & 0 & 0 \\
0 & 0 & \ddots & 0 & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 1 \\
-c_0 & -c_1 & \cdots & -c_{n-2} & -c_{n-1}
\end{pmatrix}
\begin{pmatrix} 1 \\ \lambda \\ \vdots \\ \lambda^{n-2} \\ \lambda^{n-1} \end{pmatrix}
= \begin{pmatrix} \lambda \\ \lambda^2 \\ \vdots \\ \lambda^{n-1} \\ -(c_0 + c_1\lambda + \cdots + c_{n-2}\lambda^{n-2} + c_{n-1}\lambda^{n-1}) \end{pmatrix}
= \begin{pmatrix} \lambda \\ \lambda^2 \\ \vdots \\ \lambda^{n-1} \\ \lambda^n \end{pmatrix} = \lambda u.
\]

It is easy to see that the first component of the vector $u$ cannot be zero; thus $u$ is not the zero vector, and therefore it is an eigenvector of $C$ corresponding to $\lambda$.

From (4.6) we have $C^T = K^{-1}PK$, and from (1.9),
\[
C = R^{-1}C^T R = R^{-1}(K^{-1}PK)R = (R^{-1}K^{-1})P(KR) = (KR)^{-1}P(KR).
\]

Therefore $C = (KR)^{-1}P(KR)$, and Theorem 1.3 asserts that $(KR)u$ is an eigenvector of $P$ corresponding to the eigenvalue $\lambda$, where $K$ is the Krylov matrix in (4.3). This completes the proof.
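Theorem 5.1 can be checked numerically on the same illustrative DPPM: pick an eigenvalue $\lambda$, form $u$ together with the matrices $K$ of (4.3) and $R$ of (1.8), and verify that $(KR)u$ is an eigenvector (values invented).

```python
import numpy as np

P = np.array([[-1.0, -2.0, -3.0, -12.0],
              [0.5, -0.2, 0.0, -7.0],
              [0.0, 0.6, -0.3, -6.0],
              [0.0, 0.0, 0.7, -5.0]])
e1 = np.array([1.0, 0.0, 0.0, 0.0])
K = np.column_stack([np.linalg.matrix_power(P, j) @ e1 for j in range(4)])

c = np.poly(P)                          # [1, c_3, c_2, c_1, c_0]
c0, c1, c2, c3 = c[4], c[3], c[2], c[1]
R = np.array([[c1, c2, c3, 1.0],        # the matrix of (1.8) for n = 4
              [c2, c3, 1.0, 0.0],
              [c3, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0]])

lam = np.linalg.eigvals(P)[0]           # any eigenvalue of P
u = np.array([1.0, lam, lam ** 2, lam ** 3])
v = K @ R @ u
assert np.allclose(P @ v, lam * v)      # (KR)u is an eigenvector of P
```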

6. CONCLUSION

The DPPM is a non-derogatory matrix. This paper has used a special form of the Schur complement to obtain explicit formulas for the determinant and inverse, and explicit eigenvector formulas, of the DPPM, which is a generalization of the population projection matrix, the doubly Leslie matrix, and the Leslie matrix. It is also a generalized form of the doubly companion matrix and the companion matrix.

Acknowledgment. The author is very grateful to the anonymous referees for their comments and suggestions, which inspired the improvement of the manuscript. This work was supported by Naresuan University.

REFERENCES

[1] Brand, L., 1964, “The Companion Matrix and Its Properties,” Amer. Math. Monthly, 71(6), pp. 629-634.

[2] Brezinski, C., 1988, “Other Manifestations of the Schur Complement,” Linear Algebra Appl., 111, pp. 231-247.

[3] Butcher, J. C., and Chartier P., 1999, “The Effective Order of Singly-Implicit Runge-Kutta Methods,” Numer. Algor., 20(4), pp. 269-284.

[4] Chen, M.-Q., and Li, X., 2005, “Spectral Properties of a Near-Periodic Row-Stochastic Leslie Matrix,” Linear Algebra Appl., 409, pp. 166-186.

[5] Cull, P., and Vogt, A., 1974, “The Periodic Limit for the Leslie Model,” Math. Biosci., 21, pp. 39-54.

[6] Hansen, P. E., 1987, “Leslie Matrix Models,” Math. Population Stud., 2, pp. 37-67.

[7] Horn, R. A., and Johnson, C. R., 1996, Matrix Analysis, Cambridge University Press, Cambridge, UK.

[8] Kilic, E., and Stanica, P., 2013, “The Inverse of Banded Matrices,” J. Comput. Appl. Math., 237, pp. 126-135.

[9] Kirkland, S. J., and Neumann, M., 2013, Group Inverses of M-Matrices and Their Applications, Chapman and Hall/CRC, New York, USA.

[10] Lancaster, P., and Tismenetsky, M., 1985, The Theory of Matrices Second Edition with Applications, Academic Press Inc., San Diego, USA.

[11] Meyer, C. D., 2000, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, USA.

[12] Moritsugu, S., and Kuriyama, K., 2000, “A Linear Algebra Method for Solving Systems of Algebraic Equations,” J. Jap. Soc. Symb. Alg. Comp., 7(4), pp. 2-22.

[13] Pennisi, L. L., 1987, “Coefficients of the Characteristic Polynomial,” Math. Mag., 60 (1), pp. 31-33.

[14] Peter, J., and Zemke, M., 2006, “Hessenberg Eigenvalue-Eigenmatrix Relations,” Linear Algebra Appl., 414, pp. 589-606.

[15] Solomon, L., 1999, “Similarity of the Companion Matrix and Its Transpose,” Linear Algebra Appl., 302-303, pp. 555-561.

[16] Wanicharpichat, W., 2015, “Explicit Minimum Polynomial, Eigenvector, and Inverse Formula of Doubly Leslie Matrix,” J. Appl. Math. & Informatics, 33(3-4), pp. 247-260.

[17] Watkins, D. S., 2007, The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, USA.

[18] Zhang, F., 2005, The Schur Complement and Its Applications, in Series: Numerical Methods and Algorithms, Springer, New York, USA.