arXiv:1503.06997v2 [math.RA] 14 Jan 2016

Cramer's Rule for Generalized Inverse Solutions of Some Matrices Equations

IVAN I. KYRCHEI

Pidstrygach Institute for Applied Problems of Mechanics and Mathematics, NAS of Ukraine, [email protected]

Abstract

By a generalized inverse of a given matrix, we mean a matrix that exists for a larger class of matrices than the nonsingular matrices, has some of the properties of the usual inverse, and agrees with the usual inverse when the given matrix happens to be nonsingular. In that theory, there are many different generalized inverses. We shall consider the Moore-Penrose, weighted Moore-Penrose, Drazin and weighted Drazin inverses.

New determinantal representations of these generalized inverses based on their limit representations are introduced in this paper. Application of this new method allows us to obtain analogues of the classical adjoint matrix. Determinantal representations for projection matrices are presented as well. Using the obtained analogues of the adjoint matrix, we get Cramer's rules for the least squares solution with the minimum norm and for the Drazin inverse solution of singular linear systems. Cramer's rules for the minimum norm least squares solutions and the Drazin inverse solutions of the matrix equations AX = D, XB = D and AXB = D are also presented, where A and B can be singular matrices of appropriate size; thus we get determinantal representations of generalized inverse solutions of these matrix equations. Finally, we derive determinantal representations of solutions of the differential matrix equations X′ + AX = B and X′ + XA = B, where the matrix A is singular.

Keywords: generalized inverse, Moore-Penrose inverse, weighted Moore-Penrose inverse, Drazin inverse, weighted Drazin inverse, system of linear equations, matrix equation, differential matrix equation, Cramer's rule.

MSC: 15A15, 15A24, 15A33.

1 Preface

It is well-known in linear algebra that an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix X such that AX = XA = I_n. If this is the case, then the matrix X is uniquely determined by A and is called the inverse of A, denoted by A^{-1}. By a generalized inverse of a given matrix, we mean a matrix that exists for a larger class of matrices than the nonsingular matrices, that has some of the properties of the usual inverse, and that agrees with the usual inverse when the given matrix happens to be nonsingular. For any matrix A ∈ C^{m×n} consider the following equations in X:

AXA = A; (1)

XAX = X; (2)

(AX)^* = AX; (3)

(XA)^* = XA; (4)

and if m = n, also

AX = XA; (5)

A^{k+1}X = A^k. (6)

For a subset G of {1, 2, 3, 4, 5}, the set of matrices obeying the equations represented in G is denoted by A{G}. A matrix from A{G} is called a G-inverse of A and denoted by A^(G).

Consider some principal cases. If X satisfies all the equations (1)-(4), it is said to be the Moore-Penrose inverse of A, denoted A^+ = A^(1,2,3,4). The Moore-Penrose inverse was independently described by E. H. Moore [1] in 1920, Arne Bjerhammar [2] in 1951 and Roger Penrose [3] in 1955. R. Penrose introduced the characteristic equations (1)-(4). If det A ≠ 0, then A^+ = A^{-1}.

The group inverse A^g is the unique A^(1,2,5) inverse of A, and exists if and only if Ind A = min{k : rank A^{k+1} = rank A^k} = 1.

A matrix X = A^D is said to be the Drazin inverse of A if (6) (for some positive integer k), (2) and (5) are satisfied, where k = Ind A. It is named after Michael P. Drazin [4]. In particular, when Ind A = 1, the matrix X is the group inverse, X = A^g. If Ind A = 0, then A is nonsingular, and A^D ≡ A^{-1}.
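As a concrete sanity check, the four Penrose equations (1)-(4) can be verified numerically for a rank-deficient matrix. The sketch below uses NumPy's np.linalg.pinv (which computes A^+ via the SVD) as the reference implementation; the test matrix is an arbitrary choice of ours.

```python
import numpy as np

# Numerically check that X = A^+ satisfies the four Penrose equations (1)-(4)
# for a rank-deficient matrix. np.linalg.pinv computes A^+ via the SVD.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # 4x3, rank 2
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)        # (1) AXA = A
assert np.allclose(X @ A @ X, X)        # (2) XAX = X
assert np.allclose((A @ X).T, A @ X)    # (3) (AX)* = AX (real case)
assert np.allclose((X @ A).T, X @ A)    # (4) (XA)* = XA (real case)
```

For a real matrix the conjugate transpose reduces to the ordinary transpose, which is why `.T` suffices in (3) and (4).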

Let Hermitian positive definite matrices M and N of order m and n, respectively, be given. For any matrix A ∈ C^{m×n}, the weighted Moore-Penrose inverse of A is the unique solution X = A^+_{M,N} of the matrix equations (1) and (2) and the following equations in X [5]:

(3M) (MAX)^* = MAX;  (4N) (NXA)^* = NXA.

In particular, when M = I_m and N = I_n, the matrix X satisfying the equations (1), (2), (3M), (4N) is the Moore-Penrose inverse A^+. The weighted Drazin inverse is being considered as well.

To determine the inverse and to give its analytic solution, we calculate a matrix of cofactors, known as an adjugate matrix or a classical adjoint matrix. The classical adjoint of A, denoted Adj[A], is the transpose of the cofactor matrix; then A^{-1} = Adj[A]/det A. The representation of an inverse matrix by its classical adjoint matrix also plays a key role in Cramer's rule for systems of linear equations or matrix equations.

Obviously, the important question is the following: what are the analogues of the adjoint matrix for generalized inverses and, consequently, of Cramer's rule for generalized inverse solutions of matrix equations? This is the main goal of the paper.

In the paper we shall adopt the following notation. Let C^{m×n} be the set of m by n matrices with complex entries, C^{m×n}_r be the subset of C^{m×n} consisting of matrices with rank r, I_m be the identity matrix of order m, and ||.|| be the Frobenius norm of a matrix.

Denote by a_{.j} and a_{i.} the jth column and the ith row of A ∈ C^{m×n}, respectively. Then a^*_{.j} and a^*_{i.} denote the jth column and the ith row of the conjugate transpose matrix A^* as well. Let A_{.j}(b) denote the matrix obtained from A by replacing its jth column with the vector b, and A_{i.}(b) denote the matrix obtained from A by replacing its ith row with b.

Let α := {α_1, ..., α_k} ⊆ {1, ..., m} and β := {β_1, ..., β_k} ⊆ {1, ..., n} be subsets of the order 1 ≤ k ≤ min{m, n}. Then |A^α_β| denotes the minor of A determined by the rows indexed by α and the columns indexed by β. Clearly, |A^α_α| denotes a principal minor determined by the rows and columns indexed by α. The cofactor of a_{ij} in A ∈ C^{n×n} is denoted by ∂/∂a_{ij} |A|.

For 1 ≤ k ≤ n, L_{k,n} := {α : α = (α_1, ..., α_k), 1 ≤ α_1 < ... < α_k ≤ n} denotes the collection of strictly increasing sequences of k integers chosen from {1, ..., n}. Let N_k := L_{k,m} × L_{k,n}. For fixed α ∈ L_{p,m}, β ∈ L_{p,n}, 1 ≤ p ≤ k, let

I_{k,m}(α) := {I : I ∈ L_{k,m}, I ⊇ α},
J_{k,n}(β) := {J : J ∈ L_{k,n}, J ⊇ β},
N_k(α, β) := I_{k,m}(α) × J_{k,n}(β).

For the case i ∈ α and j ∈ β, we denote

I_{k,m}{i} := {α : α ∈ L_{k,m}, i ∈ α},
J_{k,n}{j} := {β : β ∈ L_{k,n}, j ∈ β},
N_k{i, j} := I_{k,m}{i} × J_{k,n}{j}.

The paper is organized as follows. In Section 2 determinantal representations by analogues of the classical adjoint matrix for the Moore-Penrose, weighted Moore-Penrose, Drazin and weighted Drazin inverses are obtained. In Section 3 we show that the obtained analogues of the adjoint matrix for the generalized inverse matrices enable us to obtain natural analogues of Cramer's rule for generalized inverse solutions of systems of linear equations, and we demonstrate this in two examples. In Section 4, we obtain analogs of the Cramer rule for generalized inverse solutions of the matrix equations AX = B, XA = B and AXB = D, namely for the minimum norm least squares solutions and the Drazin inverse solutions. We show numerical examples to illustrate the main results as well. In Section 5, we apply the determinantal representations of the Drazin inverse solution to solutions of the following differential matrix equations, X′ + AX = B and X′ + XA = B, where A is singular. This is demonstrated in an example. Facts set forth in Sections 2 and 3 were partly published in [6], those in Section 4 were published in [7, 8], and those in Section 5 were published in [8]. Note that we obtained some of the submitted results for matrices over the quaternion skew field within the framework of the theory of the column and row determinants ([9-16]).

2 Analogues of the classical adjoint matrix for the generalized inverse matrices

For determinantal representations of the generalized inverse matrices as analogues of the classical adjoint matrix, we apply a method which relies on the limit representations of the generalized inverse matrices, lemmas on the rank of some matrices, and lemmas on the characteristic polynomial. We used this method at first in [6] and then in [8]. Liu et al. in [17] deduced new determinantal representations of the outer inverse A^(2)_{T,S} based on these principles as well. In this chapter we obtain detailed determinantal representations by analogues of the classical adjoint matrix for the Moore-Penrose, weighted Moore-Penrose, Drazin and weighted Drazin inverses.

2.1 Analogues of the classical adjoint matrix for the Moore-Penrose inverse

Determinantal representation of the Moore-Penrose inverse was studied in [1], [18-21]. The main result consists in the following theorem.

Theorem 2.1 The Moore-Penrose inverse A^+ = (a^+_{ij}) ∈ C^{n×m} of A ∈ C^{m×n}_r has the following determinantal representation

a^+_{ij} = ( Σ_{(α,β)∈N_r{j,i}} |(A^*)^β_α| ∂/∂a_{ji} |A^α_β| ) / ( Σ_{(γ,δ)∈N_r} |(A^*)^δ_γ| |A^γ_δ| ), 1 ≤ i ≤ n, 1 ≤ j ≤ m.

This determinantal representation of the Moore-Penrose inverse is based on the corresponding full-rank representation [18]: if A = PQ, where P ∈ C^{m×r}_r and Q ∈ C^{r×n}_r, then A^+ = Q^*(P^*AQ^*)^{-1}P^*.

For a better understanding of the structure of the Moore-Penrose inverse we consider it by the singular value decomposition of A. Let

AA^* u_i = σ_i^2 u_i, i = 1, ..., m,
A^*A v_i = σ_i^2 v_i, i = 1, ..., n,
σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0 = σ_{r+1} = σ_{r+2} = ...,

and the singular value decomposition (SVD) of A is A = UΣV^*, where

U = [u_1 u_2 ... u_m] ∈ C^{m×m}, U^*U = I_m,
V = [v_1 v_2 ... v_n] ∈ C^{n×n}, V^*V = I_n,
Σ = diag(σ_1, σ_2, ..., σ_r) ∈ C^{m×n}.

Then [3], A^+ = VΣ^+U^*, where Σ^+ = diag(σ_1^{-1}, σ_2^{-1}, ..., σ_r^{-1}).

We need the following limit representation of the Moore-Penrose inverse.

Lemma 2.1 [22] If A ∈ Cm×n, then

A^+ = lim_{λ→0} A^*(AA^* + λI)^{-1} = lim_{λ→0} (A^*A + λI)^{-1} A^*,

where λ ∈ R_+, and R_+ is the set of positive real numbers.
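Lemma 2.1 can be observed numerically: for a singular A, the regularized expression (A^*A + λI)^{-1}A^* approaches A^+ as λ → 0+. A minimal sketch (real test matrix of our own choosing; np.linalg.pinv serves as the reference):

```python
import numpy as np

# Lemma 2.1: (A*A + lambda*I)^{-1} A*  ->  A^+  as lambda -> 0+.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # 4x3, rank 2 (singular)
G = A.T @ A                       # A*A; for a real matrix, * is the transpose
A_plus = np.linalg.pinv(A)
errors = [np.linalg.norm(np.linalg.solve(G + lam * np.eye(3), A.T) - A_plus)
          for lam in (1e-1, 1e-3, 1e-5)]
# the approximation error shrinks as lambda decreases
assert errors[0] > errors[1] > errors[2]
```

Note that for rank-deficient A the limit must be taken through the regularization; the matrix A^*A itself is not invertible.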

Corollary 2.1 [23] If A ∈ C^{m×n}, then the following statements are true.

i) If rank A = n, then A^+ = (A^*A)^{-1}A^*.

ii) If rank A = m, then A^+ = A^*(AA^*)^{-1}.

iii) If rank A = n = m, then A^+ = A^{-1}.

We need the following well-known theorem about the characteristic polynomial and lemmas on the rank of some matrices.

Theorem 2.2 [24] Let d_r be the sum of the principal minors of order r of A ∈ C^{n×n}. Then its characteristic polynomial p_A(t) can be expressed as

p_A(t) = det(tI − A) = t^n − d_1 t^{n−1} + d_2 t^{n−2} − ... + (−1)^n d_n.
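Theorem 2.2 is easy to check numerically: the coefficients of det(tI − A) are, up to sign, the sums d_r of principal minors. A short sketch (random test matrix of our own choosing; np.poly builds the characteristic polynomial of a square matrix from its eigenvalues):

```python
import itertools
import numpy as np

# Check Theorem 2.2 on a random 4x4 matrix: the coefficient of t^{n-r}
# in det(tI - A) equals (-1)^r d_r, where d_r is the sum of all
# principal minors of order r.
rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))

d = [sum(np.linalg.det(A[np.ix_(c, c)])
         for c in itertools.combinations(range(n), r))
     for r in range(1, n + 1)]

coeffs = np.poly(A)   # coefficients of det(tI - A), highest power first
for r in range(1, n + 1):
    assert np.isclose(coeffs[r], (-1) ** r * d[r - 1])
```

In particular d_1 is the trace and d_n the determinant, recovering the familiar extreme coefficients.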

Lemma 2.2 If A ∈ C^{m×n}_r, then rank (A^*A)_{.i}(a^*_{.j}) ≤ r.

Proof. Let P_{ik}(−a_{jk}) ∈ C^{n×n}, (k ≠ i), be the matrix with −a_{jk} in the (i, k) entry, 1 in all diagonal entries, and 0 in others. It is the matrix of an elementary transformation. It follows that

(A^*A)_{.i}(a^*_{.j}) · Π_{k≠i} P_{ik}(−a_{jk}) =

[ Σ_{k≠j} a^*_{1k}a_{k1} ... a^*_{1j} ... Σ_{k≠j} a^*_{1k}a_{kn} ]
[ ...                    ...        ...                    ]
[ Σ_{k≠j} a^*_{nk}a_{k1} ... a^*_{nj} ... Σ_{k≠j} a^*_{nk}a_{kn} ],

where the displayed middle column is the i-th one. The matrix obtained above has the factorization A^* Ã, where Ã is obtained from A by replacing all entries of the jth row and of the ith column with zeroes except that the (j, i) entry equals 1. Elementary transformations of a matrix do not change its rank. It follows that rank (A^*A)_{.i}(a^*_{.j}) ≤ min{rank A^*, rank Ã}. Since rank Ã ≥ rank A = rank A^* and rank A^*A = rank A, the proof is completed. □

The following lemma can be proved in the same way.

Lemma 2.3 If A ∈ C^{m×n}_r, then rank (AA^*)_{i.}(a^*_{j.}) ≤ r.

Analogues of the characteristic polynomial are considered in the following two lemmas.

Lemma 2.4 If A ∈ Cm×n and λ ∈ R, then

det((λI_n + A^*A)_{.i}(a^*_{.j})) = c_1^{(ij)} λ^{n−1} + c_2^{(ij)} λ^{n−2} + ... + c_n^{(ij)}, (7)

where c_n^{(ij)} = det((A^*A)_{.i}(a^*_{.j})) and c_s^{(ij)} = Σ_{β∈J_{s,n}{i}} |((A^*A)_{.i}(a^*_{.j}))^β_β| for all s = 1, ..., n−1, i = 1, ..., n, and j = 1, ..., m.

Proof. Denote A^*A = V = (v_{ij}) ∈ C^{n×n}. Consider (λI_n + V)_{.i}(v_{.i}) ∈ C^{n×n}. Taking into account Theorem 2.2 we obtain

|(λI_n + V)_{.i}(v_{.i})| = d_1 λ^{n−1} + d_2 λ^{n−2} + ... + d_n, (8)

where d_s = Σ_{β∈J_{s,n}{i}} |V^β_β| is the sum of all principal minors of order s that contain the i-th column, for all s = 1, ..., n−1, and d_n = det V. Since v_{.i} = Σ_l a^*_{.l} a_{li}, where a^*_{.l} is the lth column-vector of A^* for all l = 1, ..., m, then we have on the one hand

|(λI + V)_{.i}(v_{.i})| = |(λI + V)_{.i}(Σ_l a^*_{.l} a_{li})| = Σ_l |(λI + V)_{.i}(a^*_{.l})| · a_{li}. (9)

Having changed the order of summation, we obtain on the other hand for all s = 1, ..., n−1

d_s = Σ_{β∈J_{s,n}{i}} |V^β_β| = Σ_{β∈J_{s,n}{i}} Σ_l |(V_{.i}(a^*_{.l} a_{li}))^β_β| = Σ_l Σ_{β∈J_{s,n}{i}} |(V_{.i}(a^*_{.l}))^β_β| · a_{li}. (10)

By substituting (9) and (10) in (8), and equating factors at a_{li} when l = j, we obtain the equality (7). □

The following lemma can be proved by analogy.

Lemma 2.5 If A ∈ C^{m×n} and λ ∈ R, then

det((λI_m + AA^*)_{j.}(a^*_{i.})) = r_1^{(ij)} λ^{m−1} + r_2^{(ij)} λ^{m−2} + ... + r_m^{(ij)},

where r_m^{(ij)} = |(AA^*)_{j.}(a^*_{i.})| and r_s^{(ij)} = Σ_{α∈I_{s,m}{j}} |((AA^*)_{j.}(a^*_{i.}))^α_α| for all s = 1, ..., m−1, i = 1, ..., n, and j = 1, ..., m.

The following theorem and remarks introduce the determinantal representations of the Moore-Penrose inverse by analogs of the classical adjoint matrix.

Theorem 2.3 If A ∈ C^{m×n}_r and r < min{m, n}, then the Moore-Penrose inverse A^+ = (a^+_{ij}) ∈ C^{n×m} possesses the following determinantal representations:

a^+_{ij} = ( Σ_{β∈J_{r,n}{i}} |((A^*A)_{.i}(a^*_{.j}))^β_β| ) / ( Σ_{β∈J_{r,n}} |(A^*A)^β_β| ), (11)

or

a^+_{ij} = ( Σ_{α∈I_{r,m}{j}} |((AA^*)_{j.}(a^*_{i.}))^α_α| ) / ( Σ_{α∈I_{r,m}} |(AA^*)^α_α| ), (12)

for all i = 1, ..., n, j = 1, ..., m.
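Before turning to the proof, representation (11) can be transcribed directly into code: each entry of A^+ is a ratio of sums of r×r minors of A^*A, with the i-th column replaced by the restriction of a^*_{.j}. The sketch below (the helper name det_rep_pinv is ours) enumerates the index sets J_{r,n}{i} with itertools; the combinatorial cost makes it a sanity check for small matrices only, with np.linalg.pinv as the reference.

```python
import itertools
import numpy as np

def det_rep_pinv(A):
    """Moore-Penrose inverse via representation (11): each entry is a ratio
    of sums of r x r minors of A*A, with column i replaced by a*_{.j}."""
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    Astar = A.conj().T                      # A*
    G = Astar @ A                           # A*A
    denom = sum(np.linalg.det(G[np.ix_(b, b)])
                for b in itertools.combinations(range(n), r))
    X = np.zeros((n, m), dtype=G.dtype)
    for i in range(n):
        for j in range(m):
            num = 0.0
            for b in itertools.combinations(range(n), r):
                if i not in b:
                    continue                # only beta containing i contribute
                Mb = G[np.ix_(b, b)].copy()
                Mb[:, b.index(i)] = Astar[list(b), j]  # (A*A)_{.i}(a*_{.j}) on beta
                num += np.linalg.det(Mb)
            X[i, j] = num / denom
    return X

A = np.array([[1., 2.], [2., 4.], [3., 6.]])   # rank 1, so r < min{m, n}
assert np.allclose(det_rep_pinv(A), np.linalg.pinv(A))
```

For full column rank (r = n) the sums collapse to single determinants and (11) reduces to the Cramer-type formula of Remark 2.1 below.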

Proof. At first we shall obtain the representation (11). If λ ∈ R_+, then the matrix (λI + A^*A) ∈ C^{n×n} is Hermitian and rank(λI + A^*A) = n. Hence, there exists its inverse

(λI + A^*A)^{-1} = (1/det(λI + A^*A)) ·
[ L_11 L_21 ... L_n1 ]
[ L_12 L_22 ... L_n2 ]
[ ...  ...      ...  ]
[ L_1n L_2n ... L_nn ],

where L_{ij} (for all i, j = 1, ..., n) is the cofactor of the (i, j) entry of λI + A^*A. By Lemma 2.1, A^+ = lim_{λ→0} (λI + A^*A)^{-1} A^*, so that

A^+ = lim_{λ→0}
[ det((λI + A^*A)_{.1}(a^*_{.1}))/det(λI + A^*A) ... det((λI + A^*A)_{.1}(a^*_{.m}))/det(λI + A^*A) ]
[ ...                                            ...                                            ]
[ det((λI + A^*A)_{.n}(a^*_{.1}))/det(λI + A^*A) ... det((λI + A^*A)_{.n}(a^*_{.m}))/det(λI + A^*A) ]. (13)

From Theorem 2.2 we get

det(λI + A^*A) = λ^n + d_1 λ^{n−1} + d_2 λ^{n−2} + ... + d_n,

where d_r (for all r = 1, ..., n−1) is the sum of the principal minors of A^*A of order r, and d_n = det A^*A. Since rank A^*A = rank A = r, then d_n = d_{n−1} = ... = d_{r+1} = 0 and

det(λI + A^*A) = λ^n + d_1 λ^{n−1} + d_2 λ^{n−2} + ... + d_r λ^{n−r}. (14)

In the same way, we have for arbitrary 1 ≤ i ≤ n and 1 ≤ j ≤ m from Lemma 2.4

det((λI + A^*A)_{.i}(a^*_{.j})) = l_1^{(ij)} λ^{n−1} + l_2^{(ij)} λ^{n−2} + ... + l_n^{(ij)},

where for an arbitrary 1 ≤ k ≤ n−1, l_k^{(ij)} = Σ_{β∈J_{k,n}{i}} |((A^*A)_{.i}(a^*_{.j}))^β_β|, and l_n^{(ij)} = det((A^*A)_{.i}(a^*_{.j})). By Lemma 2.2, rank (A^*A)_{.i}(a^*_{.j}) ≤ r, so that if k > r, then |((A^*A)_{.i}(a^*_{.j}))^β_β| = 0 for all β ∈ J_{k,n}{i} and for all i = 1, ..., n, j = 1, ..., m. Therefore, if r+1 ≤ k ≤ n, then l_k^{(ij)} = 0, and

det((λI + A^*A)_{.i}(a^*_{.j})) = l_1^{(ij)} λ^{n−1} + l_2^{(ij)} λ^{n−2} + ... + l_r^{(ij)} λ^{n−r}.

By replacing the denominators and the numerators of the fractions in (13) with the expressions (14) and the above polynomial, respectively, we obtain

A^+ = lim_{λ→0}
[ (l_1^{(11)}λ^{n−1} + ... + l_r^{(11)}λ^{n−r})/(λ^n + d_1λ^{n−1} + ... + d_rλ^{n−r}) ... (l_1^{(1m)}λ^{n−1} + ... + l_r^{(1m)}λ^{n−r})/(λ^n + d_1λ^{n−1} + ... + d_rλ^{n−r}) ]
[ ...                                                                             ...                                                                             ]
[ (l_1^{(n1)}λ^{n−1} + ... + l_r^{(n1)}λ^{n−r})/(λ^n + d_1λ^{n−1} + ... + d_rλ^{n−r}) ... (l_1^{(nm)}λ^{n−1} + ... + l_r^{(nm)}λ^{n−r})/(λ^n + d_1λ^{n−1} + ... + d_rλ^{n−r}) ]
=
[ l_r^{(11)}/d_r ... l_r^{(1m)}/d_r ]
[ ...           ...             ]
[ l_r^{(n1)}/d_r ... l_r^{(nm)}/d_r ].

From here (11) follows. We can prove (12) in the same way. □

Corollary 2.2 If A ∈ C^{m×n}_r, where r < min{m, n} or r = m < n, then the projection matrix P = A^+A = (p_{ij}) ∈ C^{n×n} possesses the determinantal representation

p_{ij} = ( Σ_{β∈J_{r,n}{i}} |((A^*A)_{.i}(ȧ_{.j}))^β_β| ) / ( Σ_{β∈J_{r,n}} |(A^*A)^β_β| ), (15)

where ȧ_{.j} denotes the jth column of A^*A, for all i, j = 1, ..., n.

Proof. Representing A^+ by (11) as A^+ = L/d_r(A^*A), where L = (l_{ij}) ∈ C^{n×m} with l_{ij} = Σ_{β∈J_{r,n}{i}} |((A^*A)_{.i}(a^*_{.j}))^β_β|, we have P = (1/d_r(A^*A)) LA. Therefore, for arbitrary 1 ≤ i, j ≤ n we get

Σ_k Σ_{β∈J_{r,n}{i}} |((A^*A)_{.i}(a^*_{.k}))^β_β| · a_{kj} = Σ_{β∈J_{r,n}{i}} |((A^*A)_{.i}(Σ_k a^*_{.k} a_{kj}))^β_β| = Σ_{β∈J_{r,n}{i}} |((A^*A)_{.i}(ȧ_{.j}))^β_β|. □

Using the representation (12) of the Moore-Penrose inverse, the following corollary can be proved in the same way.

Corollary 2.3 If A ∈ C^{m×n}_r, where r < min{m, n} or r = n < m, then the projection matrix Q = AA^+ = (q_{ij}) ∈ C^{m×m} possesses the determinantal representation

q_{ij} = ( Σ_{α∈I_{r,m}{j}} |((AA^*)_{j.}(ȧ_{i.}))^α_α| ) / ( Σ_{α∈I_{r,m}} |(AA^*)^α_α| ),

where ȧ_{i.} denotes the ith row of AA^*, for all i, j = 1, ..., m.

Remark 2.1 If rank A = n, then from Corollary 2.1 we get A^+ = (A^*A)^{-1}A^*. Representing (A^*A)^{-1} by the classical adjoint matrix, we have

A^+ = (1/det(A^*A)) ·
[ det((A^*A)_{.1}(a^*_{.1})) ... det((A^*A)_{.1}(a^*_{.m})) ]
[ ...                        ...                        ]
[ det((A^*A)_{.n}(a^*_{.1})) ... det((A^*A)_{.n}(a^*_{.m})) ]. (16)

If n < m, then (11) is valid as well.

Remark 2.2 As above, if rank A = m, then

A^+ = (1/det(AA^*)) ·
[ det((AA^*)_{1.}(a^*_{1.})) ... det((AA^*)_{m.}(a^*_{1.})) ]
[ ...                        ...                        ]
[ det((AA^*)_{1.}(a^*_{n.})) ... det((AA^*)_{m.}(a^*_{n.})) ]. (17)

If n > m, then (12) is valid as well.

Remark 2.3 By the definition of the classical adjoint Adj(A), for an arbitrary A ∈ C^{n×n} one may put Adj(A) · A = det A · I_n.

If A ∈ C^{m×n} and rank A = n, then by Corollary 2.1, A^+A = I_n. Representing the matrix A^+ by (16) as A^+ = L/det(A^*A), we obtain LA = det(A^*A) · I_n. This means that the matrix L = (l_{ij}) ∈ C^{n×m} is a left analogue of Adj(A), where A ∈ C^{m×n}_n, and l_{ij} = det((A^*A)_{.i}(a^*_{.j})) for all i = 1, ..., n, j = 1, ..., m.

If rank A = m, then by Corollary 2.1, AA^+ = I_m. Representing the matrix A^+ by (17) as A^+ = R/det(AA^*), we obtain AR = I_m · det(AA^*). This means that the matrix R = (r_{ij}) ∈ C^{n×m} is a right analogue of Adj(A), where A ∈ C^{m×n}_m, and r_{ij} = det((AA^*)_{j.}(a^*_{i.})) for all i = 1, ..., n, j = 1, ..., m.

If A ∈ C^{m×n}_r and r < min{m, n}, then by (11) we have A^+ = L/d_r(A^*A), where L = (l_{ij}) ∈ C^{n×m} and l_{ij} = Σ_{β∈J_{r,n}{i}} |((A^*A)_{.i}(a^*_{.j}))^β_β| for all i = 1, ..., n, j = 1, ..., m. From Corollary 2.2 we get LA = d_r(A^*A) · P. The matrix P is idempotent. All eigenvalues of an idempotent matrix equal 1 or 0 only. Thus, there exists a unitary matrix U such that

LA = d_r(A^*A) U diag(1, ..., 1, 0, ..., 0) U^*,

where diag(1, ..., 1, 0, ..., 0) ∈ C^{n×n} is a diagonal matrix. Therefore, the matrix L can be considered as a left analogue of Adj(A), where A ∈ C^{m×n}_r.

In the same way, if A ∈ C^{m×n}_r and r < min{m, n}, then by (12) we have A^+ = R/d_r(AA^*), where R = (r_{ij}) ∈ C^{n×m}, r_{ij} = Σ_{α∈I_{r,m}{j}} |((AA^*)_{j.}(a^*_{i.}))^α_α| for all i = 1, ..., n, j = 1, ..., m. From Corollary 2.3 we get AR = d_r(AA^*) · Q. The matrix Q is idempotent. There exists a unitary matrix V such that

AR = d_r(AA^*) V diag(1, ..., 1, 0, ..., 0) V^*,

where diag(1, ..., 1, 0, ..., 0) ∈ C^{m×m}. Therefore, the matrix R can be considered as a right analogue of Adj(A) in this case.

Remark 2.4 To obtain an entry of A^+ by Theorem 2.1 one calculates (C^r_n C^r_m + C^{r−1}_{n−1} C^{r−1}_{m−1}) determinants of order r. Whereas by the equation (11) we calculate as many as (C^r_n + C^{r−1}_{n−1}) determinants of order r, or we calculate the total of (C^r_m + C^{r−1}_{m−1}) determinants by (12). Therefore the calculation of entries of A^+ by Theorem 2.3 is easier than by Theorem 2.1.
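The operation counts in Remark 2.4 are easy to tabulate. A small sketch with math.comb (the sizes m, n, r are arbitrary illustrative values of ours):

```python
from math import comb

# Determinants of order r needed per entry of A^+:
# Theorem 2.1:  C(n,r)*C(m,r) + C(n-1,r-1)*C(m-1,r-1)
# eq. (11):     C(n,r) + C(n-1,r-1)
# eq. (12):     C(m,r) + C(m-1,r-1)
m, n, r = 10, 8, 4
full = comb(n, r) * comb(m, r) + comb(n - 1, r - 1) * comb(m - 1, r - 1)
eq11 = comb(n, r) + comb(n - 1, r - 1)
eq12 = comb(m, r) + comb(m - 1, r - 1)
print(full, eq11, eq12)  # -> 17640 105 294
```

The product counts of Theorem 2.1 grow multiplicatively in both dimensions, while (11) and (12) grow only in one of them, which is the content of the remark.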

2.2 Analogues of the classical adjoint matrix for the weighted Moore-Penrose inverse

Let Hermitian positive definite matrices M and N of order m and n, respectively, be given. The weighted Moore-Penrose inverse X = A^+_{M,N} can be explicitly expressed from the weighted singular value decomposition due to Van Loan [25].

Lemma 2.6 Let A ∈ C^{m×n}_r. There exist U ∈ C^{m×m}, V ∈ C^{n×n} satisfying U^*MU = I_m and V^*N^{-1}V = I_n such that

A = U [ D 0 ; 0 0 ] V^*.

Then the weighted Moore-Penrose inverse A^+_{M,N} can be represented as

A^+_{M,N} = N^{-1} V [ D^{-1} 0 ; 0 0 ] U^* M,

where D = diag(σ_1, σ_2, ..., σ_r), σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0, and the σ_i^2 are the nonzero eigenvalues of N^{-1}A^*MA.

For the weighted Moore-Penrose inverse X = A^+_{M,N} we have the following limit representation.

Lemma 2.7 ([26], Corollary 3.4) Let A ∈ C^{m×n}, A^♯ = N^{-1}A^*M. Then

A^+_{M,N} = lim_{λ→0} (λI + A^♯A)^{-1} A^♯.

Denote by a^♯_{.j} and a^♯_{i.} the jth column and the ith row of A^♯, respectively. By putting A^♯ instead of A^*, we obtain the proofs of the following two lemmas and theorem similar to the proofs of Lemmas 2.2, 2.4 and Theorem 2.3, respectively.

Lemma 2.8 If A^♯A ∈ C^{n×n}, then

rank (A^♯A)_{.i}(a^♯_{.j}) ≤ rank (A^♯A)

for all i = 1, ..., n and j = 1, ..., m.

Analogues of the characteristic polynomial are considered in the following lemma.

Lemma 2.9 If A ∈ Cm×n and λ ∈ R, then

det((λI_n + A^♯A)_{.i}(a^♯_{.j})) = c_1^{(ij)} λ^{n−1} + c_2^{(ij)} λ^{n−2} + ... + c_n^{(ij)},

where c_n^{(ij)} = det((A^♯A)_{.i}(a^♯_{.j})) and c_s^{(ij)} = Σ_{β∈J_{s,n}{i}} |((A^♯A)_{.i}(a^♯_{.j}))^β_β| for all s = 1, ..., n−1, i = 1, ..., n, and j = 1, ..., m.

The following theorem introduces the determinantal representation of the weighted Moore-Penrose inverse by an analogue of the classical adjoint matrix.

Theorem 2.4 If A ∈ C^{m×n}_r and r < min{m, n}, then the weighted Moore-Penrose inverse A^+_{M,N} = (ã_{ij}) ∈ C^{n×m} possesses the following determinantal representation:

ã_{ij} = ( Σ_{β∈J_{r,n}{i}} |((A^♯A)_{.i}(a^♯_{.j}))^β_β| ) / ( Σ_{β∈J_{r,n}} |(A^♯A)^β_β| ), (18)

for all i = 1, ..., n, j = 1, ..., m.
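Theorem 2.4 mirrors (11) with A^♯ = N^{-1}A^*M in place of A^*. Below is a minimal numerical sketch (the matrices A, M, N are our own small test data); the computed X is checked against the defining equations (1), (2), (3M), (4N).

```python
import itertools
import numpy as np

# Representation (18) of the weighted Moore-Penrose inverse: the minor-based
# formula of (11) applied to G = A^# A, where A^# = N^{-1} A* M.
A = np.array([[1., 2.], [2., 4.], [3., 6.]])   # rank 1
M = np.diag([1., 2., 3.])                      # Hermitian positive definite, order m = 3
N = np.diag([1., 2.])                          # Hermitian positive definite, order n = 2
m, n = A.shape
r = np.linalg.matrix_rank(A)
Asharp = np.linalg.inv(N) @ A.conj().T @ M     # A^#
G = Asharp @ A                                 # A^# A

denom = sum(np.linalg.det(G[np.ix_(b, b)])
            for b in itertools.combinations(range(n), r))
X = np.zeros((n, m))
for i in range(n):
    for j in range(m):
        num = 0.0
        for b in itertools.combinations(range(n), r):
            if i in b:
                Mb = G[np.ix_(b, b)].copy()
                Mb[:, b.index(i)] = Asharp[list(b), j]
                num += np.linalg.det(Mb)
        X[i, j] = num / denom

# X should satisfy (1), (2), (3M), (4N):
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose((M @ A @ X).T, M @ A @ X)
assert np.allclose((N @ X @ A).T, N @ X @ A)
```

For this rank-1 example the sums collapse to X = A^♯ / tr(A^♯A), which makes the check easy to follow by hand.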

2.3 Analogues of the classical adjoint matrix for the Drazin inverse

The Drazin inverse can be represented explicitly by the Jordan canonical form as follows.

Theorem 2.5 [27] If A ∈ C^{n×n} with Ind A = k and

A = P [ C 0 ; 0 N ] P^{-1},

where C is nonsingular and rank C = rank A^k, and N is nilpotent of order k, then

A^D = P [ C^{-1} 0 ; 0 0 ] P^{-1}. (19)

Stanimirović [28] introduced a determinantal representation of the Drazin inverse by the following theorem.

Theorem 2.6 The Drazin inverse A^D = (a^D_{ij}) of an arbitrary matrix A ∈ C^{n×n} with Ind A = k possesses the following determinantal representation

a^D_{ij} = ( Σ_{(α,β)∈N_{r_k}{j,i}} |(A^s)^β_α| ∂/∂a_{ji} |A^α_β| ) / ( Σ_{(γ,δ)∈N_{r_k}} |(A^s)^δ_γ| |A^γ_δ| ), 1 ≤ i, j ≤ n, (20)

where s ≥ k and r_k = rank A^s. This determinantal representation of the Drazin inverse is based on a full-rank representation.

We use the following limit representation of the Drazin inverse.

Lemma 2.10 [29] If A ∈ C^{n×n}, then

A^D = lim_{λ→0} (λI_n + A^{k+1})^{-1} A^k,

where k = Ind A, λ ∈ R_+, and R_+ is a set of the real positive numbers.

Since the equation (6) can be replaced by the following one,

XA^{k+1} = A^k, (6a)

the following lemma can be obtained by analogy to Lemma 2.10.

Lemma 2.11 If A ∈ C^{n×n}, then

A^D = lim_{λ→0} A^k (λI_n + A^{k+1})^{-1},

where k = Ind A, λ ∈ R_+, and R_+ is a set of the real positive numbers.
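Lemma 2.11 gives a practical way to approximate A^D numerically. A sketch on a hand-made singular matrix with Ind A = 2; its Drazin inverse can be written down directly, since the 1×1 core block 2 inverts to 0.5 and the nilpotent block contributes zero.

```python
import numpy as np

# Lemma 2.11: A^D = lim_{lambda -> 0} A^k (lambda*I + A^{k+1})^{-1},
# applied to a test matrix of our own with Ind A = 2.
A = np.array([[2., 0., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
k = 2                                     # Ind A: rank A^2 = rank A^3 = 1
Ak = np.linalg.matrix_power(A, k)
AD = Ak @ np.linalg.inv(1e-12 * np.eye(3) + np.linalg.matrix_power(A, k + 1))

assert np.allclose(AD, [[0.5, 0., 0.], [0., 0., 0.], [0., 0., 0.]])
assert np.allclose(A @ AD, AD @ A)                     # equation (5)
assert np.allclose(AD @ A @ AD, AD)                    # equation (2)
assert np.allclose(np.linalg.matrix_power(A, k + 1) @ AD, Ak)   # equation (6)
```

Because A is singular, the regularization λI is what makes the matrix in parentheses invertible; the limit removes its influence.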

Denote by a^{(k)}_{.j} and a^{(k)}_{i.} the jth column and the ith row of A^k, respectively. We consider the following auxiliary lemma.

Lemma 2.12 If A ∈ C^{n×n} with Ind A = k, then for all i, j = 1, ..., n

rank A^{k+1}_{i.}(a^{(k)}_{j.}) ≤ rank A^{k+1}.

Proof. The ith row of the matrix A^{k+1}_{i.}(a^{(k)}_{j.}) is a^{(k)}_{j.}, while its lth row, l ≠ i, is (Σ_s a_{ls}a^{(k)}_{s1}, ..., Σ_s a_{ls}a^{(k)}_{sn}). Let P_{li}(−a_{lj}) ∈ C^{n×n}, (l ≠ i), be a matrix with −a_{lj} in the (l, i) entry, 1 in all diagonal entries, and 0 in others. It is a matrix of an elementary transformation. Left-multiplying by Π_{l≠i} P_{li}(−a_{lj}) replaces, in each row l ≠ i, the sums Σ_s a_{ls}a^{(k)}_{s.} by Σ_{s≠j} a_{ls}a^{(k)}_{s.}, while the ith row a^{(k)}_{j.} is unchanged. The obtained matrix has the factorization Ã A^k, where Ã is obtained from A by replacing all entries of the ith row and of the jth column with zeroes except for 1 in the (i, j) entry. Elementary transformations of a matrix do not change its rank. It follows that rank A^{k+1}_{i.}(a^{(k)}_{j.}) ≤ min{rank A^k, rank Ã}. Since rank Ã ≥ rank A, the proof is completed. □

The following lemma is proved similarly.

Lemma 2.13 If A ∈ C^{n×n} with Ind A = k, then for all i, j = 1, ..., n

rank A^{k+1}_{.i}(a^{(k)}_{.j}) ≤ rank A^{k+1}.

Lemma 2.14 If A ∈ C^{n×n} and λ ∈ R, then

det((λI_n + A^{k+1})_{j.}(a^{(k)}_{i.})) = r_1^{(ij)} λ^{n−1} + r_2^{(ij)} λ^{n−2} + ... + r_n^{(ij)}, (21)

where r_n^{(ij)} = |A^{k+1}_{j.}(a^{(k)}_{i.})| and r_s^{(ij)} = Σ_{α∈I_{s,n}{j}} |(A^{k+1}_{j.}(a^{(k)}_{i.}))^α_α| for all s = 1, ..., n−1 and i, j = 1, ..., n.

Proof. Consider the matrix (λI_n + A^{k+1})_{j.}(a^{(k+1)}_{j.}) ∈ C^{n×n}. Taking into account Theorem 2.2 we obtain

|(λI_n + A^{k+1})_{j.}(a^{(k+1)}_{j.})| = d_1 λ^{n−1} + d_2 λ^{n−2} + ... + d_n, (22)

where d_s = Σ_{α∈I_{s,n}{j}} |(A^{k+1})^α_α| is the sum of all principal minors of order s that contain the j-th row, for all s = 1, ..., n−1, and d_n = det A^{k+1}. Since a^{(k+1)}_{j.} = Σ_l a_{jl} a^{(k)}_{l.}, where a^{(k)}_{l.} is the lth row-vector of A^k for all l = 1, ..., n, then we have on the one hand

|(λI + A^{k+1})_{j.}(a^{(k+1)}_{j.})| = |(λI + A^{k+1})_{j.}(Σ_l a_{jl} a^{(k)}_{l.})| = Σ_l a_{jl} · |(λI + A^{k+1})_{j.}(a^{(k)}_{l.})|. (23)

Having changed the order of summation, we obtain on the other hand for all s = 1, ..., n−1

d_s = Σ_{α∈I_{s,n}{j}} |(A^{k+1})^α_α| = Σ_{α∈I_{s,n}{j}} Σ_l |(A^{k+1}_{j.}(a_{jl} a^{(k)}_{l.}))^α_α| = Σ_l a_{jl} · Σ_{α∈I_{s,n}{j}} |(A^{k+1}_{j.}(a^{(k)}_{l.}))^α_α|. (24)

By substituting (23) and (24) in (22), and equating factors at a_{jl} when l = i, we obtain the equality (21). □

Theorem 2.7 If Ind A = k and rank A^{k+1} = rank A^k = r ≤ n for A ∈ C^{n×n}, then the Drazin inverse A^D = (a^D_{ij}) ∈ C^{n×n} possesses the following determinantal representations:

a^D_{ij} = ( Σ_{α∈I_{r,n}{j}} |(A^{k+1}_{j.}(a^{(k)}_{i.}))^α_α| ) / ( Σ_{α∈I_{r,n}} |(A^{k+1})^α_α| ), (25)

and

a^D_{ij} = ( Σ_{β∈J_{r,n}{i}} |(A^{k+1}_{.i}(a^{(k)}_{.j}))^β_β| ) / ( Σ_{β∈J_{r,n}} |(A^{k+1})^β_β| ), (26)

for all i, j = 1, ..., n.
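Before the proof, representation (25) can be transcribed into code in the same combinatorial fashion as (11): each entry of A^D is a ratio of sums of r×r minors of A^{k+1}, where the row indexed j is replaced by the ith row of A^k. The helper below (the name drazin_det_rep is ours) is again only a small-scale sanity check.

```python
import itertools
import numpy as np

def drazin_det_rep(A):
    """Drazin inverse via representation (25): entries are ratios of sums of
    r x r principal minors of A^{k+1}, with row j replaced by row i of A^k."""
    n = A.shape[0]
    k = 0
    while (np.linalg.matrix_rank(np.linalg.matrix_power(A, k + 1))
           != np.linalg.matrix_rank(np.linalg.matrix_power(A, k))):
        k += 1                              # k = Ind A
    Ak = np.linalg.matrix_power(A, k)
    Ak1 = np.linalg.matrix_power(A, k + 1)
    r = np.linalg.matrix_rank(Ak)
    denom = sum(np.linalg.det(Ak1[np.ix_(a, a)])
                for a in itertools.combinations(range(n), r))
    X = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            num = 0.0
            for a in itertools.combinations(range(n), r):
                if j in a:
                    Ma = Ak1[np.ix_(a, a)].copy()
                    Ma[a.index(j), :] = Ak[i, list(a)]  # A^{k+1}_{j.}(a^{(k)}_{i.}) on alpha
                    num += np.linalg.det(Ma)
            X[i, j] = num / denom
    return X

A = np.array([[2., 0., 0.], [0., 0., 1.], [0., 0., 0.]])   # Ind A = 2
X = drazin_det_rep(A)
assert np.allclose(X, [[0.5, 0., 0.], [0., 0., 0.], [0., 0., 0.]])
```

For Ind A = 1 the same routine returns the group inverse of Corollary 2.4 below, and for nonsingular A it reduces to Cramer's rule for A^{-1}.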

Proof. At first we shall prove the equation (25). If λ ∈ R_+, then rank(λI + A^{k+1}) = n. Hence, there exists the inverse matrix

(λI + A^{k+1})^{-1} = (1/det(λI + A^{k+1})) ·
[ R_11 R_21 ... R_n1 ]
[ R_12 R_22 ... R_n2 ]
[ ...  ...      ...  ]
[ R_1n R_2n ... R_nn ],

where R_{ij} is the cofactor of the (i, j) entry of λI + A^{k+1} for all i, j = 1, ..., n. By Lemma 2.11, A^D = lim_{λ→0} A^k (λI_n + A^{k+1})^{-1}, so that

A^D = lim_{λ→0} (1/det(λI + A^{k+1})) ·
[ Σ_s a^{(k)}_{1s} R_{1s} ... Σ_s a^{(k)}_{1s} R_{ns} ]
[ ...                    ...                    ]
[ Σ_s a^{(k)}_{ns} R_{1s} ... Σ_s a^{(k)}_{ns} R_{ns} ]
= lim_{λ→0}
[ det((λI + A^{k+1})_{1.}(a^{(k)}_{1.}))/det(λI + A^{k+1}) ... det((λI + A^{k+1})_{n.}(a^{(k)}_{1.}))/det(λI + A^{k+1}) ]
[ ...                                                      ...                                                      ]
[ det((λI + A^{k+1})_{1.}(a^{(k)}_{n.}))/det(λI + A^{k+1}) ... det((λI + A^{k+1})_{n.}(a^{(k)}_{n.}))/det(λI + A^{k+1}) ]. (27)

Taking into account Theorem 2.2, we have

det(λI + A^{k+1}) = λ^n + d_1 λ^{n−1} + d_2 λ^{n−2} + ... + d_n,

where d_s = Σ_{α∈I_{s,n}} |(A^{k+1})^α_α| is a sum of the principal minors of A^{k+1} of order s, for all s = 1, ..., n−1, and d_n = det A^{k+1}. Since rank A^{k+1} = r, then d_n = d_{n−1} = ... = d_{r+1} = 0 and

det(λI + A^{k+1}) = λ^n + d_1 λ^{n−1} + d_2 λ^{n−2} + ... + d_r λ^{n−r}. (28)

k+1 (k) (ij) n−1 (ij) n−2 (ij) det λI + A ai. = l1 λ + l2 λ + . . . + ln , j.     where for all s = 1,n − 1, α (ij) k+1 (k) ls = Aj. ai. , α α∈Is,n{j}    X

(ij) k+1 (k) and ln = det Aj. ai. .   k+1 (k) By Lemma 2.12, rank Aj. ai. ≤ r, so that if s > r, then for all α ∈ Is,n{i} and for all i, j = 1,n,  α k+1 (k) Aj. ai. = 0. α   

Therefore if r + 1 ≤ s

(ij) k+1 (k) and ln = det Aj. ai. = 0. Finally we obtain   k+1 (k) (ij) n−1 (ij) n−2 (ij) n−r det λI + A ai. = l1 λ + l2 λ + . . . + lr λ . (29) j.     18 By replacing the denominators and the nominators of the fractions in the entries of the matrix (27) with the expressions (28) and (29) respectively, finally we obtain

(11) n−1 (11) n−r (1n) n−1 (1n) n−r l1 λ +...+lr λ l1 λ +...+lr λ n n−1 n−r . . . n n−1 n−r λ +d1λ +...+drλ λ +d1λ +...+drλ AD = lim  ......  = λ→0 (n1) n−1 (n1) n−r (nn) n−1 (nn) n−r l1 λ +...+lr λ l1 λ +...+lr λ  n n−1 n−r . . . n n−1 n−r   λ +d1λ +...+drλ λ +d1λ +...+drλ    (11) (1n) lr . . . lr dr dr = ...... ,  (n1) (nn)  lr . . . lr  dr dr    where for all i, j = 1,n,

α α (ij) k+1 (k) k+1 lr = Aj. ai. , dr = A . α α α∈Ir,n{j}    α∈Ir,n   X X

The equation (26) can be proved similarly. This completes the proof. □

Using Theorem 2.7 we evidently can obtain determinantal representations of the group inverse, and the following determinantal representations of the matrices A^DA and AA^D, which act as the identity on R(A^k).

Corollary 2.4 If Ind A = 1 and rank A^2 = rank A = r ≤ n for A ∈ C^{n×n}, then the group inverse A^g = (a^g_{ij}) ∈ C^{n×n} possesses the following determinantal representations:

a^g_{ij} = ( Σ_{α∈I_{r,n}{j}} |(A^2_{j.}(a_{i.}))^α_α| ) / ( Σ_{α∈I_{r,n}} |(A^2)^α_α| ), (30)

a^g_{ij} = ( Σ_{β∈J_{r,n}{i}} |(A^2_{.i}(a_{.j}))^β_β| ) / ( Σ_{β∈J_{r,n}} |(A^2)^β_β| ),

for all i, j = 1, ..., n.

Corollary 2.5 If Ind A = k and rank A^{k+1} = rank A^k = r ≤ n for A ∈ C^{n×n}, then the matrix AA^D = (q_{ij}) ∈ C^{n×n} possesses the following determinantal representation

q_{ij} = ( Σ_{α∈I_{r,n}{j}} |(A^{k+1}_{j.}(a^{(k+1)}_{i.}))^α_α| ) / ( Σ_{α∈I_{r,n}} |(A^{k+1})^α_α| ), (31)

for all i, j = 1, ..., n.

Corollary 2.6 If Ind A = k and rank A^{k+1} = rank A^k = r ≤ n for A ∈ C^{n×n}, then the matrix A^DA = (p_{ij}) ∈ C^{n×n} possesses the following determinantal representation

p_{ij} = ( Σ_{β∈J_{r,n}{i}} |(A^{k+1}_{.i}(a^{(k+1)}_{.j}))^β_β| ) / ( Σ_{β∈J_{r,n}} |(A^{k+1})^β_β| ), (32)

for all i, j = 1, ..., n.

2.4 Analogues of the classical adjoint matrix for the W-weighted Drazin inverse

Cline and Greville [30] extended the Drazin inverse of a square matrix to rectangular matrices and called it the weighted Drazin inverse (WDI). The W-weighted Drazin inverse of A ∈ C^{m×n} with respect to the matrix W ∈ C^{n×m} is defined to be the unique solution X ∈ C^{m×n} of the following three matrix equations:

1) (AW)^{k+1}XW = (AW)^k;
2) XWAWX = X; (33)
3) AWX = XWA,

where k = max{Ind(AW), Ind(WA)}. It is denoted by X = A_{d,W}. In particular, when A ∈ C^{m×m} and W = I_m, then A_{d,W} reduces to A^D. If A ∈ C^{m×m} is a nonsingular square matrix and W = I_m, then Ind(A) = 0 and A_{d,W} = A^D = A^{-1}.

The properties of the WDI can be found in, e.g., [31-34]. We note the general algebraic structure of the W-weighted Drazin inverse [31]. Let for

A ∈ C^{m×n} and W ∈ C^{n×m} there exist L ∈ C^{m×m} and Q ∈ C^{n×n} such that

A = L [ A_11 0 ; 0 A_22 ] Q^{-1}, W = Q [ W_11 0 ; 0 W_22 ] L^{-1}.

Then

A_{d,W} = L [ (W_11 A_11 W_11)^{-1} 0 ; 0 0 ] Q^{-1},

where L, Q, A_11, W_11 are nonsingular matrices, and A_22, W_22 are nilpotent matrices. By [29] we have the following limit representations of the W-weighted Drazin inverse,

A_{d,W} = lim_{λ→0} (λI_m + (AW)^{k+2})^{-1} (AW)^k A (34)

and

A_{d,W} = lim_{λ→0} A(WA)^k (λI_n + (WA)^{k+2})^{-1}, (35)

where λ ∈ R_+, and R_+ is a set of the real positive numbers. Denote WA =: U and AW =: V, and denote V̄^k := (AW)^k A ∈ C^{m×n} and W̄ := WAW ∈ C^{n×m}. Denote by v̄^{(k)}_{.j} and v̄^{(k)}_{i.} the jth column and the ith row of V̄^k, respectively.
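The limit representation (34) can be exercised on a small hand-made pair A, W with k = max{Ind(AW), Ind(WA)} = 1; the computed X is then checked against the three defining equations (33).

```python
import numpy as np

# Limit representation (34) of the W-weighted Drazin inverse:
# A_{d,W} = lim (lambda*I_m + (AW)^{k+2})^{-1} (AW)^k A,
# on a small pair A, W of our own with k = 1.
A = np.array([[1., 0.], [0., 1.], [0., 0.]])   # 3 x 2
W = np.array([[1., 0., 0.], [0., 0., 0.]])     # 2 x 3
AW, WA = A @ W, W @ A
k = 1                                          # max{Ind(AW), Ind(WA)}
lam = 1e-12
X = np.linalg.inv(lam * np.eye(3) + np.linalg.matrix_power(AW, k + 2)) \
    @ np.linalg.matrix_power(AW, k) @ A

# The three defining equations (33):
assert np.allclose(np.linalg.matrix_power(AW, k + 1) @ X @ W,
                   np.linalg.matrix_power(AW, k))
assert np.allclose(X @ W @ A @ W @ X, X)
assert np.allclose(A @ W @ X, X @ W @ A)
```

For this pair, X comes out as [[1, 0], [0, 0], [0, 0]]: the direction killed by W contributes nothing, exactly as the block formula with the nilpotent parts A_22, W_22 predicts.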

Lemma 2.15 If AW = V = (v_{ij}) ∈ C^{m×m} with Ind V = k, then

rank V^{k+2}_{.i}(v̄^{(k)}_{.j}) ≤ rank V^{k+2}. (36)

Proof. We have V^{k+2} = V̄^k W̄. Let P_{is}(−w̄_{js}) ∈ C^{m×m}, (s ≠ i), be a matrix with −w̄_{js} in the (i, s) entry, 1 in all diagonal entries, and 0 in others. The matrix P_{is}(−w̄_{js}), (s ≠ i), is a matrix of an elementary transformation. Right-multiplying by Π_{s≠i} P_{is}(−w̄_{js}) replaces, in each column l ≠ i, the sums Σ_s v̄^{(k)}_{.s} w̄_{sl} by Σ_{s≠j} v̄^{(k)}_{.s} w̄_{sl}, while the ith column v̄^{(k)}_{.j} is unchanged. The obtained matrix has the factorization V̄^k W̃, where W̃ is obtained from W̄ = WAW by replacing all entries of the jth row and of the ith column with zeroes except for 1 in the (j, i) entry. Since elementary transformations of a matrix do not change its rank, then rank V^{k+2}_{.i}(v̄^{(k)}_{.j}) ≤ min{rank V̄^k, rank W̃}. It is obvious that

rank V̄^k = rank (AW)^k A ≥ rank (AW)^{k+2},
rank W̃ ≥ rank WAW ≥ rank (AW)^{k+2}.

From this the inequality (36) follows immediately.  The next lemma is proved similarly.

Lemma 2.16 If $WA = U = (u_{ij}) \in \mathbb{C}^{n\times n}$ with $\operatorname{Ind} U = k$, then
$$\operatorname{rank}\left(U^{k+2}\right)_{j.}\left(\bar{u}_{i.}^{(k)}\right) \le \operatorname{rank} U^{k+2},$$
where $\bar{U}^{k} := A(WA)^{k} \in \mathbb{C}^{m\times n}$.

Analogues of the characteristic polynomial are considered in the following two lemmas.

Lemma 2.17 If $AW = V = (v_{ij}) \in \mathbb{C}^{m\times m}$ with $\operatorname{Ind} V = k$ and $\lambda \in \mathbb{R}$, then
$$\left|\left(\lambda I_m + V^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right| = c_{1}^{(ij)}\lambda^{m-1} + c_{2}^{(ij)}\lambda^{m-2} + \dots + c_{m}^{(ij)}, \eqno(37)$$
where $c_{m}^{(ij)} = \det\left(V^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)$ and $c_{s}^{(ij)} = \sum\limits_{\beta\in J_{s,m}\{i\}}\left|\left(\left(V^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right|$ for all $s = \overline{1,m-1}$, $i = \overline{1,m}$, and $j = \overline{1,n}$.

Proof. Consider the matrix $\left(\lambda I + V^{k+2}\right)_{.i}\left(v_{.i}^{(k+2)}\right) \in \mathbb{C}^{m\times m}$. Taking into account Theorem 2.2 we obtain
$$\left|\left(\lambda I + V^{k+2}\right)_{.i}\left(v_{.i}^{(k+2)}\right)\right| = d_{1}\lambda^{m-1} + d_{2}\lambda^{m-2} + \dots + d_{m}, \eqno(38)$$
where $d_{s} = \sum\limits_{\beta\in J_{s,m}\{i\}}\left|\left(V^{k+2}\right)_{\beta}^{\beta}\right|$ is the sum of all principal minors of order $s$ that contain the $i$th column, for all $s = \overline{1,m-1}$, and $d_{m} = \det V^{k+2}$. Since
$$v_{.i}^{(k+2)} = \begin{pmatrix}\sum_{l}\bar{v}_{1l}^{(k)}\bar{w}_{li}\\ \sum_{l}\bar{v}_{2l}^{(k)}\bar{w}_{li}\\ \vdots\\ \sum_{l}\bar{v}_{nl}^{(k)}\bar{w}_{li}\end{pmatrix} = \sum_{l}\bar{v}_{.l}^{(k)}\bar{w}_{li},$$
where $\bar{v}_{.l}^{(k)}$ is the $l$th column-vector of $\bar{V}^{k} = (AW)^{k}A$ and $WAW = \bar{W} = (\bar{w}_{li})$ for all $l = \overline{1,n}$, we have on the one hand
$$\left|\left(\lambda I + V^{k+2}\right)_{.i}\left(v_{.i}^{(k+2)}\right)\right| = \sum_{l}\left|\left(\lambda I + V^{k+2}\right)_{.i}\left(\bar{v}_{.l}^{(k)}\right)\right|\cdot\bar{w}_{li}. \eqno(39)$$
Having changed the order of summation, we obtain on the other hand, for all $s = \overline{1,m-1}$,
$$d_{s} = \sum_{\beta\in J_{s,m}\{i\}}\left|\left(V^{k+2}\right)_{\beta}^{\beta}\right| = \sum_{\beta\in J_{s,m}\{i\}}\sum_{l}\left|\left(\left(V^{k+2}\right)_{.i}\left(\bar{v}_{.l}^{(k)}\right)\right)_{\beta}^{\beta}\right|\cdot\bar{w}_{li} = \sum_{l}\left(\sum_{\beta\in J_{s,m}\{i\}}\left|\left(\left(V^{k+2}\right)_{.i}\left(\bar{v}_{.l}^{(k)}\right)\right)_{\beta}^{\beta}\right|\right)\cdot\bar{w}_{li}. \eqno(40)$$

By substituting (39) and (40) into (38) and equating the coefficients at $\bar{w}_{li}$ when $l = j$, we obtain the equality (37). $\blacksquare$

The following lemma is proved analogously.

Lemma 2.18 If $WA = U = (u_{ij}) \in \mathbb{C}^{n\times n}$ with $\operatorname{Ind} U = k$ and $\lambda \in \mathbb{R}$, then
$$\left|\left(\lambda I + U^{k+2}\right)_{j.}\left(\bar{u}_{i.}^{(k)}\right)\right| = r_{1}^{(ij)}\lambda^{n-1} + r_{2}^{(ij)}\lambda^{n-2} + \dots + r_{n}^{(ij)},$$
where $r_{n}^{(ij)} = \det\left(U^{k+2}\right)_{j.}\left(\bar{u}_{i.}^{(k)}\right)$ and $r_{s}^{(ij)} = \sum\limits_{\alpha\in I_{s,n}\{j\}}\left|\left(\left(U^{k+2}\right)_{j.}\left(\bar{u}_{i.}^{(k)}\right)\right)_{\alpha}^{\alpha}\right|$ for all $s = \overline{1,n-1}$, $i = \overline{1,m}$, and $j = \overline{1,n}$.

Theorem 2.8 If $A \in \mathbb{C}^{m\times n}$, $W \in \mathbb{C}^{n\times m}$ with $k = \max\{\operatorname{Ind}(AW), \operatorname{Ind}(WA)\}$ and $\operatorname{rank}(AW)^{k} = r$, then the W-weighted Drazin inverse $A_{d,W} = \left(a_{ij}^{d,W}\right) \in \mathbb{C}^{m\times n}$ of $A$ with respect to $W$ possesses the following determinantal representations:
$$a_{ij}^{d,W} = \frac{\sum\limits_{\beta\in J_{r,m}\{i\}}\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,m}}\left|\left((AW)^{k+2}\right)_{\beta}^{\beta}\right|} \eqno(41)$$
or
$$a_{ij}^{d,W} = \frac{\sum\limits_{\alpha\in I_{r,n}\{j\}}\left|\left(\left((WA)^{k+2}\right)_{j.}\left(\bar{u}_{i.}^{(k)}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,n}}\left|\left((WA)^{k+2}\right)_{\alpha}^{\alpha}\right|}, \eqno(42)$$
where $\bar{v}_{.j}^{(k)}$ is the $j$th column of $\bar{V}^{k} = (AW)^{k}A$ for all $j = \overline{1,n}$ and $\bar{u}_{i.}^{(k)}$ is the $i$th row of $\bar{U}^{k} = A(WA)^{k}$ for all $i = \overline{1,m}$.
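Representation (41) can be checked by brute force, summing the bordered minors directly with `itertools.combinations`. The helper below and its test matrices are illustrative assumptions, not part of the paper; for the toy pair used here $k=2$ and $\operatorname{rank}(AW)^{k}=1$.

```python
import itertools
import numpy as np

def wdrazin_det(A, W, k, r):
    """Evaluate representation (41): entrywise ratio of sums of bordered minors."""
    m, n = A.shape
    V2 = np.linalg.matrix_power(A @ W, k + 2)      # (AW)^{k+2}
    Vbar = np.linalg.matrix_power(A @ W, k) @ A    # \bar{V}^k = (AW)^k A
    # Denominator: sum of all principal minors of order r of (AW)^{k+2}
    den = sum(np.linalg.det(V2[np.ix_(beta, beta)])
              for beta in itertools.combinations(range(m), r))
    X = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            num = 0.0
            for beta in itertools.combinations(range(m), r):
                if i not in beta:          # only index sets containing i
                    continue
                M = V2[np.ix_(beta, beta)].copy()
                M[:, beta.index(i)] = Vbar[list(beta), j]  # replace the i-th column
                num += np.linalg.det(M)
            X[i, j] = num / den
    return X

A = np.array([[2., 0.], [0., 0.], [0., 1.]])
W = np.array([[1., 0., 0.], [0., 1., 0.]])
X = wdrazin_det(A, W, k=2, r=1)
```

For this pair the known value is $A_{d,W} = \begin{pmatrix}0.5 & 0\\ 0 & 0\\ 0 & 0\end{pmatrix}$, which the minor sums reproduce.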

Proof. At first we shall prove (41). By (34),
$$A_{d,W} = \lim_{\lambda\to 0}\left(\lambda I_m + (AW)^{k+2}\right)^{-1}(AW)^{k}A.$$
Let
$$\left(\lambda I_m + (AW)^{k+2}\right)^{-1} = \frac{1}{\det\left(\lambda I_m + (AW)^{k+2}\right)}\begin{pmatrix} L_{11} & L_{21} & \dots & L_{m1}\\ L_{12} & L_{22} & \dots & L_{m2}\\ \vdots & \vdots & \ddots & \vdots\\ L_{1m} & L_{2m} & \dots & L_{mm}\end{pmatrix},$$
where $L_{ij}$ is the $ij$th cofactor of the matrix $\lambda I_m + (AW)^{k+2}$. Then we have
$$\left(\lambda I_m + (AW)^{k+2}\right)^{-1}(AW)^{k}A = \frac{1}{\det\left(\lambda I_m + (AW)^{k+2}\right)}\begin{pmatrix} \sum\limits_{s=1}^{m}L_{s1}\bar{v}_{s1}^{(k)} & \sum\limits_{s=1}^{m}L_{s1}\bar{v}_{s2}^{(k)} & \dots & \sum\limits_{s=1}^{m}L_{s1}\bar{v}_{sn}^{(k)}\\ \sum\limits_{s=1}^{m}L_{s2}\bar{v}_{s1}^{(k)} & \sum\limits_{s=1}^{m}L_{s2}\bar{v}_{s2}^{(k)} & \dots & \sum\limits_{s=1}^{m}L_{s2}\bar{v}_{sn}^{(k)}\\ \vdots & \vdots & \ddots & \vdots\\ \sum\limits_{s=1}^{m}L_{sm}\bar{v}_{s1}^{(k)} & \sum\limits_{s=1}^{m}L_{sm}\bar{v}_{s2}^{(k)} & \dots & \sum\limits_{s=1}^{m}L_{sm}\bar{v}_{sn}^{(k)}\end{pmatrix}.$$

By (34), we obtain

$$A_{d,W} = \lim_{\lambda\to 0}\begin{pmatrix}\frac{\left|\left(\lambda I_m + (AW)^{k+2}\right)_{.1}\left(\bar{v}_{.1}^{(k)}\right)\right|}{\left|\lambda I_m + (AW)^{k+2}\right|} & \dots & \frac{\left|\left(\lambda I_m + (AW)^{k+2}\right)_{.1}\left(\bar{v}_{.n}^{(k)}\right)\right|}{\left|\lambda I_m + (AW)^{k+2}\right|}\\ \vdots & \ddots & \vdots\\ \frac{\left|\left(\lambda I_m + (AW)^{k+2}\right)_{.m}\left(\bar{v}_{.1}^{(k)}\right)\right|}{\left|\lambda I_m + (AW)^{k+2}\right|} & \dots & \frac{\left|\left(\lambda I_m + (AW)^{k+2}\right)_{.m}\left(\bar{v}_{.n}^{(k)}\right)\right|}{\left|\lambda I_m + (AW)^{k+2}\right|}\end{pmatrix}. \eqno(43)$$
By Theorem 2.2 we have
$$\left|\lambda I_m + (AW)^{k+2}\right| = \lambda^{m} + d_{1}\lambda^{m-1} + d_{2}\lambda^{m-2} + \dots + d_{m},$$
where $d_{s} = \sum_{\beta\in J_{s,m}}\left|\left((AW)^{k+2}\right)_{\beta}^{\beta}\right|$ is the sum of the principal minors of $(AW)^{k+2}$ of order $s$ for all $s = \overline{1,m-1}$, and $d_{m} = \det(AW)^{k+2}$.
Since $\operatorname{rank}(AW)^{k+2} = \operatorname{rank}(AW)^{k+1} = \operatorname{rank}(AW)^{k} = r$, we have $d_{m} = d_{m-1} = \dots = d_{r+1} = 0$. It follows that
$$\det\left(\lambda I_m + (AW)^{k+2}\right) = \lambda^{m} + d_{1}\lambda^{m-1} + d_{2}\lambda^{m-2} + \dots + d_{r}\lambda^{m-r}.$$
By Lemma 2.17,
$$\left|\left(\lambda I_m + (AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right| = c_{1}^{(ij)}\lambda^{m-1} + c_{2}^{(ij)}\lambda^{m-2} + \dots + c_{m}^{(ij)}$$
for $i = \overline{1,m}$ and $j = \overline{1,n}$, where $c_{s}^{(ij)} = \sum_{\beta\in J_{s,m}\{i\}}\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right|$ for all $s = \overline{1,m-1}$ and $c_{m}^{(ij)} = \det\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)$.
We shall prove that $c_{s}^{(ij)} = 0$ when $s \ge r+1$ for $i = \overline{1,m}$ and $j = \overline{1,n}$. By Lemma 2.15, $\operatorname{rank}\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right) \le r$, so this matrix has at most $r$ linearly independent columns.
Consider $\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}$ with $\beta \in J_{s,m}\{i\}$. It is a principal submatrix of $\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)$ of order $s \ge r+1$. Deleting both its $i$th row and column, we obtain a principal submatrix of order $s-1$ of $(AW)^{k+2}$; denote it by $M$. The following cases are possible.
• Let $s = r+1$ and $\det M \ne 0$. Then all columns of $M$ are linearly independent. Adjoining one more coordinate to each of them, as columns of $\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}$, keeps them linearly independent; hence they form a basis of this matrix, and its $i$th column is a linear combination of the basis columns. Therefore $\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right| = 0$ when $\beta \in J_{s,m}\{i\}$ and $s = r+1$.

• If $s = r+1$ and $\det M = 0$, then some $p \le r$ columns form a basis of $M$ and of $\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}$, so again $\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right| = 0$.

• If $s > r+1$, then $\det M = 0$ and some $p < r$ columns form a basis of both $M$ and $\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}$; therefore $\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right| = 0$.

Thus in all cases $\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right| = 0$ when $\beta \in J_{s,m}\{i\}$ and $r+1 \le s \le m-1$. Hence
$$c_{s}^{(ij)} = \sum_{\beta\in J_{s,m}\{i\}}\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right| = 0$$
for $r+1 \le s \le m-1$, and $c_{m}^{(ij)} = \det\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right) = 0$ for $i = \overline{1,m}$ and $j = \overline{1,n}$.
Hence $\left|\left(\lambda I_m + (AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right| = c_{1}^{(ij)}\lambda^{m-1} + \dots + c_{r}^{(ij)}\lambda^{m-r}$ for $i = \overline{1,m}$ and $j = \overline{1,n}$. Substituting these values into the matrix from (43), we obtain
$$A_{d,W} = \lim_{\lambda\to 0}\begin{pmatrix}\frac{c_{1}^{(11)}\lambda^{m-1}+\dots+c_{r}^{(11)}\lambda^{m-r}}{\lambda^{m}+d_{1}\lambda^{m-1}+\dots+d_{r}\lambda^{m-r}} & \dots & \frac{c_{1}^{(1n)}\lambda^{m-1}+\dots+c_{r}^{(1n)}\lambda^{m-r}}{\lambda^{m}+d_{1}\lambda^{m-1}+\dots+d_{r}\lambda^{m-r}}\\ \vdots & \ddots & \vdots\\ \frac{c_{1}^{(m1)}\lambda^{m-1}+\dots+c_{r}^{(m1)}\lambda^{m-r}}{\lambda^{m}+d_{1}\lambda^{m-1}+\dots+d_{r}\lambda^{m-r}} & \dots & \frac{c_{1}^{(mn)}\lambda^{m-1}+\dots+c_{r}^{(mn)}\lambda^{m-r}}{\lambda^{m}+d_{1}\lambda^{m-1}+\dots+d_{r}\lambda^{m-r}}\end{pmatrix} = \begin{pmatrix}\frac{c_{r}^{(11)}}{d_{r}} & \dots & \frac{c_{r}^{(1n)}}{d_{r}}\\ \vdots & \ddots & \vdots\\ \frac{c_{r}^{(m1)}}{d_{r}} & \dots & \frac{c_{r}^{(mn)}}{d_{r}}\end{pmatrix},$$
where $c_{r}^{(ij)} = \sum_{\beta\in J_{r,m}\{i\}}\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\bar{v}_{.j}^{(k)}\right)\right)_{\beta}^{\beta}\right|$ and $d_{r} = \sum_{\beta\in J_{r,m}}\left|\left((AW)^{k+2}\right)_{\beta}^{\beta}\right|$.
Thus we have obtained the determinantal representation of $A_{d,W}$ given by (41). Representation (42) can be proved analogously. $\blacksquare$

3 Cramer’s rules for generalized inverse solutions of systems of linear equations

An obvious consequence of the determinantal representation of the inverse matrix by the classical adjoint matrix is Cramer's rule. As is well known, Cramer's rule gives an explicit expression for the solution of a nonsingular system of linear equations. In [35], Robinson gave an elegant proof of Cramer's rule, which aroused great interest in finding determinantal formulas for solutions of some restricted linear systems, both consistent and inconsistent. The topic has been widely discussed by Robinson [35], Ben-Israel [36], Verghese [37], Werner [38], Chen [39], Ji [40], Wang [41], and Wei [33].

In this section we demonstrate that the obtained analogues of the adjoint matrix for the generalized inverse matrices enable us to obtain natural analogues of Cramer's rule for generalized inverse solutions of systems of linear equations.

3.1 Cramer's rule for the least squares solution with the minimum norm

Definition 3.1 Consider a complex system of linear equations
$$A\cdot\mathbf{x} = \mathbf{y}, \eqno(44)$$
where $A \in \mathbb{C}_{r}^{m\times n}$ is the coefficient matrix and $\mathbf{y} = (y_{1},\dots,y_{m})^{T} \in \mathbb{C}^{m}$ is a column of constants. The least squares solution with the minimum norm of (44) is the vector $\mathbf{x}^{0} \in \mathbb{C}^{n}$ satisfying
$$\left\|\mathbf{x}^{0}\right\| = \min_{\tilde{\mathbf{x}}\in\mathbb{C}^{n}}\left\{\left\|\tilde{\mathbf{x}}\right\| \;:\; \left\|A\cdot\tilde{\mathbf{x}} - \mathbf{y}\right\| = \min_{\mathbf{x}\in\mathbb{C}^{n}}\left\|A\cdot\mathbf{x} - \mathbf{y}\right\|\right\},$$
where $\mathbb{C}^{n}$ is an $n$-dimensional complex vector space.

If the equation (44) has no exact solutions, then $\mathbf{x}^{0}$ is its optimal approximation. The following important proposition is well known.

Theorem 3.1 [23] The vector x = A+y is the least squares solution with the minimum norm of the system (44).

Theorem 3.2 The following statements are true for the system of linear equations (44).

i) If $\operatorname{rank} A = n$, then the components of the least squares solution with the minimum norm $\mathbf{x}^{0} = \left(x_{1}^{0},\dots,x_{n}^{0}\right)^{T}$ are obtained by the formula
$$x_{j}^{0} = \frac{\det\left(A^{*}A\right)_{.j}\left(\mathbf{f}\right)}{\det A^{*}A}, \quad \forall j = \overline{1,n}, \eqno(45)$$
where $\mathbf{f} = A^{*}\mathbf{y}$.

ii) If $\operatorname{rank} A = r < n$, then
$$x_{j}^{0} = \frac{\sum\limits_{\beta\in J_{r,n}\{j\}}\left|\left(\left(A^{*}A\right)_{.j}\left(\mathbf{f}\right)\right)_{\beta}^{\beta}\right|}{d_{r}\left(A^{*}A\right)}, \quad \forall j = \overline{1,n}. \eqno(46)$$

Proof. i) If $\operatorname{rank} A = n$, then we can represent $A^{+}$ by (16). Multiplying $A^{+}$ by $\mathbf{y}$, we get (45). ii) If $\operatorname{rank} A = r < n$, the statement follows analogously from the corresponding determinantal representation of $A^{+}$. $\blacksquare$
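A minimal sketch of formula (45) on toy data (the matrix `A` and vector `y` are assumptions): in the full-column-rank case the rule is just Cramer's rule applied to the normal equations $A^{*}A\,\mathbf{x} = A^{*}\mathbf{y}$, so it can be checked against NumPy's least squares solver.

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.], [0., 2.]])   # rank A = n = 2 (toy data)
y = np.array([1., 2., 3.])
G = A.T @ A        # A*A (real case: conjugate transpose = transpose)
f = A.T @ y        # f = A* y

x = np.empty(2)
for j in range(2):
    Gj = G.copy()
    Gj[:, j] = f                     # replace the j-th column by f
    x[j] = np.linalg.det(Gj) / np.linalg.det(G)

x_ref = np.linalg.lstsq(A, y, rcond=None)[0]   # reference least squares solution
```

Both computations give $\mathbf{x}^{0} = (7/9,\ 13/9)^{T}$ for this data.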

Theorem 3.3 The following statements are true for a system of linear equations written in the form $\mathbf{x}\cdot A = \mathbf{y}$.

i) If $\operatorname{rank} A = m$, then the components of the least squares solution $\mathbf{x}^{0} = \mathbf{y}A^{+}$ are obtained by the formula
$$x_{i}^{0} = \frac{\det\left(AA^{*}\right)_{i.}\left(\mathbf{g}\right)}{\det AA^{*}}, \quad \forall i = \overline{1,m},$$
where $\mathbf{g} = \mathbf{y}A^{*}$.

ii) If $\operatorname{rank} A = r < m$, then
$$x_{i}^{0} = \frac{\sum\limits_{\alpha\in I_{r,m}\{i\}}\left|\left(\left(AA^{*}\right)_{i.}\left(\mathbf{g}\right)\right)_{\alpha}^{\alpha}\right|}{d_{r}\left(AA^{*}\right)}, \quad \forall i = \overline{1,m}.$$

Proof. The proof of this theorem is analogous to that of Theorem 3.2.

Remark 3.1 The obtained formulas of the Cramer rule for the least squares solution differ from similar formulas in [36, 38–41]. They give a closer ap- proximation to the Cramer rule for consistent nonsingular systems of linear equations.

3.2 Cramer's rule for the Drazin inverse solution

In some situations, however, more attention is paid to the Drazin inverse solution of singular linear systems [42-45]. Consider a general system of linear equations (44), where $A \in \mathbb{C}^{n\times n}$ and $\mathbf{x}$, $\mathbf{y}$ are vectors in $\mathbb{C}^{n}$. $R(A)$ denotes the range of $A$ and $N(A)$ denotes the null space of $A$. The characterization of the Drazin inverse solution $A^{D}\mathbf{y}$ is given in [26] by the following theorem.

Theorem 3.4 Let A ∈ Cn×n with Ind(A) = k. Then ADy is both the unique solution in R(Ak) of

$$A^{k+1}\mathbf{x} = A^{k}\mathbf{y}, \eqno(47)$$
and the unique minimal P-norm least squares solution of (44).

Remark 3.2 The P-norm is defined as $\|\mathbf{x}\|_{P} = \left\|P^{-1}\mathbf{x}\right\|$ for $\mathbf{x} \in \mathbb{C}^{n}$, where $P$ is a nonsingular matrix that transforms $A$ into its Jordan canonical form (19).

In other words, the Drazin inverse solution $\mathbf{x} = A^{D}\mathbf{y}$ is the unique solution of the following problem: for given $A$ and $\mathbf{y} \in R\left(A^{k}\right)$, find a vector $\mathbf{x} \in R\left(A^{k}\right)$ satisfying $A\mathbf{x} = \mathbf{y}$, $\operatorname{Ind} A = k$. In general, unlike $A^{+}\mathbf{y}$, the Drazin inverse solution $A^{D}\mathbf{y}$ is not a true solution of a singular system (44), even if the system is consistent. However, Theorem 3.4 means that $A^{D}\mathbf{y}$ is the unique minimal P-norm least squares solution of (44).
The determinantal representation of the P-norm least squares solution of the system of linear equations (44) by the determinantal representation of the Drazin inverse (20) was introduced in [46]. We obtain Cramer's rule for the P-norm least squares solution (the Drazin inverse solution) of (44) in the following theorem.

Theorem 3.5 Let $A \in \mathbb{C}^{n\times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank} A^{k+1} = \operatorname{rank} A^{k} = r$. Then the unique minimal P-norm least squares solution $\hat{\mathbf{x}} = (\hat{x}_{1},\dots,\hat{x}_{n})^{T}$ of the system (44) is given by
$$\hat{x}_{i} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{k+1}\right)_{.i}\left(\mathbf{f}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,n}}\left|\left(A^{k+1}\right)_{\beta}^{\beta}\right|}, \quad \forall i = \overline{1,n}, \eqno(48)$$
where $\mathbf{f} = A^{k}\mathbf{y}$.

Proof. Representing the Drazin inverse by (26) and by virtue of Theorem 3.4, we have
$$\hat{\mathbf{x}} = \begin{pmatrix}\hat{x}_{1}\\ \vdots\\ \hat{x}_{n}\end{pmatrix} = A^{D}\mathbf{y} = \frac{1}{d_{r}\left(A^{k+1}\right)}\begin{pmatrix}\sum_{s=1}^{n} d_{1s}y_{s}\\ \vdots\\ \sum_{s=1}^{n} d_{ns}y_{s}\end{pmatrix}.$$
Therefore,
$$\hat{x}_{i} = \frac{1}{d_{r}\left(A^{k+1}\right)}\sum_{s=1}^{n}\sum_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{k+1}\right)_{.i}\left(\mathbf{a}_{.s}^{(k)}\right)\right)_{\beta}^{\beta}\right|\cdot y_{s} = \frac{1}{d_{r}\left(A^{k+1}\right)}\sum_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{k+1}\right)_{.i}\left(\sum_{s=1}^{n}\mathbf{a}_{.s}^{(k)}y_{s}\right)\right)_{\beta}^{\beta}\right|.$$

From this, (48) follows immediately. $\blacksquare$

If we present a system of linear equations as
$$\mathbf{x}A = \mathbf{y}, \eqno(49)$$
where $A \in \mathbb{C}^{n\times n}$ with $\operatorname{Ind}(A) = k$ and $\operatorname{rank} A^{k+1} = \operatorname{rank} A^{k} = r$, then by using the determinantal representation of the Drazin inverse (25) we obtain the following analogue of Cramer's rule for the Drazin inverse solution of (49):
$$\hat{x}_{i} = \frac{\sum\limits_{\alpha\in I_{r,n}\{i\}}\left|\left(\left(A^{k+1}\right)_{i.}\left(\mathbf{g}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,n}}\left|\left(A^{k+1}\right)_{\alpha}^{\alpha}\right|}, \quad \forall i = \overline{1,n},$$
where $\mathbf{g} = \mathbf{y}A^{k}$.

3.3 Cramer's rule for the W-weighted Drazin inverse solution

Consider the restricted linear equation
$$WAW\mathbf{x} = \mathbf{y}, \eqno(50)$$
where $A \in \mathbb{C}^{m\times n}$, $W \in \mathbb{C}^{n\times m}$, $k_{1} = \operatorname{Ind}(AW)$, $k_{2} = \operatorname{Ind}(WA)$, with $\mathbf{y} \in R\left((WA)^{k_{2}}\right)$ and $\operatorname{rank}(WA)^{k_{2}} = \operatorname{rank}(AW)^{k_{1}} = r$. In [33], Wei has shown that there exists a unique solution $A_{d,W}\mathbf{y}$ of the linear equation (50), and he has given a Cramer rule for the W-weighted Drazin inverse solution of (50) by the following theorem.

Theorem 3.6 Let $A$, $W$ be the same as in (50). Suppose that $U \in \mathbb{C}_{n-r}^{n\times(n-r)}$ and $V \in \mathbb{C}_{m-r}^{m\times(m-r)}$ are matrices whose columns form bases for $N\left(\left((WA)^{k_{2}}\right)^{*}\right)$ and $N\left((AW)^{k_{1}}\right)$, respectively. Then the unique W-weighted Drazin inverse solution $\mathbf{x} = (x_{1},\dots,x_{m})^{T}$ of (50) satisfies
$$x_{i} = \det\begin{pmatrix} WAW(i\to\mathbf{y}) & U\\ V(i\to\mathbf{0}) & 0\end{pmatrix} \Big/ \det\begin{pmatrix} WAW & U\\ V & 0\end{pmatrix},$$
where $i = 1, 2, \dots, m$.

Let $k = \max\{k_{1}, k_{2}\}$ and denote $\mathbf{f} = (AW)^{k}A\cdot\mathbf{y}$. Then by Theorem 2.8, using the determinantal representation (41) of the W-weighted Drazin inverse $A_{d,W}$, we evidently obtain the following Cramer's rule for the W-weighted Drazin inverse solution of (50):
$$x_{i} = \frac{\sum\limits_{\beta\in J_{r,m}\{i\}}\left|\left(\left((AW)^{k+2}\right)_{.i}\left(\mathbf{f}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,m}}\left|\left((AW)^{k+2}\right)_{\beta}^{\beta}\right|}, \eqno(51)$$
where $i = 1, 2, \dots, m$.

Remark 3.3 Note that for (51), unlike Theorem 3.6, we do not need the auxiliary matrices $U$ and $V$.
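A small numerical illustration of solving the restricted equation (50) as $\mathbf{x} = A_{d,W}\mathbf{y}$. The data are toy assumptions, and $A_{d,W}$ is computed via the Cline-Greville identity $A_{d,W} = A\left((WA)^{D}\right)^{2}$ rather than via (51).

```python
import numpy as np

def drazin(M, k):
    # M^D = M^k (M^{2k+1})^+ M^k with k >= Ind(M) (standard representation)
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

A = np.array([[2., 0.], [0., 0.], [0., 1.]])
W = np.array([[1., 0., 0.], [0., 1., 0.]])
Adw = A @ np.linalg.matrix_power(drazin(W @ A, 1), 2)  # A_{d,W}

y = np.array([4., 0.])   # y lies in R((WA)^{k_2}), as (50) requires
x = Adw @ y              # W-weighted Drazin inverse solution of WAWx = y
```

Multiplying back, $WAW\mathbf{x}$ recovers $\mathbf{y}$, confirming that $\mathbf{x}$ solves the restricted equation.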

3.4 Examples

1. Let us consider the system of linear equations

$$\begin{cases} 2x_{1} - 5x_{3} + 4x_{4} = 1,\\ 7x_{1} - 4x_{2} - 9x_{3} + 1.5x_{4} = 2,\\ 3x_{1} - 4x_{2} + 7x_{3} - 6.5x_{4} = 3,\\ x_{1} - 4x_{2} + 12x_{3} - 10.5x_{4} = 1. \end{cases} \eqno(52)$$

The coefficient matrix of the system is
$$A = \begin{pmatrix} 2 & 0 & -5 & 4\\ 7 & -4 & -9 & 1.5\\ 3 & -4 & 7 & -6.5\\ 1 & -4 & 12 & -10.5 \end{pmatrix}.$$
We calculate the rank of $A$, which is equal to $3$, and we have
$$A^{*} = \begin{pmatrix} 2 & 7 & 3 & 1\\ 0 & -4 & -4 & -4\\ -5 & -9 & 7 & 12\\ 4 & 1.5 & -6.5 & -10.5 \end{pmatrix}, \qquad A^{*}A = \begin{pmatrix} 63 & -44 & -40 & -11.5\\ -44 & 48 & -40 & 62\\ -40 & -40 & 299 & -205\\ -11.5 & 62 & -205 & 170.75 \end{pmatrix}.$$
At first we obtain the entries of $A^{+}$ by (16):

$$d_{3}\left(A^{*}A\right) = \begin{vmatrix} 63 & -44 & -40\\ -44 & 48 & -40\\ -40 & -40 & 299 \end{vmatrix} + \begin{vmatrix} 63 & -44 & -11.5\\ -44 & 48 & 62\\ -11.5 & 62 & 170.75 \end{vmatrix} + \begin{vmatrix} 63 & -40 & -11.5\\ -40 & 299 & -205\\ -11.5 & -205 & 170.75 \end{vmatrix} + \begin{vmatrix} 48 & -40 & 62\\ -40 & 299 & -205\\ 62 & -205 & 170.75 \end{vmatrix} = 102060,$$
$$l_{11} = \begin{vmatrix} 2 & -44 & -40\\ 0 & 48 & -40\\ -5 & -40 & 299 \end{vmatrix} + \begin{vmatrix} 2 & -44 & -11.5\\ 0 & 48 & 62\\ 4 & 62 & 170.75 \end{vmatrix} + \begin{vmatrix} 2 & -40 & -11.5\\ -5 & 299 & -205\\ 4 & -205 & 170.75 \end{vmatrix} = 25779,$$
and so forth. Continuing in the same way, we get
$$A^{+} = \frac{1}{102060}\begin{pmatrix} 25779 & -4905 & 20742 & -5037\\ -3840 & -2880 & -4800 & -960\\ 28350 & -17010 & 22680 & -5670\\ 39558 & -18810 & 26484 & -13074 \end{pmatrix}.$$
Now we obtain the least squares solution of the system (52) by the matrix method:
$$\mathbf{x}^{0} = \begin{pmatrix} x_{1}^{0}\\ x_{2}^{0}\\ x_{3}^{0}\\ x_{4}^{0}\end{pmatrix} = \frac{1}{102060}\begin{pmatrix} 25779 & -4905 & 20742 & -5037\\ -3840 & -2880 & -4800 & -960\\ 28350 & -17010 & 22680 & -5670\\ 39558 & -18810 & 26484 & -13074 \end{pmatrix}\begin{pmatrix}1\\2\\3\\1\end{pmatrix} = \frac{1}{102060}\begin{pmatrix}73158\\-24960\\56700\\68316\end{pmatrix} = \begin{pmatrix}\frac{12193}{17010}\\ -\frac{416}{1701}\\ \frac{5}{9}\\ \frac{5693}{8505}\end{pmatrix}.$$
Next we get the least squares solution with minimum norm of the system (52) by the Cramer rule (46), where

$$\mathbf{f} = A^{*}\mathbf{y} = \begin{pmatrix} 2 & 7 & 3 & 1\\ 0 & -4 & -4 & -4\\ -5 & -9 & 7 & 12\\ 4 & 1.5 & -6.5 & -10.5 \end{pmatrix}\begin{pmatrix}1\\2\\3\\1\end{pmatrix} = \begin{pmatrix}26\\-24\\10\\-23\end{pmatrix}.$$
Thus we have
$$x_{1}^{0} = \frac{1}{102060}\left(\begin{vmatrix} 26 & -44 & -40\\ -24 & 48 & -40\\ 10 & -40 & 299 \end{vmatrix} + \begin{vmatrix} 26 & -44 & -11.5\\ -24 & 48 & 62\\ -23 & 62 & 170.75 \end{vmatrix} + \begin{vmatrix} 26 & -40 & -11.5\\ 10 & 299 & -205\\ -23 & -205 & 170.75 \end{vmatrix}\right) = \frac{73158}{102060} = \frac{12193}{17010};$$
$$x_{2}^{0} = \frac{1}{102060}\left(\begin{vmatrix} 63 & 26 & -40\\ -44 & -24 & -40\\ -40 & 10 & 299 \end{vmatrix} + \begin{vmatrix} 63 & 26 & -11.5\\ -44 & -24 & 62\\ -11.5 & -23 & 170.75 \end{vmatrix} + \begin{vmatrix} -24 & -40 & 62\\ 10 & 299 & -205\\ -23 & -205 & 170.75 \end{vmatrix}\right) = \frac{-24960}{102060} = -\frac{416}{1701};$$
$$x_{3}^{0} = \frac{1}{102060}\left(\begin{vmatrix} 63 & -44 & 26\\ -44 & 48 & -24\\ -40 & -40 & 10 \end{vmatrix} + \begin{vmatrix} 63 & 26 & -11.5\\ -40 & 10 & -205\\ -11.5 & -23 & 170.75 \end{vmatrix} + \begin{vmatrix} 48 & -24 & 62\\ -40 & 10 & -205\\ 62 & -23 & 170.75 \end{vmatrix}\right) = \frac{56700}{102060} = \frac{5}{9};$$
$$x_{4}^{0} = \frac{1}{102060}\left(\begin{vmatrix} 63 & -44 & 26\\ -44 & 48 & -24\\ -11.5 & 62 & -23 \end{vmatrix} + \begin{vmatrix} 63 & -40 & 26\\ -40 & 299 & 10\\ -11.5 & -205 & -23 \end{vmatrix} + \begin{vmatrix} 48 & -40 & -24\\ -40 & 299 & 10\\ 62 & -205 & -23 \end{vmatrix}\right) = \frac{68316}{102060} = \frac{5693}{8505}.$$

2. Let us consider the following system of linear equations:
$$\begin{cases} x_{1} - x_{2} + x_{3} + x_{4} = 1,\\ x_{2} - x_{3} + x_{4} = 2,\\ x_{1} - x_{2} + x_{3} + 2x_{4} = 3,\\ x_{1} - x_{2} + x_{3} + x_{4} = 1. \end{cases} \eqno(53)$$
The coefficient matrix of the system is
$$A = \begin{pmatrix} 1 & -1 & 1 & 1\\ 0 & 1 & -1 & 1\\ 1 & -1 & 1 & 2\\ 1 & -1 & 1 & 1 \end{pmatrix}.$$
It is easy to verify that
$$A^{2} = \begin{pmatrix} 3 & -4 & 4 & 3\\ 0 & 1 & -1 & 0\\ 4 & -5 & 5 & 4\\ 3 & -4 & 4 & 3 \end{pmatrix}, \qquad A^{3} = \begin{pmatrix} 10 & -14 & 14 & 10\\ -1 & 2 & -2 & -1\\ 13 & -18 & 18 & 13\\ 10 & -14 & 14 & 10 \end{pmatrix},$$
and $\operatorname{rank} A = 3$, $\operatorname{rank} A^{2} = \operatorname{rank} A^{3} = 2$. This implies $k = \operatorname{Ind}(A) = 2$. We obtain the entries of $A^{D}$ by (26):
$$d_{2}\left(A^{3}\right) = \begin{vmatrix} 10 & -14\\ -1 & 2 \end{vmatrix} + \begin{vmatrix} 10 & 14\\ 13 & 18 \end{vmatrix} + \begin{vmatrix} 10 & 10\\ 10 & 10 \end{vmatrix} + \begin{vmatrix} 2 & -2\\ -18 & 18 \end{vmatrix} + \begin{vmatrix} 2 & -1\\ -14 & 10 \end{vmatrix} + \begin{vmatrix} 18 & 13\\ 14 & 10 \end{vmatrix} = 8,$$
$$d_{11} = \begin{vmatrix} 3 & -14\\ 0 & 2 \end{vmatrix} + \begin{vmatrix} 3 & 14\\ 4 & 18 \end{vmatrix} + \begin{vmatrix} 3 & 10\\ 3 & 10 \end{vmatrix} = 4,$$
and so forth. Continuing in the same way, we get
$$A^{D} = \begin{pmatrix} 0.5 & 0.5 & -0.5 & 0.5\\ 1.75 & 2.5 & -2.5 & 1.75\\ 1.25 & 1.5 & -1.5 & 1.25\\ 0.5 & 0.5 & -0.5 & 0.5 \end{pmatrix}.$$
Now we obtain the Drazin inverse solution $\hat{\mathbf{x}}$ of the system (53) by the Cramer rule (48), where
$$\mathbf{g} = A^{2}\mathbf{y} = \begin{pmatrix} 3 & -4 & 4 & 3\\ 0 & 1 & -1 & 0\\ 4 & -5 & 5 & 4\\ 3 & -4 & 4 & 3 \end{pmatrix}\begin{pmatrix}1\\2\\3\\1\end{pmatrix} = \begin{pmatrix}10\\-1\\13\\10\end{pmatrix}.$$
Thus we have
$$\hat{x}_{1} = \frac{1}{8}\left(\begin{vmatrix} 10 & -14\\ -1 & 2 \end{vmatrix} + \begin{vmatrix} 10 & 14\\ 13 & 18 \end{vmatrix} + \begin{vmatrix} 10 & 10\\ 10 & 10 \end{vmatrix}\right) = \frac{1}{2},$$
$$\hat{x}_{2} = \frac{1}{8}\left(\begin{vmatrix} 10 & 10\\ -1 & -1 \end{vmatrix} + \begin{vmatrix} -1 & -2\\ 13 & 18 \end{vmatrix} + \begin{vmatrix} -1 & -1\\ 10 & 10 \end{vmatrix}\right) = 1,$$
$$\hat{x}_{3} = \frac{1}{8}\left(\begin{vmatrix} 10 & 10\\ 13 & 13 \end{vmatrix} + \begin{vmatrix} 2 & -1\\ -18 & 13 \end{vmatrix} + \begin{vmatrix} 13 & 13\\ 10 & 10 \end{vmatrix}\right) = 1,$$
$$\hat{x}_{4} = \frac{1}{8}\left(\begin{vmatrix} 10 & 10\\ 10 & 10 \end{vmatrix} + \begin{vmatrix} 2 & -1\\ -14 & 10 \end{vmatrix} + \begin{vmatrix} 18 & 13\\ 14 & 10 \end{vmatrix}\right) = \frac{1}{2}.$$
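This Drazin inverse solution can be reproduced numerically. The helper `drazin` below uses the standard representation $A^{D} = A^{k}\left(A^{2k+1}\right)^{+}A^{k}$, which is an assumption of this sketch (it is not the determinantal formula of the paper); Theorem 3.4's characterization $A^{k+1}\hat{\mathbf{x}} = A^{k}\mathbf{y}$ serves as the check.

```python
import numpy as np

def drazin(M, k):
    # Standard representation M^D = M^k (M^{2k+1})^+ M^k, k >= Ind(M)
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

A = np.array([[1., -1., 1., 1.],
              [0., 1., -1., 1.],
              [1., -1., 1., 2.],
              [1., -1., 1., 1.]])
y = np.array([1., 2., 3., 1.])

AD = drazin(A, 2)   # Ind(A) = 2 for this matrix
x = AD @ y          # Drazin inverse solution of (53): (1/2, 1, 1, 1/2)
```

One can also check directly that this $\hat{\mathbf{x}}$ lies in $R\left(A^{2}\right)$, as Theorem 3.4 requires.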

4 Cramer's rule for generalized inverse solutions of some matrix equations

Matrix equations form one of the important study fields of linear algebra. Linear matrix equations such as
$$AX = C, \eqno(54)$$
$$XB = D, \eqno(55)$$
and
$$AXB = D \eqno(56)$$
play an important role in linear system theory, and therefore a large number of papers have presented methods for solving these matrix equations [47-51]. In [52], Khatri and Mitra studied the Hermitian solutions to the matrix equations (57) and (56) over the complex field and the system of the equations (57) and (55). Wang, in [53,54], and Li and Wu, in [55], studied the bisymmetric, symmetric and skew-antisymmetric least squares solutions to this system over the quaternion skew field. Extreme ranks of real matrices in the least squares solution of the equation (56) were investigated in [56] over the complex field and in [57] over the quaternion skew field.
As we know, Cramer's rule gives an explicit expression for the solution of nonsingular linear equations. Robinson's result [35] aroused great interest in finding determinantal representations of least squares solutions as analogues of Cramer's rule for matrix equations (for example, [58-60]). The Cramer rule for solutions of the restricted matrix equations (57), (55) and (56) was established in [61-63].
In this section, we obtain analogues of Cramer's rule for generalized inverse solutions of the matrix equations (57), (55) and (63) without any restriction, namely for the minimum norm least squares solution and the Drazin inverse solution. We show numerical examples to illustrate the main results as well.

4.1 Cramer's rule for the minimum norm least squares solution of some matrix equations

Definition 4.1 Consider a matrix equation
$$AX = B, \eqno(57)$$
where $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{m\times s}$ are given and $X \in \mathbb{C}^{n\times s}$ is unknown. Suppose
$$S_{1} = \left\{X \mid X \in \mathbb{C}^{n\times s},\ \|AX - B\| = \min\right\}.$$
Then matrices $X \in \mathbb{C}^{n\times s}$ such that $X \in S_{1}$ are called least squares solutions of the matrix equation (57). If $X_{LS} = \min_{X\in S_{1}}\|X\|$, then $X_{LS}$ is called the minimum norm least squares solution of (57).

If the equation (57) has no exact solutions, then $X_{LS}$ is its optimal approximation. The following important proposition is well known.

Lemma 4.1 ([40]) The least squares solutions of (57) are
$$X = A^{+}B + \left(I_{n} - A^{+}A\right)C,$$
where $A \in \mathbb{C}^{m\times n}$, $B \in \mathbb{C}^{m\times s}$ are given, and $C \in \mathbb{C}^{n\times s}$ is an arbitrary matrix. The minimum norm least squares solution is $X_{LS} = A^{+}B$.

We denote $A^{*}B =: \hat{B} = (\hat{b}_{ij}) \in \mathbb{C}^{n\times s}$.

Theorem 4.1 (i) If $\operatorname{rank} A = r < n$, then for the minimum norm least squares solution $X_{LS} = (x_{ij}) \in \mathbb{C}^{n\times s}$ of (57) we have, for all $i = \overline{1,n}$, $j = \overline{1,s}$,
$$x_{ij} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{*}A\right)_{.i}\left(\hat{\mathbf{b}}_{.j}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,n}}\left|\left(A^{*}A\right)_{\beta}^{\beta}\right|}. \eqno(58)$$
(ii) If $\operatorname{rank} A = n$, then
$$x_{ij} = \frac{\det\left(A^{*}A\right)_{.i}\left(\hat{\mathbf{b}}_{.j}\right)}{\det\left(A^{*}A\right)}, \eqno(59)$$
where $\hat{\mathbf{b}}_{.j}$ is the $j$th column of $\hat{B}$ for all $j = \overline{1,s}$.

Proof. (i) If $\operatorname{rank} A = r < n$, then we can represent $A^{+}$ by its determinantal representation from Theorem 2.3. Therefore, for all $i = \overline{1,n}$, $j = \overline{1,s}$, we obtain
$$x_{ij} = \sum_{k=1}^{m} a_{ik}^{+}b_{kj} = \sum_{k=1}^{m}\frac{\sum\limits_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{*}A\right)_{.i}\left(\mathbf{a}_{.k}^{*}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,n}}\left|\left(A^{*}A\right)_{\beta}^{\beta}\right|}\cdot b_{kj} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{*}A\right)_{.i}\left(\sum_{k=1}^{m}\mathbf{a}_{.k}^{*}b_{kj}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,n}}\left|\left(A^{*}A\right)_{\beta}^{\beta}\right|}.$$
Since
$$\sum_{k}\mathbf{a}_{.k}^{*}b_{kj} = \begin{pmatrix}\sum_{k}a_{1k}^{*}b_{kj}\\ \sum_{k}a_{2k}^{*}b_{kj}\\ \vdots\\ \sum_{k}a_{nk}^{*}b_{kj}\end{pmatrix} = \hat{\mathbf{b}}_{.j},$$
(58) follows.
(ii) The proof of this case is similar to that of (i), using Corollary 2.1. $\blacksquare$
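A quick numerical sketch of the full-column-rank case (59) on assumed toy data, checked against the Moore-Penrose solution $X_{LS} = A^{+}B$ of $AX = B$.

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.], [0., 2.]])   # rank A = n = 2 (toy data)
B = np.array([[1., 2.], [0., 1.], [1., 0.]])
G = A.T @ A          # A*A
Bh = A.T @ B         # B-hat = A*B

X = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        Gi = G.copy()
        Gi[:, i] = Bh[:, j]   # replace the i-th column of A*A by the j-th column of B-hat
        X[i, j] = np.linalg.det(Gi) / np.linalg.det(G)
```

This is (59) applied column by column of $B$; for full column rank it coincides with $\left(A^{*}A\right)^{-1}A^{*}B$.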

Definition 4.2 Consider a matrix equation
$$XA = B, \eqno(60)$$
where $A \in \mathbb{C}^{m\times n}$ and $B \in \mathbb{C}^{s\times n}$ are given and $X \in \mathbb{C}^{s\times m}$ is unknown. Suppose
$$S_{2} = \left\{X \mid X \in \mathbb{C}^{s\times m},\ \|XA - B\| = \min\right\}.$$
Then matrices $X \in \mathbb{C}^{s\times m}$ such that $X \in S_{2}$ are called least squares solutions of the matrix equation (60). If $X_{LS} = \min_{X\in S_{2}}\|X\|$, then $X_{LS}$ is called the minimum norm least squares solution of (60).

The following lemma can be obtained by analogy to Lemma 4.1.

Lemma 4.2 The least squares solutions of (60) are
$$X = BA^{+} + C\left(I_{m} - AA^{+}\right),$$
where $A \in \mathbb{C}^{m\times n}$, $B \in \mathbb{C}^{s\times n}$ are given, and $C \in \mathbb{C}^{s\times m}$ is an arbitrary matrix. The minimum norm least squares solution is $X_{LS} = BA^{+}$.

We denote $BA^{*} =: \check{B} = (\check{b}_{ij}) \in \mathbb{C}^{s\times m}$.

Theorem 4.2 (i) If $\operatorname{rank} A = r < m$, then for the minimum norm least squares solution $X_{LS} = (x_{ij}) \in \mathbb{C}^{s\times m}$ of (60) we have, for all $i = \overline{1,s}$, $j = \overline{1,m}$,
$$x_{ij} = \frac{\sum\limits_{\alpha\in I_{r,m}\{j\}}\left|\left(\left(AA^{*}\right)_{j.}\left(\check{\mathbf{b}}_{i.}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,m}}\left|\left(AA^{*}\right)_{\alpha}^{\alpha}\right|}. \eqno(61)$$
(ii) If $\operatorname{rank} A = m$, then
$$x_{ij} = \frac{\det\left(AA^{*}\right)_{j.}\left(\check{\mathbf{b}}_{i.}\right)}{\det\left(AA^{*}\right)}, \eqno(62)$$
where $\check{\mathbf{b}}_{i.}$ is the $i$th row of $\check{B}$ for all $i = \overline{1,s}$.

Proof. (i) If $\operatorname{rank} A = r < m$, then we can represent $A^{+}$ by its determinantal representation by rows. Therefore, for all $i = \overline{1,s}$, $j = \overline{1,m}$, we obtain
$$x_{ij} = \sum_{k=1}^{n} b_{ik}a_{kj}^{+} = \sum_{k=1}^{n} b_{ik}\cdot\frac{\sum\limits_{\alpha\in I_{r,m}\{j\}}\left|\left(\left(AA^{*}\right)_{j.}\left(\mathbf{a}_{k.}^{*}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,m}}\left|\left(AA^{*}\right)_{\alpha}^{\alpha}\right|} = \frac{\sum\limits_{\alpha\in I_{r,m}\{j\}}\left|\left(\left(AA^{*}\right)_{j.}\left(\sum_{k=1}^{n} b_{ik}\mathbf{a}_{k.}^{*}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,m}}\left|\left(AA^{*}\right)_{\alpha}^{\alpha}\right|}.$$
Since for all $i = \overline{1,s}$
$$\sum_{k} b_{ik}\mathbf{a}_{k.}^{*} = \left(\sum_{k} b_{ik}a_{k1}^{*},\ \sum_{k} b_{ik}a_{k2}^{*},\ \dots,\ \sum_{k} b_{ik}a_{km}^{*}\right) = \check{\mathbf{b}}_{i.},$$
(61) follows.
(ii) The proof of this case is similar to that of (i), using Corollary 2.1. $\blacksquare$

Definition 4.3 Consider a matrix equation
$$AXB = D, \eqno(63)$$
where $A \in \mathbb{C}_{r_{1}}^{m\times n}$, $B \in \mathbb{C}_{r_{2}}^{p\times q}$, $D \in \mathbb{C}^{m\times q}$ are given and $X \in \mathbb{C}^{n\times p}$ is unknown. Suppose
$$S_{3} = \left\{X \mid X \in \mathbb{C}^{n\times p},\ \|AXB - D\| = \min\right\}.$$
Then matrices $X \in \mathbb{C}^{n\times p}$ such that $X \in S_{3}$ are called least squares solutions of the matrix equation (63). If $X_{LS} = \min_{X\in S_{3}}\|X\|$, then $X_{LS}$ is called the minimum norm least squares solution of (63).

The following important proposition is well-known.

Lemma 4.3 ([38]) The least squares solutions of (63) are
$$X = A^{+}DB^{+} + \left(I_{n} - A^{+}A\right)V + W\left(I_{p} - BB^{+}\right),$$
where $A \in \mathbb{C}_{r_{1}}^{m\times n}$, $B \in \mathbb{C}_{r_{2}}^{p\times q}$, $D \in \mathbb{C}^{m\times q}$ are given, and $\{V, W\} \subset \mathbb{C}^{n\times p}$ are arbitrary matrices. The minimum norm least squares solution is $X_{LS} = A^{+}DB^{+}$.

We denote $\widetilde{D} = A^{*}DB^{*}$.

Theorem 4.3 (i) If $\operatorname{rank} A = r_{1} < n$ and $\operatorname{rank} B = r_{2} < p$, then for the minimum norm least squares solution $X_{LS} = (x_{ij}) \in \mathbb{C}^{n\times p}$ of (63) we have, for all $i = \overline{1,n}$, $j = \overline{1,p}$,
$$x_{ij} = \frac{\sum\limits_{\beta\in J_{r_{1},n}\{i\}}\left|\left(\left(A^{*}A\right)_{.i}\left(\mathbf{d}_{.j}^{B}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r_{1},n}}\left|\left(A^{*}A\right)_{\beta}^{\beta}\right|\sum\limits_{\alpha\in I_{r_{2},p}}\left|\left(BB^{*}\right)_{\alpha}^{\alpha}\right|} \eqno(64)$$
or
$$x_{ij} = \frac{\sum\limits_{\alpha\in I_{r_{2},p}\{j\}}\left|\left(\left(BB^{*}\right)_{j.}\left(\mathbf{d}_{i.}^{A}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\beta\in J_{r_{1},n}}\left|\left(A^{*}A\right)_{\beta}^{\beta}\right|\sum\limits_{\alpha\in I_{r_{2},p}}\left|\left(BB^{*}\right)_{\alpha}^{\alpha}\right|}, \eqno(65)$$
where
$$\mathbf{d}_{.j}^{B} = \left(\sum_{\alpha\in I_{r_{2},p}\{j\}}\left|\left(\left(BB^{*}\right)_{j.}\left(\tilde{\mathbf{d}}_{1.}\right)\right)_{\alpha}^{\alpha}\right|,\ \dots,\ \sum_{\alpha\in I_{r_{2},p}\{j\}}\left|\left(\left(BB^{*}\right)_{j.}\left(\tilde{\mathbf{d}}_{n.}\right)\right)_{\alpha}^{\alpha}\right|\right)^{T}, \eqno(66)$$
$$\mathbf{d}_{i.}^{A} = \left(\sum_{\beta\in J_{r_{1},n}\{i\}}\left|\left(\left(A^{*}A\right)_{.i}\left(\tilde{\mathbf{d}}_{.1}\right)\right)_{\beta}^{\beta}\right|,\ \dots,\ \sum_{\beta\in J_{r_{1},n}\{i\}}\left|\left(\left(A^{*}A\right)_{.i}\left(\tilde{\mathbf{d}}_{.p}\right)\right)_{\beta}^{\beta}\right|\right) \eqno(67)$$
are respectively the column-vector and the row-vector, and $\tilde{\mathbf{d}}_{l.}$, $\tilde{\mathbf{d}}_{.t}$ are the $l$th row and the $t$th column of $\widetilde{D}$ for all $l = \overline{1,n}$, $t = \overline{1,p}$.

(ii) If $\operatorname{rank} A = n$ and $\operatorname{rank} B = p$, then for the least squares solution $X_{LS} = (x_{ij}) \in \mathbb{C}^{n\times p}$ of (63) we have, for all $i = \overline{1,n}$, $j = \overline{1,p}$,
$$x_{ij} = \frac{\det\left(A^{*}A\right)_{.i}\left(\mathbf{d}_{.j}^{B}\right)}{\det\left(A^{*}A\right)\cdot\det\left(BB^{*}\right)} \eqno(68)$$
or
$$x_{ij} = \frac{\det\left(BB^{*}\right)_{j.}\left(\mathbf{d}_{i.}^{A}\right)}{\det\left(A^{*}A\right)\cdot\det\left(BB^{*}\right)}, \eqno(69)$$
where
$$\mathbf{d}_{.j}^{B} := \left(\det\left(BB^{*}\right)_{j.}\left(\tilde{\mathbf{d}}_{1.}\right),\ \dots,\ \det\left(BB^{*}\right)_{j.}\left(\tilde{\mathbf{d}}_{n.}\right)\right)^{T}, \eqno(70)$$
$$\mathbf{d}_{i.}^{A} := \left(\det\left(A^{*}A\right)_{.i}\left(\tilde{\mathbf{d}}_{.1}\right),\ \dots,\ \det\left(A^{*}A\right)_{.i}\left(\tilde{\mathbf{d}}_{.p}\right)\right) \eqno(71)$$
are respectively the column-vector and the row-vector.

(iii) If $\operatorname{rank} A = n$ and $\operatorname{rank} B = r_{2} < p$, then for the minimum norm least squares solution $X_{LS} = (x_{ij}) \in \mathbb{C}^{n\times p}$ of (63) we have, for all $i = \overline{1,n}$, $j = \overline{1,p}$,
$$x_{ij} = \frac{\det\left(A^{*}A\right)_{.i}\left(\mathbf{d}_{.j}^{B}\right)}{\det\left(A^{*}A\right)\sum\limits_{\alpha\in I_{r_{2},p}}\left|\left(BB^{*}\right)_{\alpha}^{\alpha}\right|} \eqno(72)$$
or
$$x_{ij} = \frac{\sum\limits_{\alpha\in I_{r_{2},p}\{j\}}\left|\left(\left(BB^{*}\right)_{j.}\left(\mathbf{d}_{i.}^{A}\right)\right)_{\alpha}^{\alpha}\right|}{\det\left(A^{*}A\right)\sum\limits_{\alpha\in I_{r_{2},p}}\left|\left(BB^{*}\right)_{\alpha}^{\alpha}\right|}, \eqno(73)$$
where $\mathbf{d}_{.j}^{B}$ is given by (66) and $\mathbf{d}_{i.}^{A}$ by (71).

(iiii) If rank A = r1

Proof. (i) If $A \in \mathbb{C}_{r_{1}}^{m\times n}$, $B \in \mathbb{C}_{r_{2}}^{p\times q}$ with $r_{1} < n$ and $r_{2} < p$, then the Moore-Penrose inverses $A^{+} = \left(a_{ij}^{+}\right) \in \mathbb{C}^{n\times m}$ and $B^{+} = \left(b_{ij}^{+}\right) \in \mathbb{C}^{q\times p}$ possess the following determinantal representations, respectively:

∗ ∗ β (A A) .i a.j β β∈J {i} + r1, n aij = P   , ∗ β (A A) β β∈J r1, n P ∗ ∗ α |(BB )j. (bi. ) α| α∈Ir ,p{j} b+ 2 . ij = P ∗ α (76) |(BB ) α| α∈Ir2,p P + + Since by Theorem 4.3 XLS = A DB , then an entry of XLS = (xij) is

q m + + xij = aikdks bsj. (77) s=1 ! X Xk=1 ∗ n×q Denote by dˆ.s the sth column of A D =: Dˆ = (dˆij) ∈ C for all s = 1,q. ∗ ˆ It follows from a.kdks = d.s that k P (A∗A) (a∗ ) β m m .i .k β β∈J {i} + r1, n aikdks = P · dks = ∗ β k=1 k=1 (A A) β X X β∈J r1, n P m ∗ ∗ β ∗ ˆ β (A A) .i (a.k) β · dks (A A) .i d.s β k β∈Jr1, n{i} =1 β∈Jr1, n{i} P P = P   (78) ∗ β ∗ β (A A) β (A A) β β∈J β∈J r1, n r1, n P P Suppose es. and e.s are respectively the unit row-vector and the unit column- vector whose components are 0, except the sth components, which are 1. Substituting (78) and (76) in (77), we obtain

∗ ˆ β ∗ ∗ α q (A A) .i d.s β |(BB )j. (bs. ) α| β∈J {i} α∈I {j} r1, n   r2,p xij = P P α . ∗ β |(BB∗) | s=1 (A A) α β α∈I X β∈J r2,p r1, n P P

41 Since n p q ˆ ˆ ∗ ∗ ˆ ∗ d.s = e. ldls, bs. = bstet., dlsbst = dlt, (79) t=1 s=1 Xl=1 X X then we have e xij = q p n ∗ β ˆ ∗ ∗ α (A A) .i (e. l) β dlsbst |(BB )j. (et.) α| s=1 t=1 l=1 β∈Jr1, n{i} α∈Ir2,p{j} P P P P P = ∗ β ∗ α (A A) β |(BB ) α| β∈Jr1, n α∈Ir2,p P P p n ∗ β ∗ α (A A) .i (e. l) β dlt |(BB )j. (et.) α| t=1 l =1 β∈Jr1, n{i} α∈Ir2,p{j} P P P P . (80) ∗ β e ∗ α (A A) β |(BB ) α| β∈J α∈I r1, n r2,p P P Denote by A dit := n ∗ β ∗ β (A A) .i d. t β = (A A) .i (e. l) β dlt β∈J {i} l=1 β∈J {i} r1, n   r1, n X X X e A A A e the tth component of a row-vector di. = (di1, ..., dip) for all t = 1,p. Sub- stituting it in (80), we have

p A ∗ α dit |(BB )j. (et.) α| t=1 α∈Ir2,p{j} xij = P P . ∗ β ∗ α (A A) β |(BB ) α| β∈J α∈I r1, n r2,p P P p A A Since dit et. = di. , then it follows (65). t=1 If weP denote by p B ∗ α ∗ α dlj := dlt |(BB )j. (et.) α| = (BB )j. (dl.) α (81) t=1 α∈I {j} α∈I {j} r2,p r2,p X X X e e

42 B B B T the lth component of a column-vector d.j = (d1j , ..., djn) for all l = 1,n and substitute it in (80), we obtain

n ∗ β B (A A) .i (e. l) β dlj l =1 β∈Jr1, n{i} xij = P P . ∗ β ∗ α (A A ) β |(BB ) α| β∈J α∈I r1, n r2,p P P n B B Since e.ldlj = d.j, then it follows (64). l=1 (ii)P If rank A = n and rank B = p, then by Corollary 2.1 A+ = (A∗A)−1 A∗ and B+ = B∗ (BB∗)−1. Therefore, we obtain

∗ −1 ∗ ∗ ∗ −1 XLS = (A A) A DB (BB ) = A A A x11 x12 ... x1p L11 L21 ... Ln1 x x ... x LA LA ... LA = 21 22 2p = 1 12 22 n2 ×  ......  det(A∗A)  ......  A A A xn1 xn2 ... xnp  L1n L2n ... Lnn  ˜ ˜ ˜   B B B   d11 d12 . . . d1m R 11 R 21 ...R p1  ˜ ˜ ˜ B B B d21 d22 . . . d2m 1 R 12 R 22 ...R p2 ×   ∗   , ...... det(BB ) ...... B B B d˜ d˜ . . . d˜  R R ...R   n1 n2 nm   1p 2p pp     ˜ A ∗ where dij is ijth entry of the matrix D, Lij is the ijth cofactor of (A A) for B ∗ all i, j = 1,n and Rij is the ijth cofactor of (BB ) for all i, j = 1,p. This implies e n p A ˜ B Lki d ksRjs x = k=1 s=1  , (82) ij det(P A∗A)P· det(BB∗) for all i = 1,n, j = 1,p. We obtain the sum in parentheses and denote it as follows p ˜ B ∗ ˜ B dksRjs = det(BB )j. dk. := dkj, s=1 X   ˜ ˜ B where dk. is the kth row-vector of D for all k = 1,n. Suppose d.j := B B T d1 j,...,dnj is the column-vector for all j = 1,p. Reducing the sum n  A B  Lkidkj, we obtain an analog of Cramer’s rule for (63) by (68). k=1 P

43 Interchanging the order of summation in (82), we have

p n A ˜ B Lkid ks Rjs x = s=1 k=1  . ij det(P AP∗A) · det(BB∗) We obtain the sum in parentheses and denote it as follows

n A ˜ ∗ ˜ A Lkidks = det(A A).i d.s =: dis, Xk=1   ˜ ˜ A where d.s is the sth column-vector of D for all s = 1,p. Suppose di. := n A A A B di 1,...,dip is the row-vector for all i = 1,n. Reducing the sum disRjs, s=1 we obtain another analog of Cramer’s rule for the least squares solutionsP of (63) by (69). A Cm×n B Cp×q (iii) If ∈ r1 , ∈ r2 and r1 = n, r2 < p, then by Remark + + Cn×m 2.1 and Theorem 2.3 the Moore-Penrose inverses A = aij ∈ and + + Cq×p   B = bij ∈ possess the following determinantal representations respectively,  ∗ ∗ det (A A) .i a.j a+ = , ij det (A∗A)  ∗ ∗ α |(BB )j. (bi. ) α| α∈Ir ,p{j} b+ 2 . ij = P ∗ α (83) |(BB ) α| α∈Ir2,p +P + Since by Theorem 4.3 XLS = A DB , then an entry of XLS = (xij) is ∗ n×q (77). Denote by dˆ.s the sth column of A D =: Dˆ = (dˆij) ∈ C for all ∗ ˆ s = 1,q. It follows from a.kdks = d.s that k P m m ∗ ˆ det (A∗A) (a∗ ) det (A A) .i d.s a+ d = .i .k · d = (84) ik ks det (A∗A) ks det (A∗A)  Xk=1 Xk=1 Substituting (84) and (83) in (77), and using (79) we have |(BB∗) (b∗ ) α| q A∗A dˆ j. s. α det ( ) .i .s α∈Ir ,p{j} x = 2 = ij det (A∗A)  P |(BB∗) α| s=1 α X α∈Ir2,p P 44 q p n ∗ ˆ ∗ ∗ α det (A A) .i (e. l)dlsbst |(BB )j. (et.) α| s=1 t=1 l =1 α∈Ir2,p{j} P P P ∗ P∗ α = det (A A) |(BB ) α| α∈Ir2,p p n P ∗ ∗ α det (A A) .i (e. l) dlt |(BB )j. (et.) α| t=1 l=1 α∈Ir ,p{j} 2 . P P ∗ P ∗ α (85) det (A A) e |(BB ) α| α∈Ir2,p P If we substitute (81) in (85), then we get n ∗ B det (A A) .i (e. l) dlj x l=1 . ij = ∗ ∗ α detP (A A) |(BB ) α| α∈Ir2,p P n B B B Since again e.ldlj = d.j, then it follows (72), where d.j is (66). l=1 If we denoteP by A dit := n n ∗ ∗ det (A A) .i d. t = det (A A) .i (e. l) dlt Xl=1   Xl=1 e A A A e the tth component of a row-vector di. = (di1, ..., dip) for all t = 1,p and substitute it in (85), we obtain p A ∗ α dit |(BB )j. (et.) α| t=1 α∈Ir ,p{j} x 2 . ij = P ∗P ∗ α det (A A) |(BB ) α| α∈Ir2,p P p A A A Since again dit et. = di. , then it follows (73), where di. is (71). 
(iiii) The proof is similar to the proof of (iii). $\blacksquare$

4.2 Cramer's rule for the Drazin inverse solutions of some matrix equations

Consider a matrix equation
$$AX = B, \eqno(86)$$
where $A \in \mathbb{C}^{n\times n}$ with $\operatorname{Ind} A = k$ and $B \in \mathbb{C}^{n\times m}$ are given, and $X \in \mathbb{C}^{n\times m}$ is unknown.

Theorem 4.4 ([64], Theorem 1) If the range space $R(B) \subset R\left(A^{k}\right)$, then the matrix equation (86) with the constraint $R(X) \subset R\left(A^{k}\right)$ has a unique solution
$$X = A^{D}B.$$

We denote $A^{k}B =: \hat{B} = (\hat{b}_{ij}) \in \mathbb{C}^{n\times m}$.

Theorem 4.5 If $\operatorname{rank} A^{k+1} = \operatorname{rank} A^{k} = r \le n$ for $A \in \mathbb{C}^{n\times n}$, then for the Drazin inverse solution $X = A^{D}B = (x_{ij}) \in \mathbb{C}^{n\times m}$ of (86) we have, for all $i = \overline{1,n}$, $j = \overline{1,m}$,
$$x_{ij} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{k+1}\right)_{.i}\left(\hat{\mathbf{b}}_{.j}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,n}}\left|\left(A^{k+1}\right)_{\beta}^{\beta}\right|}. \eqno(87)$$

Proof. By Theorem 2.7 we can represent the matrix $A^{D}$ by (26). Therefore, for all $i = \overline{1,n}$, $j = \overline{1,m}$, we obtain
$$x_{ij} = \sum_{s=1}^{n} a_{is}^{D}b_{sj} = \sum_{s=1}^{n}\frac{\sum\limits_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{k+1}\right)_{.i}\left(\mathbf{a}_{.s}^{(k)}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,n}}\left|\left(A^{k+1}\right)_{\beta}^{\beta}\right|}\cdot b_{sj} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}}\left|\left(\left(A^{k+1}\right)_{.i}\left(\sum_{s}\mathbf{a}_{.s}^{(k)}b_{sj}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r,n}}\left|\left(A^{k+1}\right)_{\beta}^{\beta}\right|}.$$
Since $\sum_{s}\mathbf{a}_{.s}^{(k)}b_{sj} = \hat{\mathbf{b}}_{.j}$, (87) follows. $\blacksquare$

Consider a matrix equation

$$XA = B, \eqno(88)$$
where $A \in \mathbb{C}^{m\times m}$ with $\operatorname{Ind} A = k$ and $B \in \mathbb{C}^{n\times m}$ are given, and $X \in \mathbb{C}^{n\times m}$ is unknown.

Theorem 4.6 ([64], Theorem 2) If the null space $N(B) \supset N\left(A^{k}\right)$, then the matrix equation (88) with the constraint $N(X) \supset N\left(A^{k}\right)$ has a unique solution $X = BA^{D}$.

We denote $BA^{k} =: \check{B} = (\check{b}_{ij}) \in \mathbb{C}^{n\times m}$.

Theorem 4.7 If $\operatorname{rank} A^{k+1} = \operatorname{rank} A^{k} = r \le m$ for $A \in \mathbb{C}^{m\times m}$, then for the Drazin inverse solution $X = BA^{D} = (x_{ij}) \in \mathbb{C}^{n\times m}$ of (88) we have, for all $i = \overline{1,n}$, $j = \overline{1,m}$,

$$x_{ij} = \frac{\sum\limits_{\alpha\in I_{r,m}\{j\}}\left|\left(\left(A^{k+1}\right)_{j.}\left(\check{\mathbf{b}}_{i.}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,m}}\left|\left(A^{k+1}\right)_{\alpha}^{\alpha}\right|}. \eqno(89)$$

Proof. By Theorem 2.7 we can represent the matrix $A^{D}$ by (25). Therefore, for all $i = \overline{1,n}$, $j = \overline{1,m}$, we obtain
$$x_{ij} = \sum_{s=1}^{m} b_{is}a_{sj}^{D} = \sum_{s=1}^{m} b_{is}\cdot\frac{\sum\limits_{\alpha\in I_{r,m}\{j\}}\left|\left(\left(A^{k+1}\right)_{j.}\left(\mathbf{a}_{s.}^{(k)}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,m}}\left|\left(A^{k+1}\right)_{\alpha}^{\alpha}\right|} = \frac{\sum\limits_{\alpha\in I_{r,m}\{j\}}\left|\left(\left(A^{k+1}\right)_{j.}\left(\sum_{s} b_{is}\mathbf{a}_{s.}^{(k)}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r,m}}\left|\left(A^{k+1}\right)_{\alpha}^{\alpha}\right|}.$$
Since for all $i = \overline{1,n}$
$$\sum_{s} b_{is}\mathbf{a}_{s.}^{(k)} = \left(\sum_{s} b_{is}a_{s1}^{(k)},\ \sum_{s} b_{is}a_{s2}^{(k)},\ \dots,\ \sum_{s} b_{is}a_{sm}^{(k)}\right) = \check{\mathbf{b}}_{i.},$$
(89) follows. $\blacksquare$

Consider a matrix equation
$$AXB = D, \eqno(90)$$

where $A \in \mathbb{C}^{n\times n}$ with $\operatorname{Ind} A = k_{1}$, $B \in \mathbb{C}^{m\times m}$ with $\operatorname{Ind} B = k_{2}$, and $D \in \mathbb{C}^{n\times m}$ are given, and $X \in \mathbb{C}^{n\times m}$ is unknown.

Theorem 4.8 ([64], Theorem 3) If $R(D) \subset R\left(A^{k_{1}}\right)$ and $N(D) \supset N\left(B^{k_{2}}\right)$, $k = \max\{k_{1}, k_{2}\}$, then the matrix equation (90) with the constraints $R(X) \subset R\left(A^{k}\right)$ and $N(X) \supset N\left(B^{k}\right)$ has a unique solution
$$X = A^{D}DB^{D}.$$

We denote $A^{k_{1}}DB^{k_{2}} =: \widetilde{D} = (\tilde{d}_{ij}) \in \mathbb{C}^{n\times m}$.
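Theorem 4.8 can be illustrated numerically: choosing $D = A^{k}CB^{k}$ makes the range and null-space conditions hold automatically, and then $X = A^{D}DB^{D}$ is an exact solution of $AXB = D$. All matrices below are hypothetical toy data, and `drazin` uses the standard representation $A^{D} = A^{k}\left(A^{2k+1}\right)^{+}A^{k}$ (an assumption of this sketch).

```python
import numpy as np

def drazin(M, k):
    # Standard representation M^D = M^k (M^{2k+1})^+ M^k, k >= Ind(M)
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

A = np.array([[1., 1.], [0., 0.]])   # singular, Ind(A) = 1
B = np.array([[2., 0.], [1., 0.]])   # singular, Ind(B) = 1
k = 1
C = np.array([[1., 2.], [3., 4.]])

# D = A^k C B^k guarantees R(D) in R(A^k) and N(D) contains N(B^k)
D = np.linalg.matrix_power(A, k) @ C @ np.linalg.matrix_power(B, k)
X = drazin(A, k) @ D @ drazin(B, k)   # the unique constrained solution
```

The assertion checks that $AXB = D$ despite both $A$ and $B$ being singular.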

Theorem 4.9 If $\operatorname{rank} A^{k_{1}+1} = \operatorname{rank} A^{k_{1}} = r_{1} \le n$ for $A \in \mathbb{C}^{n\times n}$, and $\operatorname{rank} B^{k_{2}+1} = \operatorname{rank} B^{k_{2}} = r_{2} \le m$ for $B \in \mathbb{C}^{m\times m}$, then for the Drazin inverse solution $X = A^{D}DB^{D} =: (x_{ij}) \in \mathbb{C}^{n\times m}$ of (90) we have
$$x_{ij} = \frac{\sum\limits_{\beta\in J_{r_{1},n}\{i\}}\left|\left(\left(A^{k_{1}+1}\right)_{.i}\left(\mathbf{d}_{.j}^{B}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r_{1},n}}\left|\left(A^{k_{1}+1}\right)_{\beta}^{\beta}\right|\sum\limits_{\alpha\in I_{r_{2},m}}\left|\left(B^{k_{2}+1}\right)_{\alpha}^{\alpha}\right|} \eqno(91)$$
or
$$x_{ij} = \frac{\sum\limits_{\alpha\in I_{r_{2},m}\{j\}}\left|\left(\left(B^{k_{2}+1}\right)_{j.}\left(\mathbf{d}_{i.}^{A}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\beta\in J_{r_{1},n}}\left|\left(A^{k_{1}+1}\right)_{\beta}^{\beta}\right|\sum\limits_{\alpha\in I_{r_{2},m}}\left|\left(B^{k_{2}+1}\right)_{\alpha}^{\alpha}\right|}, \eqno(92)$$
where
$$\mathbf{d}_{.j}^{B} = \left(\sum_{\alpha\in I_{r_{2},m}\{j\}}\left|\left(\left(B^{k_{2}+1}\right)_{j.}\left(\tilde{\mathbf{d}}_{1.}\right)\right)_{\alpha}^{\alpha}\right|,\ \dots,\ \sum_{\alpha\in I_{r_{2},m}\{j\}}\left|\left(\left(B^{k_{2}+1}\right)_{j.}\left(\tilde{\mathbf{d}}_{n.}\right)\right)_{\alpha}^{\alpha}\right|\right)^{T}, \eqno(93)$$
$$\mathbf{d}_{i.}^{A} = \left(\sum_{\beta\in J_{r_{1},n}\{i\}}\left|\left(\left(A^{k_{1}+1}\right)_{.i}\left(\tilde{\mathbf{d}}_{.1}\right)\right)_{\beta}^{\beta}\right|,\ \dots,\ \sum_{\beta\in J_{r_{1},n}\{i\}}\left|\left(\left(A^{k_{1}+1}\right)_{.i}\left(\tilde{\mathbf{d}}_{.m}\right)\right)_{\beta}^{\beta}\right|\right)$$
are respectively the column-vector and the row-vector, and $\tilde{\mathbf{d}}_{i.}$, $\tilde{\mathbf{d}}_{.j}$ are respectively the $i$th row and the $j$th column of $\widetilde{D}$ for all $i = \overline{1,n}$, $j = \overline{1,m}$.

Proof. By (26) and (25) the Drazin inverses $A^{D} = \left(a_{ij}^{D}\right) \in \mathbb{C}^{n\times n}$ and $B^{D} = \left(b_{ij}^{D}\right) \in \mathbb{C}^{m\times m}$ possess the following determinantal representations, respectively:
$$a_{ij}^{D} = \frac{\sum\limits_{\beta\in J_{r_{1},n}\{i\}}\left|\left(\left(A^{k_{1}+1}\right)_{.i}\left(\mathbf{a}_{.j}^{(k_{1})}\right)\right)_{\beta}^{\beta}\right|}{\sum\limits_{\beta\in J_{r_{1},n}}\left|\left(A^{k_{1}+1}\right)_{\beta}^{\beta}\right|}, \qquad
b_{ij}^{D} = \frac{\sum\limits_{\alpha\in I_{r_{2},m}\{j\}}\left|\left(\left(B^{k_{2}+1}\right)_{j.}\left(\mathbf{b}_{i.}^{(k_{2})}\right)\right)_{\alpha}^{\alpha}\right|}{\sum\limits_{\alpha\in I_{r_{2},m}}\left|\left(B^{k_{2}+1}\right)_{\alpha}^{\alpha}\right|}. \eqno(94)$$
Then an entry of the Drazin inverse solution $X = A^{D}DB^{D} =: (x_{ij}) \in \mathbb{C}^{n\times m}$ is

m n D D xij = ait dts bsj. (95) s=1 t=1 ! X X k n×m Denote by dˆ.s the sth column of A D =: Dˆ = (dˆij ) ∈ C for all s = 1,m. D ˆ It follows from a. tdts = d.s that t P Ak1+1 a(k1) β n n .i .t β β J i D ∈ r1, n{ } a dts =   · dts = it P β k1+1 t=1 t=1 (A ) β X X β∈J r1, n P n k1+1 (k1) β k1+1 ˆ β A .i a.t β · dts A .i d.s β β∈Jr , n{i} t=1 β∈Jr , n{i} 1   = 1   (96) P P β P β k1+1 k1+1 (A ) β (A ) β β∈J β∈J r1, n r1, n P P Substituting (96) and (94) in (95), we obtain

Ak1+1 dˆ β Bk2+1(b(k2)) α m .i .s β j. s. α β∈J {i} α∈I {j} r1, n   r2,m xij = P P k α . k +1 β |(B 2+1) | s=1 (A 1 ) α β α∈Ir ,m X β∈Jr , n 2 1 P P

Suppose es. and e.s are respectively the unit row-vector and the unit column- vector whose components are 0, except the sth components, which are 1. Since n m m ˆ ˆ (k2) (k2) ˆ (k2) d.s = e. ldls, bs. = bst et., dlsbst = dlt, t s Xl=1 X=1 X=1 then we have e xij = m m n k1+1 β ˆ (k2) k2+1 α A .i (e. l) β dlsbst Bj. (et.) α s=1 t=1 l=1 β∈Jr , n{i} α∈Ir ,m{j} 1 2 = P P P P β P k1+1 k2+1 α (A ) β |(B ) α| β∈J α∈I r1, n r2,m P P

\[ = \frac{\sum_{t=1}^{m} \sum_{l=1}^{n} \sum_{\beta\in J_{r_1,n}\{i\}} \left|(A^{k_1+1})_{.\,i}\big(e_{.\,l}\big)\right|^{\beta}_{\beta}\, \tilde d_{lt} \sum_{\alpha\in I_{r_2,m}\{j\}} \left|(B^{k_2+1})_{j\,.}\big(e_{t\,.}\big)\right|^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_1,n}} \left|(A^{k_1+1})^{\beta}_{\beta}\right| \sum_{\alpha\in I_{r_2,m}} \left|(B^{k_2+1})^{\alpha}_{\alpha}\right|}. \tag{97} \]
Denote by
\[ d^A_{it} := \sum_{\beta\in J_{r_1,n}\{i\}} \left|(A^{k_1+1})_{.\,i}\big(\tilde d_{.\,t}\big)\right|^{\beta}_{\beta} = \sum_{l=1}^{n} \sum_{\beta\in J_{r_1,n}\{i\}} \left|(A^{k_1+1})_{.\,i}\big(e_{.\,l}\big)\right|^{\beta}_{\beta}\, \tilde d_{lt} \]
the $t$th component of the row vector $d^A_{i\,.} = (d^A_{i1}, \ldots, d^A_{im})$ for all $t = \overline{1,m}$. Substituting it in (97), we obtain

\[ x_{ij} = \frac{\sum_{t=1}^{m} d^A_{it} \sum_{\alpha\in I_{r_2,m}\{j\}} \left|(B^{k_2+1})_{j\,.}\big(e_{t\,.}\big)\right|^{\alpha}_{\alpha}}{\sum_{\beta\in J_{r_1,n}} \left|(A^{k_1+1})^{\beta}_{\beta}\right| \sum_{\alpha\in I_{r_2,m}} \left|(B^{k_2+1})^{\alpha}_{\alpha}\right|}. \]
Since $\sum_{t=1}^{m} d^A_{it}\, e_{t\,.} = d^A_{i\,.}$, then (92) follows.

If we denote by
\[ d^B_{lj} := \sum_{t=1}^{m} \tilde d_{lt} \sum_{\alpha\in I_{r_2,m}\{j\}} \left|(B^{k_2+1})_{j\,.}\big(e_{t\,.}\big)\right|^{\alpha}_{\alpha} = \sum_{\alpha\in I_{r_2,m}\{j\}} \left|(B^{k_2+1})_{j\,.}\big(\tilde d_{l\,.}\big)\right|^{\alpha}_{\alpha} \]
the $l$th component of the column vector $d^B_{.\,j} = (d^B_{1j}, \ldots, d^B_{nj})^T$ for all $l = \overline{1,n}$ and substitute it in (97), we obtain

\[ x_{ij} = \frac{\sum_{l=1}^{n} \sum_{\beta\in J_{r_1,n}\{i\}} \left|(A^{k_1+1})_{.\,i}\big(e_{.\,l}\big)\right|^{\beta}_{\beta}\, d^B_{lj}}{\sum_{\beta\in J_{r_1,n}} \left|(A^{k_1+1})^{\beta}_{\beta}\right| \sum_{\alpha\in I_{r_2,m}} \left|(B^{k_2+1})^{\alpha}_{\alpha}\right|}. \]
Since $\sum_{l=1}^{n} e_{.\,l}\, d^B_{lj} = d^B_{.\,j}$, then (91) follows. $\blacksquare$

4.3 Examples

In this subsection, we give examples to illustrate the results obtained in this section.
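Before turning to hand computation, formula (91) can also be checked mechanically. The sketch below (our own illustration, not part of the original text) implements (91) literally, by summing principal minors with `itertools.combinations`, and compares the result against a direct computation of $A^D D B^D$ using the known representation $A^D = A^k (A^{2k+1})^{+} A^k$ for a matrix of index $k$; the test data are the matrices of the second example below, and the helper names are ours.

```python
import numpy as np
from itertools import combinations

def pminor(M, idx):
    # principal minor of M on the rows and columns listed in idx
    return np.linalg.det(M[np.ix_(idx, idx)])

def drazin_solution(A, B, D, k1, r1, k2, r2):
    # Drazin inverse solution of AXB = D by formula (91) of Theorem 4.9
    n, m = A.shape[0], B.shape[0]
    Ak = np.linalg.matrix_power(A, k1 + 1)
    Bk = np.linalg.matrix_power(B, k2 + 1)
    Dt = np.linalg.matrix_power(A, k1) @ D @ np.linalg.matrix_power(B, k2)  # D~
    den = (sum(pminor(Ak, b) for b in combinations(range(n), r1))
           * sum(pminor(Bk, a) for a in combinations(range(m), r2)))
    X = np.zeros((n, m), dtype=complex)
    for p in range(n):
        for q in range(m):
            # d^B_{lq} by (93): replace row q of B^(k2+1) with row l of D~
            dB = np.empty(n, dtype=complex)
            for l in range(n):
                M = Bk.copy(); M[q, :] = Dt[l, :]
                dB[l] = sum(pminor(M, a)
                            for a in combinations(range(m), r2) if q in a)
            # numerator of (91): replace column p of A^(k1+1) with d^B_{.q}
            M = Ak.copy(); M[:, p] = dB
            num = sum(pminor(M, b)
                      for b in combinations(range(n), r1) if p in b)
            X[p, q] = num / den
    return X

def drazin(A, k):
    # A^D = A^k (A^(2k+1))^+ A^k, valid whenever k >= Ind(A)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

i = 1j
A = np.array([[2, 0, 0], [-i, i, i], [-i, -i, -i]])   # Ind A = 2, r1 = 1
B = np.array([[1, -1, 1], [i, -i, i], [-1, 1, 2]])    # Ind B = 1, r2 = 2
D = np.array([[1, i, 1], [i, 0, 1], [1, i, 0]])
X = drazin_solution(A, B, D, k1=2, r1=1, k2=1, r2=2)
print(np.allclose(X, drazin(A, 2) @ D @ drazin(B, 1)))  # True
```

The minor-summing loop is exponential in the matrix size, so this is purely a correctness check of the determinantal representation, not a practical algorithm.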

1. Let us consider the matrix equation
\[ AXB = D, \tag{98} \]
where
\[ A = \begin{pmatrix} 1 & i & i \\ i & -1 & -1 \\ 0 & 1 & 0 \\ -1 & 0 & -i \end{pmatrix}, \qquad B = \begin{pmatrix} i & 1 & -i \\ -1 & i & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & i & 1 \\ i & 0 & 1 \\ 1 & i & 0 \\ 0 & 1 & i \end{pmatrix}. \]
Since $\operatorname{rank} A = 2$ and $\operatorname{rank} B = 1$, we have the case (ii) of Theorem 4.3. We shall find the least squares solution of (98) by (64). We have
\[ A^*A = \begin{pmatrix} 3 & 2i & 3i \\ -2i & 3 & 2 \\ -3i & 2 & 3 \end{pmatrix}, \qquad BB^* = \begin{pmatrix} 3 & -3i \\ 3i & 3 \end{pmatrix}, \qquad \tilde D = A^*DB^* = \begin{pmatrix} 1 & -i \\ -i & -1 \\ -i & -1 \end{pmatrix}, \]
and $\sum_{\alpha\in I_{1,2}} |(BB^*)^{\alpha}_{\alpha}| = 3 + 3 = 6$,
\[ \sum_{\beta\in J_{2,3}} |(A^*A)^{\beta}_{\beta}| = \det\begin{pmatrix} 3 & 2i \\ -2i & 3 \end{pmatrix} + \det\begin{pmatrix} 3 & 3i \\ -3i & 3 \end{pmatrix} + \det\begin{pmatrix} 3 & 2 \\ 2 & 3 \end{pmatrix} = 10. \]
By (70), we can get
\[ d^B_{.\,1} = \begin{pmatrix} 1 \\ -i \\ -i \end{pmatrix}, \qquad d^B_{.\,2} = \begin{pmatrix} -i \\ -1 \\ -1 \end{pmatrix}. \]
Since $(A^*A)_{.\,1}\big(d^B_{.\,1}\big) = \begin{pmatrix} 1 & 2i & 3i \\ -i & 3 & 2 \\ -i & 2 & 3 \end{pmatrix}$, then finally we obtain
\[ x_{11} = \frac{\sum_{\beta\in J_{2,3}\{1\}} \left|(A^*A)_{.\,1}\big(d^B_{.\,1}\big)\right|^{\beta}_{\beta}}{\sum_{\beta\in J_{2,3}} |(A^*A)^{\beta}_{\beta}| \sum_{\alpha\in I_{1,2}} |(BB^*)^{\alpha}_{\alpha}|} = \frac{\det\begin{pmatrix} 1 & 2i \\ -i & 3 \end{pmatrix} + \det\begin{pmatrix} 1 & 3i \\ -i & 3 \end{pmatrix}}{60} = \frac{1}{60}. \]
Similarly,
\[ x_{12} = \frac{\det\begin{pmatrix} -i & 2i \\ -1 & 3 \end{pmatrix} + \det\begin{pmatrix} -i & 3i \\ -1 & 3 \end{pmatrix}}{60} = -\frac{i}{60}, \]

\[ x_{21} = \frac{\det\begin{pmatrix} 3 & 1 \\ -2i & -i \end{pmatrix} + \det\begin{pmatrix} -i & 2 \\ -i & 3 \end{pmatrix}}{60} = -\frac{2i}{60}, \qquad x_{22} = \frac{\det\begin{pmatrix} 3 & -i \\ -2i & -1 \end{pmatrix} + \det\begin{pmatrix} -1 & 2 \\ -1 & 3 \end{pmatrix}}{60} = -\frac{2}{60}, \]
\[ x_{31} = \frac{\det\begin{pmatrix} 3 & 1 \\ -3i & -i \end{pmatrix} + \det\begin{pmatrix} 3 & -i \\ 2 & -i \end{pmatrix}}{60} = -\frac{i}{60}, \qquad x_{32} = \frac{\det\begin{pmatrix} 3 & -i \\ -3i & -1 \end{pmatrix} + \det\begin{pmatrix} 3 & -1 \\ 2 & -1 \end{pmatrix}}{60} = -\frac{1}{60}. \]

2. Let us consider the matrix equation (98), where
\[ A = \begin{pmatrix} 2 & 0 & 0 \\ -i & i & i \\ -i & -i & -i \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & -1 & 1 \\ i & -i & i \\ -1 & 1 & 2 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & i & 1 \\ i & 0 & 1 \\ 1 & i & 0 \end{pmatrix}. \]
We shall find the Drazin inverse solution of (98) by (91). We obtain
\[ A^2 = \begin{pmatrix} 4 & 0 & 0 \\ 2-2i & 0 & 0 \\ -2-2i & 0 & 0 \end{pmatrix}, \qquad A^3 = \begin{pmatrix} 8 & 0 & 0 \\ 4-4i & 0 & 0 \\ -4-4i & 0 & 0 \end{pmatrix}, \qquad B^2 = \begin{pmatrix} -i & i & 3-i \\ 1 & -1 & 1+3i \\ -3+i & 3-i & 3+i \end{pmatrix}. \]
Since $\operatorname{rank} A = 2$ and $\operatorname{rank} A^2 = \operatorname{rank} A^3 = 1$, then $k_1 = \operatorname{Ind} A = 2$ and $r_1 = 1$. Since $\operatorname{rank} B = \operatorname{rank} B^2 = 2$, then $k_2 = \operatorname{Ind} B = 1$ and $r_2 = 2$. Then we have
\[ \tilde D = A^2 D B = \begin{pmatrix} -4 & 4 & 8 \\ -2+2i & 2-2i & 4-4i \\ 2+2i & -2-2i & -4-4i \end{pmatrix}, \]
and $\sum_{\beta\in J_{1,3}} |(A^3)^{\beta}_{\beta}| = 8 + 0 + 0 = 8$,
\[ \sum_{\alpha\in I_{2,3}} |(B^2)^{\alpha}_{\alpha}| = \det\begin{pmatrix} -i & i \\ 1 & -1 \end{pmatrix} + \det\begin{pmatrix} -1 & 1+3i \\ 3-i & 3+i \end{pmatrix} + \det\begin{pmatrix} -i & 3-i \\ -3+i & 3+i \end{pmatrix} = 0 + (-9-9i) + (9-9i) = -18i. \]

By (93), we can get
\[ d^B_{.\,1} = \begin{pmatrix} 12-12i \\ -12i \\ -12 \end{pmatrix}, \qquad d^B_{.\,2} = \begin{pmatrix} -12+12i \\ 12i \\ 12 \end{pmatrix}, \qquad d^B_{.\,3} = \begin{pmatrix} -24i \\ -12-12i \\ -12+12i \end{pmatrix}. \]
Since $(A^3)_{.\,1}\big(d^B_{.\,1}\big) = \begin{pmatrix} 12-12i & 0 & 0 \\ -12i & 0 & 0 \\ -12 & 0 & 0 \end{pmatrix}$, then finally we obtain
\[ x_{11} = \frac{\sum_{\beta\in J_{1,3}\{1\}} \left|(A^3)_{.\,1}\big(d^B_{.\,1}\big)\right|^{\beta}_{\beta}}{\sum_{\beta\in J_{1,3}} |(A^3)^{\beta}_{\beta}| \sum_{\alpha\in I_{2,3}} |(B^2)^{\alpha}_{\alpha}|} = \frac{12-12i}{8\cdot(-18i)} = \frac{1+i}{12}. \]
Similarly,
\[ x_{12} = \frac{-12+12i}{8\cdot(-18i)} = \frac{-1-i}{12}, \qquad x_{13} = \frac{-24i}{8\cdot(-18i)} = \frac{1}{6}, \]
\[ x_{21} = \frac{-12i}{8\cdot(-18i)} = \frac{1}{12}, \qquad x_{22} = \frac{12i}{8\cdot(-18i)} = -\frac{1}{12}, \qquad x_{23} = \frac{-12-12i}{8\cdot(-18i)} = \frac{1-i}{12}, \]
\[ x_{31} = \frac{-12}{8\cdot(-18i)} = -\frac{i}{12}, \qquad x_{32} = \frac{12}{8\cdot(-18i)} = \frac{i}{12}, \qquad x_{33} = \frac{-12+12i}{8\cdot(-18i)} = \frac{-1-i}{12}. \]
Then
\[ X = \begin{pmatrix} \frac{1+i}{12} & \frac{-1-i}{12} & \frac{1}{6} \\[2pt] \frac{1}{12} & -\frac{1}{12} & \frac{1-i}{12} \\[2pt] -\frac{i}{12} & \frac{i}{12} & \frac{-1-i}{12} \end{pmatrix} \]
is the Drazin inverse solution of (98).
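Both examples can be cross-checked numerically. The sketch below (ours, not from the paper) computes the least squares solution of the first example as $X = A^{+} D B^{+}$ via `numpy.linalg.pinv`, and the Drazin inverse solution of the second as $X = A^D D B^D$ with the known representation $A^D = A^k (A^{2k+1})^{+} A^k$ for index $k$; the helper name `drazin` is ours.

```python
import numpy as np

def drazin(A, k):
    # Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, valid for k >= Ind(A)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

i = 1j

# Example 1: least squares solution X = A^+ D B^+ of AXB = D
A1 = np.array([[1, i, i], [i, -1, -1], [0, 1, 0], [-1, 0, -i]])
B1 = np.array([[i, 1, -i], [-1, i, 1]])
D1 = np.array([[1, i, 1], [i, 0, 1], [1, i, 0], [0, 1, i]])
X1 = np.linalg.pinv(A1) @ D1 @ np.linalg.pinv(B1)
X1_expected = np.array([[1, -i], [-2*i, -2], [-i, -1]]) / 60

# Example 2: Drazin inverse solution X = A^D D B^D, Ind A = 2, Ind B = 1
A2 = np.array([[2, 0, 0], [-i, i, i], [-i, -i, -i]])
B2 = np.array([[1, -1, 1], [i, -i, i], [-1, 1, 2]])
D2 = np.array([[1, i, 1], [i, 0, 1], [1, i, 0]])
X2 = drazin(A2, 2) @ D2 @ drazin(B2, 1)
X2_expected = np.array([[(1+i)/12, (-1-i)/12, 1/6],
                        [1/12, -1/12, (1-i)/12],
                        [-i/12, i/12, (-1-i)/12]])

print(np.allclose(X1, X1_expected), np.allclose(X2, X2_expected))  # True True
```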

5 An application of the determinantal representations of the Drazin inverse to some differential matrix equations

In this section we demonstrate an application of the determinantal representations of the Drazin inverse to the following differential matrix equations, $X' + AX = B$ and $X' + XA = B$, where the matrix $A$ is singular.

Consider the matrix differential equation

\[ X' + AX = B, \tag{99} \]
where $A \in \mathbb{C}^{n\times n}$ and $B \in \mathbb{C}^{n\times n}$ are given and $X \in \mathbb{C}^{n\times n}$ is unknown. It is well known that the general solution of (99) is found to be

\[ X(t) = e^{-At}\left( \int e^{At}\,dt \right) B. \]
If $A$ is invertible, then

\[ \int e^{At}\,dt = A^{-1}e^{At} + G, \]
where $G$ is an arbitrary $n \times n$ matrix. If $A$ is singular, then the following theorem gives an answer.

Theorem 5.1 ([65], Theorem 1) If $A$ has index $k$, then
\[ \int e^{At}\,dt = A^D e^{At} + (I - AA^D)\,t\left( I + \frac{A}{2}t + \frac{A^2}{3!}t^2 + \cdots + \frac{A^{k-1}}{k!}t^{k-1} \right) + G. \]

Using Theorem 5.1 and the power series expansion of $e^{-At}$, we get an explicit form for the general solution of (99),
\[ X(t) = \left\{ A^D + (I - AA^D)\,t\left( I - \frac{A}{2}t + \frac{A^2}{3!}t^2 - \cdots + (-1)^{k-1}\frac{A^{k-1}}{k!}t^{k-1} \right) \right\} B + e^{-At}GB. \]
If we put $G = 0$, then we obtain the following partial solution of (99),

\[ X(t) = A^D B + (B - A^D AB)\,t - \frac{1}{2}(AB - A^D A^2 B)\,t^2 + \cdots + \frac{(-1)^{k-1}}{k!}\left( A^{k-1}B - A^D A^k B \right)t^k. \tag{100} \]
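One can verify numerically that the truncated polynomial (100) really does satisfy $X' + AX = B$ when $A$ is singular. The sketch below is our own illustration (the index-2 matrix $A$ and the matrix $B$ are hypothetical test data, not from the paper); it builds the coefficients of (100), computes $A^D$ via the known representation $A^D = A^k(A^{2k+1})^{+}A^k$, and differentiates the polynomial term by term.

```python
import numpy as np
from math import factorial

def drazin(A, k):
    # A^D = A^k (A^(2k+1))^+ A^k, valid for k >= Ind(A)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# a singular matrix of index k = 2 (hypothetical test data)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.arange(9.0).reshape(3, 3) + np.eye(3)
k = 2
AD = drazin(A, k)

# coefficient of t^l in (100): c_0 = A^D B,
# c_l = ((-1)^(l-1)/l!) (A^(l-1) B - A^D A^l B) for l = 1..k
C = [AD @ B] + [
    (-1) ** (l - 1) / factorial(l)
    * (np.linalg.matrix_power(A, l - 1) @ B
       - AD @ np.linalg.matrix_power(A, l) @ B)
    for l in range(1, k + 1)
]

def X(t):   # the partial solution (100)
    return sum(c * t ** l for l, c in enumerate(C))

def dX(t):  # its exact derivative
    return sum(l * c * t ** (l - 1) for l, c in enumerate(C) if l > 0)

t = 0.7
print(np.linalg.norm(dX(t) + A @ X(t) - B))  # ~ 0 up to rounding
```

The residual vanishes identically because $(I - AA^D)A^k = 0$ and $A^{k+1}A^D = A^k$ for a matrix of index $k$, which is exactly what makes the series terminate.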

Denote $\hat B^{(l)} := A^l B = \big(\hat b^{(l)}_{ij}\big) \in \mathbb{C}^{n\times n}$ for all $l = \overline{1,2k}$.

Theorem 5.2 The partial solution (100), $X(t) = (x_{ij})$, possesses the following determinantal representation,
\[
x_{ij} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^{k+1})_{.\,i}\big(\hat b^{(k)}_{.\,j}\big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^{k+1})^{\beta}_{\beta}\right|} + \left( b_{ij} - \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^{k+1})_{.\,i}\big(\hat b^{(k+1)}_{.\,j}\big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^{k+1})^{\beta}_{\beta}\right|} \right)t - \frac{1}{2}\left( \hat b^{(1)}_{ij} - \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^{k+1})_{.\,i}\big(\hat b^{(k+2)}_{.\,j}\big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^{k+1})^{\beta}_{\beta}\right|} \right)t^2 + \cdots + \frac{(-1)^{k-1}}{k!}\left( \hat b^{(k-1)}_{ij} - \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^{k+1})_{.\,i}\big(\hat b^{(2k)}_{.\,j}\big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^{k+1})^{\beta}_{\beta}\right|} \right)t^k \tag{101}
\]
for all $i, j = \overline{1,n}$.

Proof. Using the determinantal representation (32) of the identity $A^D A =: (p_{ij})$, we obtain the following determinantal representation of the matrix $A^D A^m B =: (y_{ij})$,

\[
y_{ij} = \sum_{s=1}^{n}\sum_{t=1}^{n} p_{is}\, a^{(m-1)}_{st}\, b_{tj} = \sum_{t=1}^{n} \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^{k+1})_{.\,i}\Big(\sum_{s} a^{(k+1)}_{.\,s}\, a^{(m-1)}_{st}\Big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^{k+1})^{\beta}_{\beta}\right|}\, b_{tj} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^{k+1})_{.\,i}\big(\hat b^{(k+m)}_{.\,j}\big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^{k+1})^{\beta}_{\beta}\right|}
\]
for all $i, j = \overline{1,n}$ and $m = \overline{1,k}$. From this, the determinantal representation of the Drazin inverse solution (87), and the identity (32), (101) follows. $\blacksquare$

Corollary 5.1 If $\operatorname{Ind} A = 1$, then the partial solution of (99),
\[ X(t) = (x_{ij}) = A^g B + (B - A^g AB)\,t, \]
possesses the following determinantal representation

\[
x_{ij} = \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^2)_{.\,i}\big(\hat b^{(1)}_{.\,j}\big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^2)^{\beta}_{\beta}\right|} + \left( b_{ij} - \frac{\sum\limits_{\beta\in J_{r,n}\{i\}} \left|(A^2)_{.\,i}\big(\hat b^{(2)}_{.\,j}\big)\right|^{\beta}_{\beta}}{\sum\limits_{\beta\in J_{r,n}} \left|(A^2)^{\beta}_{\beta}\right|} \right)t \tag{102}
\]
for all $i, j = \overline{1,n}$.

Consider the matrix differential equation

\[ X' + XA = B, \tag{103} \]
where $A \in \mathbb{C}^{n\times n}$ and $B \in \mathbb{C}^{n\times n}$ are given and $X \in \mathbb{C}^{n\times n}$ is unknown. The general solution of (103) is found to be

\[ X(t) = B\left( \int e^{At}\,dt \right) e^{-At}. \]
If $A$ is singular, then an explicit form for the general solution of (103) is

\[ X(t) = B\left\{ A^D + (I - AA^D)\,t\left( I - \frac{A}{2}t + \frac{A^2}{3!}t^2 - \cdots + (-1)^{k-1}\frac{A^{k-1}}{k!}t^{k-1} \right) \right\} + BGe^{-At}. \]
If we put $G = 0$, then we obtain the following partial solution of (103),

\[ X(t) = BA^D + (B - BAA^D)\,t - \frac{1}{2}(BA - BA^2 A^D)\,t^2 + \cdots + \frac{(-1)^{k-1}}{k!}\left( BA^{k-1} - BA^k A^D \right)t^k. \tag{104} \]
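The right-sided polynomial (104) can be checked against $X' + XA = B$ the same way as (100). A minimal sketch (our own, with hypothetical test matrices), reusing the representation $A^D = A^k(A^{2k+1})^{+}A^k$:

```python
import numpy as np
from math import factorial

def drazin(A, k):
    # A^D = A^k (A^(2k+1))^+ A^k, valid for k >= Ind(A)
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])          # Ind A = 2 (hypothetical test data)
B = np.arange(9.0).reshape(3, 3) - np.eye(3)
k = 2
AD = drazin(A, k)

# coefficients of (104): c_0 = B A^D,
# c_l = ((-1)^(l-1)/l!) (B A^(l-1) - B A^l A^D) for l = 1..k
C = [B @ AD] + [
    (-1) ** (l - 1) / factorial(l)
    * (B @ np.linalg.matrix_power(A, l - 1)
       - B @ np.linalg.matrix_power(A, l) @ AD)
    for l in range(1, k + 1)
]
X = lambda t: sum(c * t ** l for l, c in enumerate(C))
dX = lambda t: sum(l * c * t ** (l - 1) for l, c in enumerate(C) if l > 0)

t = 1.3
print(np.linalg.norm(dX(t) + X(t) @ A - B))  # ~ 0 up to rounding
```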

Denote $\check B^{(l)} := BA^l = \big(\check b^{(l)}_{ij}\big) \in \mathbb{C}^{n\times n}$ for all $l = \overline{1,2k}$. Using the determinantal representation of the Drazin inverse solution (89), the group inverse (30) and the identity (31), we evidently obtain the following theorem.

Theorem 5.3 The partial solution (104), $X(t) = (x_{ij})$, possesses the following determinantal representation,
\[
x_{ij} = \frac{\sum\limits_{\alpha\in I_{r,n}\{j\}} \left|(A^{k+1})_{j\,.}\big(\check b^{(k)}_{i\,.}\big)\right|^{\alpha}_{\alpha}}{\sum\limits_{\alpha\in I_{r,n}} \left|(A^{k+1})^{\alpha}_{\alpha}\right|} + \left( b_{ij} - \frac{\sum\limits_{\alpha\in I_{r,n}\{j\}} \left|(A^{k+1})_{j\,.}\big(\check b^{(k+1)}_{i\,.}\big)\right|^{\alpha}_{\alpha}}{\sum\limits_{\alpha\in I_{r,n}} \left|(A^{k+1})^{\alpha}_{\alpha}\right|} \right)t - \frac{1}{2}\left( \check b^{(1)}_{ij} - \frac{\sum\limits_{\alpha\in I_{r,n}\{j\}} \left|(A^{k+1})_{j\,.}\big(\check b^{(k+2)}_{i\,.}\big)\right|^{\alpha}_{\alpha}}{\sum\limits_{\alpha\in I_{r,n}} \left|(A^{k+1})^{\alpha}_{\alpha}\right|} \right)t^2 + \cdots + \frac{(-1)^{k-1}}{k!}\left( \check b^{(k-1)}_{ij} - \frac{\sum\limits_{\alpha\in I_{r,n}\{j\}} \left|(A^{k+1})_{j\,.}\big(\check b^{(2k)}_{i\,.}\big)\right|^{\alpha}_{\alpha}}{\sum\limits_{\alpha\in I_{r,n}} \left|(A^{k+1})^{\alpha}_{\alpha}\right|} \right)t^k
\]
for all $i, j = \overline{1,n}$.

Corollary 5.2 If $\operatorname{Ind} A = 1$, then the partial solution of (103),
\[ X(t) = (x_{ij}) = BA^g + (B - BAA^g)\,t, \]
possesses the following determinantal representation
\[
x_{ij} = \frac{\sum\limits_{\alpha\in I_{r,n}\{j\}} \left|(A^2)_{j\,.}\big(\check b^{(1)}_{i\,.}\big)\right|^{\alpha}_{\alpha}}{\sum\limits_{\alpha\in I_{r,n}} \left|(A^2)^{\alpha}_{\alpha}\right|} + \left( b_{ij} - \frac{\sum\limits_{\alpha\in I_{r,n}\{j\}} \left|(A^2)_{j\,.}\big(\check b^{(2)}_{i\,.}\big)\right|^{\alpha}_{\alpha}}{\sum\limits_{\alpha\in I_{r,n}} \left|(A^2)^{\alpha}_{\alpha}\right|} \right)t
\]
for all $i, j = \overline{1,n}$.

5.1 Example

1. Let us consider the differential matrix equation
\[ X' + AX = B, \tag{105} \]
where
\[ A = \begin{pmatrix} 1 & -1 & 1 \\ i & -i & i \\ -1 & 1 & 2 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & i & 1 \\ i & 0 & 1 \\ 1 & i & 0 \end{pmatrix}. \]
Since $\operatorname{rank} A = \operatorname{rank} A^2 = 2$, then $k = \operatorname{Ind} A = 1$ and $r = 2$, and the Drazin inverse of $A$ is the group inverse. We shall find the partial solution of (105) by (102). We have
\[ A^2 = \begin{pmatrix} -i & i & 3-i \\ 1 & -1 & 1+3i \\ -3+i & 3-i & 3+i \end{pmatrix}, \qquad \hat B^{(1)} = AB = \begin{pmatrix} 2-i & 2i & 0 \\ 1+2i & -2 & 0 \\ 1+i & i & 0 \end{pmatrix}, \qquad \hat B^{(2)} = A^2 B = \begin{pmatrix} 2-2i & 2+3i & 0 \\ 2+2i & -3+2i & 0 \\ 1+5i & -2 & 0 \end{pmatrix}, \]
and
\[ \sum_{\beta\in J_{2,3}} |(A^2)^{\beta}_{\beta}| = \det\begin{pmatrix} -i & i \\ 1 & -1 \end{pmatrix} + \det\begin{pmatrix} -1 & 1+3i \\ 3-i & 3+i \end{pmatrix} + \det\begin{pmatrix} -i & 3-i \\ -3+i & 3+i \end{pmatrix} = 0 + (-9-9i) + (9-9i) = -18i. \]
Since
\[ (A^2)_{.\,1}\big(\hat b^{(1)}_{.\,1}\big) = \begin{pmatrix} 2-i & i & 3-i \\ 1+2i & -1 & 1+3i \\ 1+i & 3-i & 3+i \end{pmatrix}, \qquad (A^2)_{.\,1}\big(\hat b^{(2)}_{.\,1}\big) = \begin{pmatrix} 2-2i & i & 3-i \\ 2+2i & -1 & 1+3i \\ 1+5i & 3-i & 3+i \end{pmatrix}, \]
then finally we obtain

\[ x_{11} = \frac{\sum_{\beta\in J_{2,3}\{1\}} \left|(A^2)_{.\,1}\big(\hat b^{(1)}_{.\,1}\big)\right|^{\beta}_{\beta}}{\sum_{\beta\in J_{2,3}} |(A^2)^{\beta}_{\beta}|} + \left( b_{11} - \frac{\sum_{\beta\in J_{2,3}\{1\}} \left|(A^2)_{.\,1}\big(\hat b^{(2)}_{.\,1}\big)\right|^{\beta}_{\beta}}{\sum_{\beta\in J_{2,3}} |(A^2)^{\beta}_{\beta}|} \right)t = \frac{3-3i}{-18i} + \left( 1 - \frac{-18i}{-18i} \right)t = \frac{1+i}{6}. \]
Similarly,

\[ x_{12} = \frac{-3+3i}{-18i} + \left( i - \frac{9+9i}{-18i} \right)t = \frac{-1-i}{6} + \frac{1+i}{2}\,t, \qquad x_{13} = 0 + (1-0)\,t = t, \]
\[ x_{21} = \frac{3+3i}{-18i} + \left( i - \frac{18}{-18i} \right)t = \frac{-1+i}{6}, \qquad x_{22} = \frac{-3-3i}{-18i} + \left( 0 - \frac{-9+9i}{-18i} \right)t = \frac{1-i}{6} + \frac{1+i}{2}\,t, \]
\[ x_{23} = 0 + (1-0)\,t = t, \qquad x_{31} = \frac{-12i}{-18i} + \left( 1 - \frac{-18i}{-18i} \right)t = \frac{2}{3}, \]
\[ x_{32} = \frac{9+3i}{-18i} + \left( i - \frac{18}{-18i} \right)t = \frac{-1+3i}{6}, \qquad x_{33} = 0 + (0-0)\,t = 0. \]
Then
\[ X = \frac{1}{6}\begin{pmatrix} 1+i & -1-i+(3+3i)t & 6t \\ -1+i & 1-i+(3+3i)t & 6t \\ 4 & -1+3i & 0 \end{pmatrix} \]
is the partial solution of (105).
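The closed form just obtained can be checked numerically: with $A$ of index 1 the group inverse is $A^{\#} = A(A^3)^{+}A$ (the $k=1$ case of the representation $A^D = A^k(A^{2k+1})^{+}A^k$), the partial solution is $X(t) = A^{\#}B + (B - A^{\#}AB)t$, and it should both satisfy (105) and agree with the matrix above. A sketch (the helper names are ours):

```python
import numpy as np

i = 1j
A = np.array([[1, -1, 1], [i, -i, i], [-1, 1, 2]])
B = np.array([[1, i, 1], [i, 0, 1], [1, i, 0]])

# group inverse (Ind A = 1): A^# = A (A^3)^+ A
Ag = A @ np.linalg.pinv(np.linalg.matrix_power(A, 3)) @ A

X = lambda t: Ag @ B + (B - Ag @ A @ B) * t   # partial solution by (102)
dX = B - Ag @ A @ B                           # its derivative is constant

# residual of X' + AX = B at an arbitrary t
t = 0.4
print(np.linalg.norm(dX + A @ X(t) - B))  # ~ 0 up to rounding

# agreement with the closed form displayed above
Xt = lambda t: np.array([[1+i, -1-i+(3+3*i)*t, 6*t],
                         [-1+i, 1-i+(3+3*i)*t, 6*t],
                         [4, -1+3*i, 0]]) / 6
print(np.allclose(X(t), Xt(t)))  # True
```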

References

[1] E. H. Moore, On the reciprocal of the general algebraic matrix, Bulletin of the American Mathematical Society 26(9) (1920) 394–395.

[2] A. Bjerhammar, Application of calculus of matrices to method of least squares; with special references to geodetic calculations, Trans. Roy. Inst. Tech. Stockholm 49 (1951).

[3] R. Penrose, A generalized inverse for matrices. Proc. Camb. Philos. Soc. 51 (1955) 406–413.

[4] M. P. Drazin, Pseudo-inverses in associative rings and semigroups, The American Mathematical Monthly 65(7) (1958) 506–514.

[5] K. M. Prasad, R. B. Bapat, A note of the Khatri inverse, Sankhya: Indian J. Stat. 54 (1992) 291–295.

[6] I. Kyrchei, Analogs of the adjoint matrix for generalized inverses and corresponding Cramer rules, Lin. Multilin. Alg. 56(4) (2008) 453–469.

[7] I. Kyrchei, Analogs of Cramer's rule for the minimum norm least squares solutions of some matrix equations, Appl. Math. Comput. 218 (2012) 6375–6384.

[8] I. Kyrchei, Explicit formulas for determinantal representations of the Drazin inverse solutions of some matrix and differential matrix equations, Appl. Math. Comput. 219 (2013) 1576–1589.

[9] I. Kyrchei, The theory of the column and row determinants in a quaternion linear algebra, Advances in Mathematics Research 15, pp. 301–359, Nova Sci. Publ., New York, 2012.

[10] I. Kyrchei, Cramer's rule for quaternion systems of linear equations, J. Math. Sci. 155(6) (2008) 839–858.

[11] I. Kyrchei, Cramer's rule for some quaternion matrix equations, Appl. Math. Comput. 217(5) (2010) 2024–2030.

[12] I. Kyrchei, Determinantal representation of the Moore-Penrose inverse matrix over the quaternion skew field, J. Math. Sci. 180(1) (2012) 23–33.

[13] I. Kyrchei, Determinantal representations of the Moore-Penrose inverse over the quaternion skew field and corresponding Cramer's rules, Lin. Multilin. Alg. 59(4) (2011) 413–431.

[14] I. Kyrchei, Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations, Linear Algebra Appl. 438(1) (2013) 136–152.

[15] I. Kyrchei, Determinantal representations of the Drazin inverse over the quaternion skew field with applications to some matrix equations, Appl. Math. Comput. 238 (2014) 193–207.

[16] I. Kyrchei, Determinantal representations of the W-weighted Drazin inverse over the quaternion skew field, Appl. Math. Comput. 264 (2015) 453–465.

[17] X. Liu, G. Zhu, G. Zhou, Y. Yu, An analog of the adjugate matrix for the outer inverse $A^{(2)}_{T,S}$, Mathematical Problems in Engineering, Volume 2012, Article ID 591256, 14 pages, doi:10.1155/2012/591256.

[18] R. B. Bapat, K. P. S. Bhaskara, K. M. Prasad, Generalized inverses over integral domains, Linear Algebra Appl. 140 (1990) 181–196.

[19] A. Ben-Israel, Generalized inverses of matrices: a perspective of the work of Penrose, Math. Proc. Camb. Phil. Soc. 100 (1986) 401–425.

[20] R. Gabriel, Das verallgemeinerte Inverse einer Matrix, deren Elemente einem beliebigen Körper angehören, J. Reine Angew. Math. 234 (1967) 107–122.

[21] P. S. Stanimirović, General determinantal representation of pseudoinverses of matrices, Mat. Vesnik 48 (1996) 1–9.

[22] A. Ben-Israel, On matrices of index zero or one, SIAM J. Appl. Math. 17 (1969) 1118–1121.

[23] R. A. Horn, C. R. Johnson, Matrix analysis. Cambridge etc., Cambridge University Press, 1985.

[24] P. Lancaster, M. Tismenetsky, Theory of Matrices, Acad. Press, New York, 1969.

[25] C.F. Van Loan, Generalizing the singular value decomposition, SIAM J. Numer. Anal. 13 (1976) 76–83.

[26] Y. Wei, H. Wu, The representation and approximation for the weighted Moore-Penrose inverse, Appl. Math. Comput. 121 (2001) 17–28.

[27] Stephen L. Campbell, Carl D. Meyer Jr., Weak Drazin inverses, Linear Algebra Appl. 20(2) (1978) 167–178.

[28] P. S. Stanimirović, D. S. Djordjević, Full-rank and determinantal representation of the Drazin inverse, Linear Algebra Appl. 311 (2000) 131–151.

[29] Carl D. Meyer Jr., Limits and the index of a square matrix, SIAM J. Appl. Math. 26(3) (1974) 506–515.

[30] R. E. Cline, T. N. E. Greville, A Drazin inverse for rectangular matrices, Linear Algebra Appl. 29 (1980) 53–62.

[31] Y. Wei, Integral representation of the W-weighted Drazin inverse, Appl. Math. Comput. 144 (2003) 3–10.

[32] Y. Wei, C.-W. Woo, T. Lei, A note on the perturbation of the W-weighted Drazin inverse, Appl. Math. Comput. 149 (2004) 423–430.

[33] Y. Wei, A Characterization for the W -weighted Drazin inverse and a Cramer rule for the W - weighted Drazin inverse solution, Appl. Math. Comput. 125 (2002) 303–310.

[34] Z. Al-Zhour, A. Kılıçman, M. H. Abu Hassa, New representations for weighted Drazin inverse of matrices, Int. Journal of Math. Analysis 1(15) (2007) 697–708.

[35] S.M. Robinson, A short proof of Cramer’s rule, Math. Mag. 43 (1970) 94–95.

[36] A. Ben-Israel, A Cramer rule for least-norm solutions of consistent lin- ear equations, Linear Algebra Appl. 43 (1982) 223–226.

[37] G.C. Verghese, A Cramer rule for least-norm least-square-error solution of inconsistent linear equations, Linear Algebra Appl. 48 (1982) 315–316.

[38] H. J. Werner, On extension of Cramer’s rule for solutions of restricted linear systems, Lin. Multilin. Alg. 15 (1984) 319–330.

[39] Y. Chen, A Cramer rule for solution of the general restricted linear equation, Lin. Multilin. Alg. 34 (1993) 177–186.

[40] J. Ji, Explicit expressions of the generalized inverses and condensed Cramer rules, Linear Algebra Appl. 404 (2005) 183–192.

[41] G. Wang, A Cramer rule for minimum-norm (T) least-squares (S) so- lution of inconsistent linear equations, Linear Algebra Appl. 74 (1986) 213–218.

[42] G. Wang, A Cramer rule for finding the solution of a class of singular equations, Linear Algebra Appl. 116 (1989) 27–34.

[43] G. Chen, X. Chen, A new splitting for singular linear system and Drazin inverse. J. East China Norm. Univ. Natur. Sci. Ed., 3 (1996) 12–18.

[44] A. Sidi, A unified approach to Krylov subspace methods for the Drazin- inverse solution of singular nonsymmetric linear systems, Linear Algebra Appl. 298 (1999) 99–113.

[45] Y. M. Wei, H. B. Wu, Additional results on index splittings for Drazin inverse solutions of singular linear systems, The Electronic Journal of Linear Algebra 8 (2001) 83–93.

[46] P. S. Stanimirović, A representation of the minimal P-norm solution, Novi Sad J. Math. 3(1) (2000) 177–183.

[47] H. Dai, On the symmetric solution of linear matrix equation, Linear Algebra Appl. 131 (1990) 1–7.

[48] M. Dehghan, M. Hajarian, The reflexive and anti-reflexive solutions of a linear matrix equation and systems of matrix equations, Rocky Mountain J. Math. 40 (2010) 825–848.

[49] F.J. Henk Don, On the symmetric solutions of a linear matrix equation, Linear Algebra Appl. 93 (1987) 1–7.

[50] Z.Y. Peng and X.Y. Hu, The generalized reflexive solutions of the ma- trix equations AX = D and AXB = D, Numer. Math. 5 (2003) 94–98.

[51] W.J. Vetter, Vector structures and solutions of linear matrix equations, Linear Algebra Appl. 9 (1975) 181–188.

[52] C.G. Khatri, S.K. Mitra, Hermitian and nonnegative definite solutions of linear matrix equations, SIAM J. Appl. Math. 31 (1976) 578–585.

[53] Q.W. Wang, Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations, Comput. Math. Appl. 49 (2005) 641–650.

[54] Q.W. Wang, The general solution to a system of real quaternion matrix equations, Comput. Math. Appl. 49 (56) (2005) 665–675.

[55] Y. Li, W. Wu, Symmetric and skew-antisymmetric solutions to systems of real quaternion matrix equations, Comput. Math. Appl. 55 (2008) 1142–1147.

[56] Y. H. Liu, Ranks of least squares solutions of the matrix equation AXB=C, Comput. Math. Appl. 55 (2008) 1270–1278.

[57] Q.W. Wang, S. W. Yu, Extreme ranks of real matrices in solution of the quaternion matrix equation AXB=C with applications, Algebra Colloquium, 17 (2) (2010) 345–360.

[58] G. Wang, S. Qiao, Solving constrained matrix equations and Cramer rule, Appl. Math. Comput. 159 (2004) 333–340.

[59] Y. Qiu, A. Wang, Least-squares solutions to the equations AX = B, XC = D with some constraints, Appl. Math. Comput. 204 (2008) 872–880.

[60] Y. Yuan, Least-squares solutions to the matrix equations AX = B and XC = D, Appl. Math. Comput. 216 (2010) 3120–3125.

[61] G. Wang, Z. Xu, Solving a kind of restricted matrix equations and Cramer rule, Appl. Math. Comput. 162 (2005) 329–338.

[62] C. Gu, G. Wang, Condensed Cramer rule for solving restricted matrix equations, Appl. Math. Comput. 183 (2006) 301–306.

[63] G.J. Song, Q.W. Wang, H.X. Chang, Cramer rule for the unique solu- tion of restricted matrix equations over the quaternion skew field, Com- put. Math. Appl. 61 (2011) 1576–1589.

[64] Z. Xu, G. Wang, On extensions of Cramer's rule for solutions of restricted matrix equations, Journal of Lanzhou University 42(3) (2006) 96–100.

[65] S. L. Campbell, C. D. Meyer Jr., N. J. Rose, Applications of the Drazin inverse to linear systems of differential equations with singular constant coefficients, SIAM J. Appl. Math. 31 (1976) 411–425.
