Linear Algebra and its Applications 432 (2010) 1531–1552

The general coupled matrix equations over generalized bisymmetric matrices

Mehdi Dehghan∗, Masoud Hajarian

Department of Applied Mathematics, Faculty of Mathematics and Computer Science, Amirkabir University of Technology, No. 424, Hafez Avenue, Tehran 15914, Iran

ARTICLE INFO

Article history: Received 20 May 2009; Accepted 12 November 2009; Available online 16 December 2009

Submitted by S.M. Rump

AMS classification: 15A24; 65F10; 65F30

Keywords: Iterative method; Least Frobenius norm solution group; Optimal approximation generalized bisymmetric solution group; Generalized bisymmetric matrix; General coupled matrix equations

ABSTRACT

In the present paper, by extending the idea of the conjugate gradient (CG) method, we construct an iterative method to solve the general coupled matrix equations
$$\sum_{j=1}^{p} A_{ij}X_jB_{ij} = M_i, \quad i = 1, 2, \ldots, p$$
(including the generalized (coupled) Lyapunov and Sylvester matrix equations as special cases) over the generalized bisymmetric matrix group $(X_1, X_2, \ldots, X_p)$. By using the iterative method, the solvability of the general coupled matrix equations over a generalized bisymmetric matrix group can be determined in the absence of roundoff errors. When the general coupled matrix equations are consistent over generalized bisymmetric matrices, a generalized bisymmetric solution group can be obtained within finitely many iteration steps in the absence of roundoff errors. The least Frobenius norm generalized bisymmetric solution group of the general coupled matrix equations can be derived when an appropriate initial iterative matrix group is chosen. In addition, the optimal approximation generalized bisymmetric solution group to a given matrix group $(\bar X_1, \bar X_2, \ldots, \bar X_p)$ in the Frobenius norm can be obtained by finding the least Frobenius norm generalized bisymmetric solution group of new general coupled matrix equations. The numerical results indicate that the iterative method works quite well in practice. © 2009 Elsevier Inc. All rights reserved.

∗ Corresponding author. E-mail addresses: [email protected], [email protected] (M. Dehghan); [email protected], [email protected] (M. Hajarian).

0024-3795/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.laa.2009.11.014

1. Introduction

First some symbols and notations are introduced. Let $\mathbb{R}^{m\times n}$ be the set of all $m \times n$ real matrices and $SOR^{n\times n}$ be the set of all symmetric orthogonal matrices in $\mathbb{R}^{n\times n}$. We denote by $I_n$ the $n \times n$ identity matrix; we also write it as $I$ when the dimension is clear from the context. The symbols $A^T$, $\operatorname{tr}(A)$ and $R(A)$ stand for the transpose, the trace and the column space of the matrix $A$, respectively. $\langle A, B\rangle = \operatorname{tr}(B^TA)$ is defined as the inner product of two matrices, which generates the Frobenius norm, i.e. $\|A\|_F^2 = \langle A, A\rangle = \operatorname{tr}(A^TA)$. The Kronecker product of two matrices $A$ and $B$ is denoted by $A \otimes B$. The stretching function $\operatorname{vec}(A)$ is defined as $\operatorname{vec}(A) = (a_1^T\ a_2^T\ \ldots\ a_m^T)^T$, where $a_k$ is the $k$th column of $A$. The generalized bisymmetric matrices, which have wide applications in many fields, can be defined as follows:

Definition 1.1. For an arbitrary given matrix $R \in SOR^{n\times n}$, i.e., $R = R^T = R^{-1}$, we say that the matrix $A \in \mathbb{R}^{n\times n}$ is a generalized bisymmetric matrix with respect to $R$ if $RAR = A = A^T$. The set of order-$n$ generalized bisymmetric matrices with respect to $R$ is denoted by $BSR_R^{n\times n}$. It is obvious that any symmetric matrix is also a generalized bisymmetric matrix with respect to $I$.
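To make the definition concrete, here is a minimal Python/NumPy sketch (our own illustration, not part of the original paper; the function name and tolerance are assumptions) that checks the two conditions $RAR = A = A^T$ numerically:

```python
import numpy as np

def is_generalized_bisymmetric(A, R, tol=1e-12):
    """Check RAR = A = A^T for a given symmetric orthogonal R (R = R^T = R^{-1})."""
    symmetric = np.allclose(A, A.T, atol=tol)          # A = A^T
    r_invariant = np.allclose(R @ A @ R, A, atol=tol)  # RAR = A
    return symmetric and r_invariant

# With R = I, every symmetric matrix is generalized bisymmetric.
n = 4
A = np.random.rand(n, n)
A = A + A.T   # symmetrize
print(is_generalized_bisymmetric(A, np.eye(n)))   # True
```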

In the literature, the problem of determining solutions to several linear matrix equations has been widely studied [10,14,22–24,27–30,33,35,43,44]. In [4], the symmetric solution of the linear matrix equation

$$AX = B, \tag{1.1}$$
was considered using the singular-value, generalized singular-value, real Schur, and real generalized Schur decompositions. By applying a formula for the partitioned minimum-norm reflexive generalized inverse, Don [5] presented the general symmetric solution $X$ of the matrix equation (1.1). By using the singular-value decomposition and the generalized singular-value decomposition, Dai [2] proposed the necessary and sufficient conditions for the consistency of the two matrix equations

$$AX = C \quad\text{and}\quad AXB = C, \tag{1.2}$$
with a symmetric condition on the solutions. In [1], the necessary and sufficient conditions for the existence of, and the expressions for, the symmetric solutions of the matrix equations

$$AX + YA = C, \tag{1.3}$$
$$AXA^T + BYB^T = C, \tag{1.4}$$
and
$$(A^TXA,\ B^TXB) = (C, D), \tag{1.5}$$
were derived. Many problems in systems and control theory require the solution of the matrix equation $AX - YB = C$. Baksalary and Kala [6] presented a condition for the existence of a solution and derived a formula for the general solution of the matrix equation

$$AXB + CYD = E, \tag{1.6}$$
where $A$, $B$, $C$, $D$ and $E$ are given matrices of suitable sizes defined over the real number field. Dehghan and Hajarian proposed an iterative method for solving a pair of matrix equations $AYB = E$ and $CYD = F$ over generalized centro-symmetric matrices [7], developed finite iterative algorithms for the reflexive and anti-reflexive solutions of (coupled) Sylvester matrix equations [8,9,13], and studied a lower bound for the product of eigenvalues of solutions to a class of linear matrix equations [12]. In [26], the authors presented the necessary and sufficient conditions for the existence of constant solutions with bi(skew)symmetric constraints to the matrix equations

$$A_iX - YB_i = C_i, \quad i = 1, 2, \ldots, s, \tag{1.7}$$
and

$$A_iXB_i - C_iYD_i = E_i, \quad i = 1, 2, \ldots, s. \tag{1.8}$$
In [31,32], Wang investigated the centro-symmetric solution to the system of quaternion matrix equations

$$A_1X = C_1, \quad A_3XB_3 = C_3. \tag{1.9}$$
In [36], the maximal and minimal ranks and the least-norm of the general solution to the system

$$A_1X_1 = C_1, \quad A_2X_2 = C_2, \quad A_3X_1B_1 + A_4X_2B_2 = C_3, \tag{1.10}$$
over a quaternion algebra were derived. When the solvability conditions of the system of linear real quaternion matrix equations

$$A_1X_1 = C_1, \quad X_1B_1 = C_2, \quad A_2X_2 = C_3, \quad X_2B_2 = C_4, \quad A_3X_1B_3 + A_4X_2B_4 = C_5, \tag{1.11}$$
are satisfied, Wang et al. [36] presented some necessary and sufficient conditions for the existence of a solution to this system and gave an expression of the general solution to the system. It is well known that Sylvester and Lyapunov matrix equations are important equations which play a fundamental role in various fields of engineering theory, particularly in control systems. The numerical solution of Sylvester and Lyapunov matrix equations has been addressed in a large body of literature. Dehghan and Hajarian [11] proposed an efficient iterative method for solving the second-order Sylvester matrix equation
$$EVF^2 - AVF - CV = BW. \tag{1.12}$$
Zhou and Duan [38,39,41] established the solutions of several generalized Sylvester matrix equations. Zhou et al. [40] proposed gradient-based iterative algorithms for solving the general coupled Sylvester matrix equations with weighted least squares solutions. In [42], a general parametric solution to a family of generalized Sylvester matrix equations arising in linear system theory is presented by using the so-called generalized Sylvester mapping, which has some elegant properties. Iterative algorithms are commonly employed to solve large linear matrix equations. By extending the well-known Jacobi and Gauss–Seidel iterations for $Ax = b$, Ding et al. [20] derived iterative solutions of the matrix equation $AXB = F$ and the generalized Sylvester matrix equation $AXB + CXD = F$. In [15,16,20], iterative methods for solving (coupled) matrix equations are given which are based on the hierarchical identification principle [17,18]. The gradient-based iterative (GI) algorithms [15,20] and the least squares based iterative algorithm [16] for solving (coupled) matrix equations are innovative and computationally efficient numerical algorithms; they were presented based on the hierarchical identification principle [17,18], which regards the unknown matrix as the system parameter matrix to be identified. Also Ding and Chen [19], applying the gradient search principle and the hierarchical identification principle, presented gradient-based iterative algorithms for the general coupled matrix equations
$$\sum_{j=1}^{p} A_{ij}X_jB_{ij} = M_i, \quad i = 1, 2, \ldots, p. \tag{1.13}$$
We would like to comment that the coupled matrix equations (1.13) are quite general and include many matrix equations, such as the generalized (coupled) Lyapunov and Sylvester matrix equations, as special cases. However, the above-mentioned algorithms cannot obtain the generalized bisymmetric solution group of the coupled matrix equations (1.13). In this paper, we mainly consider the following problems:

Problem 1.1. For given matrices $A_{ij} \in \mathbb{R}^{r_i\times n_j}$, $B_{ij} \in \mathbb{R}^{n_j\times s_i}$, $M_i \in \mathbb{R}^{r_i\times s_i}$ and the symmetric orthogonal matrices $R_j \in SOR^{n_j\times n_j}$, find the generalized bisymmetric matrix group $(X_1, X_2, \ldots, X_p)$ with $X_j \in BSR_{R_j}^{n_j\times n_j}$, $j = 1, 2, \ldots, p$, such that
$$\sum_{j=1}^{p} A_{ij}X_jB_{ij} = M_i, \quad i = 1, 2, \ldots, p.$$

Problem 1.2. Let Problem 1.1 be consistent, and let its solution group set be denoted by $S_r$. For a given generalized bisymmetric matrix group $(\bar X_1, \bar X_2, \ldots, \bar X_p)$ with $\bar X_j \in BSR_{R_j}^{n_j\times n_j}$, $j = 1, 2, \ldots, p$, find $(\hat X_1, \hat X_2, \ldots, \hat X_p) \in S_r$ with $\hat X_j \in BSR_{R_j}^{n_j\times n_j}$, $j = 1, 2, \ldots, p$, such that
$$\sum_{j=1}^{p}\|\hat X_j - \bar X_j\|_F^2 = \min_{(X_1, X_2, \ldots, X_p)\in S_r}\left\{\sum_{j=1}^{p}\|X_j - \bar X_j\|_F^2\right\}. \tag{1.14}$$

The rest of the paper is structured as follows. In Section 2, we first introduce an iterative method for solving Problem 1.1. Then we show that the introduced algorithm can obtain a generalized bisymmetric solution group (the minimal Frobenius norm generalized bisymmetric solution group) for any (special) initial matrix group within finitely many steps; we also solve Problem 1.2 there. Finally, we give two numerical examples in Section 3 to verify our results.

2. Main results

In this section, by extending the idea of the conjugate gradient method, we propose an iterative method to compute the generalized bisymmetric solution group of the matrix equations (1.13). Then some properties of this algorithm are discussed. By using these properties, we present the convergence results for the algorithm. In order to explain the method, we first give the following lemma.

Lemma 2.1. The coupled matrix equations (1.13) have a generalized bisymmetric solution group if and only if the system of matrix equations
$$\begin{cases} \sum_{j=1}^{p} A_{ij}X_jB_{ij} = M_i, & i = 1, 2, \ldots, p,\\ \sum_{j=1}^{p} B_{ij}^TX_jA_{ij}^T = M_i^T, & i = 1, 2, \ldots, p,\\ \sum_{j=1}^{p} A_{ij}R_jX_jR_jB_{ij} = M_i, & i = 1, 2, \ldots, p,\\ \sum_{j=1}^{p} B_{ij}^TR_jX_jR_jA_{ij}^T = M_i^T, & i = 1, 2, \ldots, p, \end{cases} \tag{2.1}$$
is consistent.

Proof. If the coupled matrix equations (1.13) have a generalized bisymmetric solution group $(X_1^*, X_2^*, \ldots, X_p^*)$ with $X_j^* \in BSR_{R_j}^{n_j\times n_j}$, i.e., $X_j^* = X_j^{*T} = R_jX_j^*R_j$ for $j = 1, 2, \ldots, p$, then we get
$$\sum_{j=1}^{p} B_{ij}^TX_j^*A_{ij}^T = \left(\sum_{j=1}^{p} A_{ij}X_j^{*T}B_{ij}\right)^{\!T} = \left(\sum_{j=1}^{p} A_{ij}X_j^*B_{ij}\right)^{\!T} = M_i^T, \tag{2.2}$$

$$\sum_{j=1}^{p} A_{ij}R_jX_j^*R_jB_{ij} = \sum_{j=1}^{p} A_{ij}X_j^*B_{ij} = M_i, \tag{2.3}$$
and
$$\sum_{j=1}^{p} B_{ij}^TR_jX_j^*R_jA_{ij}^T = \left(\sum_{j=1}^{p} A_{ij}R_jX_j^{*T}R_jB_{ij}\right)^{\!T} = \left(\sum_{j=1}^{p} A_{ij}X_j^*B_{ij}\right)^{\!T} = M_i^T, \tag{2.4}$$
for $i = 1, 2, \ldots, p$. By combining the equalities (2.2)–(2.4), we obtain that the generalized bisymmetric matrix group $(X_1^*, X_2^*, \ldots, X_p^*)$ is a solution group of the system of matrix equations (2.1). Conversely, suppose that the system of matrix equations (2.1) is consistent and $(X_1, X_2, \ldots, X_p)$ is a solution group of (2.1). Define

$$Y_j = \frac{X_j + R_jX_jR_j + X_j^T + R_jX_j^TR_j}{4} \quad\text{for } j = 1, 2, \ldots, p. \tag{2.5}$$
Obviously $Y_j \in BSR_{R_j}^{n_j\times n_j}$ for $j = 1, 2, \ldots, p$. Now we can write
$$\begin{aligned} \sum_{j=1}^{p} A_{ij}Y_jB_{ij} &= \sum_{j=1}^{p} A_{ij}\,\frac{X_j + R_jX_jR_j + X_j^T + R_jX_j^TR_j}{4}\,B_{ij}\\ &= \frac{1}{4}\left[\sum_{j=1}^{p} A_{ij}X_jB_{ij} + \sum_{j=1}^{p} A_{ij}R_jX_jR_jB_{ij} + \sum_{j=1}^{p} A_{ij}X_j^TB_{ij} + \sum_{j=1}^{p} A_{ij}R_jX_j^TR_jB_{ij}\right]\\ &= \frac{1}{4}\left[\sum_{j=1}^{p} A_{ij}X_jB_{ij} + \sum_{j=1}^{p} A_{ij}R_jX_jR_jB_{ij} + \left(\sum_{j=1}^{p} B_{ij}^TX_jA_{ij}^T\right)^{\!T} + \left(\sum_{j=1}^{p} B_{ij}^TR_jX_jR_jA_{ij}^T\right)^{\!T}\right]\\ &= \frac{1}{4}\left[M_i + M_i + (M_i^T)^T + (M_i^T)^T\right] = M_i, \tag{2.6} \end{aligned}$$
for $i = 1, 2, \ldots, p$. Hence $(Y_1, Y_2, \ldots, Y_p)$ is a solution group of the coupled matrix equations (1.13). The proof is completed. □
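The averaging formula (2.5) is the natural projection of an arbitrary matrix onto $BSR_R^{n\times n}$; the following short Python sketch (our own illustration, with a hypothetical helper name) computes $Y$ as in (2.5) and verifies that the result lands in the set and that the map is idempotent:

```python
import numpy as np

def project_bisymmetric(X, R):
    """Return (X + R X R + X^T + R X^T R) / 4, cf. (2.5); for R = R^T = R^{-1}
    the result satisfies Y = Y^T = R Y R."""
    return (X + R @ X @ R + X.T + R @ X.T @ R) / 4.0

n = 5
R = np.diag([1.0, 1.0, -1.0, 1.0, 1.0])   # a symmetric orthogonal matrix
X = np.random.rand(n, n)
Y = project_bisymmetric(X, R)
assert np.allclose(Y, Y.T) and np.allclose(R @ Y @ R, Y)
assert np.allclose(project_bisymmetric(Y, R), Y)   # idempotent
```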

By considering Lemma 2.1, the solvability of the system of matrix equations (2.1) is equivalent to that of Problem 1.1. Also the system of matrix equations (2.1) is equivalent to the following linear system:
$$\begin{pmatrix} B_{11}^T\otimes A_{11} & B_{12}^T\otimes A_{12} & \ldots & B_{1p}^T\otimes A_{1p}\\ \vdots & \vdots & & \vdots\\ B_{p1}^T\otimes A_{p1} & B_{p2}^T\otimes A_{p2} & \ldots & B_{pp}^T\otimes A_{pp}\\ A_{11}\otimes B_{11}^T & A_{12}\otimes B_{12}^T & \ldots & A_{1p}\otimes B_{1p}^T\\ \vdots & \vdots & & \vdots\\ A_{p1}\otimes B_{p1}^T & A_{p2}\otimes B_{p2}^T & \ldots & A_{pp}\otimes B_{pp}^T\\ B_{11}^TR_1\otimes A_{11}R_1 & B_{12}^TR_2\otimes A_{12}R_2 & \ldots & B_{1p}^TR_p\otimes A_{1p}R_p\\ \vdots & \vdots & & \vdots\\ B_{p1}^TR_1\otimes A_{p1}R_1 & B_{p2}^TR_2\otimes A_{p2}R_2 & \ldots & B_{pp}^TR_p\otimes A_{pp}R_p\\ A_{11}R_1\otimes B_{11}^TR_1 & A_{12}R_2\otimes B_{12}^TR_2 & \ldots & A_{1p}R_p\otimes B_{1p}^TR_p\\ \vdots & \vdots & & \vdots\\ A_{p1}R_1\otimes B_{p1}^TR_1 & A_{p2}R_2\otimes B_{p2}^TR_2 & \ldots & A_{pp}R_p\otimes B_{pp}^TR_p \end{pmatrix}\begin{pmatrix} \operatorname{vec}(X_1)\\ \operatorname{vec}(X_2)\\ \vdots\\ \operatorname{vec}(X_p) \end{pmatrix} = \begin{pmatrix} \operatorname{vec}(M_1)\\ \vdots\\ \operatorname{vec}(M_p)\\ \operatorname{vec}(M_1^T)\\ \vdots\\ \operatorname{vec}(M_p^T)\\ \operatorname{vec}(M_1)\\ \vdots\\ \operatorname{vec}(M_p)\\ \operatorname{vec}(M_1^T)\\ \vdots\\ \operatorname{vec}(M_p^T) \end{pmatrix}. \tag{2.7}$$
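For small problem sizes, the system (2.7) can be assembled explicitly with Kronecker products via the identity $\operatorname{vec}(AXB) = (B^T\otimes A)\operatorname{vec}(X)$. The sketch below (our own illustration; the function name and data layout are assumptions) builds the coefficient matrix and right-hand side from nested lists `A[i][j]`, `B[i][j]`, `R[j]`, `M[i]`:

```python
import numpy as np

def build_kron_system(A, B, R, M):
    """Assemble the coefficient matrix and right-hand side of (2.7).
    A, B: p-by-p nested lists of matrices; R: list of p symmetric orthogonal
    matrices; M: list of p right-hand-side matrices. vec is column-major."""
    p = len(M)
    rows, rhs = [], []
    for i in range(p):   # sum_j A_ij X_j B_ij = M_i
        rows.append(np.hstack([np.kron(B[i][j].T, A[i][j]) for j in range(p)]))
        rhs.append(M[i].flatten(order='F'))
    for i in range(p):   # sum_j B_ij^T X_j A_ij^T = M_i^T
        rows.append(np.hstack([np.kron(A[i][j], B[i][j].T) for j in range(p)]))
        rhs.append(M[i].T.flatten(order='F'))
    for i in range(p):   # sum_j A_ij R_j X_j R_j B_ij = M_i
        rows.append(np.hstack([np.kron(B[i][j].T @ R[j], A[i][j] @ R[j])
                               for j in range(p)]))
        rhs.append(M[i].flatten(order='F'))
    for i in range(p):   # sum_j B_ij^T R_j X_j R_j A_ij^T = M_i^T
        rows.append(np.hstack([np.kron(A[i][j] @ R[j], B[i][j].T @ R[j])
                               for j in range(p)]))
        rhs.append(M[i].T.flatten(order='F'))
    return np.vstack(rows), np.concatenate(rhs)
```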

The size of the linear system (2.7) is large, so solving it consumes considerable computer time and memory. To overcome these complications and drawbacks, by extending the idea of the conjugate gradient method, we propose an iterative method that solves (1.13) directly in matrix form. Consider the linear algebraic system of equations

$$Ax = b, \tag{2.8}$$

where $A = (a_{ij}) \in \mathbb{R}^{n\times n}$ denotes the coefficient matrix, $b = (b_i) \in \mathbb{R}^n$ the right-hand side vector and $x = (x_i) \in \mathbb{R}^n$ the unknown vector, respectively. For solving systems of linear algebraic equations, the conjugate gradient method reads as follows:

Conjugate gradient algorithm

1. Select $x(0) \in \mathbb{R}^n$ and set $r(0) = b - Ax(0)$; set $\alpha(0) = \|r(0)\|^2$ and $d(0) = r(0)$;
2. for $k = 0, 1, \ldots$ until convergence do:
3. $s(k) = Ad(k)$;
4. $t(k) = \alpha(k)/(d(k)^Ts(k))$;
   $x(k+1) = x(k) + t(k)d(k)$;
   $r(k+1) = r(k) - t(k)s(k)$;
   $\alpha(k+1) = \|r(k+1)\|^2$;
   $\beta(k+1) = \|r(k+1)\|^2/\|r(k)\|^2$;
   $d(k+1) = r(k+1) + \beta(k+1)d(k)$;
5. end for.

If $A$ is a symmetric positive definite matrix, then any method which employs conjugate directions to solve (2.8) terminates after at most $n$ steps, yielding the exact solution [21,25]. In general, the CG method cannot guarantee that $x(k)$ converges to the exact solution $x = A^{-1}b$. Moreover, this method is not suitable for solving the non-square system $Bx = c$ with $B \in \mathbb{R}^{m\times n}$. This motivates us to study new iterative methods. It is obvious that the CG method can be represented as
$$x(k+1) = x(k) + t(k)d(k), \tag{2.9}$$
where the parameter $t(k)$ and the vector $d(k)$ are to be determined.
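For reference, a direct transcription of the above listing into Python/NumPy might look as follows (our own illustration; the stopping tolerance is an assumption):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-12, max_iter=None):
    """Plain CG for a symmetric positive definite A, following the listing above."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                  # r(0) = b - A x(0)
    d = r.copy()                   # d(0) = r(0)
    alpha = r @ r                  # alpha(0) = ||r(0)||^2
    max_iter = n if max_iter is None else max_iter
    for _ in range(max_iter):
        if np.sqrt(alpha) < tol:   # convergence test on ||r(k)||
            break
        s = A @ d                  # s(k) = A d(k)
        t = alpha / (d @ s)        # t(k) = alpha(k) / (d(k)^T s(k))
        x = x + t * d              # x(k+1)
        r = r - t * s              # r(k+1)
        alpha_new = r @ r          # alpha(k+1) = ||r(k+1)||^2
        d = r + (alpha_new / alpha) * d   # beta(k+1) = alpha(k+1)/alpha(k)
        alpha = alpha_new
    return x
```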

Now we decompose the general coupled matrix equations (1.13) into $p$ subsystems and consider a large family of CG-type iterative methods for the subsystems of the form
$$X_i(k+1) = X_i(k) + T_i(k)D_i(k) \quad\text{for } i = 1, 2, \ldots, p, \tag{2.10}$$
where the parameter $T_i(k)$ and the matrix $D_i(k)$ (for $i = 1, 2, \ldots, p$) are to be determined. According to (2.10), we propose an iterative algorithm for solving Problem 1.1. The details of the algorithm are given as follows.

Algorithm 1

Step 1. Input the matrices $A_{ij} \in \mathbb{R}^{r_i\times n_j}$, $B_{ij} \in \mathbb{R}^{n_j\times s_i}$, $M_i \in \mathbb{R}^{r_i\times s_i}$ and the symmetric orthogonal matrices $R_j \in SOR^{n_j\times n_j}$;

Step 2. Choose arbitrary $X_j(1) \in BSR_{R_j}^{n_j\times n_j}$ for $j = 1, 2, \ldots, p$;

Step 3. Calculate
$$R(1) = \operatorname{diag}\!\left(M_1 - \sum_{t=1}^{p}A_{1t}X_t(1)B_{1t},\ M_2 - \sum_{t=1}^{p}A_{2t}X_t(1)B_{2t},\ \ldots,\ M_p - \sum_{t=1}^{p}A_{pt}X_t(1)B_{pt}\right);$$
$$\begin{aligned} P_i(1) = \frac{1}{4}\Bigg\{&\sum_{s=1}^{p}A_{si}^T\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(1)B_{st}\right)\!B_{si}^T + \sum_{s=1}^{p}B_{si}\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(1)B_{st}\right)^{\!T}\!A_{si}\\ &+ \sum_{s=1}^{p}R_iA_{si}^T\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(1)B_{st}\right)\!B_{si}^TR_i + \sum_{s=1}^{p}R_iB_{si}\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(1)B_{st}\right)^{\!T}\!A_{si}R_i\Bigg\}, \quad i = 1, 2, \ldots, p; \end{aligned}$$
$k := 1$;

Step 4. If $R(k) = 0$, then stop: $(X_1(k), X_2(k), \ldots, X_p(k))$ is a generalized bisymmetric solution group. Else, if $R(k) \neq 0$ but $P_m(k) = 0$ for all $m = 1, 2, \ldots, p$, then stop: the general coupled matrix equations (1.13) are not consistent over generalized bisymmetric matrix groups. Else set $k := k + 1$;

Step 5. Calculate
$$X_i(k) = X_i(k-1) + \frac{\|R(k-1)\|_F^2}{\sum_{t=1}^{p}\|P_t(k-1)\|_F^2}\,P_i(k-1), \quad i = 1, 2, \ldots, p;$$
$$\begin{aligned} R(k) &= \operatorname{diag}\!\left(M_1 - \sum_{t=1}^{p}A_{1t}X_t(k)B_{1t},\ M_2 - \sum_{t=1}^{p}A_{2t}X_t(k)B_{2t},\ \ldots,\ M_p - \sum_{t=1}^{p}A_{pt}X_t(k)B_{pt}\right)\\ &= R(k-1) - \frac{\|R(k-1)\|_F^2}{\sum_{t=1}^{p}\|P_t(k-1)\|_F^2}\,\operatorname{diag}\!\left(\sum_{t=1}^{p}A_{1t}P_t(k-1)B_{1t},\ \sum_{t=1}^{p}A_{2t}P_t(k-1)B_{2t},\ \ldots,\ \sum_{t=1}^{p}A_{pt}P_t(k-1)B_{pt}\right); \end{aligned}$$
$$\begin{aligned} P_i(k) = \frac{1}{4}\Bigg[&\sum_{s=1}^{p}A_{si}^T\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(k)B_{st}\right)\!B_{si}^T + \sum_{s=1}^{p}B_{si}\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(k)B_{st}\right)^{\!T}\!A_{si}\\ &+ \sum_{s=1}^{p}R_iA_{si}^T\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(k)B_{st}\right)\!B_{si}^TR_i + \sum_{s=1}^{p}R_iB_{si}\!\left(M_s - \sum_{t=1}^{p}A_{st}X_t(k)B_{st}\right)^{\!T}\!A_{si}R_i\Bigg]\\ &+ \frac{\|R(k)\|_F^2}{\|R(k-1)\|_F^2}\,P_i(k-1), \quad i = 1, 2, \ldots, p; \end{aligned}$$

Step 6. Go to Step 4.

Remark 2.1. The matrices $X_j(k)$ and $P_j(k)$ ($j = 1, 2, \ldots, p$) generated by Algorithm 1 are generalized bisymmetric matrices with respect to $R_j$, i.e.,
$$X_j(k) = X_j^T(k) = R_jX_j(k)R_j \quad\text{and}\quad P_j(k) = P_j^T(k) = R_jP_j(k)R_j, \quad\text{for } k = 1, 2, \ldots,\ j = 1, 2, \ldots, p. \tag{2.11}$$

Remark 2.2. Because of the influence of roundoff errors, the residual $R(k)$ ($k = 1, 2, \ldots$) is usually not exactly equal to zero in the process of the iteration. We regard the matrix $R(k)$ as a zero matrix if $\|R(k)\|_F < \varepsilon$, where $\varepsilon$ is a small positive number. In Algorithm 1, the iteration is stopped whenever $\|R(k)\|_F < \varepsilon$.
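To illustrate how the steps of Algorithm 1 fit together (the paper's own experiments were run in MATLAB; the following Python/NumPy transcription and its helper names are ours, a sketch rather than the authors' code), here is a compact implementation that uses the stopping test of Remark 2.2:

```python
import numpy as np

def residual_blocks(A, B, M, X):
    """Blocks M_i - sum_t A_it X_t B_it of the block-diagonal residual R(k)."""
    p = len(M)
    return [M[i] - sum(A[i][t] @ X[t] @ B[i][t] for t in range(p))
            for i in range(p)]

def direction_blocks(A, B, R, Res):
    """The bracketed four-term expression defining P_i in Steps 3 and 5."""
    p = len(Res)
    P = []
    for i in range(p):
        G = sum(A[s][i].T @ Res[s] @ B[s][i].T for s in range(p))
        # Averaging makes P_i generalized bisymmetric w.r.t. R_i (Remark 2.1).
        P.append((G + G.T + R[i] @ G @ R[i] + R[i] @ G.T @ R[i]) / 4.0)
    return P

def algorithm1(A, B, R, M, X0, eps=1e-12, max_iter=10_000):
    """Algorithm 1 for sum_j A_ij X_j B_ij = M_i over generalized bisymmetric
    groups; X0 must be a generalized bisymmetric matrix group."""
    p = len(M)
    X = [x.copy() for x in X0]
    Res = residual_blocks(A, B, M, X)
    P = direction_blocks(A, B, R, Res)
    r2 = sum(np.linalg.norm(Ri, 'fro')**2 for Ri in Res)   # ||R(k)||_F^2
    for _ in range(max_iter):
        if np.sqrt(r2) < eps:              # R(k) ~ 0: solution group found
            break
        p2 = sum(np.linalg.norm(Pi, 'fro')**2 for Pi in P)
        if p2 == 0.0:                      # all P_m(k) = 0 but R(k) != 0
            raise RuntimeError("(1.13) inconsistent over generalized "
                               "bisymmetric matrix groups")
        X = [X[i] + (r2 / p2) * P[i] for i in range(p)]          # Step 5, X
        Res = residual_blocks(A, B, M, X)
        r2_new = sum(np.linalg.norm(Ri, 'fro')**2 for Ri in Res)
        P_new = direction_blocks(A, B, R, Res)
        P = [P_new[i] + (r2_new / r2) * P[i] for i in range(p)]  # Step 5, P
        r2 = r2_new
    return X, np.sqrt(r2)
```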

We begin with the following useful lemmas about Algorithm 1, which will be used in the subsequent results.

Lemma 2.2. Suppose that the sequences $\{R(k)\}$ and $\{P_i(k)\}$ for $i = 1, 2, \ldots, p$ and $k = 1, 2, \ldots, s$ (with $R(k) \neq 0$ for $k = 1, 2, \ldots, s$) are generated by Algorithm 1. Then
$$\operatorname{tr}(R^T(m)R(n)) = 0 \quad\text{and}\quad \sum_{i=1}^{p}\operatorname{tr}(P_i^T(m)P_i(n)) = 0, \quad\text{for } m, n = 1, 2, \ldots, s\ (m \neq n). \tag{2.12}$$

The proof of Lemma 2.2 is presented in the Appendix.

Lemma 2.3. Assume that the coupled matrix equations (1.13) are consistent over the generalized bisymmetric matrix group $(X_1, X_2, \ldots, X_p)$ and that $(X_1^*, X_2^*, \ldots, X_p^*)$ is an arbitrary generalized bisymmetric solution group of (1.13). Then for any initial matrix group $(X_1(1), X_2(1), \ldots, X_p(1))$ with $X_j(1) \in BSR_{R_j}^{n_j\times n_j}$, $j = 1, 2, \ldots, p$, the sequences $\{X_i(k)\}$, $\{R(k)\}$ and $\{P_i(k)\}$ ($i = 1, 2, \ldots, p$) generated by Algorithm 1 satisfy
$$\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n))^TP_i(m)\right) = 0 \quad\text{for } m < n, \tag{2.13}$$
and
$$\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n))^TP_i(m)\right) = \|R(m)\|_F^2 \quad\text{for } m \geqslant n. \tag{2.14}$$

The proof of Lemma 2.3 is given in the Appendix.

Remark 2.3. If there exists a positive integer $j$ such that $P_i(j) = 0$ for all $i = 1, 2, \ldots, p$ but $R(j) \neq 0$, then by Lemma 2.3 the coupled matrix equations (1.13) are not consistent over generalized bisymmetric matrices. Hence, the solvability of Problem 1.1 can be determined by Algorithm 1 in the absence of roundoff errors.

Theorem 2.1. Suppose that Problem 1.1 is consistent. Then for any arbitrary initial matrix group $(X_1(1), X_2(1), \ldots, X_p(1))$ with $X_j(1) \in BSR_{R_j}^{n_j\times n_j}$, $j = 1, 2, \ldots, p$, a generalized bisymmetric solution group of Problem 1.1 can be obtained within finitely many iteration steps in the absence of roundoff errors.

Proof. Suppose that $R(i) \neq 0$ for $i = 1, 2, \ldots, m$, where $m = \sum_{i=1}^{p}r_is_i$. Then from Lemma 2.3 and Remark 2.3 we have $P_g(i) \neq 0$ for some $g \in \{1, 2, \ldots, p\}$. Hence $R(m+1)$ and $(X_1(m+1), X_2(m+1), \ldots, X_p(m+1))$ can be computed. Now from Lemma 2.2, it is not difficult to get

$$\operatorname{tr}(R^T(m+1)R(i)) = 0, \quad i = 1, 2, \ldots, m, \tag{2.15}$$
and
$$\operatorname{tr}(R^T(i)R(j)) = 0, \quad i, j = 1, 2, \ldots, m,\ i \neq j. \tag{2.16}$$
Then $R(1), R(2), \ldots, R(m)$ is an orthogonal basis of the subspace
$$\mathcal{K} = \left\{M \mid M = \operatorname{diag}(Z_1, Z_2, \ldots, Z_p) \text{ where } Z_i \in \mathbb{R}^{r_i\times s_i} \text{ for } i = 1, 2, \ldots, p\right\}.$$

It follows that $R(m+1) = 0$, that is, $(X_1(m+1), X_2(m+1), \ldots, X_p(m+1))$ is a solution group of Problem 1.1. Therefore, when Problem 1.1 is consistent, a solution group of Problem 1.1 can be obtained within finitely many iteration steps. □

Let $E_j$ be arbitrary matrices for $j = 1, 2, \ldots, p$, and denote by $S$ the coefficient matrix of the linear system (2.7). Then we can write
$$\begin{pmatrix} \operatorname{vec}\!\left(\sum_{j=1}^{p}A_{j1}^TE_jB_{j1}^T + \sum_{j=1}^{p}B_{j1}E_j^TA_{j1} + \sum_{j=1}^{p}R_1A_{j1}^TE_jB_{j1}^TR_1 + \sum_{j=1}^{p}R_1B_{j1}E_j^TA_{j1}R_1\right)\\ \operatorname{vec}\!\left(\sum_{j=1}^{p}A_{j2}^TE_jB_{j2}^T + \sum_{j=1}^{p}B_{j2}E_j^TA_{j2} + \sum_{j=1}^{p}R_2A_{j2}^TE_jB_{j2}^TR_2 + \sum_{j=1}^{p}R_2B_{j2}E_j^TA_{j2}R_2\right)\\ \vdots\\ \operatorname{vec}\!\left(\sum_{j=1}^{p}A_{jp}^TE_jB_{jp}^T + \sum_{j=1}^{p}B_{jp}E_j^TA_{jp} + \sum_{j=1}^{p}R_pA_{jp}^TE_jB_{jp}^TR_p + \sum_{j=1}^{p}R_pB_{jp}E_j^TA_{jp}R_p\right) \end{pmatrix} = S^T\begin{pmatrix} \operatorname{vec}(E_1)\\ \vdots\\ \operatorname{vec}(E_p)\\ \operatorname{vec}(E_1^T)\\ \vdots\\ \operatorname{vec}(E_p^T)\\ \operatorname{vec}(E_1)\\ \vdots\\ \operatorname{vec}(E_p)\\ \operatorname{vec}(E_1^T)\\ \vdots\\ \operatorname{vec}(E_p^T) \end{pmatrix} \in R(S^T). \tag{2.17}$$

Now if we consider the initial matrices
$$X_i(1) = \sum_{j=1}^{p}A_{ji}^TE_jB_{ji}^T + \sum_{j=1}^{p}B_{ji}E_j^TA_{ji} + \sum_{j=1}^{p}R_iA_{ji}^TE_jB_{ji}^TR_i + \sum_{j=1}^{p}R_iB_{ji}E_j^TA_{ji}R_i, \quad i = 1, 2, \ldots, p, \tag{2.18}$$
then all the matrix groups $(X_1(k), X_2(k), \ldots, X_p(k))$ generated by Algorithm 1 satisfy
$$\begin{pmatrix}\operatorname{vec}(X_1(k))\\ \operatorname{vec}(X_2(k))\\ \vdots\\ \operatorname{vec}(X_p(k))\end{pmatrix} \in R(S^T).$$

By using the Moore–Penrose pseudoinverse to solve a linear system $Ax = b$, it is well known that for a consistent linear system $Ax = b$ all the solutions are given by $x = A^{\dagger}b + \operatorname{Null}(A)$; hence $A^{\dagger}b$ gives the minimum-norm solution. From $\operatorname{Range}(A^{\dagger}) = R(A^T)$, it follows that a solution $y^* \in R(A^T)$ of the system of linear equations is the least 2-norm solution. Hence with the initial matrix group $(X_1(1), X_2(1), \ldots, X_p(1))$ given by (2.18), the generalized bisymmetric solution group obtained by Algorithm 1 is the least Frobenius norm generalized bisymmetric solution group.
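A tiny numerical illustration (ours, not the paper's) of the two facts used here, namely that $A^{\dagger}b$ is the minimum 2-norm solution of a consistent system and that it lies in $R(A^T)$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))     # underdetermined system, full row rank
b = A @ rng.standard_normal(5)      # consistent by construction
x_min = np.linalg.pinv(A) @ b       # minimum 2-norm solution A^+ b
y = np.linalg.solve(A @ A.T, b)     # for full row rank, A^+ b = A^T (A A^T)^{-1} b
assert np.allclose(x_min, A.T @ y)  # x_min lies in the row space R(A^T)
assert np.allclose(A @ x_min, b)    # and solves the system
```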

By using the above conclusions, we can present the following theorem.

Theorem 2.2. Suppose that Problem 1.1 is consistent. Let the initial iteration matrices be
$$X_i(1) = \sum_{j=1}^{p}A_{ji}^TE_jB_{ji}^T + \sum_{j=1}^{p}B_{ji}E_j^TA_{ji} + \sum_{j=1}^{p}R_iA_{ji}^TE_jB_{ji}^TR_i + \sum_{j=1}^{p}R_iB_{ji}E_j^TA_{ji}R_i, \quad i = 1, 2, \ldots, p,$$
where the $E_j$ are arbitrary matrices; especially, one may take $X_i(1) = 0$ for $i = 1, 2, \ldots, p$. Then the solution group $(X_1^*, X_2^*, \ldots, X_p^*)$ generated by Algorithm 1 is the least Frobenius norm generalized bisymmetric solution group of the coupled matrix equations (1.13).
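In code, the special initial group of Theorem 2.2 can be generated from arbitrary matrices $E_j$ exactly as in (2.18); a sketch with our own (hypothetical) helper name:

```python
import numpy as np

def least_norm_initial(A, B, R, E):
    """X_i(1) of (2.18) from arbitrary E_j; taking E_j = 0 gives X_i(1) = 0."""
    p = len(E)
    X1 = []
    for i in range(p):
        G = sum(A[j][i].T @ E[j] @ B[j][i].T for j in range(p))
        # The four terms make X_i(1) generalized bisymmetric w.r.t. R_i.
        X1.append(G + G.T + R[i] @ G @ R[i] + R[i] @ G.T @ R[i])
    return X1
```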

Now we study Problem 1.2 as follows. When Problem 1.1 is consistent, its solution group set $S_r$ is nonempty. For a given matrix group $(\bar X_1, \bar X_2, \ldots, \bar X_p)$ with $\bar X_j \in BSR_{R_j}^{n_j\times n_j}$, $j = 1, 2, \ldots, p$, it is not difficult to see that
$$\sum_{j=1}^{p}A_{ij}X_jB_{ij} = M_i,\ i = 1, 2, \ldots, p \iff \sum_{j=1}^{p}A_{ij}(X_j - \bar X_j)B_{ij} = M_i - \sum_{j=1}^{p}A_{ij}\bar X_jB_{ij},\ i = 1, 2, \ldots, p.$$
Define $\tilde X_j = X_j - \bar X_j$ and $\tilde M_i = M_i - \sum_{j=1}^{p}A_{ij}\bar X_jB_{ij}$ for $i = 1, 2, \ldots, p$. Then the matrix nearness Problem 1.2 is equivalent to first finding the least Frobenius norm generalized bisymmetric solution group $(\tilde X_1^*, \tilde X_2^*, \ldots, \tilde X_p^*)$ of the new coupled matrix equations

$$\sum_{j=1}^{p}A_{ij}\tilde X_jB_{ij} = \tilde M_i, \quad i = 1, 2, \ldots, p, \tag{2.19}$$
which can be obtained by using Algorithm 1 with the initial generalized bisymmetric matrices

$$\tilde X_i(1) = \sum_{j=1}^{p}A_{ji}^TE_jB_{ji}^T + \sum_{j=1}^{p}B_{ji}E_j^TA_{ji} + \sum_{j=1}^{p}R_iA_{ji}^TE_jB_{ji}^TR_i + \sum_{j=1}^{p}R_iB_{ji}E_j^TA_{ji}R_i, \quad i = 1, 2, \ldots, p,$$
where the $E_j$ are arbitrary matrices; especially, one may take $\tilde X_i(1) = 0$ for $i = 1, 2, \ldots, p$. Then the generalized bisymmetric solution group of the matrix nearness Problem 1.2 can be represented as $(\hat X_1, \hat X_2, \ldots, \hat X_p)$ with
$$\hat X_j = \tilde X_j^* + \bar X_j, \quad j = 1, 2, \ldots, p. \tag{2.20}$$
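Putting the pieces together, Problem 1.2 reduces to a single run of Algorithm 1 on the shifted data; a sketch reusing the hypothetical `algorithm1` helper from above:

```python
import numpy as np

def solve_problem_1_2(A, B, R, M, Xbar, eps=1e-12):
    """Optimal approximation to (Xbar_1, ..., Xbar_p) in S_r, cf. (2.19)-(2.20)."""
    p = len(M)
    # Shifted right-hand sides: tilde M_i = M_i - sum_j A_ij Xbar_j B_ij.
    M_t = [M[i] - sum(A[i][j] @ Xbar[j] @ B[i][j] for j in range(p))
           for i in range(p)]
    # Zero initial group yields the least Frobenius norm solution of (2.19).
    X0 = [np.zeros_like(x) for x in Xbar]
    X_t, _ = algorithm1(A, B, R, M_t, X0, eps=eps)
    # Shift back: hat X_j = tilde X_j^* + bar X_j, cf. (2.20).
    return [X_t[j] + Xbar[j] for j in range(p)]
```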

3. Numerical experiments

In this section, we report two numerical examples for computing the generalized bisymmetric solution group of coupled matrix equations. We implemented the algorithms in MATLAB and ran the programs on a Pentium IV. Throughout, we set $\varepsilon = 10^{-12}$.

Example 3.1. Consider the coupled matrix equations
$$\begin{cases} A_{11}X_1B_{11} + A_{12}X_2B_{12} = M_1,\\ A_{21}X_1B_{21} + A_{22}X_2B_{22} = M_2, \end{cases} \tag{3.1}$$
with
$$A_{11} = \begin{pmatrix} 3 & -2 & 1 & -1 & 1\\ 1 & -2 & 1 & -1 & 1\\ 3 & 1 & 4 & 2 & 4\\ -1 & 4 & -6 & 2 & 4\\ 1 & 1 & 3 & 5 & 1 \end{pmatrix}, \quad B_{11} = \begin{pmatrix} -2 & 1 & 3 & 1 & 1\\ 1 & 5 & 1 & -2 & -2\\ 2 & 3 & 4 & 1 & 1\\ 6 & 5 & 4 & 3 & 1\\ -1 & -3 & 1 & 6 & 2 \end{pmatrix},$$
$$A_{12} = \begin{pmatrix} 1 & 2 & 4 & 6 & 1\\ -2 & -2 & 3 & 4 & 4\\ 5 & 6 & 5 & 0 & 1\\ 5 & 4 & 5 & 3 & 3\\ -1 & -1 & 1 & 1 & 0 \end{pmatrix}, \quad B_{12} = \begin{pmatrix} 1 & 1 & 2 & 1 & 1\\ 2 & 3 & 1 & 1 & 3\\ -1 & -1 & 1 & 2 & -3\\ 2 & 2 & 1 & 2 & -1\\ 1 & 1 & -1 & 3 & 4 \end{pmatrix},$$
$$A_{21} = \begin{pmatrix} 3 & 1 & 1 & 2 & 2\\ 1 & 1 & 2 & 2 & 1\\ 2 & 2 & -2 & -1 & 1\\ 2 & 3 & 1 & 1 & 2\\ 1 & 2 & 3 & 1 & 1 \end{pmatrix}, \quad B_{21} = \begin{pmatrix} 2 & 3 & 2 & 2 & -2\\ 3 & 3 & 4 & 5 & -1\\ 2 & 3 & 4 & 5 & -2\\ -2 & -2 & 1 & 1 & 3\\ 1 & 2 & 5 & 2 & 4 \end{pmatrix},$$
$$A_{22} = \begin{pmatrix} 3 & 4 & 1 & 2 & 1\\ -1 & -1 & 1 & 1 & 2\\ 3 & 1 & 3 & -1 & -2\\ 1 & 1 & 4 & 3 & 1\\ 1 & -2 & -1 & -2 & 1 \end{pmatrix}, \quad B_{22} = \begin{pmatrix} 0 & -1 & 3 & 2 & 1\\ 1 & -3 & -2 & -3 & 2\\ 1 & 3 & 1 & 3 & 0\\ 3 & 1 & 2 & 3 & 4\\ 1 & 3 & 5 & 1 & 1 \end{pmatrix},$$
$$M_1 = \begin{pmatrix} 104 & 142 & 116 & 142 & 62\\ 54 & 90 & 26 & 34 & 98\\ 170 & 276 & 302 & 388 & 64\\ 130 & 170 & 212 & 382 & 110\\ 34 & 170 & 184 & 110 & 60 \end{pmatrix},$$
and
$$M_2 = \begin{pmatrix} 134 & 206 & 362 & 324 & 134\\ 60 & 66 & 144 & 100 & 34\\ 22 & 2 & 90 & 60 & 150\\ 80 & 82 & 264 & 178 & 164\\ 32 & 26 & 86 & 74 & 52 \end{pmatrix}.$$
It can be verified that the matrix equations (3.1) are consistent over generalized bisymmetric matrices and have the generalized bisymmetric solution pair $(X_1^*, X_2^*)$ with
$$X_1^* = \begin{pmatrix} 2 & 2 & 0 & 4 & 2\\ 2 & -2 & 0 & 4 & 2\\ 0 & 0 & 2 & 0 & 0\\ 4 & 4 & 0 & 0 & 2\\ 2 & 2 & 0 & 2 & 2 \end{pmatrix} \in BSR_{R_1}^{5\times 5}, \quad X_2^* = \begin{pmatrix} 2 & 0 & 0 & 4 & 0\\ 0 & -2 & 4 & 0 & 2\\ 0 & 4 & 2 & 0 & 2\\ 4 & 0 & 0 & 0 & 0\\ 0 & 2 & 2 & 0 & 2 \end{pmatrix} \in BSR_{R_2}^{5\times 5},$$
where
$$R_1 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \in SOR^{5\times 5} \quad\text{and}\quad R_2 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & -1 \end{pmatrix} \in SOR^{5\times 5}.$$

Fig. 1. The obtained results for Example 3.1 with (X1(1),X2(1)) = 0.

We apply Algorithm 1 with the initial generalized bisymmetric matrix pair $(X_1(1), X_2(1)) = 0$ to calculate $(X_1(k), X_2(k))$. The derived results are displayed in Fig. 1, where

$$r(k) = \|R(k)\|_F, \quad\text{and}\quad \delta(k) = \frac{\|(X_1(k), X_2(k)) - (X_1^*, X_2^*)\|_F}{\|(X_1^*, X_2^*)\|_F}.$$
Obviously both $r(k)$ and $\delta(k)$ decrease and converge to zero as $k$ increases.

Example 3.2. Suppose that
$$\bar X_1 = \begin{pmatrix} 2 & 2 & 0 & 2 & 2\\ 2 & 2 & 0 & 2 & 2\\ 0 & 0 & 2 & 0 & 0\\ 2 & 2 & 0 & 2 & 2\\ 2 & 2 & 0 & 2 & 2 \end{pmatrix} \in BSR_{R_1}^{5\times 5} \quad\text{and}\quad \bar X_2 = \begin{pmatrix} 2 & 0 & 0 & 2 & 0\\ 0 & 2 & 2 & 0 & 2\\ 0 & 2 & 2 & 0 & 2\\ 2 & 0 & 0 & 2 & 0\\ 0 & 2 & 2 & 0 & 2 \end{pmatrix} \in BSR_{R_2}^{5\times 5}.$$
In order to find the optimal approximation generalized bisymmetric solution pair to the given generalized bisymmetric matrix pair $(\bar X_1, \bar X_2)$, let $\tilde X_1 = X_1 - \bar X_1$, $\tilde X_2 = X_2 - \bar X_2$, $\tilde M_1 = M_1 - A_{11}\bar X_1B_{11} - A_{12}\bar X_2B_{12}$ and $\tilde M_2 = M_2 - A_{21}\bar X_1B_{21} - A_{22}\bar X_2B_{22}$. Now we can get the least Frobenius norm generalized bisymmetric solution pair $(\tilde X_1^*, \tilde X_2^*)$ of the coupled matrix equations
$$\begin{cases} A_{11}\tilde X_1B_{11} + A_{12}\tilde X_2B_{12} = \tilde M_1,\\ A_{21}\tilde X_1B_{21} + A_{22}\tilde X_2B_{22} = \tilde M_2, \end{cases} \tag{3.2}$$
by choosing the initial generalized bisymmetric matrix pair $(\tilde X_1(1), \tilde X_2(1)) = 0$; that is,
$$\tilde X_1^* = \tilde X_1(28) = \begin{pmatrix} -0.0000 & 0.0000 & 0.0000 & 2.0000 & 0.0000\\ 0.0000 & -4.0000 & -0.0000 & 2.0000 & -0.0000\\ 0.0000 & 0.0000 & -0.0000 & -0.0000 & -0.0000\\ 2.0000 & 2.0000 & -0.0000 & -2.0000 & -0.0000\\ 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \end{pmatrix},$$

Fig. 2. The obtained results for Example 3.2 with $(\tilde X_1(1), \tilde X_2(1)) = 0$.

$$\tilde X_2^* = \tilde X_2(28) = \begin{pmatrix} -0.0000 & 0.0000 & -0.0000 & 2.0000 & 0.0000\\ -0.0000 & -4.0000 & 2.0000 & 0.0000 & 0.0000\\ 0.0000 & 2.0000 & -0.0000 & -0.0000 & 0.0000\\ 2.0000 & 0.0000 & 0.0000 & -2.0000 & 0.0000\\ 0.0000 & 0.0000 & -0.0000 & -0.0000 & -0.0000 \end{pmatrix},$$
$$r(28) = \|R(28)\|_F = \left\|\begin{pmatrix} \tilde M_1 - A_{11}\tilde X_1(28)B_{11} - A_{12}\tilde X_2(28)B_{12} & 0\\ 0 & \tilde M_2 - A_{21}\tilde X_1(28)B_{21} - A_{22}\tilde X_2(28)B_{22} \end{pmatrix}\right\|_F = 1.5373\times 10^{-13},$$
and
$$\delta(28) = \frac{\|(\tilde X_1(28), \tilde X_2(28)) - (\tilde X_1^*, \tilde X_2^*)\|_F}{\|(\tilde X_1^*, \tilde X_2^*)\|_F} = 4.1395\times 10^{-16}.$$
The obtained results are depicted in Fig. 2. Hence, the optimal approximation generalized bisymmetric solution pair to the given generalized bisymmetric matrix pair $(\bar X_1, \bar X_2)$ is $(\hat X_1, \hat X_2)$ with
$$\hat X_1 = \tilde X_1^* + \bar X_1 = \begin{pmatrix} 2.0000 & 2.0000 & 0.0000 & 4.0000 & 2.0000\\ 2.0000 & -2.0000 & -0.0000 & 4.0000 & 2.0000\\ 0.0000 & 0.0000 & 2.0000 & -0.0000 & -0.0000\\ 4.0000 & 4.0000 & -0.0000 & 0.0000 & 2.0000\\ 2.0000 & 2.0000 & 0.0000 & 2.0000 & 2.0000 \end{pmatrix},$$
and
$$\hat X_2 = \tilde X_2^* + \bar X_2 = \begin{pmatrix} 2.0000 & 0.0000 & -0.0000 & 4.0000 & 0.0000\\ -0.0000 & -2.0000 & 4.0000 & 0.0000 & 2.0000\\ 0.0000 & 4.0000 & 2.0000 & -0.0000 & 2.0000\\ 4.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000\\ 0.0000 & 2.0000 & 2.0000 & -0.0000 & 2.0000 \end{pmatrix}.$$

The obtained results show that the speed of convergence of Algorithm 1 is high.

4. Concluding remarks

The coupled matrix equations (1.13) play an important role in applied mathematics and control theory. Also, it is well known that the generalized bisymmetric matrices (including symmetric matrices as special cases) have wide applications in many fields. In the current paper, by extending the idea of the conjugate gradient method, we have proposed Algorithm 1 to solve the coupled matrix equations (1.13) over generalized bisymmetric matrices. When the matrix equations are consistent over the generalized bisymmetric matrix group $(X_1, X_2, \ldots, X_p)$, a generalized bisymmetric solution group can be obtained for any initial generalized bisymmetric matrix group within finitely many iteration steps in the absence of roundoff errors. The least Frobenius norm generalized bisymmetric solution group of the general coupled matrix equations can be derived when a suitable initial generalized bisymmetric matrix group is chosen. Furthermore, using Algorithm 1, we have solved Problem 1.2. Finally, two numerical examples were presented to support the theoretical results of this paper. The proposed method possesses the following features.

(1) By Algorithm 1, the solvability of Problem 1.1 can be determined automatically.
(2) It is very simple and neat.

Acknowledgments

The authors would like to thank the anonymous reviewers for providing very useful comments and suggestions, which greatly improved the original manuscript of this paper. The authors are also very much indebted to Professor Siegfried M. Rump (Associate Editor) for his valuable suggestions, generous encouragement and concern during the review process of this paper.

Appendix

The proof of Lemma 2.2. For brevity, write $\tilde R_s(k) = M_s - \sum_{t=1}^{p}A_{st}X_t(k)B_{st}$, so that $R(k) = \operatorname{diag}(\tilde R_1(k), \tilde R_2(k), \ldots, \tilde R_p(k))$. Since $\operatorname{tr}(R^T(m)R(n)) = \operatorname{tr}(R^T(n)R(m))$ and $\operatorname{tr}(P_i^T(m)P_i(n)) = \operatorname{tr}(P_i^T(n)P_i(m))$ for $i = 1, 2, \ldots, p$, we only need to show that
$$\operatorname{tr}(R^T(m)R(n)) = 0 \quad\text{and}\quad \sum_{i=1}^{p}\operatorname{tr}(P_i^T(m)P_i(n)) = 0, \quad\text{for } 1 \leqslant n < m \leqslant s. \tag{4.1}$$
We use induction to show (4.1), in two steps.

STEP I: First, we show that
$$\operatorname{tr}(R^T(n+1)R(n)) = 0 \quad\text{and}\quad \sum_{i=1}^{p}\operatorname{tr}(P_i^T(n+1)P_i(n)) = 0, \quad\text{for } n = 1, 2, \ldots, s. \tag{4.2}$$
We prove (4.2) by induction on $n$. When $n = 1$, we have
$$\begin{aligned} \operatorname{tr}\!\left(R^T(2)R(1)\right) &= \operatorname{tr}\!\left(\left[R(1) - \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p}\|P_t(1)\|_F^2}\operatorname{diag}\!\left(\sum_{t=1}^{p}A_{1t}P_t(1)B_{1t}, \ldots, \sum_{t=1}^{p}A_{pt}P_t(1)B_{pt}\right)\right]^T R(1)\right)\\ &= \|R(1)\|_F^2 - \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p}\|P_t(1)\|_F^2}\,\operatorname{tr}\!\left(\sum_{j=1}^{p}\sum_{t=1}^{p}B_{jt}^TP_t^T(1)A_{jt}^T\,\tilde R_j(1)\right)\\ &= \|R(1)\|_F^2 - \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p}\|P_t(1)\|_F^2}\,\operatorname{tr}\!\left(\sum_{j=1}^{p}P_j^T(1)\sum_{s=1}^{p}A_{sj}^T\,\tilde R_s(1)\,B_{sj}^T\right)\\ &= \|R(1)\|_F^2 - \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p}\|P_t(1)\|_F^2}\,\operatorname{tr}\!\left(\sum_{j=1}^{p}P_j^T(1)P_j(1)\right)\\ &= \|R(1)\|_F^2 - \frac{\|R(1)\|_F^2}{\sum_{t=1}^{p}\|P_t(1)\|_F^2}\sum_{t=1}^{p}\|P_t(1)\|_F^2 = 0, \tag{4.3}\end{aligned}$$
where the fourth equality holds because $P_j(1)$ is generalized bisymmetric (Remark 2.1), so the trace of $P_j^T(1)$ against $\sum_{s}A_{sj}^T\tilde R_s(1)B_{sj}^T$ equals its trace against the four-term average defining $P_j(1)$.

Also we can write
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left(P_i^T(2)P_i(1)\right) &= \sum_{i=1}^{p}\operatorname{tr}\!\left(\left[\frac{1}{4}\sum_{s=1}^{p}\Big(A_{si}^T\tilde R_s(2)B_{si}^T + B_{si}\tilde R_s^T(2)A_{si} + R_iA_{si}^T\tilde R_s(2)B_{si}^TR_i + R_iB_{si}\tilde R_s^T(2)A_{si}R_i\Big) + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2}P_i(1)\right]^T P_i(1)\right)\\ &= \sum_{i=1}^{p}\sum_{s=1}^{p}\operatorname{tr}\!\left(\tilde R_s^T(2)\,A_{si}P_i(1)B_{si}\right) + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2}\sum_{i=1}^{p}\|P_i(1)\|_F^2\\ &= \operatorname{tr}\!\left(R^T(2)\operatorname{diag}\!\left(\sum_{i=1}^{p}A_{1i}P_i(1)B_{1i}, \ldots, \sum_{i=1}^{p}A_{pi}P_i(1)B_{pi}\right)\right) + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2}\sum_{i=1}^{p}\|P_i(1)\|_F^2\\ &= \frac{\sum_{i=1}^{p}\|P_i(1)\|_F^2}{\|R(1)\|_F^2}\,\operatorname{tr}\!\left(R^T(2)\big(R(1) - R(2)\big)\right) + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2}\sum_{i=1}^{p}\|P_i(1)\|_F^2\\ &= \frac{\sum_{i=1}^{p}\|P_i(1)\|_F^2}{\|R(1)\|_F^2}\left[\operatorname{tr}\!\left(R^T(2)R(1)\right) - \|R(2)\|_F^2\right] + \frac{\|R(2)\|_F^2}{\|R(1)\|_F^2}\sum_{i=1}^{p}\|P_i(1)\|_F^2 = 0, \tag{4.4}\end{aligned}$$
where the second equality uses the generalized bisymmetry of $P_i(1)$, and the fourth uses the expression for $R(2)$ in Step 5 of Algorithm 1.

Assume that (4.2) holds for $n = v - 1$. For $n = v$, it follows that
$$\begin{aligned} \operatorname{tr}\!\left(R^T(v+1)R(v)\right) &= \operatorname{tr}\!\left(\left[R(v) - \frac{\|R(v)\|_F^2}{\sum_{t=1}^{p}\|P_t(v)\|_F^2}\operatorname{diag}\!\left(\sum_{t=1}^{p}A_{1t}P_t(v)B_{1t}, \ldots, \sum_{t=1}^{p}A_{pt}P_t(v)B_{pt}\right)\right]^T R(v)\right)\\ &= \|R(v)\|_F^2 - \frac{\|R(v)\|_F^2}{\sum_{t=1}^{p}\|P_t(v)\|_F^2}\,\operatorname{tr}\!\left(\sum_{j=1}^{p}P_j^T(v)\sum_{s=1}^{p}A_{sj}^T\tilde R_s(v)B_{sj}^T\right)\\ &= \|R(v)\|_F^2 - \frac{\|R(v)\|_F^2}{\sum_{t=1}^{p}\|P_t(v)\|_F^2}\,\operatorname{tr}\!\left(\sum_{j=1}^{p}P_j^T(v)\left[P_j(v) - \frac{\|R(v)\|_F^2}{\|R(v-1)\|_F^2}P_j(v-1)\right]\right)\\ &= \|R(v)\|_F^2 - \frac{\|R(v)\|_F^2}{\sum_{t=1}^{p}\|P_t(v)\|_F^2}\left[\sum_{j=1}^{p}\|P_j(v)\|_F^2 - \frac{\|R(v)\|_F^2}{\|R(v-1)\|_F^2}\sum_{j=1}^{p}\operatorname{tr}\!\left(P_j^T(v)P_j(v-1)\right)\right] = 0, \tag{4.5}\end{aligned}$$
by the induction hypothesis.

And also
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left(P_i^T(v+1)P_i(v)\right) &= \operatorname{tr}\!\left(R^T(v+1)\operatorname{diag}\!\left(\sum_{i=1}^{p}A_{1i}P_i(v)B_{1i}, \ldots, \sum_{i=1}^{p}A_{pi}P_i(v)B_{pi}\right)\right) + \frac{\|R(v+1)\|_F^2}{\|R(v)\|_F^2}\sum_{i=1}^{p}\|P_i(v)\|_F^2\\ &= \frac{\sum_{i=1}^{p}\|P_i(v)\|_F^2}{\|R(v)\|_F^2}\,\operatorname{tr}\!\left(R^T(v+1)\big(R(v) - R(v+1)\big)\right) + \frac{\|R(v+1)\|_F^2}{\|R(v)\|_F^2}\sum_{i=1}^{p}\|P_i(v)\|_F^2 = 0, \tag{4.6}\end{aligned}$$
by (4.5).

Hence, by the principle of induction, the conclusion (4.2) holds for all $n = 1, 2, \ldots, s$.

STEP II: In this step, suppose that
$$\operatorname{tr}(R^T(n+m)R(n)) = 0 \quad\text{and}\quad \sum_{i=1}^{p}\operatorname{tr}(P_i^T(n+m)P_i(n)) = 0, \tag{4.7}$$
for $1 \leqslant n \leqslant s$ and $1 < m < s$.

Now we prove that
$$\operatorname{tr}(R^T(n+m+1)R(n)) = 0 \quad\text{and}\quad \sum_{i=1}^{p}\operatorname{tr}(P_i^T(n+m+1)P_i(n)) = 0. \tag{4.8}$$
First, by using the previous results, we can obtain
$$\sum_{j=1}^{p}\operatorname{tr}\!\left(P_j^T(m+2)P_j(1)\right) = 0.$$
Now, similarly to the proofs of (4.3)–(4.6), it is not difficult to get
$$\begin{aligned} \operatorname{tr}\!\left(R^T(n+m+1)R(n)\right) &= \operatorname{tr}\!\left(\left[R(n+m) - \frac{\|R(n+m)\|_F^2}{\sum_{t=1}^{p}\|P_t(n+m)\|_F^2}\operatorname{diag}\!\left(\sum_{t=1}^{p}A_{1t}P_t(n+m)B_{1t}, \ldots, \sum_{t=1}^{p}A_{pt}P_t(n+m)B_{pt}\right)\right]^T R(n)\right)\\ &= \operatorname{tr}\!\left(R^T(n+m)R(n)\right) - \frac{\|R(n+m)\|_F^2}{\sum_{t=1}^{p}\|P_t(n+m)\|_F^2}\,\operatorname{tr}\!\left(\sum_{j=1}^{p}P_j^T(n+m)\sum_{s=1}^{p}A_{sj}^T\tilde R_s(n)B_{sj}^T\right)\\ &= -\frac{\|R(n+m)\|_F^2}{\sum_{t=1}^{p}\|P_t(n+m)\|_F^2}\sum_{j=1}^{p}\operatorname{tr}\!\left(P_j^T(n+m)P_j(n)\right) + \frac{\|R(n+m)\|_F^2\,\|R(n)\|_F^2}{\sum_{t=1}^{p}\|P_t(n+m)\|_F^2\,\|R(n-1)\|_F^2}\sum_{j=1}^{p}\operatorname{tr}\!\left(P_j^T(n+m)P_j(n-1)\right)\\ &= \cdots = c_1\sum_{j=1}^{p}\operatorname{tr}\!\left(P_j^T(m+2)P_j(1)\right) = 0, \tag{4.9}\end{aligned}$$
for a certain constant $c_1$. It follows that $\operatorname{tr}(R^T(n+m+1)R(n)) = 0$ and $\operatorname{tr}(R^T(n+m+1)R(n+1)) = 0$. Now we have
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left(P_i^T(n+m+1)P_i(n)\right) &= \operatorname{tr}\!\left(R^T(n+m+1)\operatorname{diag}\!\left(\sum_{i=1}^{p}A_{1i}P_i(n)B_{1i}, \ldots, \sum_{i=1}^{p}A_{pi}P_i(n)B_{pi}\right)\right) + \frac{\|R(n+m+1)\|_F^2}{\|R(n+m)\|_F^2}\sum_{i=1}^{p}\operatorname{tr}\!\left(P_i^T(n+m)P_i(n)\right)\\ &= \frac{\sum_{i=1}^{p}\|P_i(n)\|_F^2}{\|R(n)\|_F^2}\left[\operatorname{tr}\!\left(R^T(n+m+1)R(n)\right) - \operatorname{tr}\!\left(R^T(n+m+1)R(n+1)\right)\right]\\ &= \cdots = c_2\sum_{j=1}^{p}\operatorname{tr}\!\left(P_j^T(m+2)P_j(1)\right) = 0, \tag{4.10}\end{aligned}$$
for a certain constant $c_2$. By considering Steps I and II, the conclusion (2.12) holds by the principle of induction.

The proof of Lemma 2.3

First, by induction, we prove that
$$\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n))^TP_i(n)\right) = \|R(n)\|_F^2, \quad\text{for } n = 1, 2, \ldots. \tag{4.11}$$
When $n = 1$, we can write
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(1))^TP_i(1)\right) &= \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(1))^T\,\frac{1}{4}\sum_{s=1}^{p}\Big(A_{si}^T\tilde R_s(1)B_{si}^T + B_{si}\tilde R_s^T(1)A_{si} + R_iA_{si}^T\tilde R_s(1)B_{si}^TR_i + R_iB_{si}\tilde R_s^T(1)A_{si}R_i\Big)\right)\\ &= \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(1))^T\sum_{s=1}^{p}A_{si}^T\tilde R_s(1)B_{si}^T\right)\\ &= \sum_{s=1}^{p}\operatorname{tr}\!\left(\tilde R_s^T(1)\sum_{i=1}^{p}A_{si}\big(X_i^* - X_i(1)\big)B_{si}\right)\\ &= \operatorname{tr}\!\left(R^T(1)\operatorname{diag}\!\left(\sum_{i=1}^{p}A_{1i}\big(X_i^* - X_i(1)\big)B_{1i}, \ldots, \sum_{i=1}^{p}A_{pi}\big(X_i^* - X_i(1)\big)B_{pi}\right)\right)\\ &= \operatorname{tr}\!\left(R^T(1)R(1)\right) = \|R(1)\|_F^2, \tag{4.12}\end{aligned}$$
where the second equality uses the generalized bisymmetry of $X_i^* - X_i(1)$, and the last step uses $\sum_{i=1}^{p}A_{si}(X_i^* - X_i(1))B_{si} = M_s - \sum_{i=1}^{p}A_{si}X_i(1)B_{si} = \tilde R_s(1)$.

We assume that the conclusion (4.11) holds for $n = v$. Then we have
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(v+1))^TP_i(v+1)\right) &= \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(v+1))^T\sum_{s=1}^{p}A_{si}^T\tilde R_s(v+1)B_{si}^T\right) + \frac{\|R(v+1)\|_F^2}{\|R(v)\|_F^2}\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(v+1))^TP_i(v)\right)\\ &= \operatorname{tr}\!\left(R^T(v+1)R(v+1)\right) + \frac{\|R(v+1)\|_F^2}{\|R(v)\|_F^2}\left[\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(v))^TP_i(v)\right) - \frac{\|R(v)\|_F^2}{\sum_{i=1}^{p}\|P_i(v)\|_F^2}\sum_{i=1}^{p}\|P_i(v)\|_F^2\right]\\ &= \|R(v+1)\|_F^2 + \frac{\|R(v+1)\|_F^2}{\|R(v)\|_F^2}\Big[\|R(v)\|_F^2 - \|R(v)\|_F^2\Big] = \|R(v+1)\|_F^2. \tag{4.13}\end{aligned}$$

By the principle of induction, the conclusion (4.11) holds for $n = 1, 2, \ldots$. By applying (4.11), we have
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n+1))^TP_i(n)\right) &= \sum_{i=1}^{p}\operatorname{tr}\!\left(\left[\big(X_i^* - X_i(n)\big) - \frac{\|R(n)\|_F^2}{\sum_{t=1}^{p}\|P_t(n)\|_F^2}P_i(n)\right]^T P_i(n)\right)\\ &= \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n))^TP_i(n)\right) - \frac{\|R(n)\|_F^2}{\sum_{t=1}^{p}\|P_t(n)\|_F^2}\sum_{i=1}^{p}\|P_i(n)\|_F^2 = 0. \tag{4.14}\end{aligned}$$

Suppose that
$$\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n+v))^TP_i(n)\right) = 0, \quad v \geqslant 1. \tag{4.15}$$
It follows from (2.12) and (4.15) that
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n+v+1))^TP_i(n)\right) &= \sum_{i=1}^{p}\operatorname{tr}\!\left(\left[\big(X_i^* - X_i(n+v)\big) - \frac{\|R(n+v)\|_F^2}{\sum_{t=1}^{p}\|P_t(n+v)\|_F^2}P_i(n+v)\right]^T P_i(n)\right)\\ &= \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n+v))^TP_i(n)\right) - \frac{\|R(n+v)\|_F^2}{\sum_{t=1}^{p}\|P_t(n+v)\|_F^2}\sum_{i=1}^{p}\operatorname{tr}\!\left(P_i^T(n+v)P_i(n)\right) = 0. \tag{4.16}\end{aligned}$$

By the principle of induction, the conclusion (2.13) holds.

Suppose that
$$\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n))^TP_i(n+w)\right) = \|R(n+w)\|_F^2 \quad\text{for } w = 0, 1, \ldots, k. \tag{4.17}$$
It is not difficult to obtain
$$\begin{aligned} \sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n))^TP_i(n+w+1)\right) &= \operatorname{tr}\!\left(R^T(n+w+1)\operatorname{diag}\!\left(\tilde R_1(n), \ldots, \tilde R_p(n)\right)\right) + \frac{\|R(n+w+1)\|_F^2}{\|R(n+w)\|_F^2}\sum_{i=1}^{p}\operatorname{tr}\!\left((X_i^* - X_i(n))^TP_i(n+w)\right)\\ &= \operatorname{tr}\!\left(R^T(n+w+1)R(n)\right) + \frac{\|R(n+w+1)\|_F^2}{\|R(n+w)\|_F^2}\,\|R(n+w)\|_F^2 = \|R(n+w+1)\|_F^2, \tag{4.18}\end{aligned}$$
since $\operatorname{tr}(R^T(n+w+1)R(n)) = 0$ by (2.12).

Therefore the conclusion (2.14) holds by the principle of induction.

References

[1] X.W. Chang, J.S. Wang, The symmetric solution of the matrix equations AX + YA = C, AXA^T + BYB^T = C and (A^TXA, B^TXB) = (C, D), Linear Algebra Appl. 179 (1993) 171–189.
[2] H. Dai, On the symmetric solutions of linear matrix equations, Linear Algebra Appl. 131 (1990) 1–7.
[3] A. Jameson, E. Kreindler, P. Lancaster, Symmetric, positive semidefinite, and positive definite real solutions of AX = XA^T and AX = YB, Linear Algebra Appl. 160 (1992) 189–215.
[4] K.W.E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions, Linear Algebra Appl. 119 (1989) 35–50.
[5] F.J.H. Don, On the symmetric solutions of a linear matrix equation, Linear Algebra Appl. 93 (1987) 1–7.
[6] J.K. Baksalary, R. Kala, The matrix equation AXB + CYD = E, Linear Algebra Appl. 30 (1980) 141–147.
[7] M. Dehghan, M. Hajarian, An iterative algorithm for solving a pair of matrix equations AYB = E, CYD = F over generalized centro-symmetric matrices, Comput. Math. Appl. 56 (2008) 3246–3260.
[8] M. Dehghan, M. Hajarian, An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation, Appl. Math. Comput. 202 (2008) 571–588.
[9] M. Dehghan, M. Hajarian, Finite iterative algorithms for the reflexive and anti-reflexive solutions of the matrix equation A_1X_1B_1 + A_2X_2B_2 = C, Math. Comput. Model. 49 (2009) 1937–1959.
[10] M. Dehghan, M. Hajarian, On the reflexive solutions of the matrix equation AXB + CYD = E, Bull. Korean Math. Soc. 46 (2009) 511–519.
[11] M. Dehghan, M. Hajarian, Efficient iterative method for solving the second-order Sylvester matrix equation EVF^2 − AVF − CV = BW, IET Control Theory Appl. 3 (2009) 1401–1408.
[12] M. Dehghan, M. Hajarian, A lower bound for the product of eigenvalues of solutions to matrix equations, Appl. Math. Lett. 22 (2009) 1786–1788.
[13] M. Dehghan, M. Hajarian, On the reflexive and anti-reflexive solutions of the generalized coupled Sylvester matrix equations, Int. J. Systems Sci., in press.
[14] M. Dehghan, M. Hajarian, The reflexive and anti-reflexive solutions of a linear matrix equation and systems of matrix equations, Rocky Mountain J. Math., in press.
[15] F. Ding, T. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Trans. Automat. Control 50 (2005) 1216–1221.
[16] F. Ding, T. Chen, Iterative least squares solutions of coupled Sylvester matrix equations, Systems Control Lett. 54 (2005) 95–107.
[17] F. Ding, T. Chen, Hierarchical gradient-based identification of multivariable discrete-time systems, Automatica 41 (2005) 315–325.
[18] F. Ding, T. Chen, Hierarchical least squares identification methods for multivariable systems, IEEE Trans. Automat. Control 50 (2005) 397–402.
[19] F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM J. Control Optim. 44 (2006) 2269–2284.
[20] F. Ding, P.X. Liu, J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Appl. Math. Comput. 197 (2008) 41–50.
[21] M.R. Hestenes, Conjugate Direction Methods in Optimization, Springer-Verlag, Berlin, 1980.

[22] Y.X. Peng, X.Y. Hu, L. Zhang, An iteration method for the symmetric solutions and the optimal approximation solution of the matrix equation AXB = C, Appl. Math. Comput. 160 (2005) 763–777.
[23] Z.H. Peng, X.Y. Hu, L. Zhang, An efficient algorithm for the least-squares reflexive solution of the matrix equations A_1XB_1 = C_1, A_2XB_2 = C_2, Appl. Math. Comput. 181 (2006) 988–999.
[24] Z.Y. Peng, Y.X. Peng, An efficient iterative method for solving the matrix equation AXB + CYD = E, Numer. Linear Algebra Appl. 13 (2006) 473–485.
[25] J.K. Reid, On the method of conjugate gradients for the solution of large sparse systems of linear equations, in: J.K. Reid (Ed.), Large Sparse Sets of Linear Equations, Academic Press, New York, 1971.
[26] Q.W. Wang, J.H. Sun, S.Z. Li, Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra, Linear Algebra Appl. 353 (2002) 169–182.
[27] Q.W. Wang, A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity, Linear Algebra Appl. 384 (2004) 43–54.
[28] Q.W. Wang, A system of four matrix equations over von Neumann regular rings and its applications, Acta Math. Sinica, Ser. A 21 (2005) 323–334.
[29] Q.W. Wang, H.S. Zhang, G.J. Song, A new solvable condition for a pair of generalized Sylvester equations, Electron. J. Linear Algebra 18 (2009) 289–301.
[30] Q.W. Wang, H.X. Chang, Q. Ning, The common solution to six quaternion matrix equations with applications, Appl. Math. Comput. 198 (2008) 209–226.
[31] Q.W. Wang, Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations, Comput. Math. Appl. 49 (2005) 641–650.
[32] Q.W. Wang, The general solution to a system of real quaternion matrix equations, Comput. Math. Appl. 49 (2005) 665–675.
[33] Q.W. Wang, F. Zhang, The reflexive re-nonnegative definite solution to a quaternion matrix equation, Electron. J. Linear Algebra 17 (2008) 88–101.
[34] Q.W. Wang, H.S. Zhang, S.W. Yu, On solutions to the quaternion matrix equation AXB + CYD = E, Electron. J. Linear Algebra 17 (2008) 343–358.
[35] Q.W. Wang, J.W. Woude, H.X. Chang, A system of real quaternion matrix equations with applications, Linear Algebra Appl. 431 (2009) 2291–2303.
[36] Q.W. Wang, C.K. Li, Ranks and the least-norm of the general solution to a system of quaternion matrix equations, Linear Algebra Appl. 430 (2009) 1626–1640.
[37] B. Zhou, G.R. Duan, An explicit solution to the matrix equation AX − XF = BY, Linear Algebra Appl. 402 (2005) 345–366.
[38] B. Zhou, G.R. Duan, A new solution to the generalized Sylvester matrix equation AV − EVF = BW, Systems Control Lett. 55 (2006) 193–198.
[39] B. Zhou, G.R. Duan, Solutions to generalized Sylvester matrix equation by Schur decomposition, Internat. J. Systems Sci. 38 (2007) 369–375.
[40] B. Zhou, Z.Y. Li, G.R. Duan, Y. Wang, Weighted least squares solutions to general coupled Sylvester matrix equations, J. Comput. Appl. Math. 224 (2009) 759–776.
[41] B. Zhou, Z.B. Yan, Solutions to right coprime factorizations and generalized Sylvester matrix equations, Trans. Inst. Measure. Control 30 (2008) 397–426.
[42] B. Zhou, G.R. Duan, On the generalized Sylvester mapping and matrix equations, Systems Control Lett. 57 (2008) 200–208.
[43] B. Zhou, G.R. Duan, Z.Y. Li, Gradient based iterative algorithm for solving coupled matrix equations, Systems Control Lett. 58 (2009) 327–333.
[44] B. Zhou, Z.Y. Li, G.R. Duan, Y. Wang, Solutions to a family of matrix equations by using the Kronecker matrix polynomials, Appl. Math. Comput. 212 (2009) 327–336.