Journal of Applied Mathematics

Advances in Matrices, Finite and Infinite, with Applications

Guest Editors: P. N. Shivakumar, Panayiotis Psarrakos, K. C. Sivakumar, and Yang Zhang


Copyright © 2013 Hindawi Publishing Corporation. All rights reserved.

This is a special issue published in “Journal of Applied Mathematics.” All articles are open access articles distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Editorial Board

Saeid Abbasbandy, Iran Meng Fan, China Yongkun Li, China Mina B. Abd-El-Malek, Egypt Ya Ping Fang, China Wan-Tong Li, China Mohamed A. Abdou, Egypt Antonio J. M. Ferreira, J. Liang, China Subhas Abel, Michel Fliess, Ching-Jong Liao, Taiwan Mostafa Adimy, France M. A. Fontelos, Chong Lin, China Carlos J. S. Alves, Portugal Huijun Gao, China Mingzhu Liu, China Mohamad Alwash, USA Bernard J. Geurts, The Netherlands Chein-Shan Liu, Taiwan Igor Andrianov, Germany Jamshid Ghaboussi, USA Kang Liu, USA Sabri Arik, Turkey Pablo González-Vera, Spain Yansheng Liu, China Francis T. K. Au, Hong Kong Laurent Gosse, Italy Fawang Liu, Australia Olivier Bahn, Canada K. S. Govinder, South Africa Shutian Liu, China Roberto Barrio, Spain Jose L. Gracia, Spain Zhijun Liu, China Alfredo Bellen, Italy Yuantong Gu, Australia Julián López-Gómez, Spain Jafar Biazar, Iran Zhihong Guan, China Shiping Lu, China Hester Bijl, The Netherlands Nicola Guglielmi, Italy Gert Lube, Germany Anjan Biswas, Saudi Arabia Frederico G. Guimarães, Nazim Idrisoglu Mahmudov, Turkey Stephane P. A. Bordas, USA Vijay Gupta, India Oluwole Daniel Makinde, South Africa James Robert Buchanan, USA Bo Han, China Francisco J. Marcellán, Spain Alberto Cabada, Spain Maoan Han, China Guiomar Martín-Herrán, Spain Xiao Chuan Cai, USA Pierre Hansen, Canada Nicola Mastronardi, Italy Jinde Cao, China Ferenc Hartung, Hungary Michael McAleer, The Netherlands Alexandre Carvalho, Brazil Xiaoqiao He, Hong Kong Stephane Metens, France Song Cen, China Luis Javier Herrera, Spain Michael Meylan, Australia Qianshun S. Chang, China J. Hoenderkamp, The Netherlands Alain Miranville, France Tai-Ping Chang, Taiwan Ying Hu, France Ram N. Mohapatra, USA Shih-sen Chang, China Ning Hu, Japan Jaime E. Munoz Rivera, Brazil Rushan Chen, China Zhilong L. Huang, China

Javier Murillo, Spain Xinfu Chen, USA Kazufumi Ito, USA Roberto Natalini, Italy Ke Chen, UK Takeshi Iwamoto, Japan Srinivasan Natesan, India Eric Cheng, Hong Kong George Jaiani, Georgia Jiri Nedoma, Czech Republic Francisco Chiclana, UK Zhongxiao Jia, China Jianlei Niu, Hong Kong Jen-Tzung Chien, Taiwan Tarun Kant, India Roger Ohayon, France C. S. Chien, Taiwan Ido Kanter, Israel Javier Oliver, Spain Han H. Choi, Republic of Korea Abdul Hamid Kara, South Africa Donal O’Regan, Ireland Tin-Tai Chow, China Hamid Reza Karimi, Norway Martin Ostoja-Starzewski, USA M. S. H. Chowdhury, Malaysia Jae-Wook Kim, UK Turgut Öziş, Turkey Carlos Conca, Jong Hae Kim, Republic of Korea Claudio Padra, Argentina Vitor Costa, Portugal Kazutake Komori, Japan Reinaldo Martinez Palhares, Brazil Livija Cveticanin, Serbia Fanrong Kong, USA Francesco Pellicano, Italy Eric de Sturler, USA Vadim Krysko, Juan Manuel Peña, Spain Orazio Descalzi, Chile Jin L. Kuang, Singapore Ricardo Perera, Spain Kai Diethelm, Germany Miroslaw Lachowicz, Poland Malgorzata Peszynska, USA Vit Dolejsi, Czech Republic Hak-Keung Lam, UK James F. Peters, Canada Bo-Qing Dong, China Tak-Wah Lam, Hong Kong Mark A. Petersen, South Africa Magdy A. Ezzat, Egypt P. G. L. Leach, Cyprus Miodrag Petković, Serbia Vu Ngoc Phat, Vietnam Abdel-Maksoud A. Soliman, Egypt Qing-Wen Wang, China Andrew Pickering, Spain Xinyu Song, China Guangchen Wang, China Hector Pomares, Spain Qiankun Song, China Junjie Wei, China Maurizio Porfiri, USA Yuri N. Sotskov, Belarus Li Weili, China Mario Primicerio, Italy Peter J. C. Spreij, The Netherlands

Martin Weiser, Germany Morteza Rafei, The Netherlands Niclas Strömberg, Sweden Frank Werner, Germany Roberto Renò, Italy Ray K. L. Su, Hong Kong Shanhe Wu, China Jacek Rokicki, Poland Jitao Sun, China Dongmei Xiao, China Dirk Roose, Wenyu Sun, China Gongnan Xie, China Carla Roque, Portugal XianHua Tang, China Yuesheng Xu, USA Debasish Roy, India Alexander Timokha, Norway Suh-Yuh Yang, Taiwan Samir H. Saker, Egypt Mariano Torrisi, Italy Bo Yu, China Marcelo A. Savi, Brazil Jung-Fa Tsai, Taiwan Jinyun Yuan, Brazil Wolfgang Schmidt, Germany Ch. Tsitouras, Greece Alejandro Zarzo, Spain Eckart Schnack, Germany Kuppalapalle Vajravelu, USA Guisheng Zhai, Japan Mehmet Sezer, Turkey Alvaro Valencia, Chile Jianming Zhan, China Naseer Shahzad, Saudi Arabia Erik Van Vleck, USA Zhihua Zhang, China Fatemeh Shakeri, Iran Ezio Venturino, Italy Jingxin Zhang, Australia Jian Hua Shen, China Jesus Vigo-Aguiar, Spain Shan Zhao, USA Hui-Shen Shen, China Michael N. Vrahatis, Greece Chongbin Zhao, Australia Fernando Simões, Portugal Baolin Wang, China Renat Zhdanov, USA Theodore E. Simos, Greece Mingxin Wang, China Hongping Zhu, China

Contents

Advances in Matrices, Finite and Infinite, with Applications, P. N. Shivakumar, Panayiotis Psarrakos, K. C. Sivakumar, and Yang Zhang Volume 2013, Article ID 156136, 3 pages

On Nonnegative Moore-Penrose Inverses of Perturbed Matrices, Shani Jose and K. C. Sivakumar Volume 2013, Article ID 680975, 7 pages

A New Construction of Multisender Authentication Codes from Polynomials over Finite Fields, Xiuli Wang Volume 2013, Article ID 320392, 7 pages

Geometrical and Spectral Properties of the Orthogonal Projections of the Identity, Luis González, Antonio Suárez, and Dolores García Volume 2013, Article ID 435730, 8 pages

A Parameterized Splitting Preconditioner for Generalized Saddle Point Problems, Wei-Hua Luo and Ting-Zhu Huang Volume 2013, Article ID 489295, 6 pages

Solving Optimization Problems on Hermitian Matrix Functions with Applications, Xiang Zhang and Shu-Wen Xiang Volume 2013, Article ID 593549, 11 pages

Left and Right Inverse Eigenpairs Problem for κ-Hermitian Matrices, Fan-Liang Li, Xi-Yan Hu, and Lei Zhang Volume 2013, Article ID 230408, 6 pages

Completing a 2 × 2 Block Matrix of Real Quaternions with a Partial Specified Inverse, Yong Lin and Qing-Wen Wang Volume 2013, Article ID 271978, 5 pages

Fuzzy Approximate Solution of Positive Fully Fuzzy Linear Matrix Equations, Xiaobin Guo and Dequan Shang Volume 2013, Article ID 178209, 7 pages

Ranks of a Constrained Hermitian Matrix Expression with Applications, Shao-Wen Yu Volume 2013, Article ID 514984, 9 pages

An Efficient Algorithm for the Reflexive Solution of the Quaternion Matrix Equation AXB + CX^H D = F, Ning Li, Qing-Wen Wang, and Jing Jiang Volume 2013, Article ID 217540, 14 pages

The Optimization on Ranks and Inertias of a Quadratic Hermitian Matrix Function and Its Applications, Yirong Yao Volume 2013, Article ID 961568, 6 pages

Constrained Solutions of a System of Matrix Equations, Qing-Wen Wang and Juan Yu Volume 2012, Article ID 471573, 19 pages

On the Hermitian R-Conjugate Solution of a System of Matrix Equations, Chang-Zhou Dong, Qing-Wen Wang, and Yu-Ping Zhang Volume 2012, Article ID 398085, 14 pages

Perturbation Analysis for the Matrix Equation X − ∑_{i=1}^{m} A_i*XA_i + ∑_{j=1}^{n} B_j*XB_j = I, Xue-Feng Duan and Qing-Wen Wang Volume 2012, Article ID 784620, 13 pages

Refinements of Kantorovich Inequality for Hermitian Matrices, Feixiang Chen Volume 2012, Article ID 483624, 11 pages

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 156136, 3 pages
http://dx.doi.org/10.1155/2013/156136

Editorial

Advances in Matrices, Finite and Infinite, with Applications

P. N. Shivakumar,1 Panayiotis Psarrakos,2 K. C. Sivakumar,3 and Yang Zhang1

1 Department of Mathematics, University of Manitoba, Winnipeg, MB, Canada R3T 2N2
2 Department of Mathematics, School of Applied Mathematical and Physical Sciences, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece
3 Department of Mathematics, Indian Institute of Technology Madras, Chennai 600 036, India

Correspondence should be addressed to P. N. Shivakumar; [email protected]

Received 13 June 2013; Accepted 13 June 2013

Copyright © 2013 P. N. Shivakumar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In the mathematical modeling of many problems, including, but not limited to, physical sciences, biological sciences, engineering, image processing, computer graphics, medical imaging, and even social sciences lately, solution methods most frequently involve matrix equations. In conjunction with powerful computing machines, complex numerical calculations employing matrix methods could be performed sometimes even in real time. It is well known that infinite matrices arise more naturally than finite matrices and have a colorful history in development from sequences, series, and quadratic forms. The modern viewpoint considers infinite matrices more as operators defined between certain specific infinite-dimensional normed spaces, Banach spaces, or Hilbert spaces. Giant and rapid advancements in the theory and applications of finite matrices have been made over the past several decades. These developments have been well documented in many books, monographs, and research journals.

The present topics of research include diagonally dominant matrices, their many extensions, inverses of matrices, their recursive computation, singular matrices, generalized inverses, inverse positive matrices with specific emphasis on their applications to economics like the Leontief input-output models, and the use of finite difference and finite element methods in the solution of partial differential equations, perturbation methods, and eigenvalue problems of interest and importance to numerical analysts, statisticians, physical scientists, and engineers, to name a few.

The aim of this special issue is to announce certain recent advancements in matrices, finite and infinite, and their applications. For a review of infinite matrices and applications, see P. N. Shivakumar and K. C. Sivakumar (Linear Algebra Appl., 2009). Specifically, an example of an application can be found in the classical problem “shape of a drum,” in P. N. Shivakumar, W. Yan, and Y. Zhang (WSEAS Transactions on Mathematics, 2011). The focal themes of this special issue are inverse positive matrices including M-matrices, applications of operator theory, matrix perturbation theory and pseudospectra, matrix functions and generalized eigenvalue problems, and inverse problems including scattering and matrices over quaternions. In the following, we present a brief overview of a few of the central topics of this special issue.

For M-matrices, let us start by mentioning the most cited books by A. Berman and R. J. Plemmons (SIAM, 1994) and by R. A. Horn and C. R. Johnson (Cambridge, 1994) as excellent sources. One of the most important properties of invertible M-matrices is that their inverses are nonnegative. There are several works that have considered generalizations of some of the important properties of M-matrices. We only point out a few of them here. The article by D. Mishra and K. C. Sivakumar (Oper. Matrices, 2012) considers generalizations of inverse nonnegativity to the Moore-Penrose inverse, while more general classes of matrices are the objects of discussion in D. Mishra and K. C. Sivakumar (Linear Algebra Appl., 2012), and A. N. Sushama, K. Premakumari, and K. C. Sivakumar (Electron. J. Linear Algebra, 2012).

A brief description of the papers in the issue is as follows.

In the work of Z. Zhang, the problem of solving certain optimization problems on Hermitian matrix functions is studied. Specifically, the author considers the extremal inertias and ranks of a given function f(X, Y) : C^{m×n} → C^{m×n} (which is defined in terms of matrices A, B, C, and D

in C^{m×n}), subject to X, Y satisfying certain matrix equations. As applications, the author presents necessary and sufficient conditions for f to be positive definite, positive semidefinite, and so forth. As another application, the author presents a characterization for f(X, Y) = 0, again, subject to X, Y satisfying certain matrix equations.

The task of determining parametrized splitting preconditioners for certain generalized saddle point problems is undertaken by W.-H. Luo and T.-Z. Huang in their work. The well-known and very widely applicable Sherman-Morrison-Woodbury formula for the sum of matrices is used in determining a preconditioner for linear equations where the coefficient matrix may be singular and nonsymmetric. Illustrations of the procedure are provided for Stokes and Oseen problems.

The work reported by A. González, A. Suárez, and D. García deals with the problem of estimating the Frobenius condition of the matrix AN, given a matrix A ∈ R^{n×n}, where N satisfies the condition that N = argmin_{X∈S} ‖I − AX‖_F. Here, S is a certain subspace of R^{n×n} and ‖·‖_F denotes the Frobenius norm. The authors derive certain spectral and geometrical properties of the preconditioner N as above.

It is well known that the concept of quaternions was introduced by Hamilton in 1843 and has played an important role in contemporary mathematics such as noncommutative algebra, analysis, and topology. Nowadays, quaternion matrices are widely and heavily used in computer science, quantum physics, altitude control, robotics, signal and colour image processing, and so on. Many questions can be reduced to quaternion matrices and solving quaternion matrix equations. This special issue has provided an excellent opportunity for researchers to report their recent results.

Let B, C, D, and E be (not necessarily square) matrices with quaternion entries so that the matrix M = (A B; C D) is square. Y. Lin and Q.-W. Wang present necessary and sufficient conditions so that the top left block A of M exists satisfying the conditions that M is nonsingular and the matrix E is the top left block of M^{-1}. A general expression for such an A, when it exists, is obtained. A numerical illustration is presented.

Let H^{n×n} denote the space of all n×n matrices whose entries are quaternions. Let Q ∈ H^{n×n} be Hermitian and self-invertible. X ∈ H^{n×n} is called reflexive (relative to Q) if X = QXQ. The authors F.-L. Li, X.-Y. Hu, and L. Zhang present an efficient algorithm for finding the reflexive solution of the quaternion matrix equation AXB + CX*D = F. They also show that, given an appropriate initial matrix, their procedure yields the least reflexive solution with respect to the Frobenius norm.

In the work on the ranks of a constrained Hermitian matrix expression, S. W. Yu gives formulas for the maximal and minimal ranks of the quaternion Hermitian matrix expression C_4 − A_4XA_4*, where X is a Hermitian solution to the quaternion matrix equations A_1X = C_1, XB_1 = C_2, and A_3XA_3* = C_3, and also applies this result to some special system of quaternion matrix equations.

Y. Yao reports his findings on the optimization on ranks and inertias of a quadratic Hermitian matrix function. He presents one solution for optimization problems on the ranks and inertias of the quadratic Hermitian matrix function Q − XPX* subject to a consistent system of matrix equations AX = C and XB = D. As applications, he derives necessary and sufficient conditions for the solvability to the systems of matrix equations and matrix inequalities AX = C, XB = D, and XPX* = (>, <, ≥, ≤) Q.

Let R be a nontrivial real symmetric involution matrix. A complex matrix A is called R-conjugate if Ā = RAR. In the work on the Hermitian R-conjugate solution of a system of matrix equations, C.-Z. Dong, Q.-W. Wang, and Y.-P. Zhang present necessary and sufficient conditions for the existence of the Hermitian R-conjugate solution to the system of complex matrix equations AX = C and XB = D. An expression of the Hermitian R-conjugate solution is also presented.

A perturbation analysis for the matrix equation X − ∑_{i=1}^{m} A_i*XA_i + ∑_{j=1}^{n} B_j*XB_j = I is undertaken by X.-F. Duan and Q.-W. Wang. They give a precise perturbation bound for a positive definite solution. A numerical example is presented to illustrate the sharpness of the perturbation bound.

For a complex Hermitian A of order n with eigenvalues λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n, the celebrated Kantorovich inequality is 1 ≤ x*Ax · x*A^{-1}x ≤ (λ_n + λ_1)^2 / 4λ_1λ_n, for all unit vectors x ∈ C^n. Equivalently, 0 ≤ x*Ax · x*A^{-1}x − 1 ≤ (λ_n − λ_1)^2 / 4λ_1λ_n, for all unit vectors x ∈ C^n. F. Chen, in his work on refinements of the Kantorovich inequality for Hermitian matrices, obtains some refinements of the second inequality, where the proofs use only elementary ideas (http://www.math.ntua.gr/~ppsarr/).

Linear matrix equations and their optimal approximation problems have great applications in structural design, biology, statistics, control theory, and linear optimal control, and, as a consequence, they have attracted the attention of several researchers for many decades. In the work of Q.-W. Wang and J. Yu, necessary and sufficient conditions for the existence of, and the expressions for, orthogonal solutions, symmetric orthogonal solutions, and skew-symmetric orthogonal solutions of the system of matrix equations AX = B and XC = D are derived. When the solvability conditions are not satisfied, the least squares symmetric orthogonal solutions and the least squares skew-symmetric orthogonal solutions are obtained. A methodology is provided to compute the least squares symmetric orthogonal solutions, and an illustrative example is given to illustrate the proposed algorithm.

It is pertinent to point out that, among other things, an interesting generalization of M-matrices is obtained by S. Jose and K. C. Sivakumar. The authors consider the problem of nonnegativity of the Moore-Penrose inverse of a perturbation of the form A − XGY^T given that A† ≥ 0. Using a generalized version of the Sherman-Morrison-Woodbury formula, conditions for (A − XGY^T)† to be nonnegative are derived. Applications of the results are presented briefly. Iterative versions of the results are also studied.

Many real-world engineering systems are too complex to be defined in precise terms, and, in many matrix equations, some or all of the system parameters are vague or imprecise. Thus, solving a fuzzy matrix equation becomes important. In the work reported by X. Guo and D. Shang, the fuzzy matrix equation Ã ⊗ X̃ ⊗ B̃ = C̃, in which Ã, B̃, and C̃ are m×m, n×n, and m×n nonnegative LR fuzzy numbers matrices, respectively, is investigated. This fuzzy matrix system, which has a wide use in control theory and control engineering, is extended into three crisp systems of linear matrix equations according to the arithmetic operations of LR fuzzy numbers. Based on the pseudoinverse matrix, a computing model for the positive fully fuzzy linear matrix equation is provided, and the fuzzy approximate solution of the original fuzzy systems is obtained by solving the crisp linear matrix systems. In addition, the existence condition of a nonnegative fuzzy solution is discussed, and numerical examples are given to illustrate the proposed method. Overall, the technique of LR fuzzy numbers and their operations appears to be a strong tool in investigating fully fuzzy matrix equations.

The left and right inverse eigenpairs problem arises in a natural way when studying spectral perturbations of matrices and relative recursive problems. F.-L. Li, X.-Y. Hu, and L. Zhang consider in their work the left and right inverse eigenpairs problem for k-Hermitian matrices and its optimal approximate problem. The class of k-Hermitian matrices contains Hermitian and perhermitian matrices and has numerous applications in engineering and statistics. Based on the special properties of k-Hermitian matrices and their left and right eigenpairs, an equivalent problem is obtained. Then, combining a new inner product of matrices, necessary and sufficient conditions for the solvability of the problem are given, and its general solutions are derived. Furthermore, the optimal approximate solution, a calculation procedure to compute the unique optimal approximation, and numerical experiments are provided.

Finally, let us point out a work on algebraic coding theory. Recall that a communication system in which a receiver can verify the authenticity of a message sent by a group of senders is referred to as a multisender authentication code. In the work reported by X. Wang, the author constructs multisender authentication codes using polynomials over finite fields. Here, certain important parameters and the probabilities of deceptions of these codes are also computed.
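The Kantorovich inequality quoted above is easy to probe numerically. The following sketch (an illustrative addition, not part of any paper in this issue; it assumes NumPy is available) generates a random Hermitian positive definite matrix and checks the bound 1 ≤ x*Ax · x*A^{-1}x ≤ (λ_n + λ_1)^2 / 4λ_1λ_n over random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian positive definite matrix A (the shift guarantees positive eigenvalues).
n = 6
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T + n * np.eye(n)

eigvals = np.linalg.eigvalsh(A)          # ascending real eigenvalues
lam_min, lam_max = eigvals[0], eigvals[-1]
upper = (lam_max + lam_min) ** 2 / (4 * lam_min * lam_max)

A_inv = np.linalg.inv(A)
for _ in range(1000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)               # unit vector
    # x*Ax and x*A^{-1}x are real for Hermitian A
    val = np.real(x.conj() @ A @ x) * np.real(x.conj() @ A_inv @ x)
    assert 1.0 - 1e-12 <= val <= upper + 1e-12
print("Kantorovich bounds verified")
```

The lower bound 1 is attained when x is an eigenvector, and the upper bound is attained for an equal mix of the extreme eigenvectors.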

Acknowledgments

We wish to thank all the authors for their contributions and the reviewers for their valuable comments in bringing the special issue to fruition.

P. N. Shivakumar
Panayiotis Psarrakos
K. C. Sivakumar
Yang Zhang

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 680975, 7 pages
http://dx.doi.org/10.1155/2013/680975

Research Article

On Nonnegative Moore-Penrose Inverses of Perturbed Matrices

Shani Jose and K. C. Sivakumar

Department of Mathematics, Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036, India

Correspondence should be addressed to K. C. Sivakumar; [email protected]

Received 22 April 2013; Accepted 7 May 2013

Academic Editor: Yang Zhang

Copyright © 2013 S. Jose and K. C. Sivakumar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Nonnegativity of the Moore-Penrose inverse of a perturbation of the form A − XGY^T is considered when A† ≥ 0. Using a generalized version of the Sherman-Morrison-Woodbury formula, conditions for (A − XGY^T)† to be nonnegative are derived. Applications of the results are presented briefly. Iterative versions of the results are also studied.
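As a numerical illustration of the kind of formula studied in this paper (a sketch with randomly generated data, not an excerpt from the paper; NumPy assumed), the following checks an SMW-type expression for the Moore-Penrose inverse of a rank-preserving perturbation A − XGY^T with R(X) ⊆ R(A) and R(Y) ⊆ R(A^T):

```python
import numpy as np

rng = np.random.default_rng(1)

# A singular 4x5 matrix of rank 2, built from a full-rank factorization.
B = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5))
A = B @ C
A_pinv = np.linalg.pinv(A)

# X, Y chosen so that R(X) ⊆ R(A) and R(Y) ⊆ R(A^T), as the theory requires.
X = A @ rng.standard_normal((5, 2))
Y = A.T @ rng.standard_normal((4, 2))
G = 0.1 * np.eye(2)   # small nonsingular G keeps rank(A - X G Y^T) = rank(A)

K = np.linalg.inv(G) - Y.T @ A_pinv @ X   # assumed nonsingular for this data
omega = A - X @ G @ Y.T
smw = A_pinv + A_pinv @ X @ np.linalg.inv(K) @ Y.T @ A_pinv

assert np.linalg.matrix_rank(omega) == 2
assert np.allclose(np.linalg.pinv(omega), smw)
print("generalized SMW formula verified")
```

When A is nonsingular and G = I, the expression reduces to the classical Sherman-Morrison-Woodbury formula recalled in the introduction.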

1. Introduction

We consider the problem of characterizing nonnegativity of the Moore-Penrose inverse for matrix perturbations of the type A − XGY^T, when the Moore-Penrose inverse of A is nonnegative. Here, we say that a matrix B = (b_ij) is nonnegative, and denote it by B ≥ 0, if b_ij ≥ 0, ∀i, j. This problem was motivated by the results in [1], where the authors consider an M-matrix A and find sufficient conditions for the perturbed matrix (A − XY^T) to be an M-matrix. Let us recall that a matrix B = (b_ij) is said to be a Z-matrix if b_ij ≤ 0 for all i, j, i ≠ j. An M-matrix is a nonsingular Z-matrix with nonnegative inverse. The authors in [1] use the well-known Sherman-Morrison-Woodbury (SMW) formula as one of the important tools to prove their main result. The SMW formula gives an expression for the inverse of (A − XY^T) in terms of the inverse of A, when it exists. When A is nonsingular, A − XY^T is nonsingular if and only if I − Y^T A^{-1} X is nonsingular. In that case,

(A − XY^T)^{-1} = A^{-1} + A^{-1}X(I − Y^T A^{-1}X)^{-1}Y^T A^{-1}. (1)

The main objective of the present work is to study certain structured perturbations A − XY^T of matrices A such that the Moore-Penrose inverse of the perturbation is nonnegative whenever the Moore-Penrose inverse of A is nonnegative. Clearly, this class of matrices includes the class of matrices that have nonnegative inverses, especially M-matrices. In our approach, extensions of the SMW formula for singular matrices play a crucial role. Let us mention that this problem has been studied in the literature (see, for instance, [2] for matrices and [3] for operators over Hilbert spaces). We refer the reader to the references in the latter for other recent extensions.

In this paper, first we present alternative proofs of generalizations of the SMW formula for the cases of the Moore-Penrose inverse (Theorem 5) and the group inverse (Theorem 6) in Section 3. In Section 4, we characterize the nonnegativity of (A − XGY^T)†. This is done in Theorem 9 and is one of the main results of the present work. As a consequence, we present a result for M-matrices which seems new. We present a couple of applications of the main result in Theorems 13 and 15. In the concluding section, we study iterative versions of the results of the second section. We prove two characterizations for (A − XY^T)† to be nonnegative in Theorems 18 and 21.

Before concluding this introductory section, let us give a motivation for the work that we have undertaken here. It is a well-documented fact that M-matrices arise quite often in solving sparse systems of linear equations. An extensive theory of M-matrices has been developed relative to their role in numerical analysis involving the notion of splitting in iterative methods and discretization of differential equations, in the mathematical modeling of an economy, optimization, and Markov chains [4, 5]. Specifically, the inspiration for the present study comes from the work of [1], where the authors consider a system of linear inequalities arising out of a problem in third generation wireless communication systems. The matrix defining the inequalities there is

an M-matrix. In the likelihood that the matrix of this problem is singular (due to truncation or round-off errors), the earlier method becomes inapplicable. Our endeavour is to extend the applicability of these results to more general matrices, for instance, matrices with nonnegative Moore-Penrose inverses. Finally, as mentioned earlier, since matrices with nonnegative generalized inverses include in particular M-matrices, it is apparent that our results are expected to enlarge the applicability of the methods presently available for M-matrices, even in a very general framework, including the specific problem mentioned above.

2. Preliminaries

Let R, R^n, and R^{m×n} denote the set of all real numbers, the n-dimensional real Euclidean space, and the set of all m×n matrices over R. For A ∈ R^{m×n}, let R(A), N(A), R(A)^⊥, and A^T denote the range space, the null space, the orthogonal complement of the range space, and the transpose of the matrix A, respectively. For x = (x_1, x_2, ..., x_n)^T ∈ R^n, we say that x is nonnegative, that is, x ≥ 0, if and only if x_i ≥ 0 for all i = 1, 2, ..., n. As mentioned earlier, for a matrix B we use B ≥ 0 to denote that all the entries of B are nonnegative. Also, we write A ≤ B if B − A ≥ 0.

Let ρ(A) denote the spectral radius of the matrix A. If ρ(A) < 1 for A ∈ R^{n×n}, then I − A is invertible. The next result gives a necessary and sufficient condition for the nonnegativity of (I − A)^{-1}. This will be one of the results that will be used in proving the first main result.

Lemma 1 (see [5, Lemma 2.1, Chapter 6]). Let A ∈ R^{n×n} be nonnegative. Then, ρ(A) < 1 if and only if (I − A)^{-1} exists and

(I − A)^{-1} = ∑_{k=0}^{∞} A^k ≥ 0. (2)

More generally, matrices having nonnegative inverses are characterized using a property called monotonicity. The notion of monotonicity was introduced by Collatz [6]. A real n×n matrix A is called monotone if Ax ≥ 0 ⇒ x ≥ 0. It was proved by Collatz [6] that A is monotone if and only if A^{-1} exists and A^{-1} ≥ 0.

One of the frequently used tools in studying monotone matrices is the notion of a regular splitting. We only refer the reader to the book [5] for more details on the relationship between these concepts.

The notion of monotonicity has been extended in a variety of ways to singular matrices using generalized inverses. First, let us briefly review the notion of two important generalized inverses.

For A ∈ R^{m×n}, the Moore-Penrose inverse is the unique Z ∈ R^{n×m} satisfying the Penrose equations: AZA = A, ZAZ = Z, (AZ)^T = AZ, and (ZA)^T = ZA. The unique Moore-Penrose inverse of A is denoted by A†, and it coincides with A^{-1} when A is invertible.

The following theorem by Desoer and Whalen, which is used in the sequel, gives an equivalent definition for the Moore-Penrose inverse. Let us mention that this result was proved for operators between Hilbert spaces.

Theorem 2 (see [7]). Let A ∈ R^{m×n}. Then A† ∈ R^{n×m} is the unique matrix Z ∈ R^{n×m} satisfying

(i) ZAx = x, ∀x ∈ R(A^T),

(ii) Zy = 0, ∀y ∈ N(A^T).

Now, for A ∈ R^{n×n}, any Z ∈ R^{n×n} satisfying the equations AZA = A, ZAZ = Z, and AZ = ZA is called the group inverse of A. The group inverse does not exist for every matrix. But whenever it exists, it is unique. A necessary and sufficient condition for the existence of the group inverse of A is that the index of A is 1, where the index of a matrix is the smallest positive integer k such that rank(A^{k+1}) = rank(A^k).

Some of the well-known properties of the Moore-Penrose inverse and the group inverse are given as follows: R(A^T) = R(A†), N(A^T) = N(A†), A†A = P_{R(A^T)}, and AA† = P_{R(A)}. In particular, x ∈ R(A^T) if and only if x = A†Ax. Also, R(A) = R(A^#), N(A) = N(A^#), and A^#A = AA^# = P_{R(A),N(A)}. Here, for complementary subspaces L and M of R^k, P_{L,M} denotes the projection of R^k onto L along M. P_L denotes P_{L,M} if M = L^⊥. For details, we refer the reader to the book [8].

In matrix analysis, a decomposition (splitting) of a matrix is considered in order to study the convergence of iterative schemes that are used in the solution of linear systems of algebraic equations. As mentioned earlier, regular splittings are useful in characterizing matrices with nonnegative inverses, whereas proper splittings are used for studying singular systems of linear equations. Let us next recall this notion. For a matrix A ∈ R^{m×n}, a decomposition A = U − V is called a proper splitting [9] if R(A) = R(U) and N(A) = N(U). It is rather well known that a proper splitting exists for every matrix and that it can be obtained using a full-rank factorization of the matrix. For details, we refer to [10]. Certain properties of a proper splitting are collected in the next result.

Theorem 3 (see [9, Theorem 1]). Let A = U − V be a proper splitting of A ∈ R^{m×n}. Then,

(a) A = U(I − U†V),

(b) I − U†V is nonsingular, and

(c) A† = (I − U†V)^{-1} U†.

The following result by Berman and Plemmons [9] gives a characterization for A† to be nonnegative when A has a proper splitting. This result will be used in proving our first main result.

Theorem 4 (see [9, Corollary 4]). Let A = U − V be a proper splitting of A ∈ R^{m×n}, where U† ≥ 0 and U†V ≥ 0. Then A† ≥ 0 if and only if ρ(U†V) < 1.

3. Extensions of the SMW Formula for Generalized Inverses

The primary objects of consideration in this paper are generalized inverses of perturbations of certain types of a matrix A.
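The proper-splitting results recalled in the preliminaries (Lemma 1 and Theorems 3 and 4) can be sanity-checked numerically. The sketch below (an illustration added here, not part of the paper; NumPy assumed) uses the Jacobi-type splitting of a 2×2 M-matrix:

```python
import numpy as np

# Jacobi-type proper splitting A = U - V of an M-matrix.
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
U = np.diag(np.diag(A))          # R(A) = R(U), N(A) = N(U): a proper splitting
V = U - A

U_pinv = np.linalg.pinv(U)
W = U_pinv @ V                   # here U† ≥ 0 and U†V ≥ 0
rho = max(abs(np.linalg.eigvals(W)))
assert rho < 1                   # the spectral-radius condition of Theorem 4

# Theorem 3(c): A† = (I - U†V)^{-1} U†
lhs = np.linalg.pinv(A)
rhs = np.linalg.inv(np.eye(2) - W) @ U_pinv
assert np.allclose(lhs, rhs)
assert np.all(lhs >= 0)          # Theorem 4: A† ≥ 0

# Lemma 1: (I - W)^{-1} agrees with a truncated Neumann series of W
series = sum(np.linalg.matrix_power(W, k) for k in range(60))
assert np.allclose(np.linalg.inv(np.eye(2) - W), series)
print("proper-splitting characterization verified")
```

The same checks run unchanged on rectangular or singular A, which is exactly the setting in which the pseudoinverse versions of these statements matter.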

Naturally, extensions of the SMW formula for generalized inverses are relevant in the proofs. In what follows, we present two generalizations of the SMW formula for matrices. We would like to emphasize that our proofs also carry over to infinite dimensional spaces (the proof of the first result verbatim, and the proof of the second with slight modifications applicable to range spaces instead of ranks of the operators concerned). However, we confine our attention to the case of matrices. Let us also add that these results have been proved in [3] for operators over infinite dimensional spaces. We have chosen to include them here because our proofs are different from the ones in [3] and because our intention is to provide a self-contained treatment.

Theorem 5 (see [3, Theorem 2.1]). Let A ∈ R^{m×n}, X ∈ R^{m×k}, and Y ∈ R^{n×k} be such that

R(X) ⊆ R(A), R(Y) ⊆ R(A^T). (3)

Let G ∈ R^{k×k}, and let Ω = A − XGY^T. If

R(X^T) ⊆ R((G† − Y^T A† X)^T), R(Y^T) ⊆ R(G† − Y^T A† X), R(X^T) ⊆ R(G), R(Y^T) ⊆ R(G^T), (4)

then

Ω† = A† + A† X (G† − Y^T A† X)† Y^T A†. (5)

Proof. Set Z = A† + A† X H† Y^T A†, where H = G† − Y^T A† X. From the conditions (3) and (4), it follows that AA†X = X, A†AY = Y, H†HX^T = X^T, HH†Y^T = Y^T, GG†X^T = X^T, and G†GY^T = Y^T. Now,

A†XH†Y^T − A†XH†Y^TA†XGY^T = A†XH†(G†GY^T − Y^TA†XGY^T) = A†XH†HGY^T = A†XGY^T. (6)

Thus, ZΩ = A†A. Since R(Ω^T) ⊆ R(A^T), it follows that ZΩ(x) = x for all x ∈ R(Ω^T).

Let y ∈ N(Ω^T). Then A^T y − YG^TX^T y = 0, so that X^T (A†)^T (A^T − YG^TX^T) y = 0. Substituting X^T = GG†X^T and simplifying, we get H^TG^TX^T y = 0. Also, A^T y = YG^TX^T y = Y(H†)^T H^TG^TX^T y = 0, and so A† y = 0. Thus, Zy = 0 for y ∈ N(Ω^T). Hence, by Theorem 2, Z = Ω†.

The result for the group inverse follows.

Theorem 6. Let A ∈ R^{n×n} be such that A^# exists. Let X, Y ∈ R^{n×k}, and let G be nonsingular. Assume that R(X) ⊆ R(A), R(Y) ⊆ R(A^T), and G^{-1} − Y^T A^# X is nonsingular. Suppose that rank(A − XGY^T) = rank(A). Then (A − XGY^T)^# exists, and the following formula holds:

(A − XGY^T)^# = A^# + A^# X (G^{-1} − Y^T A^# X)^{-1} Y^T A^#. (7)

Conversely, if (A − XGY^T)^# exists, then the formula above holds, and we have rank(A − XGY^T) = rank(A).

Proof. Since A^# exists, R(A) and N(A) are complementary subspaces of R^n.

Suppose that rank(A − XGY^T) = rank(A). As R(X) ⊆ R(A), it follows that R(A − XGY^T) ⊆ R(A). Thus R(A − XGY^T) = R(A). By the rank-nullity theorem, the nullities of A − XGY^T and A are the same. Again, since R(Y) ⊆ R(A^T), it follows that N(A − XGY^T) = N(A). Thus, R(A − XGY^T) and N(A − XGY^T) are complementary subspaces. This guarantees the existence of the group inverse of A − XGY^T.

Conversely, suppose that (A − XGY^T)^# exists. It can be verified by direct computation that Z = A^# + A^# X (G^{-1} − Y^T A^# X)^{-1} Y^T A^# is the group inverse of A − XGY^T. Also, we have (A − XGY^T)(A − XGY^T)^# = AA^#, so that R(A − XGY^T) = R(A), and hence rank(A − XGY^T) = rank(A).

We conclude this section with a fairly old result [2] as a consequence of Theorem 5.

Theorem 7 (see [2, Theorem 15]). Let A ∈ R^{m×n} be of rank r, X ∈ R^{m×r}, and Y ∈ R^{n×r}. Let G be an r×r nonsingular matrix. Assume that R(X) ⊆ R(A), R(Y) ⊆ R(A^T), and G^{-1} − Y^T A† X is nonsingular. Let Ω = A − XGY^T. Then

(A − XGY^T)† = A† + A† X (G^{-1} − Y^T A† X)^{-1} Y^T A†. (8)

4. Nonnegativity of (A − XGY^T)†

In this section, we consider perturbations of the form A − XGY^T and derive characterizations for (A − XGY^T)† to be nonnegative when A† ≥ 0, X ≥ 0, Y ≥ 0, and G ≥ 0. In order to motivate the first main result of this paper, let us recall the following well-known characterization of M-matrices [5].

Theorem 8. Let A be a Z-matrix with the representation A = sI − B, where B ≥ 0 and s ≥ 0. Then the following statements are equivalent:
(a) A^{-1} exists and A^{-1} ≥ 0.
(b) There exists x > 0 such that Ax > 0.
(c) ρ(B) < s.

Let us prove the first result of this article. It extends Theorem 8 to singular matrices. We will be interested in extensions of conditions (a) and (c) only.
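Formula (8) of Theorem 7 reduces, for k = 1 and nonsingular A, to the classical Sherman-Morrison identity, which makes a quick numerical sanity check possible; the matrices below are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Check Omega† = A† + A†X (G^{-1} - Yᵀ A† X)^{-1} Yᵀ A†  (formula (8))
# in the simplest admissible setting: A invertible, k = 1, G = (1),
# so the range conditions hold trivially.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
X = np.array([[1.0], [1.0]])
Y = np.array([[1.0], [0.0]])
G = np.array([[1.0]])

Adag = np.linalg.pinv(A)
H = np.linalg.inv(G) - Y.T @ Adag @ X        # G^{-1} - Yᵀ A† X = (0.5)
Omega = A - X @ G @ Y.T
rhs = Adag + Adag @ X @ np.linalg.inv(H) @ Y.T @ Adag

print(np.allclose(np.linalg.pinv(Omega), rhs))   # True
```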

Theorem 9. Let A = U − V be a proper splitting of A ∈ R^{m×n} with U† ≥ 0, U†V ≥ 0, and ρ(U†V) < 1. Let G ∈ R^{k×k} be nonsingular and nonnegative, and let X ∈ R^{m×k} and Y ∈ R^{n×k} be nonnegative such that R(X) ⊆ R(A), R(Y) ⊆ R(A^T), and G^{-1} − Y^T A† X is nonsingular. Let Ω = A − XGY^T. Then the following are equivalent:
(a) Ω† ≥ 0.
(b) (G^{-1} − Y^T A† X)^{-1} ≥ 0.
(c) ρ(U†W) < 1, where W = V + XGY^T.

Proof. First, we observe that since R(X) ⊆ R(A), R(Y) ⊆ R(A^T), and G^{-1} − Y^T A† X is nonsingular, by Theorem 7 we have

Ω† = A† + A† X (G^{-1} − Y^T A† X)^{-1} Y^T A†. (9)

We thus have ΩΩ† = AA† and Ω†Ω = A†A. Therefore,

R(Ω) = R(A), N(Ω) = N(A). (10)

Note that the first statement also implies that A† ≥ 0, by Theorem 4.

(a) ⇒ (b): Taking E = A and F = XGY^T, we get Ω = E − F as a proper splitting for Ω such that E† = A† ≥ 0 and E†F = A†XGY^T ≥ 0 (since XGY^T ≥ 0). Since Ω† ≥ 0, by Theorem 4 we have ρ(A†XGY^T) = ρ(E†F) < 1. This implies that ρ(Y^TA†XG) < 1. We also have Y^TA†XG ≥ 0. Thus, by Lemma 1, (I − Y^TA†XG)^{-1} exists and is nonnegative. But we have I − Y^TA†XG = (G^{-1} − Y^TA†X)G. Now, 0 ≤ (I − Y^TA†XG)^{-1} = G^{-1}(G^{-1} − Y^TA†X)^{-1}. This implies that (G^{-1} − Y^TA†X)^{-1} ≥ 0 since G ≥ 0. This proves (b).

(b) ⇒ (c): We have U − W = U − V − XGY^T = A − XGY^T = Ω. Also, R(Ω) = R(A) = R(U) and N(Ω) = N(A) = N(U). So Ω = U − W is a proper splitting. Also, U† ≥ 0 and U†W = U†(V + XGY^T) ≥ U†V ≥ 0. Since (G^{-1} − Y^TA†X)^{-1} ≥ 0, it follows from (9) that Ω† ≥ 0. (c) now follows from Theorem 4.

(c) ⇒ (a): Since A = U − V, we have Ω = U − V − XGY^T = U − W. Also, we have U† ≥ 0 and U†V ≥ 0. Thus U†W ≥ 0, since XGY^T ≥ 0. Now, by Theorem 4, we are done if the splitting Ω = U − W is a proper splitting. Since A = U − V is a proper splitting, we have R(U) = R(A) and N(U) = N(A). Now, from the conditions in (10), we get R(U) = R(Ω) and N(U) = N(Ω). Hence Ω = U − W is a proper splitting, and this completes the proof.

The following result is a special case of Theorem 9.

Theorem 10. Let A = U − V be a proper splitting of A ∈ R^{m×n} with U† ≥ 0, U†V ≥ 0, and ρ(U†V) < 1. Let X ∈ R^{m×k} and Y ∈ R^{n×k} be nonnegative such that R(X) ⊆ R(A), R(Y) ⊆ R(A^T), and I − Y^TA†X is nonsingular. Let Ω = A − XY^T. Then the following are equivalent:
(a) Ω† ≥ 0.
(b) (I − Y^TA†X)^{-1} ≥ 0.
(c) ρ(U†W) < 1, where W = V + XY^T.

The following consequence of Theorem 10 appears to be new. It gives two characterizations for a perturbed M-matrix to be an M-matrix.

Corollary 11. Let A = sI − B, where B ≥ 0 and ρ(B) < s (i.e., A is an M-matrix). Let X ∈ R^{m×k} and Y ∈ R^{n×k} be nonnegative such that I − Y^TA†X is nonsingular. Let Ω = A − XY^T. Then the following are equivalent:
(a) Ω^{-1} ≥ 0.
(b) (I − Y^TA†X)^{-1} ≥ 0.
(c) ρ(B + XY^T) < s.

Proof. From the proof of Theorem 10, since A is nonsingular, it follows that ΩΩ† = I and Ω†Ω = I. This shows that Ω is invertible. The rest of the proof is omitted, as it is an easy consequence of the previous result.

In the rest of this section, we discuss two applications of Theorem 10. First, we characterize the least element in a polyhedral set defined by a perturbed matrix. Next, we consider the following problem: suppose that the "endpoints" of an interval matrix satisfy a certain positivity property, so that all matrices of a particular subset of the interval also satisfy that positivity condition; if we are then given a specific structured perturbation of these endpoints, what conditions guarantee that the positivity property for the corresponding subset remains valid?

The first result is motivated by Theorem 12 below. Let us recall that, with respect to the usual order, an element x* ∈ X ⊆ R^n is called a least element of X if it satisfies x* ≤ x for all x ∈ X. Note that a nonempty set may not have a least element, and if it exists, then it is unique. In this connection, the following result is known.

Theorem 12 (see [11, Theorem 3.2]). For A ∈ R^{m×n} and b ∈ R^m, let

X_b = {x ∈ R^n : Ax + y ≥ b, P_{N(A)} x = 0, P_{R(A)} y = 0, for some y ∈ R^m}. (11)

Then, a vector x* is the least element of X_b if and only if x* = A†b ∈ X_b with A† ≥ 0.

Now, we obtain the nonnegative least element of a polyhedral set defined by a perturbed matrix. This is an immediate application of Theorem 10.

Theorem 13. Let A ∈ R^{m×n} be such that A† ≥ 0. Let X ∈ R^{m×k} and Y ∈ R^{n×k} be nonnegative, such that R(X) ⊆ R(A), R(Y) ⊆ R(A^T), and I − Y^TA†X is nonsingular. Suppose that (I − Y^TA†X)^{-1} ≥ 0. For b ∈ R^m, b ≥ 0, let

S_b = {x ∈ R^n : Ωx + y ≥ b, P_{N(Ω)} x = 0, P_{R(Ω)} y = 0, for some y ∈ R^m}, (12)

where Ω = A − XY^T. Then, x* = (A − XY^T)†b is the least element of S_b.

Proof. From the assumptions, using Theorem 10, it follows that Ω† ≥ 0. The conclusion now follows from Theorem 12.

To state and prove the result for interval matrices, let us first recall the notion of interval matrices. For A, B ∈ R^{m×n}, an interval (matrix) J = [A, B] is defined as J = [A, B] = {C : A ≤ C ≤ B}. The interval J = [A, B] is said to be range-kernel regular if R(A) = R(B) and N(A) = N(B). The following result [12] provides necessary and sufficient conditions for C† ≥ 0 for C ∈ K, where K = {C ∈ J : R(C) = R(A) = R(B), N(C) = N(A) = N(B)}.

Theorem 14. Let J = [A, B] be range-kernel regular. Then, the following are equivalent:
(a) C† ≥ 0 whenever C ∈ K;
(b) A† ≥ 0 and B† ≥ 0.
In such a case, we have C† = Σ_{j=0}^{∞} (B†(B − C))^j B†.

Now, we present a result for the perturbation.

Theorem 15. Let J = [A, B] be range-kernel regular with A† ≥ 0 and B† ≥ 0. Let X₁, X₂, Y₁, and Y₂ be nonnegative matrices such that R(X₁) ⊆ R(A), R(Y₁) ⊆ R(A^T), R(X₂) ⊆ R(B), and R(Y₂) ⊆ R(B^T). Suppose that I − Y₁^TA†X₁ and I − Y₂^TB†X₂ are nonsingular with nonnegative inverses. Suppose further that X₂Y₂^T ≤ X₁Y₁^T. Let J̃ = [E, F], where E = A − X₁Y₁^T and F = B − X₂Y₂^T. Finally, let K̃ = {Z ∈ J̃ : R(Z) = R(E) = R(F) and N(Z) = N(E) = N(F)}. Then:
(a) E† ≥ 0 and F† ≥ 0;
(b) Z† ≥ 0 whenever Z ∈ K̃.
In that case, Z† = Σ_{j=0}^{∞} (F†(F − Z))^j F† = Σ_{j=0}^{∞} (B†B − F†Z)^j F†.

Proof. It follows from Theorem 7 that

E† = A† + A†X₁ (I − Y₁^TA†X₁)^{-1} Y₁^TA†,
F† = B† + B†X₂ (I − Y₂^TB†X₂)^{-1} Y₂^TB†. (13)

Also, we have EE† = AA†, E†E = A†A, F†F = B†B, and FF† = BB†. Hence, R(E) = R(A), N(E) = N(A), R(F) = R(B), and N(F) = N(B). This implies that the interval J̃ is range-kernel regular. Now, since E and F satisfy the conditions of Theorem 10, we have E† ≥ 0 and F† ≥ 0, proving (a). Hence, by Theorem 14, Z† ≥ 0 whenever Z ∈ K̃. Again, by Theorem 14, we have Z† = Σ_{j=0}^{∞} (F†(F − Z))^j F† = Σ_{j=0}^{∞} (B†B − F†Z)^j F†.

5. Iterations That Preserve Nonnegativity of the Moore-Penrose Inverse

In this section, we present results that typically provide conditions for iteratively defined matrices to have nonnegative Moore-Penrose inverses, given that the matrices that we start with have this property. We start with the following result about the rank-one perturbation case, which is a direct consequence of Theorem 10.

Theorem 16. Let A ∈ R^{m×n} be such that A† ≥ 0. Let x ∈ R^m and y ∈ R^n be nonnegative vectors such that x ∈ R(A), y ∈ R(A^T), and 1 − y^TA†x ≠ 0. Then (A − xy^T)† ≥ 0 if and only if y^TA†x < 1.

Example 17. Let us consider

A = (1/3) ( 1 −1 1 ; −1 4 −1 ; 1 −1 1 ).

It can be verified that A can be written in the form A = ( I₂ ; e₁^T )(I₂ + e₁e₁^T)^{-1} C^{-1} (I₂ + e₁e₁^T)^{-1} ( I₂  e₁ ), where e₁ = (1, 0)^T and C is the 2×2 circulant matrix generated by the row (1, 1/2). We have

A† = (1/2) ( 2 1 2 ; 1 2 1 ; 2 1 2 ) ≥ 0.

Also, the decomposition A = U − V, where U = (1/3) ( 1 −1 2 ; −1 4 −2 ; 1 −1 2 ) and V = (1/3) ( 0 0 1 ; 0 0 −1 ; 0 0 1 ), is a proper splitting of A with U† ≥ 0, U†V ≥ 0, and ρ(U†V) < 1.

Let x = y = (1/3, 1/6, 1/3)^T. Then x and y are nonnegative, x ∈ R(A), and y ∈ R(A^T). We have 1 − y^TA†x = 5/12 > 0 and ρ(U†W) < 1 for W = V + xy^T. Also, it can be seen that

(A − xy^T)† = (1/20) ( 47 28 47 ; 28 32 28 ; 47 28 47 ) ≥ 0.

This illustrates Theorem 10.

Let x₁, x₂, …, x_k ∈ R^m and y₁, y₂, …, y_k ∈ R^n be nonnegative. Denote X_i = (x₁, x₂, …, x_i) and Y_i = (y₁, y₂, …, y_i) for i = 1, 2, …, k. Then A − X_kY_k^T = A − Σ_{i=1}^{k} x_iy_i^T. The following theorem is obtained by a recurring application of the rank-one result of Theorem 16.

Theorem 18. Let A ∈ R^{m×n}, and let X_i, Y_i be as above. Further, suppose that x_i ∈ R(A − X_{i−1}Y_{i−1}^T), y_i ∈ R((A − X_{i−1}Y_{i−1}^T)^T), and 1 − y_i^T(A − X_{i−1}Y_{i−1}^T)†x_i is nonzero, and let (A − X_{i−1}Y_{i−1}^T)† ≥ 0, for all i = 1, 2, …, k. Then (A − X_kY_k^T)† ≥ 0 if and only if y_i^T(A − X_{i−1}Y_{i−1}^T)†x_i < 1 for all i = 1, 2, …, k, where A − X₀Y₀^T is taken as A.

Proof. Set B_i = A − X_iY_i^T for i = 0, 1, …, k, where B₀ is identified with A. The conditions in the theorem can be written as

x_i ∈ R(B_{i−1}), y_i ∈ R(B_{i−1}^T), 1 − y_i^TB_{i−1}†x_i ≠ 0, for all i = 1, 2, …, k. (14)

Also, we have B_i† ≥ 0 for all i = 0, 1, …, k − 1.

Now, assume that B_k† ≥ 0. Then, by Theorem 16 and the conditions in (14) for i = k, we have y_k^TB_{k−1}†x_k < 1. Also, since by assumption B_i† ≥ 0 for all i = 0, 1, …, k − 1, we have y_i^TB_{i−1}†x_i < 1 for all i = 1, 2, …, k.

The converse part can be proved iteratively. The condition y₁^TB₀†x₁ < 1 and the conditions in (14) for i = 1 imply that B₁† ≥ 0. Repeating the argument for i = 2 to k proves the result.

The following result is an extension of Lemma 2.3 in [1], which is in turn obtained as a corollary. This corollary will be used in proving another characterization for (A − X_kY_k^T)† ≥ 0.
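The numbers in Example 17 can be reproduced in floating point; the short check below is our own verification of the example's data.

```python
import numpy as np

# Reproducing Example 17: A† >= 0, 1 - yᵀA†x = 5/12 > 0, and the
# rank-one perturbation A - xyᵀ again has a nonnegative
# Moore-Penrose inverse, as Theorem 10 predicts.
A = np.array([[1, -1, 1],
              [-1, 4, -1],
              [1, -1, 1]]) / 3.0
x = np.array([1/3, 1/6, 1/3])
y = x.copy()                        # the example takes x = y

Adag = np.linalg.pinv(A)            # (1/2)[[2,1,2],[1,2,1],[2,1,2]]
gap = 1 - y @ Adag @ x              # 5/12
B = np.linalg.pinv(A - np.outer(x, y))

print(round(gap, 6))                # 0.416667
print(np.round(20 * B))             # [[47 28 47] [28 32 28] [47 28 47]]
```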

Theorem 19. Let A ∈ R^{n×n} and b, c ∈ R^n be such that −b ≥ 0, −c ≥ 0, and α (∈ R) > 0. Further, suppose that b ∈ R(A) and c ∈ R(A^T). Let Â = ( A b ; c^T α ) ∈ R^{(n+1)×(n+1)}. Then, Â† ≥ 0 if and only if A† ≥ 0 and c^TA†b < α.

Proof. Let us first observe that AA†b = b and c^TA†A = c^T. Set

X = ( A† + A†bc^TA†/(α − c^TA†b)   −A†b/(α − c^TA†b)
      −c^TA†/(α − c^TA†b)          1/(α − c^TA†b) ). (15)

It then follows that ÂX = ( AA† 0 ; 0^T 1 ) and XÂ = ( A†A 0 ; 0^T 1 ). Using these two equations, it can easily be shown that X = Â†.

Suppose that A† ≥ 0 and β = α − c^TA†b > 0. Then, Â† = X ≥ 0. Conversely, suppose that Â† ≥ 0. Then, we must have β > 0. Let {e₁, e₂, …, e_{n+1}} denote the standard basis of R^{n+1}. Then, for i = 1, 2, …, n, we have 0 ≤ Â†(e_i) = (A†e_i + (A†bc^TA†e_i)/β, −(c^TA†e_i)/β)^T. Since c ≤ 0, it follows that A†e_i ≥ 0 for i = 1, 2, …, n. Thus, A† ≥ 0.

Corollary 20 (see [1, Lemma 2.3]). Let A ∈ R^{n×n} be nonsingular. Let b, c ∈ R^n be such that −b ≥ 0, −c ≥ 0, and α (∈ R) > 0. Let Â = ( A b ; c^T α ) ∈ R^{(n+1)×(n+1)}. Then, Â is nonsingular with Â^{-1} ≥ 0 if and only if A^{-1} ≥ 0 and c^TA^{-1}b < α.

Now, we obtain another necessary and sufficient condition for (A − X_kY_k^T)† to be nonnegative.

Theorem 21. Let A ∈ R^{m×n} be such that A† ≥ 0. Let x_i, y_i be nonnegative vectors in R^m and R^n, respectively, for every i = 1, …, k, such that

x_i ∈ R(A − X_{i−1}Y_{i−1}^T), y_i ∈ R((A − X_{i−1}Y_{i−1}^T)^T), (16)

where X_i = (x₁, …, x_i), Y_i = (y₁, …, y_i), and A − X₀Y₀^T is taken as A. If H_i = I − Y_i^TA†X_i is nonsingular and 1 − y_i^TA†x_i is positive for all i = 1, 2, …, k, then (A − X_kY_k^T)† ≥ 0 if and only if

y_i^TA†x_i < 1 − y_i^TA†X_{i−1}H_{i−1}^{-1}Y_{i−1}^TA†x_i, for all i = 1, …, k. (17)

Proof. The range conditions in (16) imply that R(X_k) ⊆ R(A) and R(Y_k) ⊆ R(A^T). Also, from the assumptions, it follows that H_k is nonsingular. By Theorem 10, (A − X_kY_k^T)† ≥ 0 if and only if H_k^{-1} ≥ 0. Now,

H_k = ( H_{k−1}            −Y_{k−1}^TA†x_k
        −y_k^TA†X_{k−1}    1 − y_k^TA†x_k ). (18)

Since 1 − y_k^TA†x_k > 0, using Corollary 20, it follows that H_k^{-1} ≥ 0 if and only if y_k^TA†X_{k−1}H_{k−1}^{-1}Y_{k−1}^TA†x_k < 1 − y_k^TA†x_k and H_{k−1}^{-1} ≥ 0.

Now, applying the above argument to the matrix H_{k−1}, we have that H_{k−1}^{-1} ≥ 0 holds if and only if H_{k−2}^{-1} ≥ 0 holds and y_{k−1}^TA†X_{k−2}(I_{k−2} − Y_{k−2}^TA†X_{k−2})^{-1}Y_{k−2}^TA†x_{k−1} < 1 − y_{k−1}^TA†x_{k−1}. Continuing the above argument, we get (A − X_kY_k^T)† ≥ 0 if and only if y_i^TA†X_{i−1}H_{i−1}^{-1}Y_{i−1}^TA†x_i < 1 − y_i^TA†x_i for all i = 1, …, k. This is condition (17).

We conclude the paper by considering an extension of Example 17.

Example 22. For a fixed n, let C be the circulant matrix generated by the row vector (1, 1/2, 1/3, …, 1/(n−1)). Consider

A = ( I ; e₁^T )(I + e₁e₁^T)^{-1} C^{-1} (I + e₁e₁^T)^{-1} ( I  e₁ ), (19)

where I is the identity matrix of order n − 1 and e₁ is the (n−1)×1 vector with 1 as the first entry and 0 elsewhere. Then, A ∈ R^{n×n}, and A† is the nonnegative Toeplitz matrix

A† = ( 1         1/(n−1)   1/(n−2)   …   1/2   1
       1/2       1         1/(n−1)   …   1/3   1/2
       ⋮         ⋮         ⋮              ⋮     ⋮
       1/(n−1)   1/(n−2)   1/(n−3)   …   1     1/(n−1)
       1         1/(n−1)   1/(n−2)   …   1/2   1 ). (20)

Let x_i be the first row of B_{i−1}† written as a column vector multiplied by 1/n^i, and let y_i be the first column of B_{i−1}† multiplied by 1/n^i, where B_i = A − X_iY_i^T, with B₀ being identified with A, i = 1, 2, …, k. We then have x_i ≥ 0, y_i ≥ 0, x₁ ∈ R(A), and y₁ ∈ R(A^T). Now, if B_{i−1}† ≥ 0 for each iteration, then the vectors x_i and y_i will be nonnegative. Hence, it is enough to check that 1 − y_i^TB_{i−1}†x_i > 0 in order to get B_i† ≥ 0. Experimentally, it has been observed that for n > 3, the condition 1 − y_i^TB_{i−1}†x_i > 0 holds true for any iteration. However, for n = 3, it is observed that B₁† ≥ 0, B₂† ≥ 0, 1 − y₁^TA†x₁ > 0, and 1 − y₂^TB₁†x₂ > 0. But 1 − y₃^TB₂†x₃ < 0, and in that case we observe that B₃† is not nonnegative.

References

[1] J. Ding, W. Pye, and L. Zhao, "Some results on structured M-matrices with an application to wireless communications," Linear Algebra and its Applications, vol. 416, no. 2-3, pp. 608–614, 2006.
[2] S. K. Mitra and P. Bhimasankaram, "Generalized inverses of partitioned matrices and recalculation of least squares estimates for data or model changes," Sankhyā A, vol. 33, pp. 395–410, 1971.
[3] C. Y. Deng, "A generalization of the Sherman-Morrison-Woodbury formula," Applied Mathematics Letters, vol. 24, no. 9, pp. 1561–1564, 2011.
[4] A. Berman, M. Neumann, and R. J. Stern, Nonnegative Matrices in Dynamical Systems, John Wiley & Sons, New York, NY, USA, 1989.

[5] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Society for Industrial and Applied Mathematics (SIAM), 1994.
[6] L. Collatz, Functional Analysis and Numerical Mathematics, Academic Press, New York, NY, USA, 1966.
[7] C. A. Desoer and B. H. Whalen, "A note on pseudoinverses," Journal of the Society for Industrial and Applied Mathematics, vol. 11, pp. 442–447, 1963.
[8] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Springer, New York, NY, USA, 2003.
[9] A. Berman and R. J. Plemmons, "Cones and iterative methods for best least squares solutions of linear systems," SIAM Journal on Numerical Analysis, vol. 11, pp. 145–154, 1974.
[10] D. Mishra and K. C. Sivakumar, "On splittings of matrices and nonnegative generalized inverses," Operators and Matrices, vol. 6, no. 1, pp. 85–95, 2012.
[11] D. Mishra and K. C. Sivakumar, "Nonnegative generalized inverses and least elements of polyhedral sets," Linear Algebra and its Applications, vol. 434, no. 12, pp. 2448–2455, 2011.
[12] M. R. Kannan and K. C. Sivakumar, "Moore-Penrose inverse positivity of interval matrices," Linear Algebra and its Applications, vol. 436, no. 3, pp. 571–578, 2012.

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 320392, 7 pages
http://dx.doi.org/10.1155/2013/320392

Research Article

A New Construction of Multisender Authentication Codes from Polynomials over Finite Fields

Xiuli Wang

College of Science, Civil Aviation University of China, Tianjin 300300, China

Correspondence should be addressed to Xiuli Wang; [email protected]

Received 3 February 2013; Accepted 7 April 2013

Academic Editor: Yang Zhang

Copyright © 2013 Xiuli Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Multisender authentication codes allow a group of senders to construct an authenticated message for a receiver such that the receiver can verify the authenticity of the received message. In this paper, we construct one multisender authentication code from polynomials over finite fields. Some parameters and the probabilities of deceptions of this code are also computed.

1. Introduction

Multisender authentication codes were first constructed by Gilbert et al. [1] in 1974. A multisender authentication system refers to a setting in which a group of senders cooperatively sends a message to a receiver; the receiver should then be able to ascertain that the message is authentic. Many scholars and researchers have made great contributions to multisender authentication codes, such as [2–6].

In actual computer network communications, multisender authentication codes include a sequential model and a simultaneous model. In the sequential model, each sender uses his own encoding rules to encode a source state in order, the last sender sends the encoded message to the receiver, and the receiver receives the message and verifies whether it is legal or not. In the simultaneous model, all senders use their own encoding rules to encode a source state, and each sender sends the encoded message to the synthesizer, respectively; the synthesizer then forms an authenticated message, and the receiver verifies whether the message is legal or not. In this paper, we adopt the second model.

In the simultaneous model, there are four participants: a group of senders U = {U₁, U₂, …, U_n}; the key distribution center, which is responsible for the distribution of keys to the senders and the receiver, including solving disputes between them; a receiver R; and a synthesizer, which only runs the trusted synthesis algorithm. The code works as follows: each sender and the receiver have their own Cartesian authentication code, respectively. Let (S, E_i, T_i; f_i) (i = 1, 2, …, n) be the senders' Cartesian authentication codes, let (S, E_R, T; g) be the receiver's Cartesian authentication code, let h : T₁ × T₂ × ⋯ × T_n → T be the synthesis algorithm, and let π_i : E → E_i be a subkey generation algorithm, where E is the key set of the key distribution center. When authenticating a message, the senders and the receiver comply with the following protocol. The key distribution center randomly selects an encoding rule e ∈ E and secretly sends e_i = π_i(e) to the i-th sender U_i (i = 1, 2, …, n); then it calculates e_R from e by an effective algorithm and secretly sends e_R to the receiver R. If the senders would like to send a source state s to the receiver R, U_i computes t_i = f_i(s, e_i) (i = 1, 2, …, n) and sends m_i = (s, t_i) (i = 1, 2, …, n) to the synthesizer through an open channel. The synthesizer receives the messages m_i = (s, t_i) (i = 1, 2, …, n), calculates t = h(t₁, t₂, …, t_n) by the synthesis algorithm h, and then sends the message m = (s, t) to the receiver; the receiver checks the authenticity by verifying whether t = g(s, e_R) or not. If the equality holds, the message is authentic and is accepted; otherwise, the message is rejected.

We assume that the key distribution center is credible and that, though it knows the senders' and the receiver's encoding rules, it will not participate in any communication activities. When the transmitters and the receiver are in dispute, the key distribution center settles it. At the same time, we assume that the system follows the Kerckhoffs principle: except for the actual keys used, all other information about the whole system is public.
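The simultaneous-model flow above can be sketched end to end with a deliberately simplified toy scheme. This is our own stand-in for illustration only: the tags here are s·k_i mod p, not the polynomial tags of the construction given later in Section 3.

```python
import random

# Toy walk-through of the simultaneous model: the key distribution center
# hands each sender a subkey, each sender tags the source state, the
# synthesizer combines the tags by addition, and the receiver verifies
# the combined tag against its own key.
p = 101                                   # a small prime modulus
n = 4                                     # number of senders
s = 42                                    # source state

subkeys = [random.randrange(1, p) for _ in range(n)]   # e_i, sent secretly
e_R = sum(subkeys) % p                                 # receiver's key

tags = [(s * k) % p for k in subkeys]     # t_i = f_i(s, e_i)
t = sum(tags) % p                         # synthesis: h(t_1, ..., t_n)

accepted = (t == (s * e_R) % p)           # receiver's check t = g(s, e_R)
print(accepted)                           # True for an honestly formed message
```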

In a multisender authentication system, we assume that all the senders together cooperate to form a valid message; that is, the senders as a whole and the receiver are reliable. But there may be some malicious senders who collude to cheat the receiver; that part of the senders and the receiver are not credible, and they can carry out impersonation attacks and substitution attacks. In the whole system, we assume that {U₁, U₂, …, U_n} are the senders, R is the receiver, E_i is the encoding-rules set of the sender U_i, and E_R is the decoding-rules set of the receiver R. If the source state space S and the key space E_R of the receiver R follow a uniform distribution, then the message space M and the tag space T are determined by the probability distribution of S and E_R. Let L = {i₁, i₂, …, i_l} ⊂ {1, 2, …, n}, l < n, U_L = {U_{i₁}, U_{i₂}, …, U_{i_l}}, and E_L = {E_{U_{i₁}}, E_{U_{i₂}}, …, E_{U_{i_l}}}. Now, let us consider the attacks from malicious groups of senders. There are two kinds of attack.

The opponent's impersonation attack on the receiver: U_L, after receiving their secret keys, encode a message and send it to the receiver. U_L are successful if the receiver accepts it as a legitimate message. Denote by P_I the largest probability of some opponent's successful impersonation attack on the receiver; it can be expressed as

P_I = max_{m∈M} { |{e_R ∈ E_R : e_R ⊂ m}| / |E_R| }. (1)

The opponent's substitution attack on the receiver: U_L replace m with another message m′ after they observe a legitimate message m. U_L are successful if the receiver accepts m′ as a legitimate message; it can be expressed as

P_S = max_{m∈M} { max_{m′≠m∈M} |{e_R ∈ E_R : e_R ⊂ m, e_R ⊂ m′}| / |{e_R ∈ E_R : e_R ⊂ m}| }. (2)

There might be l malicious senders who together cheat the receiver; that is, that part of the senders and the receiver are not credible, and they can carry out an impersonation attack. Let L = {i₁, i₂, …, i_l} ⊂ {1, 2, …, n}, l < n, and E_L = {E_{U_{i₁}}, E_{U_{i₂}}, …, E_{U_{i_l}}}. Assume that U_L = {U_{i₁}, U_{i₂}, …, U_{i_l}}, after receiving their secret keys, send a message m to the receiver R; U_L are successful if the receiver accepts it as a legitimate message. Denote by P_U(L) the maximum probability of success of the impersonation attack on the receiver. It can be expressed as

P_U(L) = max_{e_L∈E_L} { max_{m∈M} |{e_R ∈ E_R : e_R ⊂ m, p(e_R, e_L) ≠ 0}| / |{e_R ∈ E_R : p(e_R, e_L) ≠ 0}| }. (3)

Notes. p(e_R, e_L) ≠ 0 implies that any information s encoded by e_L can be authenticated by e_R.

In [2], Desmedt et al. gave two constructions for MRA-codes based on polynomials and finite geometries, respectively. To construct multisender or multireceiver authentication codes from polynomials over finite fields, many researchers have done much work, for example, [7–9]. There are other constructions of multisender authentication codes given in [3–6]. The construction of authentication codes is combinatorial design in its nature. We know that polynomials over finite fields provide a good algebraic structure and are easy to count. In this paper, we construct one multisender authentication code from polynomials over finite fields. Some parameters and the probabilities of deceptions of this code are also computed. We generalize and apply the similar ideas and methods of the papers [7–9].

2. Some Results about Finite Fields

Let F_q be the finite field with q elements, where q is a power of a prime p, and let F be a field containing F_q; denote by F_q* the set of nonzero elements of F_q. In this paper, we will use the following conclusions about finite fields.

Conclusion 1. A generator α of F_q* is called a primitive element of F_q.

Conclusion 2. Let α ∈ F_q. Among all polynomials over F_q with leading coefficient 1 that have α as a root, the polynomial of least degree is called the minimal polynomial of α over F_q.

Conclusion 3. Let |F| = q^n; then F is an n-dimensional vector space over F_q. Let α be a primitive element of F and g(x) the minimal polynomial of α over F_q; then deg g(x) = n, and 1, α, α², …, α^{n−1} is a basis of F. Furthermore, α, α², …, α^{n−1}, α^n is also linearly independent (α is a primitive element, α ≠ 0); moreover, α^p, α^{p²}, …, α^{p^{n−1}}, α^{p^n} is also linearly independent.

Conclusion 4. (x₁ + x₂ + ⋯ + x_n)^m = x₁^m + x₂^m + ⋯ + x_n^m, where x_i ∈ F_q (1 ≤ i ≤ n) and m is a nonnegative power of the characteristic p of F_q.

Conclusion 5. Let m ≤ n. Then, the number of m×n matrices of rank m over F_q is q^{m(m−1)/2} ∏_{i=n−m+1}^{n} (q^i − 1).

More results about finite fields can be found in [10–12].
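Conclusions 4 and 5 can both be verified exhaustively for tiny parameters; the check below, our own sketch, works in the prime field F_3 (the integers mod 3) and counts full-rank 2×2 matrices over F_2 by brute force.

```python
from itertools import product

# Conclusion 4 ("freshman's dream") checked exhaustively in F_3,
# realized as the integers mod 3, with exponent p^1 = 3.
p = 3
for xs in product(range(p), repeat=3):
    assert pow(sum(xs), p, p) == sum(pow(x, p, p) for x in xs) % p

# Conclusion 5: the number of m x n rank-m matrices over F_q is
# q^(m(m-1)/2) * prod_{i=n-m+1}^{n} (q^i - 1).  Brute force for
# q = 2 and m = n = 2, where full rank means det != 0 mod q.
q = 2
count = sum(1 for a, b, c, d in product(range(q), repeat=4)
            if (a * d - b * c) % q != 0)
formula = q ** 1 * (q ** 1 - 1) * (q ** 2 - 1)   # m(m-1)/2 = 1 here

print(count, formula)              # 6 6
```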

3. Construction

Let the polynomials be p_j(x) = a_{j1}x^{p^n} + a_{j2}x^{p^{n−1}} + ⋯ + a_{jn}x^p (1 ≤ j ≤ k), where the coefficients a_{jl} ∈ F_q (1 ≤ l ≤ n) and the vectors formed by their coefficients are linearly independent. The set of source states is S = F_q; the set of the i-th transmitter's encoding rules is E_{U_i} = {p₁(x_i), p₂(x_i), …, p_k(x_i), x_i ∈ F_q*} (1 ≤ i ≤ n); the set of the receiver's encoding rules is E_R = {p₁(α), p₂(α), …, p_k(α), where α is a primitive element of F_q}; the set of the i-th transmitter's tags is T_i = {t_i : t_i ∈ F_q} (1 ≤ i ≤ n); the set of the receiver's tags is T = {t : t ∈ F_q}.
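The polynomials p_j above involve only monomials of the form x^{p^l}, so each p_j is additive (F_p-linear); this additivity, via Conclusion 4, is what the verification step later exploits. A minimal check of additivity for such a polynomial in GF(4) = F_2[z]/(z² + z + 1), a hand-rolled toy field of our own (not part of the paper):

```python
from itertools import product

# GF(4) = F_2[z]/(z^2 + z + 1); elements are 2-bit integers, addition is
# XOR, multiplication is carry-less multiplication reduced mod z^2+z+1.
def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b100:
            a ^= 0b111
        b >>= 1
    return r

def L(x, a, b):
    # a linearized polynomial a*x^2 + b*x; squaring is the Frobenius map
    return gmul(a, gmul(x, x)) ^ gmul(b, x)

# Additivity L(x + y) = L(x) + L(y) holds for every coefficient choice.
ok = all(L(x ^ y, a, b) == L(x, a, b) ^ L(y, a, b)
         for a, b, x, y in product(range(4), repeat=4))
print(ok)                          # True
```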

𝑓 :𝑆×𝐸 →𝑇 𝑓(𝑠, 𝑒 )= Define the encoding map 𝑖 𝑈𝑖 𝑖, 𝑖 𝑈𝑖 It follows that 2 𝑘 𝑠𝑝1(𝑥𝑖)+𝑠 𝑝2(𝑥𝑖)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥𝑖), 1≤𝑖≤𝑛. 𝑓:𝑆 ×𝐸 →𝑇 𝑓(𝑠, 𝑒 )=𝑠𝑝(𝛼) + The decoding map 𝑅 , 𝑅 1 𝑝𝑛 𝑝𝑛−1 𝑝 2 𝑘 𝑥 𝑥 ⋅⋅⋅ 𝑥 𝑠 𝑝2(𝛼)+⋅⋅⋅+𝑠 𝑝𝑘(𝛼) 1 1 1 . 𝑛−1 ℎ:𝑇×𝑇 × ⋅⋅⋅ × 𝑇 →𝑇 𝑥𝑝n 𝑥𝑝 ⋅⋅⋅ 𝑥𝑝 The synthesizing map 1 2 𝑛 , ( 2 2 2 ) ℎ(𝑡1,𝑡2,...,𝑡𝑛)=𝑡1 +𝑡2 +⋅⋅⋅+𝑡𝑛...... The code works as follows. 𝑛 𝑛−1 Assume that 𝑞 is larger than, or equal to, the number of 𝑥𝑝 𝑥𝑝 ⋅⋅⋅ 𝑥𝑝 ( 𝑛 𝑛 𝑛 ) (5) the possible message and 𝑛≤𝑞. 𝑠 𝑎11 𝑎21 ⋅⋅⋅ 𝑎𝑘1 𝑡1 3.1. Key Distribution. The key distribution center randomly 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑠2 𝑡 𝑘(𝑘 ≤ 𝑛) 𝑝 (𝑥), 𝑝 (𝑥),...,𝑝 (𝑥) 12 22 𝑘2 2 generates polynomials 1 2 𝑘 , ×( . . . . )(. )=(. ). 𝑝𝑛 𝑝(𝑛−1) 𝑝 ...... where 𝑝𝑗(𝑥) =𝑗1 𝑎 𝑥 +𝑎𝑗2𝑥 +⋅⋅⋅+𝑎𝑗𝑛𝑥 (1≤𝑗≤𝑘), ...... 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑘 𝑡 and make these vectors by composed of their coefficient is 1𝑛 2𝑛 𝑘𝑛 𝑠 𝑛 linearly independent, it is equivalent to the column vectors 𝑎11 𝑎21 ⋅⋅⋅ 𝑎𝑘1 𝑎12 𝑎22 ⋅⋅⋅ 𝑎𝑘2 of the matrix ( . . . . ) is linearly independent. He Denote . . . . 𝑎1𝑛 𝑎2𝑛 ⋅⋅⋅ 𝑎𝑘𝑛 selects 𝑛 distinct nonzero elements 𝑥1,𝑥2,...,𝑥𝑛 ∈𝐹𝑞 again 𝑎 𝑎 ⋅⋅⋅ 𝑎 and makes 𝑥𝑖 (1≤𝑖≤𝑛)secret;thenhesendsprivately 11 21 𝑘1 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑝1(𝑥𝑖), 𝑝2(𝑥𝑖),...,𝑝𝑘(𝑥𝑖) to the sender 𝑈𝑖 (1≤𝑖≤𝑛).The 12 22 𝑘2 key distribution center also randomly chooses a primitive 𝐴=( . . . . ), . . . . element 𝛼 of 𝐹𝑞 satisfying 𝑥1 +𝑥2 +⋅⋅⋅+𝑥𝑛 =𝛼and sends 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑝1(𝛼),2 𝑝 (𝛼),...,𝑝𝑘(𝛼) to the receiver 𝑅. 1𝑛 2𝑛 𝑘𝑛

𝑛 𝑛−1 3.2. Broadcast. If the senders want to send a source state 𝑠∈𝑆 𝑥𝑝 𝑥𝑝 ⋅⋅⋅ 𝑥𝑝 𝑅 𝑈 𝑡 =𝑓(𝑠, 𝑒 )= 1 1 1 to the receiver , the sender 𝑖 calculates 𝑖 𝑖 𝑈𝑖 𝑝𝑛 𝑝𝑛−1 𝑝 2 𝑘 𝑥2 𝑥2 ⋅⋅⋅ 𝑥2 𝐴𝑠(𝑥𝑖)=𝑠𝑝1(𝑥𝑖)+𝑠 𝑝2(𝑥𝑖)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥𝑖), 1≤𝑖≤𝑛and then 𝑋=( ), (6) 𝐴 (𝑥 )=𝑡 . . . . sends 𝑠 𝑖 𝑖 to the synthesizer. . . . . 𝑝𝑛 𝑝𝑛−1 𝑝 𝑥𝑛 𝑥𝑛 ⋅⋅⋅ 𝑥𝑛 3.3. Synthesis. After the synthesizer receives 𝑡1,𝑡2,...,𝑡𝑛,he ℎ(𝑡 ,𝑡 ,...,𝑡 )=𝑡 +𝑡 +⋅⋅⋅+𝑡 calculates 1 2 𝑛 1 2 𝑛 and then sends 𝑠 𝑡1 𝑚 = (𝑠, 𝑡) to the receiver 𝑅. 2 𝑠 𝑡2 𝑆=( ), 𝑡=( ). . . 3.4. Verification. When the receiver 𝑅 receives 𝑚 = (𝑠, 𝑡),he . . 󸀠 2 𝑠𝑘 𝑡 calculates 𝑡 =𝑔(𝑠,𝑒𝑅)=𝐴𝑠(𝛼) = 𝑠𝑝1(𝛼) + 𝑠 𝑝2(𝛼)+⋅⋅⋅+ 𝑛 𝑘 󸀠 𝑠 𝑝𝑘(𝛼).If𝑡=𝑡, he accepts 𝑡; otherwise, he rejects it. Next, we will show that the above construction is a well defined multisender authentication code with arbitration. The above linear equation is equivalent to 𝑋𝐴𝑆 =𝑡, because the column vectors of 𝐴 are linearly independent, Lemma 1. 𝐶 =(𝑆,𝐸 ,𝑇,𝑓) 𝑋 𝑋 Let 𝑖 𝑃𝑖 𝑖 𝑖 ; then the code is an A-code, is equivalent to a Vandermonde matrix, and is inverse; 1≤𝑖≤𝑛. therefore, the above linear equation has a unique solution, so 𝑠 is only defined; that is, 𝑓𝑖 (1≤𝑖≤𝑛)isasurjection. (1) 𝑒 ∈𝐸 𝑠∈𝑆 𝐸 ={𝑝(𝑥 ), (2) 𝑠󸀠 ∈𝑆 𝑠𝑝 (𝑥 )+ Proof. For any 𝑈𝑖 𝑈𝑖 , ,because 𝑈𝑖 1 𝑖 If is another source state satisfying 1 𝑖 ∗ 2 2 𝑘 󸀠 󸀠2 󸀠𝑘 𝑝2(𝑥𝑖),...,𝑝𝑘(𝑥𝑖), 𝑥𝑖 ∈𝐹𝑞 },so𝑡𝑖 =𝑠𝑝1(𝑥𝑖)+𝑠 𝑝2(𝑥𝑖)+⋅⋅⋅+ 𝑠 𝑝2(𝑥𝑖)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥𝑖)=𝑠𝑝1(𝑥𝑖)+𝑠 𝑝2(𝑥𝑖)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥𝑖)= 𝑘 󸀠 2 󸀠2 𝑠 𝑝 (𝑥 )∈𝑇 =𝐹 𝑡 ∈𝑇 𝑒 = 𝑡𝑖,anditisequivalentto(𝑠 − 𝑠 )𝑝1(𝑥𝑖)+(𝑠 −𝑠 )𝑝2(𝑥𝑖)+⋅⋅⋅+ 𝑘 𝑖 𝑖 𝑞.Conversely,forany 𝑖 𝑖,choose 𝑈𝑖 𝑛 𝑘 󸀠𝑘 ∗ 𝑝 (𝑠 −𝑠 )𝑝𝑘(𝑥𝑖)=0,then {𝑝1(𝑥𝑖), 𝑝2(𝑥𝑖),...,𝑝𝑘(𝑥𝑖), 𝑥𝑖 ∈𝐹𝑞 },where𝑝𝑗(𝑥) =𝑗1 𝑎 𝑥 + (𝑛−1) 𝑎 𝑥𝑝 +⋅⋅⋅+𝑎 𝑥𝑝 (1≤𝑗≤𝑘) 𝑡 =𝑓(𝑠, 𝑒 )= 𝑗2 𝑗𝑛 ,andlet 𝑖 𝑖 𝑈𝑖 2 𝑘 𝑝𝑛 𝑝𝑛−1 𝑝 𝑠𝑝1(𝑥𝑖)+𝑠 𝑝2(𝑥𝑖)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥𝑖);itisequivalentto (𝑥𝑖 ,𝑥𝑖 ,...,𝑥𝑖 ) 𝑠 𝑎11 𝑎21 ⋅⋅⋅ 𝑎𝑘1 󸀠 2 𝑠−𝑠 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑠 𝑎11 𝑎21 ⋅⋅⋅ 𝑎𝑘1 𝑝𝑛 𝑝𝑛−1 𝑝 12 22 𝑘2 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑠2 −𝑠󸀠2 (7) (𝑥𝑖 ,𝑥𝑖 ,...,𝑥𝑖 )( . . . . )(. )=𝑡𝑖. 12 22 𝑘2 . . . . . ×( . . . . )( . )=0. 𝑘 . . . . . 𝑎1𝑛 𝑎2𝑛 ⋅⋅⋅ 𝑎𝑘𝑛 𝑠 . . . . . 
𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑠𝑘 −𝑠󸀠𝑘 (4) 1𝑛 2𝑛 𝑘𝑛 4 Journal of Applied Mathematics

󸀠 󸀠 Thus (2) If 𝑠 is another source state satisfying 𝑡=𝑔(𝑠,𝑒𝑅),then 𝑠󸀠 𝑝𝑛 𝑝𝑛−1 𝑝 𝑠󸀠2 𝑥 𝑥 ⋅⋅⋅ 𝑥 𝑝𝑛 𝑝𝑛−1 𝑝 1 1 1 (𝛼 ,𝛼 ,...,𝛼 )𝐴( . ) n 𝑛−1 . 𝑥𝑝 𝑥𝑝 ⋅⋅⋅ 𝑥𝑝 . ( 2 2 2 ) 𝑠󸀠𝑘 ...... (10) 𝑛 𝑛−1 𝑠 𝑥𝑝 𝑥𝑝 ⋅⋅⋅ 𝑥𝑝 ( 𝑛 𝑛 𝑛 ) 𝑠2 𝑝𝑛 𝑝𝑛−1 𝑝 =(𝛼 ,𝛼 ,...,𝛼 )𝐴(. ); 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑠−𝑠󸀠 0 . 11 21 𝑘1 𝑘 2 󸀠2 𝑠 𝑎12 𝑎22 ⋅⋅⋅ 𝑎𝑘2 𝑠 −𝑠 0 ×( )( )=( )...... 𝑠󸀠 𝑠 ...... 󸀠2 𝑠2 ...... 𝑛 𝑛−1 [ 𝑠 ] (𝛼𝑝 ,𝛼𝑝 ,...,𝛼𝑝)𝐴[ ( ] − [ ])=0 𝑎 𝑎 ⋅⋅⋅ 𝑎 𝑘 󸀠𝑘 0 that is, . . .Sim- 1𝑛 2𝑛 𝑘𝑛 𝑠 −𝑠 . . 󸀠𝑘 [ 𝑠𝑘 ] (8) [ 𝑠 ] ilar to (1),wegetthatthehomogeneouslinearequation 𝑝𝑛 𝑝𝑛−1 𝑝 󸀠 (𝛼 ,𝛼 ,...,𝛼 )𝐴(𝑆 −𝑆)= 0has a unique solution; that 𝑆=𝑆󸀠 𝑠=𝑠󸀠 Similar to (1), we know that the homogeneous linear equation is, there is only zero solution, so ;thatis, .So, 𝑠 𝑒 𝑡 𝑋𝐴𝑆 =0 has a unique solution; that is, there is only zero istheuniquesourcestatedeterminedby 𝑅 and ;thus, 󸀠 𝐶=(𝑆,𝐸 ,𝑇,𝑔) solution, so 𝑠=𝑠.So,𝑠 istheuniquesourcestatedetermined 𝑅 is an A-code. 𝑒 𝑡 𝐶 (1≤𝑖≤𝑛) At the same time, for any valid 𝑚 = (𝑠, 𝑡),wehaveknown by 𝑈𝑖 and 𝑖;thus, 𝑖 is an A-code. 󸀠 that 𝛼=𝑥1 +𝑥2 +⋅⋅⋅+𝑥𝑛,anditfollowsthat𝑡 =𝑠𝑝1(𝛼) + 𝑠2𝑝 (𝛼)+⋅ ⋅ ⋅+𝑠𝑘𝑝 (𝛼) = 𝑠𝑝 (𝑥 +𝑥 +⋅⋅⋅+𝑥 )+𝑠2𝑝 (𝑥 +𝑥 + Lemma 2. Let 𝐶=(𝑆,𝐸𝑅,𝑇,𝑔); then the code is an A-code. 2 𝑘 1 1 2 𝑛 2 1 2 𝑘 ⋅⋅⋅+𝑥𝑛)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥1 +𝑥2 +⋅⋅⋅+𝑥𝑛).Wealsohaveknown (1) 𝑠∈𝑆 𝑒 ∈𝐸 𝑒 𝑛 (𝑛−1) Proof. 
For any , 𝑅 𝑅, from the definition of 𝑅, 𝑝 (𝑥) = 𝑎 𝑥𝑝 +𝑎 𝑥𝑝 +⋅⋅⋅+𝑎 𝑥𝑝 (1≤𝑗≤𝑘) 𝐸 ={𝑝(𝛼), 𝑝 (𝛼),...,𝑝 (𝛼) 𝛼 that 𝑗 𝑗1 𝑗2 𝑗𝑛 ;from we assume that 𝑅 1 2 𝑘 ,where is a 𝑝𝑚 𝑝𝑚 𝑝𝑚 𝐹 } 𝑔(𝑠, 𝑒 )=𝑠𝑝(𝛼) +2 𝑠 𝑝 (𝛼) + ⋅ ⋅ ⋅ + Conclusion 4, (𝑥1 +𝑥2 +⋅⋅⋅+𝑥𝑛) =(𝑥1) +(𝑥2) +⋅⋅⋅+ primitive element of 𝑞 , 𝑅 1 2 𝑝𝑚 𝑘 (𝑥𝑛) ,where𝑚 is a nonnegative power of character 𝑝 of 𝐹𝑞, 𝑠 𝑝𝑘(𝛼) ∈ 𝑇𝑞 =𝐹 ;ontheotherhand,forany𝑡∈𝑇,choose andweget𝑝𝑗(𝑥1 +𝑥2 +⋅⋅⋅+𝑥𝑛)=𝑝𝑗(𝑥1)+𝑝𝑗(𝑥2)+⋅⋅⋅+𝑝𝑗(𝑥𝑛); 𝑒𝑅 ={𝑝1(𝛼),2 𝑝 (𝛼),...,𝑝𝑘(𝛼),where𝛼 is a primitive element 𝑡󸀠 =𝑠𝑝(𝛼) +2 𝑠 𝑝 (𝛼)+⋅⋅⋅+𝑠𝑘𝑝 (𝛼) = (𝑠𝑝 (𝑥 )+ 𝐹 } 𝑔(𝑠, 𝑒 )=𝑠𝑝(𝛼) +2 𝑠 𝑝 (𝛼) + ⋅ ⋅ ⋅ +𝑘 𝑠 𝑝 (𝛼) = 𝑡 therefore, 1 2 𝑘 1 1 of 𝑞 , 𝑅 1 2 𝑘 ;itis 2 𝑘 2 𝑘 𝑠 𝑝2(𝑥1)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥1))+(𝑠𝑝1(𝑥2)+𝑠 𝑝2(𝑥2)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥2))+ equivalent to 2 𝑘 ⋅ ⋅ ⋅+(𝑠𝑝1(𝑥𝑛)+𝑠 𝑝2(𝑥𝑛)+⋅⋅⋅+𝑠 𝑝𝑘(𝑥𝑛)) = 𝑡1 +𝑡2 +⋅⋅⋅+𝑡𝑛 =𝑡, and the receiver 𝑅 accepts 𝑚. 𝑝𝑛 𝑝𝑛−1 𝑝 (𝛼 ,𝛼 ,⋅⋅⋅,𝛼 ) From Lemmas 1 and 2,weknowthatsuchconstructionof multisender authentication codes is reasonable and there are 𝑠 𝑛 senders in this system. Next, we compute the parameters 𝑎11 𝑎21 ⋅⋅⋅ 𝑎𝑘1 2 of this code and the maximum probability of success in 𝑎12 𝑎22 ⋅⋅⋅ 𝑎𝑘2 𝑠 ×( . . . . )(. )=𝑡, impersonation attack and substitution attack by the group of . . . . . senders. 𝑘 𝑎1𝑛 𝑎2𝑛 ⋅⋅⋅ 𝑎𝑘𝑛 𝑠 (9) Theorem 3. Some parameters of this construction are |𝑆| = 𝑞, |𝐸 |=[𝑞𝑘(𝑘−1)/2∏𝑛 (𝑞𝑖 − 1)] ( 𝑞−1 )= 𝑎11 𝑎21 ⋅⋅⋅ 𝑎𝑘1 𝑈𝑖 𝑖=𝑛−𝑘+1 1 [𝑞𝑘(𝑘−1)/2∏𝑛 (𝑞𝑖 − 1)](𝑞 − 1) (1 ≤ 𝑖 ≤ 𝑛), |𝑇 |=𝑞(1≤ 𝑎12 𝑎22 ⋅⋅⋅ 𝑎𝑘2 𝑖=𝑛−𝑘+1 𝑖 𝐴=( ); 𝑖≤𝑛),|𝐸|=[𝑞𝑘(𝑘−1)/2∏𝑛 (𝑞𝑖 − 1)]𝜑(𝑞 − 1), |𝑇| =𝑞 . . . . 𝑅 𝑖=𝑛−𝑘+1 . . . . . Where 𝜑(𝑞 − 1) is the 𝐸𝑢𝑙𝑒𝑟 𝑓𝑢𝑛𝑐𝑡𝑖𝑜𝑛 of 𝑞−1,itrepresents 𝑎1𝑛 𝑎2𝑛 ⋅⋅⋅ 𝑎𝑘𝑛 the number of primitive element of 𝐹𝑞 here.

Proof. For $|S| = q$, $|T_i| = q$, and $|T| = q$, the results are straightforward. For $|E_{U_i}|$: because $E_{U_i} = \{p_1(x_i), p_2(x_i), \ldots, p_k(x_i),\ x_i \in F_q^*\}$, where $p_j(x) = a_{j1} x^{p^n} + a_{j2} x^{p^{n-1}} + \cdots + a_{jn} x^{p}$ $(1 \le j \le k)$ and the coefficient vectors of these polynomials are linearly independent, choosing the polynomials $p_1(x), p_2(x), \ldots, p_k(x)$ is equivalent to choosing a matrix

$$A = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{k1}\\ a_{12} & a_{22} & \cdots & a_{k2}\\ \vdots & \vdots & & \vdots\\ a_{1n} & a_{2n} & \cdots & a_{kn} \end{pmatrix}$$

whose columns are linearly independent.

From Conclusion 5, we can conclude that the number of matrices $A$ satisfying this condition is $q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)$. On the other hand, the number of distinct nonzero elements $x_i$ $(1 \le i \le n)$ in $F_q$ is $q - 1$, so $|E_{U_i}| = [q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)](q - 1)$. For $E_R$, we have $E_R = \{p_1(\alpha), p_2(\alpha), \ldots, p_k(\alpha)\}$, where $\alpha$ is a primitive element of $F_q$. For $\alpha$: from Conclusion 1, a generator of $F_q^*$ is called a primitive element of $F_q$, and $|F_q^*| = q - 1$; by the theory of cyclic groups, the number of generators of $F_q^*$ is $\varphi(q - 1)$; that is, the number of choices of $\alpha$ is $\varphi(q - 1)$. For $p_1(x), p_2(x), \ldots, p_k(x)$: from the above, we have confirmed that the number of such families of polynomials is $q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)$; therefore, $|E_R| = [q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)]\,\varphi(q - 1)$.

Lemma 4. For any $m \in M$, the number of $e_R$ contained in $m$ is $\varphi(q - 1)$.
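The count $\varphi(q - 1)$ used above, the number of primitive elements of $F_q$, can be verified directly for a prime $q$, where $F_q = \mathbb{Z}/q\mathbb{Z}$ and $F_q^*$ is cyclic of order $q - 1$. A small sketch (helper names are ours):

```python
from math import gcd

def phi(n):
    """Euler's totient: count of 1 <= k <= n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def num_primitive_elements(q):
    """Count g in F_q^* (q prime) whose powers exhaust all q-1 nonzero residues."""
    count = 0
    for g in range(1, q):
        seen, x = set(), 1
        for _ in range(q - 1):
            x = (x * g) % q
            seen.add(x)
        if len(seen) == q - 1:  # g generates the whole multiplicative group
            count += 1
    return count

q = 11
print(num_primitive_elements(q), phi(q - 1))  # prints 4 4
```

For $q = 11$ the primitive roots are $2, 6, 7, 8$, matching $\varphi(10) = 4$.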

Proof. Let $m = (s, t) \in M$ and $e_R = \{p_1(\alpha), p_2(\alpha), \ldots, p_k(\alpha)\} \in E_R$, where $\alpha$ is a primitive element of $F_q$. If $e_R \subset m$, then

$$s p_1(\alpha) + s^2 p_2(\alpha) + \cdots + s^k p_k(\alpha) = t \Longleftrightarrow (\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p})\, A \begin{pmatrix} s\\ s^2\\ \vdots\\ s^k \end{pmatrix} = t.$$

For any $\alpha$, suppose that there is another $A'$ such that $(\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p}) A' (s, s^2, \ldots, s^k)^T = t$; then $(\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p})(A - A')(s, s^2, \ldots, s^k)^T = 0$. Since $\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p}$ are linearly independent, $(A - A')(s, s^2, \ldots, s^k)^T = 0$; but $(s, s^2, \ldots, s^k)^T$ is arbitrary, and therefore $A - A' = 0$; that is, $A = A'$, and it follows that $A$ is uniquely determined by $\alpha$. Therefore, as $\alpha$ runs over the primitive elements of $F_q$, for any given $s$ and $t$, the number of $e_R$ contained in $m$ is $\varphi(q - 1)$.

Lemma 5. For any $m = (s, t) \in M$ and $m' = (s', t') \in M$ with $s \ne s'$, the number of $e_R$ contained in both $m$ and $m'$ is 1.

Proof. Assume that $e_R = \{p_1(\alpha), p_2(\alpha), \ldots, p_k(\alpha)\}$, where $\alpha$ is a primitive element of $F_q$. If $e_R \subset m$ and $e_R \subset m'$, then $s p_1(\alpha) + s^2 p_2(\alpha) + \cdots + s^k p_k(\alpha) = t$ and $s' p_1(\alpha) + s'^2 p_2(\alpha) + \cdots + s'^k p_k(\alpha) = t'$; that is,

$$(\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p})\, A \begin{pmatrix} s\\ s^2\\ \vdots\\ s^k \end{pmatrix} = t, \qquad (\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p})\, A \begin{pmatrix} s'\\ s'^2\\ \vdots\\ s'^k \end{pmatrix} = t'.$$

Since $s \ne s'$, we have $t \ne t'$; otherwise, assuming $t = t'$, since $\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p}$ and the column vectors of $A$ both are linearly independent, it forces $s = s'$, which is a contradiction. Therefore, we get

$$(t - t')^{-1} \left[ (\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p})\, A \begin{pmatrix} s - s'\\ s^2 - s'^2\\ \vdots\\ s^k - s'^k \end{pmatrix} \right] = 1. \tag{$*$}$$

Since $t$ and $t'$ are given, $(t - t')^{-1}$ is unique; by equation $(*)$, for any given $s, s'$ and $t, t'$, the row vector $(\alpha^{p^n}, \alpha^{p^{n-1}}, \ldots, \alpha^{p}) A$ is uniquely determined; thus, the number of $e_R$ contained in both $m$ and $m'$ is 1.

Lemma 6. For any fixed $e_U = \{p_1(x_i), p_2(x_i), \ldots, p_k(x_i),\ x_i \in F_q^*\}$ $(1 \le i \le n)$ containing a given $e_L$, the number of $e_R$ incident with $e_U$ is $[q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)]\,\varphi(q - 1)$.

Proof. For any fixed $e_U$ containing a given $e_L$, assume that $p_j(x_i) = a_{j1} x_i^{p^n} + a_{j2} x_i^{p^{n-1}} + \cdots + a_{jn} x_i^{p}$ $(1 \le j \le k$, $1 \le i \le n)$ and $e_R = \{p_1(\alpha), p_2(\alpha), \ldots, p_k(\alpha)\}$, where $\alpha$ is a primitive element of $F_q$. From the definitions of $e_R$ and $e_U$ and Conclusion 4, we can conclude that $e_R$ is incident with $e_U$ if and only if $x_1 + x_2 + \cdots + x_n = \alpha$. For any $\alpha$, since $\mathrm{Rank}(1, 1, \ldots, 1) = \mathrm{Rank}(1, 1, \ldots, 1, \alpha) = 1 < n$, the equation $x_1 + x_2 + \cdots + x_n = \alpha$ always has a solution. From the proof of Theorem 3, we know that the number of $e_R$ incident with $e_U$ (i.e., the number of all $e_R \in E_R$) is $[q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)]\,\varphi(q - 1)$.

Lemma 7. For any fixed $e_U = \{p_1(x_i), p_2(x_i), \ldots, p_k(x_i),\ x_i \in F_q^*\}$ $(1 \le i \le n)$ containing a given $e_L$ and any $m = (s, t) \in M$, the number of $e_R$ incident with $e_U$ and contained in $m$ is 1.

Proof. For any $s \in S$ and $e_R \in E_R$, assume that $e_R = \{p_1(\alpha), p_2(\alpha), \ldots, p_k(\alpha)\}$, where $\alpha$ is a primitive element of $F_q$. Similar to Lemma 6, for any fixed $e_U$ containing a given $e_L$, we have known that $e_R$ is incident with $e_U$ if and only if

$$x_1 + x_2 + \cdots + x_n = \alpha. \tag{11}$$

Again, with $e_R \subset m$, we can get

$$s p_1(\alpha) + s^2 p_2(\alpha) + \cdots + s^k p_k(\alpha) = t. \tag{12}$$

By (11) and (12) and the property of $p_j(x)$ $(1 \le j \le k)$, we have the following conclusion:

$$\begin{aligned} & s\, p_1\!\left(\sum_{i=1}^{n} x_i\right) + s^2 p_2\!\left(\sum_{i=1}^{n} x_i\right) + \cdots + s^k p_k\!\left(\sum_{i=1}^{n} x_i\right) = t\\ &\Longleftrightarrow \left(p_1\!\left(\sum_{i=1}^{n} x_i\right), p_2\!\left(\sum_{i=1}^{n} x_i\right), \ldots, p_k\!\left(\sum_{i=1}^{n} x_i\right)\right) \begin{pmatrix} s\\ s^2\\ \vdots\\ s^k \end{pmatrix} = t\\ &\Longleftrightarrow \left(\sum_{i=1}^{n} p_1(x_i), \sum_{i=1}^{n} p_2(x_i), \ldots, \sum_{i=1}^{n} p_k(x_i)\right) \begin{pmatrix} s\\ s^2\\ \vdots\\ s^k \end{pmatrix} = t\\ &\Longleftrightarrow \left(\left(\sum_{i=1}^{n} x_i\right)^{p^n}, \left(\sum_{i=1}^{n} x_i\right)^{p^{n-1}}, \ldots, \left(\sum_{i=1}^{n} x_i\right)^{p}\right) A \begin{pmatrix} s\\ s^2\\ \vdots\\ s^k \end{pmatrix} = t\\ &\Longleftrightarrow \left[\left(\sum_{i=1}^{n} p_1(x_i), \ldots, \sum_{i=1}^{n} p_k(x_i)\right) - \left(\left(\sum_{i=1}^{n} x_i\right)^{p^n}, \ldots, \left(\sum_{i=1}^{n} x_i\right)^{p}\right) A\right] \begin{pmatrix} s\\ s^2\\ \vdots\\ s^k \end{pmatrix} = 0, \end{aligned} \tag{13}$$

because $s$ is arbitrary. Similar to the proof of Lemma 4, we can get $\left(\sum_{i=1}^{n} p_1(x_i), \ldots, \sum_{i=1}^{n} p_k(x_i)\right) - \left(\left(\sum_{i=1}^{n} x_i\right)^{p^n}, \ldots, \left(\sum_{i=1}^{n} x_i\right)^{p}\right) A = 0$; that is, $\left(\left(\sum_{i=1}^{n} x_i\right)^{p^n}, \left(\sum_{i=1}^{n} x_i\right)^{p^{n-1}}, \ldots, \left(\sum_{i=1}^{n} x_i\right)^{p}\right) A = \left(\sum_{i=1}^{n} p_1(x_i), \sum_{i=1}^{n} p_2(x_i), \ldots, \sum_{i=1}^{n} p_k(x_i)\right)$; but $p_1(x_i), p_2(x_i), \ldots, p_k(x_i)$ and $x_i$ $(1 \le i \le n)$ are fixed; thus, $\alpha$ and $A$ are uniquely determined, so the number of $e_R$ incident with $e_U$ and contained in $m$ is 1.

Theorem 8. In the constructed multisender authentication codes, if the senders' encoding rules and the receiver's decoding rules are chosen according to a uniform probability distribution, then the largest probabilities of success for the different types of deceptions, respectively, are

$$P_I = \frac{1}{q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)}, \qquad P_S = \frac{1}{\varphi(q - 1)}, \qquad P_U(L) = \frac{1}{\left[q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)\right] \varphi(q - 1)}. \tag{14}$$

Proof. By Theorem 3 and Lemma 4, we get

$$P_I = \max_{m \in M} \left\{ \frac{|\{e_R \in E_R \mid e_R \subset m\}|}{|E_R|} \right\} = \frac{\varphi(q - 1)}{\left[q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)\right] \varphi(q - 1)} = \frac{1}{q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)}. \tag{15}$$

By Lemmas 4 and 5, we get

$$P_S = \max_{m \in M} \max_{m' \ne m \in M} \left\{ \frac{|\{e_R \in E_R \mid e_R \subset m, m'\}|}{|\{e_R \in E_R \mid e_R \subset m\}|} \right\} = \frac{1}{\varphi(q - 1)}. \tag{16}$$

By Lemmas 6 and 7, we get

$$P_U(L) = \max_{e_L \in E_L} \max_{e_U \supset e_L} \max_{m \in M} \left\{ \frac{|\{e_R \in E_R \mid e_R \subset m,\ p(e_R, e_P) \ne 0\}|}{|\{e_R \in E_R \mid p(e_R, e_P) \ne 0\}|} \right\} = \frac{1}{\left[q^{k(k-1)/2} \prod_{i=n-k+1}^{n} (q^i - 1)\right] \varphi(q - 1)}. \tag{17}$$

Acknowledgments

This work is supported by the NSF of China (61179026) and the Fundamental Research Funds for the Central Universities, Civil Aviation University of China special fund (ZXH2012k003).

References

[1] E. N. Gilbert, F. J. MacWilliams, and N. J. A. Sloane, "Codes which detect deception," The Bell System Technical Journal, vol. 53, pp. 405–424, 1974.
[2] Y. Desmedt, Y. Frankel, and M. Yung, "Multi-receiver/multi-sender network security: efficient authenticated multicast/feedback," in Proceedings of the 11th Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '92), pp. 2045–2054, May 1992.
[3] K. Martin and R. Safavi-Naini, "Multisender authentication schemes with unconditional security," in Information and Communications Security, vol. 1334 of Lecture Notes in Computer Science, pp. 130–143, Springer, Berlin, Germany, 1997.
[4] W. Ma and X. Wang, "Several new constructions of multi-transmitters authentication codes," Acta Electronica Sinica, vol. 28, no. 4, pp. 117–119, 2000.
[5] G. J. Simmons, "Message authentication with arbitration of transmitter/receiver disputes," in Advances in Cryptology—EUROCRYPT '87, vol. 304 of Lecture Notes in Computer Science, pp. 151–165, Springer, 1988.
[6] S. Cheng and L. Chang, "Two constructions of multi-sender authentication codes with arbitration based on linear codes," WSEAS Transactions on Mathematics, vol. 11, no. 12, 2012.

[7] R. Safavi-Naini and H. Wang, "New results on multi-receiver authentication codes," in Advances in Cryptology—EUROCRYPT '98 (Espoo), vol. 1403 of Lecture Notes in Computer Science, pp. 527–541, Springer, Berlin, Germany, 1998.
[8] R. Aparna and B. B. Amberker, "Multi-sender multi-receiver authentication for dynamic secure group communication," International Journal of Computer Science and Network Security, vol. 7, no. 10, pp. 47–63, 2007.
[9] R. Safavi-Naini and H. Wang, "Bounds and constructions for multireceiver authentication codes," in Advances in Cryptology—ASIACRYPT '98 (Beijing), vol. 1514 of Lecture Notes in Computer Science, pp. 242–256, Springer, Berlin, Germany, 1998.
[10] S. Shen and L. Chen, Information and Coding Theory, Science Press, China, 2002.
[11] J. J. Rotman, Advanced Modern Algebra, Higher Education Press, China, 2004.
[12] Z. Wan, Geometry of Classical Groups over Finite Fields, Science Press, Beijing, New York, NY, USA, 2002.

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 435730, 8 pages
http://dx.doi.org/10.1155/2013/435730

Research Article Geometrical and Spectral Properties of the Orthogonal Projections of the Identity

Luis González, Antonio Suárez, and Dolores García

Department of Mathematics, Research Institute SIANI, University of Las Palmas de Gran Canaria, Campus de Tafira, 35017 Las Palmas de Gran Canaria, Spain

Correspondence should be addressed to Luis González; [email protected]

Received 28 December 2012; Accepted 10 April 2013

Academic Editor: K. Sivakumar

Copyright © 2013 Luis González et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We analyze the best approximation $AN$ (in the Frobenius sense) to the identity matrix in an arbitrary matrix subspace $AS$ ($A \in \mathbb{R}^{n \times n}$ nonsingular, $S$ being any fixed subspace of $\mathbb{R}^{n \times n}$). Some new geometrical and spectral properties of the orthogonal projection $AN$ are derived. In particular, new inequalities for the trace and for the eigenvalues of matrix $AN$ are presented for the special case that $AN$ is symmetric and positive definite.

1. Introduction

The set of all real $n \times n$ matrices is denoted by $\mathbb{R}^{n \times n}$, and $I$ denotes the identity matrix of order $n$. In the following, $A^T$ and $\operatorname{tr}(A)$ denote, as usual, the transpose and the trace of matrix $A \in \mathbb{R}^{n \times n}$. The notations $\langle \cdot, \cdot \rangle_F$ and $\|\cdot\|_F$ stand for the Frobenius inner product and matrix norm, defined on the matrix space $\mathbb{R}^{n \times n}$. Throughout this paper, the terms orthogonality, angle, and cosine will be used in the sense of the Frobenius inner product.

Our starting point is the linear system

$$Ax = b, \qquad A \in \mathbb{R}^{n \times n}, \quad x, b \in \mathbb{R}^{n}, \tag{1}$$

where $A$ is a large, nonsingular, and sparse matrix. The resolution of this system is usually performed by iterative methods based on Krylov subspaces (see, e.g., [1, 2]). The coefficient matrix $A$ of the system (1) is often extremely ill-conditioned and highly indefinite, so that in this case, Krylov subspace methods are not competitive without a good preconditioner (see, e.g., [2, 3]). Then, to improve the convergence of these Krylov methods, the system (1) can be preconditioned with an adequate nonsingular preconditioning matrix $N$, transforming it into any of the equivalent systems

$$NAx = Nb, \qquad ANy = b, \quad x = Ny, \tag{2}$$

the so-called left and right preconditioned systems, respectively. In this paper, we address only the case of the right-hand side preconditioned matrices $AN$, but analogous results can be obtained for the left-hand side preconditioned matrices $NA$.

The preconditioning of the system (1) is often performed in order to get a preconditioned matrix $AN$ as close as possible to the identity in some sense, and the preconditioner $N$ is called an approximate inverse of $A$. The closeness of $AN$ to $I$ may be measured by using a suitable matrix norm like, for instance, the Frobenius norm [4]. In this way, the problem of obtaining the best preconditioner $N$ (with respect to the Frobenius norm) of the system (1) in an arbitrary subspace $S$ of $\mathbb{R}^{n \times n}$ is equivalent to the minimization problem; see, for example, [5]:

$$\min_{M \in S} \|AM - I\|_F = \|AN - I\|_F. \tag{3}$$

The solution $N$ to the problem (3) will be referred to as the "optimal" or the "best" approximate inverse of matrix $A$ in the subspace $S$.
Since matrix $AN$ is the best approximation to the identity in subspace $AS$, it will be also referred to as the orthogonal projection of the identity matrix onto the subspace $AS$. Although many of the results presented in this paper are also valid for the case that matrix $N$ is singular, from now on, we assume that the optimal approximate inverse $N$ (and thus also the orthogonal projection $AN$) is a nonsingular

𝑛×𝑛 matrix. The solution 𝑁 to the problem (3)hasbeenstudied subspace 𝐴𝑆 ⊂ R . For more details about these results and as a natural generalization of the classical Moore-Penrose for their proofs, we refer the reader to [5, 6, 11]. inverse in [6], where it has been referred to as the 𝑆-Moore- Taking advantage of the prehilbertian character of the Penrose inverse of matrix 𝐴. matrix Frobenius norm, the solution 𝑁 to the problem (3) The main goal of this paper is to derive new geometrical canbeobtainedusingtheorthogonalprojectiontheorem. and spectral properties of the best approximations 𝐴𝑁 (in the More precisely, the matrix product 𝐴𝑁 is the orthogonal sense of formula (3)) to the identity matrix. Such properties projection of the identity onto the subspace 𝐴𝑆, and it satisfies couldbeusedtoanalyzethequalityandtheoreticaleffective- the conditions stated by the following lemmas; see [5, 11]. 𝑁 ness of the optimal approximate inverse as preconditioner 𝑛×𝑛 Lemma 1. Let 𝐴∈R be nonsingular and let 𝑆 be a linear of the system (1). However, it is important to highlight 𝑛×𝑛 that the purpose of this paper is purely theoretical, and we subspace of R .Let𝑁 be the solution to the problem (3). are not looking for immediate numerical or computational Then, approaches (although our theoretical results could be poten- 0≤‖𝐴𝑁‖2 = (𝐴𝑁) ≤𝑛, tially applied to the preconditioning problem). In particular, 𝐹 tr (4) the term “optimal (or best) approximate inverse” is used in 0≤‖𝐴𝑁 −𝐼‖2 =𝑛− (𝐴𝑁) ≤𝑛. the sense of formula (3) and not in any other sense of this 𝐹 tr (5) expression. An explicit formula for matrix 𝑁 can be obtained by ex- Among the many different works dealing with practical pressing the orthogonal projection 𝐴𝑁 of the identity matrix algorithms that can be used to compute approximate inverses, onto the subspace 𝐴𝑆 by its expansion with respect to an we refer the reader to for example, [4, 7–9]andtothe orthonormal basis of 𝐴𝑆 [5].Thisistheideaofthefollowing references therein. 
In [4], the author presents an exhaustive lemma. survey of preconditioning techniques and, in particular, 𝑛×𝑛 describes several algorithms for computing sparse approxi- Lemma 2. Let 𝐴∈R be nonsingular. Let 𝑆 be a linear mate inverses based on Frobenius norm minimization like, 𝑛×𝑛 subspace of R of dimension 𝑑 and {𝑀1,...,𝑀𝑑} abasisof 𝑆 for instance, the well-known SPAI and FSAI algorithms. A such that {𝐴𝑀1,...,𝐴𝑀𝑑} is an orthogonal basis of 𝐴𝑆.Then, different approach (which is also focused on approximate the solution 𝑁 to the problem (3) is inverses based on minimizing ‖𝐴𝑀 𝐹− 𝐼‖ )canbefound in [7], where an iterative descent-type method is used to 𝑑 (𝐴𝑀 ) 𝑁=∑tr 𝑖 𝑀 , approximate each column of the inverse, and the iteration 󵄩 󵄩2 𝑖 (6) 󵄩𝐴𝑀 󵄩 is done with “sparse matrix by sparse vector” operations. 𝑖=1 󵄩 𝑖󵄩𝐹 When the system matrix is expressed in block-partitioned ( ) form, some preconditioning options are explored in [8]. In and the minimum residual Frobenius norm is [9], the idea of “target” matrix is introduced, in the context 𝑑 [ (𝐴𝑀 )]2 of sparse approximate inverse preconditioners, and the gen- ‖𝐴𝑁 −𝐼‖2 =𝑛−∑ tr 𝑖 . 2 𝑇 𝐹 󵄩 󵄩2 (7) eralized Frobenius norms ‖𝐵‖𝐹,𝐻 = tr(𝐵𝐻𝐵 ) (𝐻 symmetric 𝑖=1 󵄩𝐴𝑀𝑖󵄩𝐹 positive definite) are used, for minimization purposes, as an alternative to the classical Frobenius norm. Let us mention two possible options, both taken from [5], The last results of our work are devoted to the special case for choosing in practice the subspace 𝑆 and its corresponding 𝑑 that matrix 𝐴𝑁 is symmetric and positive definite. In this basis {𝑀𝑖}𝑖=1. The first example consists of considering the sense, let us recall that the cone of symmetric and positive subspace 𝑆 of 𝑛×𝑛matrices with a prescribed sparsity pattern, definite matrices has a rich geometrical structure and, in this that is, context, the angle that any symmetric and positive definite 𝑆={𝑀∈R𝑛×𝑛 :𝑚 = 0 ∀ (𝑖, 𝑗) ∉𝐾} , matrix forms with the identity plays a very important role 𝑖𝑗 [10]. 
In this paper, the authors extend this geometrical point (8) 𝐾⊂{1,2,...,𝑛} × {1,2,...,𝑛} . of view and analyze the geometrical structure of the subspace of symmetric matrices of order 𝑛,includingthelocationofall Then, denoting by 𝑀𝑖,𝑗,the𝑛×𝑛matrix whose only nonzero orthogonal matrices not only the identity matrix. entry is 𝑚𝑖𝑗 =1, a basis of subspace 𝑆 is clearly {𝑀𝑖,𝑗 : Thispaperhasbeenorganizedasfollows.InSection2, (𝑖, 𝑗) ,andthen∈ 𝐾} {𝐴𝑀𝑖,𝑗 : (𝑖, 𝑗) ∈𝐾} will be a basis we present some preliminary results required to make the of subspace 𝐴𝑆 (since we have assumed that matrix 𝐴 is paper self-contained. Sections 3 and 4 are devoted to obtain nonsingular). In general, this basis of 𝐴𝑆 is not orthogonal, new geometrical and spectral relations, respectively, for the so that we only need to use the Gram-Schmidt procedure orthogonal projections 𝐴𝑁 of the identity matrix. Finally, to obtain an orthogonal basis of 𝐴𝑆, in order to apply the Section 5 closes the paper with its main conclusions. orthogonal expansion (6). For the second example, consider a linearly independent set of 𝑛×𝑛real symmetric matrices {𝑃1,...,𝑃𝑑} and the 2. Some Preliminaries corresponding subspace

$$S' = \operatorname{span}\{P_1 A^T, \ldots, P_d A^T\}, \tag{9}$$

which clearly satisfies

$$S' \subseteq \{M = PA^T : P \in \mathbb{R}^{n \times n},\ P^T = P\} = \{M \in \mathbb{R}^{n \times n} : (AM)^T = AM\}. \tag{10}$$

Hence, we can explicitly obtain the solution $N$ to the problem (3) for subspace $S'$, from its basis $\{P_1 A^T, \ldots, P_d A^T\}$, as follows. If $\{AP_1 A^T, \ldots, AP_d A^T\}$ is an orthogonal basis of subspace $AS'$, then we just use the orthogonal expansion (6) for obtaining $N$. Otherwise, we use again the Gram-Schmidt procedure to obtain an orthogonal basis of subspace $AS'$, and then we apply formula (6). The interest of this second example stands in the possibility of using the conjugate gradient method for solving the preconditioned linear system, when the symmetric matrix $AN$ is positive definite. For a more detailed exposition of the computational aspects related to these two examples, we refer the reader to [5].

Now, we present some spectral properties of the orthogonal projection $AN$. From now on, we denote by $\{\lambda_i\}_{i=1}^{n}$ and $\{\sigma_i\}_{i=1}^{n}$ the sets of eigenvalues and singular values, respectively, of matrix $AN$, arranged, as usual, in nonincreasing order, that is,

$$|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n| > 0, \qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0. \tag{11}$$

The following lemma [11] provides some inequalities involving the eigenvalues and singular values of the preconditioned matrix $AN$.

Lemma 3. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Then,

$$\sum_{i=1}^{n} \lambda_i^2 \le \sum_{i=1}^{n} |\lambda_i|^2 \le \sum_{i=1}^{n} \sigma_i^2 = \|AN\|_F^2 = \operatorname{tr}(AN) = \sum_{i=1}^{n} \lambda_i \le \sum_{i=1}^{n} |\lambda_i| \le \sum_{i=1}^{n} \sigma_i. \tag{12}$$

The following fact [11] is a direct consequence of Lemma 3.

Lemma 4. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Then, the smallest singular value and the smallest eigenvalue's modulus of the orthogonal projection $AN$ of the identity onto the subspace $AS$ are never greater than 1. That is,

$$0 < \sigma_n \le |\lambda_n| \le 1. \tag{13}$$

The following theorem [11] establishes a tight connection between the closeness of matrix $AN$ to the identity matrix and the closeness of $\sigma_n$ ($|\lambda_n|$) to the unity.

Theorem 5. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Then,

$$(1 - |\lambda_n|)^2 \le (1 - \sigma_n)^2 \le \|AN - I\|_F^2 \le n (1 - |\lambda_n|^2) \le n (1 - \sigma_n^2). \tag{14}$$

Remark 6. Theorem 5 states that the closer the smallest singular value $\sigma_n$ of matrix $AN$ is to the unity, the closer matrix $AN$ will be to the identity, that is, the smaller $\|AN - I\|_F$ will be, and conversely. The same happens with the smallest eigenvalue's modulus $|\lambda_n|$ of matrix $AN$. In other words, we get a good approximate inverse $N$ of $A$ when $\sigma_n$ ($|\lambda_n|$) is sufficiently close to 1.

To finish this section, let us mention that, recently, lower and upper bounds on the normalized Frobenius condition number of the orthogonal projection $AN$ of the identity onto the subspace $AS$ have been derived in [12]. In addition, this work proposes a natural generalization (related to an arbitrary matrix subspace $S$ of $\mathbb{R}^{n \times n}$) of the normalized Frobenius condition number of the nonsingular matrix $A$.

3. Geometrical Properties

In this section, we present some new geometrical properties for matrix $AN$, $N$ being the optimal approximate inverse of matrix $A$, defined by (3). Our first lemma states some basic properties involving the cosine of the angle between matrix $AN$ and the identity, that is,

$$\cos(AN, I) = \frac{\langle AN, I \rangle_F}{\|AN\|_F \|I\|_F} = \frac{\operatorname{tr}(AN)}{\|AN\|_F \sqrt{n}}. \tag{15}$$

Lemma 7. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Then,

$$\cos(AN, I) = \frac{\operatorname{tr}(AN)}{\|AN\|_F \sqrt{n}} = \frac{\|AN\|_F}{\sqrt{n}} = \frac{\sqrt{\operatorname{tr}(AN)}}{\sqrt{n}}, \tag{16}$$

$$0 \le \cos(AN, I) \le 1, \tag{17}$$

$$\|AN - I\|_F^2 = n (1 - \cos^2(AN, I)). \tag{18}$$

Proof. First, using (15) and (4), we immediately obtain (16). As a direct consequence of (16), we derive that $\cos(AN, I)$ is always nonnegative. Finally, using (5) and (16), we get

$$\|AN - I\|_F^2 = n - \operatorname{tr}(AN) = n (1 - \cos^2(AN, I)), \tag{19}$$

and the proof is concluded.
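The identities (16)–(18) are easy to check numerically. The sketch below (NumPy; the setup and names are ours, not the paper's) uses $S$ = the subspace of diagonal matrices, for which the minimizer of $\|A\,\mathrm{diag}(d) - I\|_F$ decouples columnwise into $d_i = A_{ii} / \|a_i\|^2$, where $a_i$ is the $i$-th column of $A$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)  # well-conditioned test matrix

# optimal diagonal N: each column problem min_d ||d * a_i - e_i||^2
d = np.diag(A) / (A * A).sum(axis=0)
AN = A @ np.diag(d)  # the orthogonal projection of I onto AS

cos_ = np.trace(AN) / (np.linalg.norm(AN) * np.sqrt(n))  # definition (15)
```

The quantities `cos_`, `np.linalg.norm(AN) / np.sqrt(n)`, and `np.sqrt(np.trace(AN)) / np.sqrt(n)` agree to machine precision, as (16) predicts, and `np.linalg.norm(AN - np.eye(n))**2` matches `n * (1 - cos_**2)` from (18).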

Remark 8. In [13], the authors consider an arbitrary approximate inverse $Q$ of matrix $A$ and derive the following equality:

2 2 The following theorem [11]establishesatightconnection ‖𝐴𝑄 −𝐼‖𝐹 =(‖𝐴𝑄‖𝐹 − ‖𝐼‖𝐹) between the closeness of matrix 𝐴𝑁 to the identity matrix (20) +2(1− (𝐴𝑄, 𝐼)) ‖𝐴𝑄‖ ‖𝐼‖ , and the closeness of 𝜎𝑛(|𝜆𝑛|)totheunity. cos 𝐹 𝐹 4 Journal of Applied Mathematics that is, the typical decomposition (valid in any inner product The next lemma provides us with a relationship between space) of the strong convergence into the convergence of the Frobenius norms of the inverses of matrices 𝐴 and its best 2 𝑁 𝑆 the norms (‖𝐴𝑄‖𝐹 −‖𝐼‖𝐹) and the weak convergence (1 − approximate inverse in subspace . cos(𝐴𝑄, 𝐼))‖𝐴𝑄‖𝐹‖𝐼‖𝐹.Notethatforthespecialcasethat𝑄 Lemma 12. 𝐴∈R𝑛×𝑛 𝑆 is the optimal approximate inverse 𝑁 defined by (3), formula Let be nonsingular and let be a linear R𝑛×𝑛 𝑁 (18) has stated that the strong convergence is reduced just subspace of .Let be the solution to the problem (3). to the weak convergence and, indeed, just to the cosine Then, (𝐴𝑁, 𝐼) 󵄩 󵄩 󵄩 󵄩 cos . 󵄩𝐴−1󵄩 󵄩𝑁−1󵄩 ≥1. 󵄩 󵄩𝐹󵄩 󵄩𝐹 (26) Remark 9. More precisely, formula (18)statesthatthecloser cos(𝐴𝑁, 𝐼) is to the unity (i.e., the smaller the angle ∠(𝐴𝑁, 𝐼) Proof. Using (4), we get is), the smaller ‖𝐴𝑁 𝐹− 𝐼‖ will be, and conversely. This gives 󵄩 󵄩 ‖𝐴𝑁‖ ≤ √𝑛=‖𝐼‖ = 󵄩(𝐴𝑁)(𝐴𝑁)−1󵄩 us a new measure of the quality (in the Frobenius sense) of 𝐹 𝐹 󵄩 󵄩𝐹 the approximate inverse 𝑁 of matrix 𝐴,bycomparingthe 󵄩 󵄩 󵄩 󵄩 (27) ≤ ‖𝐴𝑁‖ 󵄩(𝐴𝑁)−1󵄩 󳨐⇒ 󵄩(𝐴𝑁)−1󵄩 ≥1, minimum residual norm ‖𝐴𝑁 𝐹− 𝐼‖ with the cosine of the 𝐹󵄩 󵄩𝐹 󵄩 󵄩𝐹 angle between 𝐴𝑁 and the identity, instead of with tr(𝐴𝑁), ‖𝐴𝑁‖𝐹 (Lemma 1), or 𝜎𝑛, |𝜆𝑛| (Theorem 5). So for a fixed and hence 𝑛×𝑛 nonsingular matrix 𝐴∈R and for different subspaces 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 𝑛×𝑛 󵄩𝐴−1󵄩 󵄩𝑁−1󵄩 ≥ 󵄩𝑁−1𝐴−1󵄩 = 󵄩(𝐴𝑁)−1󵄩 ≥1, 𝑆⊂R ,wehave 󵄩 󵄩𝐹󵄩 󵄩𝐹 󵄩 󵄩𝐹 󵄩 󵄩𝐹 (28) 󵄨 󵄨 tr (𝐴𝑁) ↗𝑛⇐⇒‖𝐴𝑁‖𝐹 ↗ √𝑛⇐⇒𝜎𝑛 ↗1⇐⇒󵄨𝜆𝑛󵄨 ↗1 and the proof is concluded.

Obviously, the optimal theoretical situation corresponds to the case

$$\operatorname{tr}(AN) = n \Longleftrightarrow \|AN\|_F = \sqrt{n} \Longleftrightarrow \sigma_n = 1 \Longleftrightarrow \lambda_n = 1 \Longleftrightarrow \cos(AN, I) = 1 \Longleftrightarrow \|AN - I\|_F = 0 \Longleftrightarrow N = A^{-1} \Longleftrightarrow A^{-1} \in S. \tag{22}$$

Remark 10. Note that the ratio between $\cos(AN, I)$ and $\cos(A, I)$ is independent of the order $n$ of matrix $A$. Indeed, assuming that $\operatorname{tr}(A) \ne 0$ and using (16), we immediately obtain

$$\frac{\cos(AN, I)}{\cos(A, I)} = \frac{\|AN\|_F / \sqrt{n}}{\operatorname{tr}(A) / (\|A\|_F \sqrt{n})} = \|AN\|_F \frac{\|A\|_F}{\operatorname{tr}(A)}. \tag{23}$$

The following lemma compares the trace and the Frobenius norm of the orthogonal projection $AN$.

Lemma 11. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Then,

$$\operatorname{tr}(AN) \le \|AN\|_F \Longleftrightarrow \|AN\|_F \le 1 \Longleftrightarrow \operatorname{tr}(AN) \le 1 \Longleftrightarrow \cos(AN, I) \le \frac{1}{\sqrt{n}}, \tag{24}$$

$$\operatorname{tr}(AN) \ge \|AN\|_F \Longleftrightarrow \|AN\|_F \ge 1 \Longleftrightarrow \operatorname{tr}(AN) \ge 1 \Longleftrightarrow \cos(AN, I) \ge \frac{1}{\sqrt{n}}. \tag{25}$$

Proof. Using (4), we immediately obtain the four leftmost equivalences. Using (16), we immediately obtain the two rightmost equivalences.

The following lemma compares the minimum residual norm $\|AN - I\|_F$ with the distance (with respect to the Frobenius norm) $\|A^{-1} - N\|_F$ between the inverse of $A$ and the optimal approximate inverse $N$ of $A$ in any subspace $S \subset \mathbb{R}^{n \times n}$. First, note that for any two matrices $A, B \in \mathbb{R}^{n \times n}$ ($A$ nonsingular), from the submultiplicative property of the Frobenius norm, we immediately get

$$\|AB - I\|_F^2 = \|A (B - A^{-1})\|_F^2 \le \|A\|_F^2 \|B - A^{-1}\|_F^2. \tag{29}$$

However, for the special case that $B = N$ (the solution to the problem (3)), we also get the following inequality.

Lemma 13. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Then,

$$\|AN - I\|_F^2 \le \|A\|_F \|N - A^{-1}\|_F. \tag{30}$$

Proof. Using the Cauchy-Schwarz inequality and (5), we get

$$\begin{aligned} |\langle A^{-1} - N, A^T \rangle_F| &\le \|A^{-1} - N\|_F \|A^T\|_F\\ \Longrightarrow |\operatorname{tr}((A^{-1} - N) A)| &\le \|A^{-1} - N\|_F \|A^T\|_F\\ \Longrightarrow |\operatorname{tr}(A (A^{-1} - N))| &\le \|N - A^{-1}\|_F \|A\|_F\\ \Longrightarrow |\operatorname{tr}(I - AN)| &\le \|N - A^{-1}\|_F \|A\|_F\\ \Longrightarrow n - \operatorname{tr}(AN) &\le \|N - A^{-1}\|_F \|A\|_F\\ \Longrightarrow \|AN - I\|_F^2 &\le \|A\|_F \|N - A^{-1}\|_F. \end{aligned} \tag{31}$$
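The orthogonal expansion (6) recalled in Section 2 gives a direct way to compute the optimal approximate inverse $N$ in practice, and the result can be checked against the defining property of $AN$ as an orthogonal projection: the residual $AN - I$ must be Frobenius-orthogonal to $AS$. A minimal sketch, assuming nothing beyond NumPy (the function and variable names, such as `optimal_inverse`, are ours):

```python
import numpy as np

def optimal_inverse(A, basis):
    """Best Frobenius approximate inverse of A in S = span(basis), via formula (6)."""
    Ms, AMs = [], []
    for M in basis:
        M = M.astype(float).copy()
        AM = A @ M
        # Gram-Schmidt on {A M_i} under <X, Y> = tr(X^T Y), mirrored on {M_i}
        for Mo, AMo in zip(Ms, AMs):
            c = np.trace(AMo.T @ AM) / np.trace(AMo.T @ AMo)
            M = M - c * Mo
            AM = AM - c * AMo  # stays equal to A @ M, since the update is linear
        Ms.append(M)
        AMs.append(AM)
    # orthogonal expansion (6): N = sum_i tr(A M_i) / ||A M_i||_F^2 * M_i
    return sum((np.trace(AM) / np.trace(AM.T @ AM)) * M for M, AM in zip(Ms, AMs))

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 4.0 * np.eye(n)
# subspace (8) with pattern K = {(i, i)}: all diagonal matrices
basis = [np.diag(np.eye(n)[i]) for i in range(n)]
N = optimal_inverse(A, basis)
residual = A @ N - np.eye(n)
```

Here `residual` is Frobenius-orthogonal to every $AM_{i,i}$, confirming that $AN$ is indeed the orthogonal projection of $I$ onto $AS$.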

The following extension of the Cauchy-Schwarz inequality, in a real or complex inner product space $(H, \langle \cdot, \cdot \rangle)$, was obtained by Buzano [14]. For all $a, x, b \in H$, we have

$$|\langle a, x \rangle \cdot \langle x, b \rangle| \le \frac{1}{2} (\|a\| \|b\| + |\langle a, b \rangle|) \|x\|^2. \tag{32}$$

The next lemma provides us with lower and upper bounds on the inner product $\langle AN, B \rangle_F$, for any $n \times n$ real matrix $B$.

Lemma 14. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Then, for every $B \in \mathbb{R}^{n \times n}$, we have

$$|\langle AN, B \rangle_F| \le \frac{1}{2} (\sqrt{n} \|B\|_F + |\operatorname{tr}(B)|). \tag{33}$$

Proof. Using (32) for $a = I$, $x = AN$, $b = B$, and (4), we get

$$\begin{aligned} |\langle I, AN \rangle_F \cdot \langle AN, B \rangle_F| &\le \frac{1}{2} (\|I\|_F \|B\|_F + |\langle I, B \rangle_F|) \|AN\|_F^2\\ \Longrightarrow |\operatorname{tr}(AN) \cdot \langle AN, B \rangle_F| &\le \frac{1}{2} (\sqrt{n} \|B\|_F + |\operatorname{tr}(B)|) \|AN\|_F^2\\ \Longrightarrow |\langle AN, B \rangle_F| &\le \frac{1}{2} (\sqrt{n} \|B\|_F + |\operatorname{tr}(B)|). \end{aligned} \tag{34}$$

The next lemma provides an upper bound on the arithmetic mean of the squares of the $n^2$ terms in the orthogonal projection $AN$. By the way, it also provides us with an upper bound on the arithmetic mean of the $n$ diagonal terms in the orthogonal projection $AN$. These upper bounds (valid for any matrix subspace $S$) are independent of the optimal approximate inverse $N$, and thus they are independent of the subspace $S$ and only depend on matrix $A$.

Lemma 15. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular with $\operatorname{tr}(A) \ne 0$ and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Then,

$$\frac{\|AN\|_F^2}{n^2} \le \frac{\|A\|_F^2}{[\operatorname{tr}(A)]^2}, \qquad \frac{\operatorname{tr}(AN)}{n} \le \frac{n \|A\|_F^2}{[\operatorname{tr}(A)]^2}. \tag{35}$$

Proof. Using (32) for $a = A^T$, $x = I$, and $b = AN$, the Cauchy-Schwarz inequality for $\langle A^T, AN \rangle_F$, and (4), we get

$$\begin{aligned} |\langle A^T, I \rangle_F \cdot \langle I, AN \rangle_F| &\le \frac{1}{2} (\|A^T\|_F \|AN\|_F + |\langle A^T, AN \rangle_F|) \|I\|_F^2\\ \Longrightarrow |\operatorname{tr}(A) \cdot \operatorname{tr}(AN)| &\le \frac{n}{2} (\|A\|_F \|AN\|_F + |\langle A^T, AN \rangle_F|)\\ &\le \frac{n}{2} (\|A\|_F \|AN\|_F + \|A^T\|_F \|AN\|_F) = n \|A\|_F \|AN\|_F\\ \Longrightarrow |\operatorname{tr}(A)| \|AN\|_F^2 &\le n \|A\|_F \|AN\|_F \Longrightarrow \frac{\|AN\|_F}{n} \le \frac{\|A\|_F}{|\operatorname{tr}(A)|}\\ \Longrightarrow \frac{\|AN\|_F^2}{n^2} \le \frac{\|A\|_F^2}{[\operatorname{tr}(A)]^2} &\Longrightarrow \frac{\operatorname{tr}(AN)}{n} \le \frac{n \|A\|_F^2}{[\operatorname{tr}(A)]^2}. \end{aligned} \tag{36}$$

Remark 16. Lemma 15 has the following interpretation in terms of the quality of the optimal approximate inverse $N$ of matrix $A$ in subspace $S$. The closer the ratio $n \|A\|_F / |\operatorname{tr}(A)|$ is to zero, the smaller $\operatorname{tr}(AN)$ will be, and thus, due to (5), the larger $\|AN - I\|_F$ will be, and this happens for any matrix subspace $S$.

Remark 17. By the way, from Lemma 15, we obtain the following inequality for any nonsingular matrix $A \in \mathbb{R}^{n \times n}$. Consider any matrix subspace $S$ such that $A^{-1} \in S$. Then $N = A^{-1}$, and using Lemma 15, we get

$$\frac{\|AN\|_F^2}{n^2} = \frac{\|I\|_F^2}{n^2} = \frac{1}{n} \le \frac{\|A\|_F^2}{[\operatorname{tr}(A)]^2} \Longrightarrow |\operatorname{tr}(A)| \le \sqrt{n} \|A\|_F. \tag{37}$$

4. Spectral Properties

In this section, we present some new spectral properties for matrix $AN$, $N$ being the optimal approximate inverse of matrix $A$, defined by (3). Mainly, we focus on the case that matrix $AN$ is symmetric and positive definite. This has been motivated by the following reason. When solving a large nonsymmetric linear system (1) by using Krylov methods, a possible strategy consists of searching for an adequate optimal preconditioner $N$ such that the preconditioned matrix $AN$ is symmetric positive definite [5]. This enables one to use the conjugate gradient method (CG-method), which is, in general, a computationally efficient method for solving the new preconditioned system [2, 15].

Our starting point is Lemma 3, which has established that the sets of eigenvalues and singular values of any orthogonal projection $AN$ satisfy

$$\sum_{i=1}^{n} \lambda_i^2 \le \sum_{i=1}^{n} |\lambda_i|^2 \le \sum_{i=1}^{n} \sigma_i^2 = \sum_{i=1}^{n} \lambda_i \le \sum_{i=1}^{n} |\lambda_i| \le \sum_{i=1}^{n} \sigma_i. \tag{38}$$

Let us particularize (38) for some special cases.

First, note that if $AN$ is normal (i.e., $\sigma_i = |\lambda_i|$ for all $1 \le i \le n$ [16]), then (38) becomes

$$\sum_{i=1}^{n} \lambda_i^2 \le \sum_{i=1}^{n} |\lambda_i|^2 = \sum_{i=1}^{n} \sigma_i^2 = \sum_{i=1}^{n} \lambda_i \le \sum_{i=1}^{n} |\lambda_i| = \sum_{i=1}^{n} \sigma_i. \tag{39}$$

In particular, if $AN$ is symmetric ($\sigma_i = |\lambda_i| = \pm \lambda_i \in \mathbb{R}$), then (38) becomes

$$\sum_{i=1}^{n} \lambda_i^2 = \sum_{i=1}^{n} |\lambda_i|^2 = \sum_{i=1}^{n} \sigma_i^2 = \sum_{i=1}^{n} \lambda_i \le \sum_{i=1}^{n} |\lambda_i| = \sum_{i=1}^{n} \sigma_i. \tag{40}$$

In particular, if $AN$ is symmetric and positive definite ($\sigma_i = |\lambda_i| = \lambda_i \in \mathbb{R}^+$), then the equality holds throughout (38), that is,

$$\sum_{i=1}^{n} \lambda_i^2 = \sum_{i=1}^{n} |\lambda_i|^2 = \sum_{i=1}^{n} \sigma_i^2 = \sum_{i=1}^{n} \lambda_i = \sum_{i=1}^{n} |\lambda_i| = \sum_{i=1}^{n} \sigma_i. \tag{41}$$

The next lemma compares the traces of matrices $AN$ and $(AN)^2$.

Lemma 18. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Then,

(i) for any orthogonal projection $AN$,

$$\operatorname{tr}((AN)^2) \le \|AN\|_F^2 = \operatorname{tr}(AN); \tag{42}$$

(ii) for any symmetric orthogonal projection $AN$,

$$\|(AN)^2\|_F \le \operatorname{tr}((AN)^2) = \|AN\|_F^2 = \operatorname{tr}(AN); \tag{43}$$

(iii) for any symmetric positive definite orthogonal projection $AN$,

$$\|(AN)^2\|_F \le \operatorname{tr}((AN)^2) = \|AN\|_F^2 = \operatorname{tr}(AN) \le [\operatorname{tr}(AN)]^2. \tag{44}$$

Proof. (i) Using (38), we get

$$\sum_{i=1}^{n} \lambda_i^2 \le \sum_{i=1}^{n} \sigma_i^2 = \sum_{i=1}^{n} \lambda_i. \tag{45}$$

(ii) It suffices to use the obvious fact that $\|(AN)^2\|_F \le \|AN\|_F^2$ and the following equalities taken from (40):

$$\sum_{i=1}^{n} \lambda_i^2 = \sum_{i=1}^{n} \sigma_i^2 = \sum_{i=1}^{n} \lambda_i. \tag{46}$$

(iii) It suffices to use (43) and the fact that (see, e.g., [17, 18]) if $P$ and $Q$ are symmetric positive definite matrices, then $\operatorname{tr}(PQ) \le \operatorname{tr}(P)\operatorname{tr}(Q)$, for $P = Q = AN$.

The rest of the paper is devoted to obtaining new properties about the eigenvalues of the orthogonal projection $AN$ for the special case that this matrix is symmetric positive definite. First, let us recall that the smallest singular value and the smallest eigenvalue's modulus of the orthogonal projection $AN$ are never greater than 1 (see Lemma 4). The following theorem establishes the dual result for the largest eigenvalue of matrix $AN$ (symmetric positive definite).

Theorem 19. Let $A \in \mathbb{R}^{n \times n}$ be nonsingular and let $S$ be a linear subspace of $\mathbb{R}^{n \times n}$. Let $N$ be the solution to the problem (3). Suppose that matrix $AN$ is symmetric and positive definite. Then, the largest eigenvalue of the orthogonal projection $AN$ of the identity onto the subspace $AS$ is never less than 1. That is,

$$\sigma_1 = \lambda_1 \ge 1. \tag{47}$$

Proof. Using (41), we get

$$\sum_{i=1}^{n} \lambda_i^2 = \sum_{i=1}^{n} \lambda_i \Longrightarrow \lambda_n^2 - \lambda_n = \sum_{i=1}^{n-1} (\lambda_i - \lambda_i^2). \tag{48}$$

Now, since $\lambda_n \le 1$ (Lemma 4), then $\lambda_n^2 - \lambda_n \le 0$. This implies that at least one summand in the rightmost sum in (48) must be less than or equal to zero. Suppose that such summand is the $k$th one ($1 \le k \le n - 1$). Since $AN$ is positive definite, then $\lambda_k > 0$, and thus

$$\lambda_k - \lambda_k^2 \le 0 \Longrightarrow \lambda_k \le \lambda_k^2 \Longrightarrow \lambda_k \ge 1 \Longrightarrow \lambda_1 \ge 1, \tag{49}$$

and the proof is concluded.

In Theorem 19, the assumption that matrix $AN$ is positive definite is essential for assuring that $|\lambda_1| \ge 1$, as the following simple counterexample shows. Moreover, from Lemma 4 and Theorem 19, respectively, we have that the smallest and largest eigenvalues of $AN$ (symmetric positive definite) satisfy $\lambda_n \le 1$ and $\lambda_1 \ge 1$, respectively. Nothing can be asserted about the remaining eigenvalues of the symmetric positive definite matrix $AN$, which can be greater than, equal to, or less than the unity, as the same counterexample also shows.

Example 20. For $n = 3$, let

$$A_k = \begin{pmatrix} 3 & 0 & 0\\ 0 & k & 0\\ 0 & 0 & 1 \end{pmatrix}, \quad k \in \mathbb{R}, \tag{50}$$

let $I_3$ be the identity matrix of order 3, and let $S$ be the subspace of all $3 \times 3$ scalar matrices, that is, $S = \operatorname{span}\{I_3\}$. Then the solution $N_k$ to the problem (3) for subspace $S$ can be immediately obtained by using formula (6) as follows:

$$N_k = \frac{\operatorname{tr}(A_k)}{\|A_k\|_F^2} I_3 = \frac{k + 4}{k^2 + 10} I_3, \tag{51}$$

and then we get

$$A_k N_k = \frac{k + 4}{k^2 + 10} A_k = \frac{k + 4}{k^2 + 10} \begin{pmatrix} 3 & 0 & 0\\ 0 & k & 0\\ 0 & 0 & 1 \end{pmatrix}. \tag{52}$$
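Example 20, together with the eigenvalue bounds of Lemma 4 and Theorem 19, can be reproduced numerically. The sketch below (NumPy; the setup and the helper name `example` are ours) evaluates (51)–(52) directly and then sweeps random symmetric positive definite projections built with $S = \operatorname{span}\{I_n\}$, for which $AN = (\operatorname{tr}(A)/\|A\|_F^2)\,A$ is an orthogonal projection of the identity (it satisfies $\operatorname{tr}(AN) = \|AN\|_F^2$) and is positive definite whenever $A$ is; the sweep also checks the bound $(1 + \sqrt{n})/2$ proved at the end of this section.

```python
import numpy as np

def example(k):
    """Eigenvalues of A_k N_k from (51)-(52), in descending order."""
    A = np.diag([3.0, float(k), 1.0])
    N = (np.trace(A) / np.linalg.norm(A) ** 2) * np.eye(3)  # formula (6), S = span{I_3}
    return np.sort(np.diag(A @ N))[::-1]

# k = 2 gives 9/7 > 1 > 6/7 > 3/7; k = -2 gives the indefinite case 3/7, 1/7, -2/7

rng = np.random.default_rng(2)
n = 6
ok = True
for _ in range(200):
    B = rng.standard_normal((n, n))
    A = B @ B.T + 0.1 * np.eye(n)  # random SPD matrix
    lam = np.linalg.eigvalsh((np.trace(A) / np.linalg.norm(A) ** 2) * A)
    ok &= lam[0] <= 1 + 1e-12          # smallest eigenvalue <= 1 (Lemma 4)
    ok &= lam[-1] >= 1 - 1e-12         # largest eigenvalue >= 1 (Theorem 19)
    ok &= lam[-1] <= (1 + np.sqrt(n)) / 2 + 1e-12  # (1 + sqrt(n))/2 bound
print(ok)
```

All three bounds hold on every draw, as the theory guarantees for symmetric positive definite projections.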

Let us arrange the eigenvalues and singular values of matrix A_k N_k, as usual, in nonincreasing order (as shown in (11)).

On one hand, for k = −2, we have

A_{−2} N_{−2} = (1/7) ( 3 0 0 ; 0 −2 0 ; 0 0 1 ), (53)

and then

σ₁ = |λ₁| = 3/7 ≥ σ₂ = |λ₂| = 2/7 ≥ σ₃ = |λ₃| = 1/7. (54)

Hence, A_{−2} N_{−2} is indefinite and σ₁ = |λ₁| = 3/7 < 1.

On the other hand, for 1 < k < 3, we have (see matrix (52))

σ₁ = λ₁ = 3(k+4)/(k²+10) ≥ σ₂ = λ₂ = k(k+4)/(k²+10) ≥ σ₃ = λ₃ = (k+4)/(k²+10), (55)

and then

k = 2:   σ₁ = λ₁ = 9/7 > 1,  σ₂ = λ₂ = 6/7 < 1,  σ₃ = λ₃ = 3/7 < 1;
k = 5/2: σ₁ = λ₁ = 6/5 > 1,  σ₂ = λ₂ = 1,        σ₃ = λ₃ = 2/5 < 1;  (56)
k = 8/3: σ₁ = λ₁ = 90/77 > 1, σ₂ = λ₂ = 80/77 > 1, σ₃ = λ₃ = 30/77 < 1.

Hence, for A_k N_k positive definite, we have (depending on k) λ₂ < 1, λ₂ = 1, or λ₂ > 1.

The following corollary improves the lower bound zero on both tr(AN), given in (4), and cos(AN, I), given in (17).

Corollary 21. Let A ∈ R^{n×n} be nonsingular and let S be a linear subspace of R^{n×n}. Let N be the solution to the problem (3). Suppose that matrix AN is symmetric and positive definite. Then,

1 ≤ ‖AN‖_F ≤ tr(AN) = ‖AN‖_F² ≤ n, (57)

cos(AN, I) ≥ 1/√n. (58)

Proof. Denote by ‖·‖₂ the spectral norm. Using the well-known inequality ‖·‖₂ ≤ ‖·‖_F [19], Theorem 19, and (4), we get

‖AN‖_F ≥ ‖AN‖₂ = σ₁ = λ₁ ≥ 1 ⟹ 1 ≤ ‖AN‖_F ≤ tr(AN) = ‖AN‖_F² ≤ n. (59)

Finally, (58) follows immediately from (57) and (25).

Let us mention that an upper bound on all the eigenvalues' moduli and on all singular values of any orthogonal projection AN can be immediately obtained from (38) and (4) as follows:

∑_{i=1}^{n} |λ_i|² ≤ ∑_{i=1}^{n} σ_i² = ‖AN‖_F² ≤ n ⟹ |λ_i|, σ_i ≤ √n, ∀i = 1, 2, ..., n. (60)

Our last theorem improves the upper bound given in (60) for the special case that the orthogonal projection AN is symmetric positive definite.

Theorem 22. Let A ∈ R^{n×n} be nonsingular and let S be a linear subspace of R^{n×n}. Let N be the solution to the problem (3). Suppose that matrix AN is symmetric and positive definite. Then, all the eigenvalues of matrix AN satisfy

σ_i = λ_i ≤ (1 + √n)/2, ∀i = 1, 2, ..., n. (61)

Proof. First, note that the assertion is obvious for the smallest singular value, since |λ_n| ≤ 1 for any orthogonal projection AN (Lemma 4). For any eigenvalue of AN, we use the fact that x − x² ≤ 1/4 for all x > 0. Then, from (41), we get

∑_{i=1}^{n} λ_i² = ∑_{i=1}^{n} λ_i ⟹ λ₁² − λ₁ = ∑_{i=2}^{n} (λ_i − λ_i²) ≤ (n−1)/4 ⟹ λ₁ ≤ (1 + √n)/2 ⟹ λ_i ≤ (1 + √n)/2, ∀i = 1, 2, ..., n. (62)

5. Conclusion

In this paper, we have considered the orthogonal projection AN (in the Frobenius sense) of the identity matrix onto an

arbitrary matrix subspace AS (A ∈ R^{n×n} nonsingular, S ⊂ R^{n×n}). Among other geometrical properties of matrix AN, we have established a strong relation between the quality of the approximation AN ≈ I and the cosine of the angle ∠(AN, I). Also, the distance between AN and the identity has been related to the ratio n‖A‖_F / |tr(A)| (which is independent of the subspace S). The spectral analysis has provided lower and upper bounds on the largest eigenvalue of the symmetric positive definite orthogonal projections of the identity.

Acknowledgments

The authors are grateful to the anonymous referee for valuable comments and suggestions, which have improved the earlier version of this paper. This work was partially supported by the "Ministerio de Economía y Competitividad" (Spanish Government), and FEDER, through Grant Contract CGL2011-29396-C03-01.

References

[1] O. Axelsson, Iterative Solution Methods, Cambridge University Press, Cambridge, Mass, USA, 1994.
[2] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, Boston, Mass, USA, 1996.
[3] S.-L. Wu, F. Chen, and X.-Q. Niu, "A note on the eigenvalue analysis of the SIMPLE preconditioning for incompressible flow," Journal of Applied Mathematics, vol. 2012, Article ID 564132, 7 pages, 2012.
[4] M. Benzi, "Preconditioning techniques for large linear systems: a survey," Journal of Computational Physics, vol. 182, no. 2, pp. 418–477, 2002.
[5] G. Montero, L. González, E. Flórez, M. D. García, and A. Suárez, "Approximate inverse computation using Frobenius inner product," Numerical Linear Algebra with Applications, vol. 9, no. 3, pp. 239–247, 2002.
[6] A. Suárez and L. González, "A generalization of the Moore-Penrose inverse related to matrix subspaces of C^{n×m}," Applied Mathematics and Computation, vol. 216, no. 2, pp. 514–522, 2010.
[7] E. Chow and Y. Saad, "Approximate inverse preconditioners via sparse-sparse iterations," SIAM Journal on Scientific Computing, vol. 19, no. 3, pp. 995–1023, 1998.
[8] E. Chow and Y. Saad, "Approximate inverse techniques for block-partitioned matrices," SIAM Journal on Scientific Computing, vol. 18, no. 6, pp. 1657–1675, 1997.
[9] R. M. Holland, A. J. Wathen, and G. J. Shaw, "Sparse approximate inverses and target matrices," SIAM Journal on Scientific Computing, vol. 26, no. 3, pp. 1000–1011, 2005.
[10] R. Andreani, M. Raydan, and P. Tarazaga, "On the geometrical structure of symmetric matrices," Linear Algebra and Its Applications, vol. 438, no. 3, pp. 1201–1214, 2013.
[11] L. González, "Orthogonal projections of the identity: spectral analysis and applications to approximate inverse preconditioning," SIAM Review, vol. 48, no. 1, pp. 66–75, 2006.
[12] A. Suárez and L. González, "Normalized Frobenius condition number of the orthogonal projections of the identity," Journal of Mathematical Analysis and Applications, vol. 400, no. 2, pp. 510–516, 2013.
[13] J.-P. Chehab and M. Raydan, "Geometrical properties of the Frobenius condition number for positive definite matrices," Linear Algebra and Its Applications, vol. 429, no. 8-9, pp. 2089–2097, 2008.
[14] M. L. Buzano, "Generalizzazione della diseguaglianza di Cauchy-Schwarz," Rendiconti del Seminario Matematico Università e Politecnico di Torino, vol. 31, pp. 405–409, 1974 (Italian).
[15] M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
[16] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
[17] E. H. Lieb and W. Thirring, "Inequalities for the moments of the eigenvalues of the Schrödinger Hamiltonian and their relation to Sobolev inequalities," in Studies in Mathematical Physics: Essays in Honor of Valentine Bargmann, E. Lieb, B. Simon, and A. Wightman, Eds., pp. 269–303, Princeton University Press, Princeton, NJ, USA, 1976.
[18] Z. Ulukök and R. Türkmen, "On some matrix trace inequalities," Journal of Inequalities and Applications, vol. 2010, Article ID 201486, 8 pages, 2010.
[19] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 489295, 6 pages, http://dx.doi.org/10.1155/2013/489295

Research Article A Parameterized Splitting Preconditioner for Generalized Saddle Point Problems

Wei-Hua Luo1,2 and Ting-Zhu Huang1

1 School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China 2 Key Laboratory of Numerical Simulation of Sichuan Province University, Neijiang Normal University, Neijiang, Sichuan 641112, China

Correspondence should be addressed to Ting-Zhu Huang; [email protected]

Received 4 January 2013; Revised 31 March 2013; Accepted 1 April 2013

Academic Editor: P. N. Shivakumar

Copyright © 2013 W.-H. Luo and T.-Z. Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

By using the Sherman-Morrison-Woodbury formula, we introduce a preconditioner based on a parameterized splitting idea for generalized saddle point problems, which may be singular and nonsymmetric. By analyzing the eigenvalues of the preconditioned matrix, we find that when the parameter α is big enough, the preconditioned matrix has an eigenvalue at 1 with multiplicity at least n, and the remaining eigenvalues are all located in a unit circle centered at 1. In particular, when the preconditioner is used for general saddle point problems, it guarantees an eigenvalue at 1 with the same multiplicity, and the remaining eigenvalues tend to 1 as the parameter α → 0. Consequently, this can lead to good convergence when restarted GMRES is used as the Krylov subspace method. Numerical results for Stokes problems and Oseen problems are presented to illustrate the behavior of the preconditioner.

1. Introduction

In some scientific and engineering applications, such as finite element methods for solving partial differential equations [1, 2] and computational fluid dynamics [3, 4], we often consider solutions of generalized saddle point problems of the form

( A  B^T ; −B  C ) ( x ; y ) = ( f ; g ), (1)

where A ∈ R^{n×n}, B ∈ R^{m×n} (m ≤ n), and C ∈ R^{m×m} is positive semidefinite, x, f ∈ R^n, and y, g ∈ R^m. When C = 0, (1) is a general saddle point problem, which is also a research object for many authors.

It is well known that when the matrices A, B, and C are large and sparse, iterative methods are more efficient and attractive than direct methods, provided that (1) has a good preconditioner. In recent years, a lot of preconditioning techniques have arisen for solving linear systems; for example, Saad [5] and Chen [6] have comprehensively surveyed some classical preconditioning techniques, including the ILU preconditioner, triangular preconditioner, SPAI preconditioner, multilevel recursive Schur complements preconditioner, and sparse wavelet preconditioner. In particular, many preconditioning methods for saddle point problems have been presented recently, such as the dimensional splitting (DS) preconditioner [7], the relaxed dimensional factorization (RDF) preconditioner [8], the splitting preconditioner [9], and the Hermitian and skew-Hermitian splitting preconditioner [10].

Among these results, Cao et al. [9] have used the splitting idea to give a preconditioner for saddle point problems where the matrix A is symmetric positive definite and B is of full row rank. According to their preconditioner, the eigenvalues of the preconditioned matrix tend to 1 as the parameter t → ∞. Consequently, just as we have seen from the examples of [9], the preconditioner has guaranteed good convergence when some iterative methods were used.

In this paper, being motivated by [9], we use the splitting idea to present a preconditioner for the system (1), where A may be nonsymmetric and singular (when rank(B) < m). We find that, when the parameter is big enough, the preconditioned matrix has an eigenvalue at 1 with multiplicity at least n, and the remaining eigenvalues are all located in a unit circle centered at 1. Particularly, when the preconditioner is used for some general saddle point problems (namely, C = 0), we see that the multiplicity of the eigenvalue at 1 is also at least n, but the remaining eigenvalues will tend to 1 as the parameter α → 0.

The remainder of the paper is organized as follows. In Section 2, we present our preconditioner based on the splitting idea and analyze the bounds on the eigenvalues of the preconditioned matrix. In Section 3, we use some numerical examples to show the behavior of the new preconditioner. Finally, we draw some conclusions and outline our future work in Section 4.

2. A Parameterized Splitting Preconditioner

Now we consider using the splitting idea with a variable parameter to present a preconditioner for the system (1). Firstly, it is evident that when α ≠ 0, the system (1) is equivalent to

( A  B^T ; −B/α  C/α ) ( x ; y ) = ( f ; g/α ). (2)

Let

M = ( A  B^T ; −B/α  I ),  N = ( 0  0 ; 0  I − C/α ),  X = ( x ; y ),  F = ( f ; g/α ). (3)

Then the coefficient matrix of (2) can be expressed by M − N. Multiplying both sides of system (2) from the left with the matrix M^{−1}, we have

(I − M^{−1}N) X = M^{−1} F. (4)

Hence, we obtain a preconditioned linear system from (1) using the idea of splitting, and the corresponding preconditioner is

H = ( A  B^T ; −B/α  I )^{−1} ( I  0 ; 0  I/α ). (5)

Now we analyze the eigenvalues of the preconditioned system (4).

Theorem 1. The preconditioned matrix I − M^{−1}N has an eigenvalue at 1 with multiplicity at least n. The remaining eigenvalues λ satisfy

λ = (s₁ + s₂)/(s₁ + α), (6)

where s₁ = ω^T B A^{−1} B^T ω and s₂ = ω^T C ω, and ω ∈ C^m satisfies

(B A^{−1} B^T/α + C/α) ω = λ (I + B A^{−1} B^T/α) ω,  ‖ω‖ = 1. (7)

Proof. Because

M^{−1} = ( (A + B^T B/α)^{−1}   −A^{−1} B^T (I + B A^{−1} B^T/α)^{−1} ; (B/α)(A + B^T B/α)^{−1}   (I + B A^{−1} B^T/α)^{−1} ), (8)

we can easily get

I − M^{−1}N = ( I   A^{−1} B^T (I + B A^{−1} B^T/α)^{−1} (I − C/α) ; 0   (I + B A^{−1} B^T/α)^{−1} (B A^{−1} B^T/α + C/α) ), (9)

which implies that the preconditioned matrix I − M^{−1}N has an eigenvalue at 1 with multiplicity at least n. For the remaining eigenvalues, let

(I + B A^{−1} B^T/α)^{−1} (B A^{−1} B^T/α + C/α) ω = λ ω (10)

with ‖ω‖ = 1; then we have

(B A^{−1} B^T/α + C/α) ω = λ (I + B A^{−1} B^T/α) ω. (11)

By multiplying both sides of this equality from the left with ω^T, we can get

λ = (s₁ + s₂)/(s₁ + α). (12)

This completes the proof of Theorem 1.

Remark 2. From Theorem 1, we can see that when the parameter α is big enough, the moduli of the nonnil eigenvalues λ will be located in the interval (0, 1).

Remark 3. In Theorem 1, if the matrix C = 0, then for the nonnil eigenvalues we have

lim_{α→0} λ = 1. (13)

Figures 1, 2, and 3 are the eigenvalue plots of the preconditioned matrices obtained with our preconditioner. As we can see in the following numerical experiments, this good phenomenon is useful for accelerating the convergence of iterative methods in the Krylov subspace.

Additionally, for practically executing our preconditioner, we write H explicitly as

H = ( (A + B^T B/α)^{−1}   −(1/α) A^{−1} B^T (I + B A^{−1} B^T/α)^{−1} ; (B/α)(A + B^T B/α)^{−1}   (1/α)(I + B A^{−1} B^T/α)^{−1} ). (14)
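Theorem 1 can be checked directly on the smallest nontrivial instance, n = m = 1, where M and N from the splitting (3) are 2×2 and the preconditioned matrix I − M⁻¹N is block (here upper) triangular as in (9). The plain-Python sketch below uses arbitrary scalar test data; the helper names and the test values are ours:

```python
def inv2(m):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def preconditioned_matrix(a, b, c, alpha):
    # n = m = 1: A = [a], B = [b], C = [c].  Splitting (3):
    # M = [[A, B^T], [-B/alpha, I]],  N = [[0, 0], [0, I - C/alpha]]
    M = [[a, b], [-b / alpha, 1.0]]
    N = [[0.0, 0.0], [0.0, 1.0 - c / alpha]]
    MinvN = matmul(inv2(M), N)
    return [[(1.0 if i == j else 0.0) - MinvN[i][j] for j in range(2)]
            for i in range(2)]

a, b, c, alpha = 2.0, 3.0, 0.5, 4.0
T = preconditioned_matrix(a, b, c, alpha)

s1, s2 = b * b / a, c                 # s1 = B A^{-1} B^T,  s2 = C
lam = (s1 + s2) / (s1 + alpha)        # formula (6)/(12)

assert abs(T[1][0]) < 1e-12           # triangular structure, as in (9)
assert abs(T[0][0] - 1.0) < 1e-12     # eigenvalue 1 (multiplicity n = 1)
assert abs(T[1][1] - lam) < 1e-12     # remaining eigenvalue matches (12)
assert 0.0 < lam < 1.0                # Remark 2: alpha big enough
```

With c = 0 the remaining eigenvalue becomes s₁/(s₁ + α), which tends to 1 as α → 0, matching Remark 3.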

Figure 1: Spectrum of the preconditioned steady Oseen matrix with viscosity coefficient ν = 0.1, 32 × 32 grid. (a) Q2-Q1 FEM, C = 0, α = 0.0001; (b) Q1-P0 FEM, C ≠ 0, α = norm(C, fro)/20.

Figure 2: Spectrum of the preconditioned steady Oseen matrix with viscosity coefficient ν = 0.01, 32 × 32 grid. (a) Q2-Q1 FEM, C = 0, α = 0.0001; (b) Q1-P0 FEM, C ≠ 0, α = norm(C, fro)/20.

Figure 3: Spectrum of the preconditioned steady Oseen matrix with viscosity coefficient ν = 0.001, 32 × 32 grid. (a) Q2-Q1 FEM, C = 0, α = 0.0001; (b) Q1-P0 FEM, C ≠ 0, α = norm(C, fro)/20.

Table 1: Preconditioned GMRES(20) on Stokes problems with different grid sizes (uniform grids).

Grid       α       its  LU time   its time  Total time
16 × 16    0.0001  4    0.0457    0.0267    0.0724
32 × 32    0.0001  6    0.3912    0.0813    0.4725
64 × 64    0.0001  9    5.4472    0.5519    5.9991
128 × 128  0.0001  14   91.4698   5.7732    97.2430

Table 2: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.1.

Grid       α       its  LU time   its time  Total time
16 × 16    0.0001  3    0.0479    0.0244    0.0723
32 × 32    0.0001  3    0.6312    0.0696    0.7009
64 × 64    0.0001  4    13.2959   0.3811    13.6770
128 × 128  0.0001  6    130.5463  3.7727    134.3190

Table 3: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.01.

Grid       α       its  LU time   its time  Total time
16 × 16    0.0001  2    0.0442    0.0236    0.0678
32 × 32    0.0001  3    0.4160    0.0557    0.4717
64 × 64    0.0001  3    7.3645    0.2623    7.6268
128 × 128  0.0001  4    169.4009  3.1709    172.5718

Table 4: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.001.

Grid       α       its  LU time   its time  Total time
16 × 16    0.0001  2    0.0481    0.0206    0.0687
32 × 32    0.0001  3    0.4661    0.0585    0.5246
64 × 64    0.0001  3    6.6240    0.2728    6.8969
128 × 128  0.0001  4    177.3130  2.9814    180.2944

Table 5: Preconditioned GMRES(20) on Stokes problems with different grid sizes (uniform grids).

Grid       α      its  LU time   its time  Total time
16 × 16    10000  5    0.0471    0.0272    0.0743
32 × 32    10000  7    0.3914    0.0906    0.4820
64 × 64    10000  9    5.6107    0.4145    6.0252
128 × 128  10000  14   92.7154   4.8908    97.6062

Table 6: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.1.

Grid       α      its  LU time   its time  Total time
16 × 16    10000  4    0.0458    0.0223    0.0681
32 × 32    10000  4    0.5748    0.0670    0.6418
64 × 64    10000  5    12.2642   0.4179    12.6821
128 × 128  10000  7    128.2758  1.6275    129.9033

Table 7: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.01.

Grid       α      its  LU time   its time  Total time
16 × 16    10000  3    0.0458    0.0224    0.0682
32 × 32    10000  3    0.4309    0.0451    0.4760
64 × 64    10000  4    7.6537    0.1712    7.8249
128 × 128  10000  5    175.1587  3.4554    178.6141

Table 8: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.001.

Grid       α      its  LU time   its time  Total time
16 × 16    10000  3    0.0507    0.0212    0.0719
32 × 32    10000  3    0.4735    0.0449    0.5184
64 × 64    10000  4    6.6482    0.1645    6.8127
128 × 128  10000  4    172.0516  2.1216    174.1732

Table 9: Preconditioned GMRES(20) on Stokes problems with different grid sizes (uniform grids).

Grid       α             its  LU time  its time  Total time
16 × 16    norm(C, fro)  21   0.0240   0.0473    0.0713
32 × 32    norm(C, fro)  20   0.0890   0.1497    0.2387
64 × 64    norm(C, fro)  20   0.8387   0.6191    1.4578
128 × 128  norm(C, fro)  20   6.8917   3.0866    9.9783

Table 10: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.1.

Grid       α                its  LU time  its time  Total time
16 × 16    norm(C, fro)/20  10   0.0250   0.0302    0.0552
32 × 32    norm(C, fro)/20  10   0.0816   0.0832    0.1648
64 × 64    norm(C, fro)/20  12   0.8466   0.3648    1.2114
128 × 128  norm(C, fro)/20  14   6.9019   2.0398    8.9417

Table 11: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.01.

Grid       α                its  LU time  its time  Total time
16 × 16    norm(C, fro)/20  7    0.0238   0.0286    0.0524
32 × 32    norm(C, fro)/20  7    0.0850   0.0552    0.1402
64 × 64    norm(C, fro)/20  11   0.8400   0.3177    1.1577
128 × 128  norm(C, fro)/20  16   6.9537   2.2306    9.1844

Table 12: Preconditioned GMRES(20) on steady Oseen problems with different grid sizes (uniform grids), viscosity ν = 0.001.

Grid       α                its  LU time  its time  Total time
16 × 16    norm(C, fro)/20  5    0.0245   0.0250    0.0495
32 × 32    norm(C, fro)/20  5    0.0905   0.0587    0.1492
64 × 64    norm(C, fro)/20  8    1.2916   0.3200    1.6116
128 × 128  norm(C, fro)/20  15   10.4399  2.7468    13.1867

Table 13: Size and number of non-nil elements of the coefficient matrix generated by using Q2-Q1 FEM in steady Stokes problems.

Grid       dim(A)         nnz(A)   dim(B)        nnz(B)   nnz(A + (1/α)B^T B)
16 × 16    578 × 578      6178     81 × 578      2318     36904
32 × 32    2178 × 2178    28418    289 × 2178    10460    193220
64 × 64    8450 × 8450    122206   1089 × 8450   44314    875966
128 × 128  33282 × 33282  506376   4225 × 33282  184316   3741110

To apply (14), we should efficiently deal with the computation of (I + B A^{−1} B^T/α)^{−1}. This can be tackled by the well-known Sherman-Morrison-Woodbury formula:

(A₁ + X₁ A₂ X₂^T)^{−1} = A₁^{−1} − A₁^{−1} X₁ (A₂^{−1} + X₂^T A₁^{−1} X₁)^{−1} X₂^T A₁^{−1}, (15)

where A₁ ∈ R^{n×n} and A₂ ∈ R^{r₁×r₁} are invertible matrices, X₁ ∈ R^{n×r₁} and X₂ ∈ R^{n×r₁} are any matrices, and n, r₁ are any positive integers. From (15) we immediately get

(I + B A^{−1} B^T/α)^{−1} = I − B (αA + B^T B)^{−1} B^T. (16)

In the following numerical examples we will always use (16) to compute (I + B A^{−1} B^T/α)^{−1} in (14).

3. Numerical Examples

In this section, we give numerical experiments to illustrate the behavior of our preconditioner. The numerical experiments are done by using MATLAB 7.1. The linear systems are obtained by using finite element methods in the Stokes problems and steady Oseen problems, and they are, respectively, the cases of

(1) C = 0, which is caused by using Q2-Q1 FEM;
(2) C ≠ 0, which is caused by using Q1-P0 FEM.

Furthermore, we compare our preconditioner with that of [9] in the case of general saddle point problems (namely, C = 0). For the general saddle point problem, [9] has presented the preconditioner

Ĥ = ( A + t B^T B   0 ; −2B   I/t )^{−1} (17)

with t as a parameter and has proved that, when A is symmetric positive definite, the preconditioned matrix has an eigenvalue 1 with multiplicity n, and the remaining eigenvalues satisfy

λ = t σ_i² / (1 + t σ_i²), (18)

lim_{t→∞} λ = 1, (19)

where σ_i, i = 1, 2, ..., m, are the m positive singular values of the matrix B A^{−1/2}.

All these systems can be generated by using the IFISS software package [11] (this is a free package that can be downloaded from the site http://www.maths.manchester.ac.uk/∼djs/ifiss/). We use restarted GMRES(20) as the Krylov subspace method, and we always take a zero initial guess. The stopping criterion is

‖r_k‖₂ / ‖r₀‖₂ ≤ 10^{−6}, (20)

where r_k is the residual vector at the kth iteration.

In the whole course of computation, we always replace (I + B A^{−1} B^T/α)^{−1} in (14) with (16) and use the LU factorization of A + B^T B/α to tackle (A + B^T B/α)^{−1} v, where v is a corresponding vector in the iteration. Concretely, let A + B^T B/α = LU; then we complete the matrix-vector product (A + B^T B/α)^{−1} v by U\L\v in MATLAB terms.

In the following tables, the denotation norm(C, fro) means the Frobenius norm of the matrix C. The total time is the sum of the LU time and the iterative time, where the LU time is the time to compute the LU factorization of A + B^T B/α.

Case 1 (for our preconditioner). C = 0 (using Q2-Q1 FEM in Stokes problems and steady Oseen problems with different viscosity coefficients). The results are in Tables 1, 2, 3, and 4.

Case 1' (for the preconditioner of [9]). C = 0 (using Q2-Q1 FEM in Stokes problems and steady Oseen problems with different viscosity coefficients). The results are in Tables 5, 6, 7, and 8.

Case 2. C ≠ 0 (using Q1-P0 FEM in Stokes problems and steady Oseen problems with different viscosity coefficients). The results are in Tables 9, 10, 11, and 12.

From Tables 1, 2, 3, 4, 5, 6, 7, and 8 we can see that these results are in agreement with the theoretical analyses (13) and (19), respectively. Additionally, comparing with the results in Tables 9, 10, 11, and 12, we find that, although the iterations used in Case 1 (either for the preconditioner of [9] or for our preconditioner) are fewer than those in Case 2, the time spent by Case 1 is much more than that of Case 2. This is because the density of the coefficient matrix generated by Q2-Q1 FEM is much larger than that generated by Q1-P0 FEM. This can be partly illustrated by Tables 13 and 14, and the others can be illustrated similarly.
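The identity (16) is the instance of (15) with A₁ = I, X₁ = X₂ = B, and A₂ = (αA)⁻¹, and it is easy to sanity-check numerically. The plain-Python sketch below uses m = 1 and n = 2 with arbitrary test data and a diagonal A, so that B A⁻¹ Bᵀ reduces to a scalar and only one explicit 2×2 inverse is needed:

```python
def inv2(m):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# A (2x2 diagonal, nonsingular), B (1x2), so B A^{-1} B^T is a scalar
A = [[2.0, 0.0], [0.0, 5.0]]
B = [3.0, 1.0]
alpha = 0.7

s = B[0] ** 2 / A[0][0] + B[1] ** 2 / A[1][1]   # B A^{-1} B^T
lhs = 1.0 / (1.0 + s / alpha)                   # (I + B A^{-1} B^T/alpha)^{-1}

# K = alpha*A + B^T B, then the right-hand side of (16):
# I - B (alpha*A + B^T B)^{-1} B^T
K = [[alpha * A[0][0] + B[0] * B[0], B[0] * B[1]],
     [B[1] * B[0], alpha * A[1][1] + B[1] * B[1]]]
Kinv = inv2(K)
rhs = 1.0 - sum(B[i] * Kinv[i][j] * B[j] for i in range(2) for j in range(2))

assert abs(lhs - rhs) < 1e-12
```

The same check goes through for any nonsingular A and any α ≠ 0; the practical gain of (16) is that only the n×n matrix αA + BᵀB has to be factorized, which is exactly what the LU-based implementation above exploits.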

Table 14: Size and number of non-nil elements of the coefficient matrix generated by using Q1-P0 FEM in steady Stokes problems.

Grid       dim(A)         nnz(A)   dim(B)         nnz(B)   nnz(A + (1/α)B^T B)
16 × 16    578 × 578      3826     256 × 578      1800     7076
32 × 32    2178 × 2178    16818    1024 × 2178    7688     31874
64 × 64    8450 × 8450    70450    4096 × 8450    31752    136582
128 × 128  33282 × 33282  288306   16384 × 33282  129032   567192

4. Conclusions

In this paper, we have introduced a splitting preconditioner for solving generalized saddle point systems. Theoretical analysis showed that the moduli of the eigenvalues of the preconditioned matrix are located in the interval (0, 1) when the parameter is big enough. Particularly, when the submatrix C = 0, the eigenvalues tend to 1 as the parameter α → 0. These performances are tested by some examples, and the results are in agreement with the theoretical analysis.

There are still some future works to be done: how to properly choose a parameter α so that the preconditioned matrix has better properties? How to further precondition the submatrix (I + B A^{−1} B^T/α)^{−1} (B A^{−1} B^T/α + C/α) to improve our preconditioner?

Acknowledgments

The authors would express their great thankfulness to the referees and the editor Professor P. N. Shivakumar for their helpful suggestions for revising this paper. The authors would like to thank H. C. Elman, A. Ramage, and D. J. Silvester for their free IFISS software package. This research is supported by the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), the Sichuan Province Sci. & Tech. Research Project (2012GZX0080), and the Fundamental Research Funds for the Central Universities.

References

[1] F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, vol. 15 of Springer Series in Computational Mathematics, Springer, New York, NY, USA, 1991.
[2] Z. Chen, Q. Du, and J. Zou, "Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients," SIAM Journal on Numerical Analysis, vol. 37, no. 5, pp. 1542–1570, 2000.
[3] H. C. Elman, D. J. Silvester, and A. J. Wathen, Finite Elements and Fast Iterative Solvers: With Applications in Incompressible Fluid Dynamics, Numerical Mathematics and Scientific Computation, Oxford University Press, New York, NY, USA, 2005.
[4] C. Cuvelier, A. Segal, and A. A. van Steenhoven, Finite Element Methods and Navier-Stokes Equations, vol. 22 of Mathematics and its Applications, D. Reidel, Dordrecht, The Netherlands, 1986.
[5] Y. Saad, Iterative Methods for Sparse Linear Systems, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2nd edition, 2003.
[6] K. Chen, Matrix Preconditioning Techniques and Applications, vol. 19 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, UK, 2005.
[7] M. Benzi and X.-P. Guo, "A dimensional split preconditioner for Stokes and linearized Navier-Stokes equations," Applied Numerical Mathematics, vol. 61, no. 1, pp. 66–76, 2011.
[8] M. Benzi, M. Ng, Q. Niu, and Z. Wang, "A relaxed dimensional factorization preconditioner for the incompressible Navier-Stokes equations," Journal of Computational Physics, vol. 230, no. 16, pp. 6185–6202, 2011.
[9] Y. Cao, M.-Q. Jiang, and Y.-L. Zheng, "A splitting preconditioner for saddle point problems," Numerical Linear Algebra with Applications, vol. 18, no. 5, pp. 875–895, 2011.
[10] V. Simoncini and M. Benzi, "Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems," SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 2, pp. 377–389, 2004.
[11] H. C. Elman, A. Ramage, and D. J. Silvester, "Algorithm 886: IFISS, a Matlab toolbox for modelling incompressible flow," ACM Transactions on Mathematical Software, vol. 33, no. 2, article 14, 2007.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 593549, 11 pages, http://dx.doi.org/10.1155/2013/593549

Research Article Solving Optimization Problems on Hermitian Matrix Functions with Applications

Xiang Zhang and Shu-Wen Xiang

Department of Computer Science and Information, Guizhou University, Guiyang 550025, China

Correspondence should be addressed to Xiang Zhang; [email protected]

Received 17 October 2012; Accepted 20 March 2013

Academic Editor: K. Sivakumar

Copyright © 2013 X. Zhang and S.-W. Xiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider the extremal inertias and ranks of the matrix expression f(X, Y) = A₃ − B₃X − (B₃X)* − C₃YD₃ − (C₃YD₃)*, where A₃ = A₃*, B₃, C₃, and D₃ are known matrices and Y and X are the solutions to the matrix equations A₁Y = C₁, YB₁ = D₁, and A₂X = C₂, respectively. As applications, we present necessary and sufficient conditions for the previous matrix function f(X, Y) to be positive (negative) definite, nonnegative (nonpositive) definite, or nonsingular. We also characterize the relations between the Hermitian parts of the solutions of the above-mentioned matrix equations. Furthermore, we establish necessary and sufficient conditions for the solvability of the system of matrix equations A₁Y = C₁, YB₁ = D₁, A₂X = C₂, and B₃X + (B₃X)* + C₃YD₃ + (C₃YD₃)* = A₃, and give an expression of the general solution to the above-mentioned system when it is solvable.

1. Introduction

Throughout, we denote the field of complex numbers by C, the set of all m×n matrices over C by C^{m×n}, and the set of all m×m Hermitian matrices by C_h^{m×m}. The symbols A* and R(A) stand for the conjugate transpose and the column space of a complex matrix A, respectively. I_n denotes the n×n identity matrix. The Moore-Penrose inverse [1] A^† of A is the unique solution X to the four matrix equations

(i) AXA = A,  (ii) XAX = X,  (iii) (AX)* = AX,  (iv) (XA)* = XA. (1)

Moreover, L_A and R_A stand for the projectors L_A = I − A^†A and R_A = I − AA^† induced by A. It is well known that the eigenvalues of a Hermitian matrix A ∈ C^{n×n} are real, and the inertia of A is defined to be the triplet

In(A) = {i₊(A), i₋(A), i₀(A)}, (2)

where i₊(A), i₋(A), and i₀(A) stand for the numbers of positive, negative, and zero eigenvalues of A, respectively. The symbols i₊(A) and i₋(A) are called the positive index and the negative index of inertia, respectively. For two Hermitian matrices A and B of the same sizes, we say A ≥ B (A ≤ B) in the Löwner partial ordering if A − B is positive (negative) semidefinite. The Hermitian part of X is defined as H(X) = X + X*. We will say that X is Re-nnd (Re-nonnegative semidefinite) if H(X) ≥ 0, X is Re-pd (Re-positive definite) if H(X) > 0, and X is Re-ns if H(X) is nonsingular.

It is well known that investigation of the solvability conditions and the general solution to linear matrix equations is very active (e.g., [2–9]). In 1999, Braden [10] gave the general solution to

BX + (BX)* = A. (3)

In 2007, Djordjević [11] considered the explicit solution to (3) for linear bounded operators on Hilbert spaces. Moreover, Cao [12] investigated the general explicit solution to

BXC + (BXC)* = A. (4)

Xu et al. [13] obtained the general expression of the solution of operator equation (4). In 2012, Wang and He [14] studied some necessary and sufficient conditions for the consistence of the matrix equation

A₁X + (A₁X)* + B₁YC₁ + (B₁YC₁)* = E₁ (5)

and presented an expression of the general solution to (5). Note that (5) is a special case of the following system:

A₁Y = C₁,  YB₁ = D₁,  A₂X = C₂,
B₃X + (B₃X)* + C₃YD₃ + (C₃YD₃)* = A₃. (6)

To our knowledge, there has been little information about (6). One goal of this paper is to give some necessary and sufficient conditions for the solvability of the system of matrix equations (6) and present an expression of the general solution to system (6) when it is solvable. In order to find necessary and sufficient conditions for the solvability of the system of matrix equations (6), we need to consider the extremal ranks and inertias of (10) subject to (13) and (11).

There have been many papers discussing the extremal ranks and inertias of the following Hermitian expressions:

p(X) = A₃ − B₃X − (B₃X)*, (7)
g(Y) = A − BYC − (BYC)*, (8)
h(X, Y) = A₁ − B₁XB₁* − C₁YC₁*, (9)
f(X, Y) = A₃ − B₃X − (B₃X)* − C₃YD₃ − (C₃YD₃)*. (10)

Tian has contributed much in this field. One of his works [15] considered the extremal ranks and inertias of (7). He and Wang [16] derived the extremal ranks and inertias of (7) subject to A₁X = C₁ and A₂XB₂ = C₂. Liu and Tian [17] studied the extremal ranks and inertias of (8). Chu et al. [18] and Liu and Tian [19] derived the extremal ranks and inertias of (9). Zhang et al. [20] presented the extremal ranks and inertias of (9), where X and Y are Hermitian solutions of

A₂X = C₂, (11)

YB₂ = D₂, (12)

respectively. He and Wang [16] derived the extremal ranks and inertias of (10). We consider the extremal ranks and inertias of (10) subject to (11) and

A₁Y = C₁,  YB₁ = D₁, (13)

which is not only a generalization of the above matrix functions, but can also be used to investigate the solvability conditions for the existence of the general solution to the system (6). Moreover, it can be applied to characterize the relations between the Hermitian parts of the solutions of (11) and (13).

The remainder of this paper is organized as follows. In Section 2, we consider the extremal ranks and inertias of (10) subject to (11) and (13). In Section 3, we characterize the relations between the Hermitian parts of the solutions to (11) and (13). In Section 4, we establish the solvability conditions for the existence of a solution to (6) and obtain an expression of the general solution to (6).

2. Extremal Ranks and Inertias of Hermitian Matrix Function (10) with Some Restrictions

In this section, we consider formulas for the extremal ranks and inertias of (10) subject to (11) and (13). We begin with the following lemmas.

Lemma 1 (see [21]). (a) Let A₁, C₁, B₁, and D₁ be given. Then the following statements are equivalent:

(1) system (13) is consistent;
(2) R_{A₁}C₁ = 0,  D₁L_{B₁} = 0,  A₁D₁ = C₁B₁; (14)
(3) r[A₁  C₁] = r(A₁),  r[D₁ ; B₁] = r(B₁),  A₁D₁ = C₁B₁. (15)

In this case, the general solution can be written as

Y = A₁^†C₁ + L_{A₁}D₁B₁^† + L_{A₁}VR_{B₁}, (16)

where V is arbitrary.

(b) Let A₂ and C₂ be given. Then the following statements are equivalent:

(1) equation (11) is consistent;
(2) R_{A₂}C₂ = 0; (17)
(3) r[A₂  C₂] = r(A₂). (18)

In this case, the general solution can be written as

X = A₂^†C₂ + L_{A₂}W, (19)

where W is arbitrary.

Lemma 2 ([22, Lemma 1.5, Theorem 2.3]). Let A ∈ C_h^{m×m}, B ∈ C^{m×n}, and D ∈ C_h^{n×n}, and denote

M = ( A  B ; B*  0 ),  N = ( 0  Q ; Q*  0 ),  L = ( A  B ; B*  D ),  G = ( P  M L_N ; L_N M*  0 ). (20)

Then one has the following:

(a) the following equalities hold:

i_±(M) = r(B) + i_±(R_B A R_B), (21)

i_±(N) = r(Q); (22)

(b) if R(B) ⊆ R(A), then i_±(L) = i_±(A) + i_±(D − B*A^†B). Thus i_±(L) = i_±(A) if and only if R(B) ⊆ R(A) and i_±(D − B*A^†B) = 0;

(c)

i_±(G) = i_± ( P  M  0 ; M*  0  N* ; 0  N  0 ) − r(N). (23)

Lemma 3 (see [23]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, and C ∈ C^{l×n}. Then they satisfy the following rank equalities:

(a) r[A  B] = r(A) + r(E_A B) = r(B) + r(E_B A);
(b) r[A ; C] = r(A) + r(C F_A) = r(C) + r(A F_C);
(c) r[A  B ; C  0] = r(B) + r(C) + r(E_B A F_C);
(d) r[B  A F_C] = r[B  A ; 0  C] − r(C);
(e) r[C ; E_B A] = r[C  0 ; A  B] − r(B);
(f) r[A  B F_D ; E_E C  0] = r[A  B  0 ; C  0  E ; 0  D  0] − r(D) − r(E).

Lemma 4 (see [15]). Let A ∈ C_h^{m×m}, B ∈ C^{m×n}, C ∈ C_h^{n×n}, Q ∈ C^{m×n}, and P ∈ C^{p×n} be given, and let T ∈ C^{m×m} be nonsingular. Then one has the following:

(1) i_±(TAT*) = i_±(A);
(2) i_± ( A  0 ; 0  C ) = i_±(A) + i_±(C);
(3) i_± ( 0  Q ; Q*  0 ) = r(Q);
(4) i_± ( A  B L_P ; L_P B*  0 ) + r(P) = i_± ( A  B  0 ; B*  0  P* ; 0  P  0 ).

In addition, for a matrix set H, any X ∈ H satisfies X ≥ 0 (X ≤ 0) if and only if max_{X∈H} i₋(X) = 0 (max_{X∈H} i₊(X) = 0).

Lemma 6 (see [16]). Let p(X, Y) = A − BX − (BX)* − CYD − (CYD)*, where A, B, C, and D are given with appropriate sizes, and denote

M₁ = ( A  B  C ; B*  0  0 ; C*  0  0 ),
M₂ = ( A  B  D* ; B*  0  0 ; D  0  0 ),
M₃ = ( A  B  C  D* ; B*  0  0  0 ),
M₄ = ( A  B  C  D* ; B*  0  0  0 ; C*  0  0  0 ),
M₅ = ( A  B  C  D* ; B*  0  0  0 ; D  0  0  0 ). (24)

Then one has the following:

(1) the maximal rank of p(X, Y) is

max_{X ∈ C^{n×m}, Y} r[p(X, Y)] = min{m, r(M₁), r(M₂), r(M₃)}; (25)

(2) the minimal rank of p(X, Y) is

min_{X ∈ C^{n×m}, Y} r[p(X, Y)] = 2r(M₃) − 2r(B) + max{u₊ + u₋, v₊ + v₋, u₊ + v₋, u₋ + v₊}, (26)

Lemma 5 (see [22, Lemma 1.4]). Let 𝑆 be a set consisting of (3) the maximal inertia of 𝑝(𝑋, 𝑌) is 𝑚×𝑚 (square) matrices over C ,andlet𝐻 be a set consisting of 𝑚×𝑚 C max 𝑖± [𝑝 (𝑋, 𝑌)]=min {𝑖± (𝑀1),𝑖± (𝑀2)} , (square) matrices over ℎ . Then Then one has the following 𝑋∈C𝑛×𝑚,𝑌 (27)

(a) 𝑆 hasanonsingularmatrixifandonlyifmax𝑋∈𝑆𝑟(𝑋) = 𝑚; (4) the minimal inertias of 𝑝(𝑋, 𝑌) is

(b) any 𝑋∈𝑆is nonsingular if and only if min𝑋∈𝑆𝑟(𝑋) = min 𝑖± [𝑝 (𝑋, 𝑌)]=𝑟(𝑀3)−𝑟(𝐵) 𝑚; 𝑋∈C𝑛×𝑚,𝑌

(c) {0} ∈ 𝑆 ifandonlyifmin𝑋∈𝑆𝑟(𝑋); =0 + max {𝑖± (𝑀1)−𝑟(𝑀4), (28) (d) 𝑆={0}if and only if max𝑋∈𝑆𝑟(𝑋); =0 𝑖 (𝑀 )−𝑟(𝑀)} , (e) 𝐻 has a matrix 𝑋>0(𝑋<0)if and only if ± 2 5 max𝑋∈𝐻𝑖+(𝑋) = 𝑚(max𝑋∈𝐻𝑖−(𝑋) = 𝑚); where (f) any 𝑋∈𝐻satisfies 𝑋>0(𝑋<0)if and only if min𝑋∈𝐻𝑖+(𝑋) = 𝑚 (min𝑋∈𝐻𝑖−(𝑋) = 𝑚); 𝑢± =𝑖± (𝑀1) −𝑟(𝑀4) , V± =𝑖± (𝑀2) −𝑟(𝑀5) . (29) (g) 𝐻 has a matrix 𝑋≥0(𝑋≤0)if and only if min𝑋∈𝐻𝑖−(𝑋) = 0min ( 𝑋∈𝐻𝑖+(𝑋) = 0); Now we present the main theorem of this section. 4 Journal of Applied Mathematics
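Several of the lemmas above can be checked numerically. The following NumPy sketch (not part of the original paper; the sizes and random data are arbitrary choices) verifies the rank equality of Lemma 3(a), the inertia formula (21) of Lemma 2(a), and the solution formula (19) of Lemma 1(b), computing i_±(·) from the signs of eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)

def inertia(H, tol=1e-9):
    # i+(H), i-(H), i0(H): counts of positive/negative/zero eigenvalues of Hermitian H
    w = np.linalg.eigvalsh(H)
    return int((w > tol).sum()), int((w < -tol).sum()), int((abs(w) <= tol).sum())

# --- Lemma 3(a): r[A B] = r(A) + r(E_A B), with E_A = I - A A^dagger ---
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 4))   # 6x4, rank 4
B = rng.standard_normal((6, 3))
EA = np.eye(6) - A @ np.linalg.pinv(A)
rank_ok = (np.linalg.matrix_rank(np.hstack([A, B]))
           == np.linalg.matrix_rank(A) + np.linalg.matrix_rank(EA @ B))

# --- Lemma 2(a): i_pm([A B; B* 0]) = r(B) + i_pm(R_B A R_B), A Hermitian ---
m, n = 5, 3
Ah = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
Ah = (Ah + Ah.conj().T) / 2
Bc = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
M = np.block([[Ah, Bc], [Bc.conj().T, np.zeros((n, n))]])
RB = np.eye(m) - Bc @ np.linalg.pinv(Bc)                  # R_B = I - B B^dagger
ipM, imM, _ = inertia(M)
ipS, imS, _ = inertia(RB @ Ah @ RB)
rB = np.linalg.matrix_rank(Bc)
inertia_ok = (ipM == rB + ipS) and (imM == rB + imS)

# --- Lemma 1(b): every X = A2^dagger C2 + L_{A2} W solves A2 X = C2 when consistent ---
A2 = rng.standard_normal((2, 1)) @ rng.standard_normal((1, 5))  # rank-deficient 2x5
C2 = A2 @ rng.standard_normal((5, 4))                            # consistent RHS
A2p = np.linalg.pinv(A2)
LA2 = np.eye(5) - A2p @ A2                                       # L_{A2} = I - A2^dagger A2
X = A2p @ C2 + LA2 @ rng.standard_normal((5, 4))
solution_ok = np.allclose(A2 @ X, C2)

print(rank_ok, inertia_ok, solution_ok)
```

The three identities are exact matrix theorems, so the checks pass up to floating-point tolerance for any random draw.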

Now we present the main theorem of this section.

Theorem 7. Let A1 ∈ C^{m×n}, C1 ∈ C^{m×k}, B1 ∈ C^{k×l}, D1 ∈ C^{n×l}, A2 ∈ C^{t×q}, C2 ∈ C^{t×p}, A3 ∈ C_h^{p×p}, B3 ∈ C^{p×q}, C3 ∈ C^{p×n}, and D3 ∈ C^{k×p} be given, and suppose that the systems of matrix equations (13) and (11) are each consistent. Denote the set of all solutions to (13) by S and to (11) by G. Put

E1 = [A3        C3    D3^*C1^*  B3   C2^*;
      C3^*      0     A1^*      0    0;
      C1D3      A1    0         0    0;
      B3^*      0     0         0    A2^*;
      C2        0     0         A2   0],

E2 = [A3        D3^*  C3D1  B3   C2^*;
      D3        0     B1    0    0;
      D1^*C3^*  B1^*  0     0    0;
      B3^*      0     0     0    A2^*;
      C2        0     0     A2   0],

E3 = [A3        B3   C3   D3^*  C2^*;
      B3^*      0    0    0     A2^*;
      D1^*C3^*  0    0    B1^*  0;
      C1D3      0    A1   0     0;
      C2        A2   0    0     0],

E4 = [A3        B3   C3   D3^*  C2^*  D3^*C1^*;
      B3^*      0    0    0     A2^*  0;
      C3^*      0    0    0     0     A1^*;
      0         0    0    B1^*  0     0;
      C1D3      0    A1   0     0     0;
      C2        A2   0    0     0     0],

E5 = [A3        B3   C3   D3^*  C2^*  C3D1;
      B3^*      0    0    0     A2^*  0;
      D3        0    0    0     0     B1;
      D1^*C3^*  0    0    A1^*  0     0;
      0         0    A1   0     0     0;
      C2        A2   0    0     0     0].    (30)

Then one has the following:

(a) the maximal rank of (10) subject to (13) and (11) is
max_{X∈G, Y∈S} r[f(X, Y)] = min{p, r(E1) − 2r(A1) − 2r(A2), r(E2) − 2r(B1) − 2r(A2), r(E3) − 2r(A2) − r(A1) − r(B1)};    (31)

(b) the minimal rank of (10) subject to (13) and (11) is
min_{X∈G, Y∈S} r[f(X, Y)] = 2r(E3) − 2r[B3; A2] + max{r(E1) − 2r(E4), r(E2) − 2r(E5), i_+(E1) + i_−(E2) − r(E4) − r(E5), i_−(E1) + i_+(E2) − r(E4) − r(E5)};    (32)

(c) the maximal inertia of (10) subject to (13) and (11) is
max_{X∈G, Y∈S} i_±[f(X, Y)] = min{i_±(E1) − r(A1) − r(A2), i_±(E2) − r(B1) − r(A2)};    (33)

(d) the minimal inertia of (10) subject to (13) and (11) is
min_{X∈G, Y∈S} i_±[f(X, Y)] = r(E3) − r[B3; A2] + max{i_±(E1) − r(E4), i_±(E2) − r(E5)}.    (34)

Proof. By Lemma 1, the general solutions to (11) and (13) can be written as

X = A2†C2 + L_{A2}W,
Y = A1†C1 + L_{A1}D1B1† + L_{A1}ZR_{B1},    (35)

where W and Z are arbitrary matrices with appropriate sizes. Put

Q = B3L_{A2},  T = C3L_{A1},  J = R_{B1}D3,
P = A3 − B3A2†C2 − (B3A2†C2)^* − C3(A1†C1 + L_{A1}D1B1†)D3 − (C3(A1†C1 + L_{A1}D1B1†)D3)^*.    (36)

Substituting (35) and (36) into (10) yields

f(X, Y) = P − QW − (QW)^* − TZJ − (TZJ)^*.    (37)

Clearly P is Hermitian. It follows from Lemma 6 that

max_{X∈G, Y∈S} r[f(X, Y)] = max_{W, Z} r(P − QW − (QW)^* − TZJ − (TZJ)^*) = min{p, r(N1), r(N2), r(N3)},    (38)

min_{X∈G, Y∈S} r[f(X, Y)] = min_{W, Z} r(P − QW − (QW)^* − TZJ − (TZJ)^*) = 2r(N3) − 2r(Q) + max{s_+ + s_−, t_+ + t_−, s_+ + t_−, s_− + t_+},    (39)

max_{X∈G, Y∈S} i_±[f(X, Y)] = max_{W, Z} i_±(P − QW − (QW)^* − TZJ − (TZJ)^*) = min{i_±(N1), i_±(N2)},    (40)

min_{X∈G, Y∈S} i_±[f(X, Y)] = min_{W, Z} i_±(P − QW − (QW)^* − TZJ − (TZJ)^*) = r(N3) − r(Q) + max{s_±, t_±},    (41)

where

N1 = [P  Q  T; Q^*  0  0; T^*  0  0],
N2 = [P  Q  J^*; Q^*  0  0; J  0  0],
N3 = [P  Q  T  J^*; Q^*  0  0  0],
N4 = [P  Q  T  J^*; Q^*  0  0  0; T^*  0  0  0],
N5 = [P  Q  T  J^*; Q^*  0  0  0; J  0  0  0],
s_± = i_±(N1) − r(N4),  t_± = i_±(N2) − r(N5).    (42)

Now we simplify the ranks and inertias of the block matrices in (38)–(41). By Lemma 4, block Gaussian elimination, and the fact that

L_S = (I − S†S)^* = I − S^*(S†)^* = R_{S^*},    (43)

we obtain, with E1–E5 as defined in Theorem 7,

r(N1) = r(E1) − 2r(A1) − 2r(A2),
r(N2) = r(E2) − 2r(B1) − 2r(A2),    (44)

where the computation of r(N2) uses C1B1 = A1D1, and

r(N3) = r(E3) − r(B1) − 2r(A2) − r(A1),
r(N4) = r(E4) − r(B1) − 2r(A2) − 2r(A1),
r(N5) = r(E5) − 2r(B1) − 2r(A2) − r(A1).    (45)

By Lemma 2, we can also get

i_±(N1) = i_±(E1) − r(A1) − r(A2),    (46)
i_±(N2) = i_±(E2) − r(B1) − r(A2).    (47)

Substituting (44)–(47) into (38)–(41) yields (31)–(34), respectively.

Corollary 8. Let A1, C1, B1, D1, A2, C2, A3, B3, C3, D3, and E_i (i = 1, 2, ..., 5) be as in Theorem 7, and suppose that the systems of matrix equations (13) and (11) are each consistent. Denote the set of all solutions to (13) by S and to (11) by G. Then one has the following:

(a) there exist X ∈ G and Y ∈ S such that A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* > 0 if and only if
i_+(E1) − r(A1) − r(A2) ≥ p,  i_+(E2) − r(B1) − r(A2) ≥ p;    (48)

(b) there exist X ∈ G and Y ∈ S such that A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* < 0 if and only if
i_−(E1) − r(A1) − r(A2) ≥ p,  i_−(E2) − r(B1) − r(A2) ≥ p;    (49)

(c) there exist X ∈ G and Y ∈ S such that A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* ≥ 0 if and only if
r(E3) − r[B3; A2] + i_−(E1) − r(E4) ≤ 0,  r(E3) − r[B3; A2] + i_−(E2) − r(E5) ≤ 0;    (50)

(d) there exist X ∈ G and Y ∈ S such that A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* ≤ 0 if and only if
r(E3) − r[B3; A2] + i_+(E1) − r(E4) ≤ 0,  r(E3) − r[B3; A2] + i_+(E2) − r(E5) ≤ 0;    (51)

(e) A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* > 0 for all X ∈ G and Y ∈ S if and only if
r(E3) − r[B3; A2] + i_+(E1) − r(E4) = p  or  r(E3) − r[B3; A2] + i_+(E2) − r(E5) = p;    (52)

(f) A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* < 0 for all X ∈ G and Y ∈ S if and only if
r(E3) − r[B3; A2] + i_−(E1) − r(E4) = p  or  r(E3) − r[B3; A2] + i_−(E2) − r(E5) = p;    (53)

(g) A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* ≥ 0 for all X ∈ G and Y ∈ S if and only if
i_−(E1) − r(A1) − r(A2) ≤ 0  or  i_−(E2) − r(B1) − r(A2) ≤ 0;    (54)

(h) A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* ≤ 0 for all X ∈ G and Y ∈ S if and only if
i_+(E1) − r(A1) − r(A2) ≤ 0  or  i_+(E2) − r(B1) − r(A2) ≤ 0;    (55)

(i) there exist X ∈ G and Y ∈ S such that A3 − B3X − (B3X)^* − C3YD3 − (C3YD3)^* is nonsingular if and only if
r(E1) − 2r(A1) − 2r(A2) ≥ p,  r(E2) − 2r(B1) − 2r(A2) ≥ p,  r(E3) − 2r(A2) − r(A1) − r(B1) ≥ p.    (56)

3. Relations between the Hermitian Parts of the Solutions to (13) and (11)

Now we consider the extremal ranks and inertias of the difference between the Hermitian parts of the solutions to (13) and (11).

Theorem 9. Let A1 ∈ C^{m×p}, C1 ∈ C^{m×p}, B1 ∈ C^{p×l}, D1 ∈ C^{p×l}, A2 ∈ C^{t×p}, and C2 ∈ C^{t×p} be given. Suppose that the systems of matrix equations (13) and (11) are each consistent.

Denote the set of all solutions to (13) by S and to (11) by G. Put

H1 = [0     I     C1^*  −I    C2^*;
      I     0     A1^*  0     0;
      C1    A1    0     0     0;
      −I    0     0     0     A2^*;
      C2    0     0     A2    0],

H2 = [0     I     D1    −I    C2^*;
      I     0     B1    0     0;
      D1^*  B1^*  0     0     0;
      −I    0     0     0     A2^*;
      C2    0     0     A2    0],

H3 = [0     −I    I     I     C2^*;
      −I    0     0     0     A2^*;
      D1^*  0     0     B1^*  0;
      C1    0     A1    0     0;
      C2    A2    0     0     0],

H4 = [0     −I    I     I     C2^*  C1^*;
      −I    0     0     0     A2^*  0;
      I     0     0     0     0     A1^*;
      0     0     0     B1^*  0     0;
      C1    0     A1    0     0     0;
      C2    A2    0     0     0     0],

H5 = [0     −I    I     I     C2^*  D1;
      −I    0     0     0     A2^*  0;
      I     0     0     0     0     B1;
      D1^*  0     0     A1^*  0     0;
      0     0     A1    0     0     0;
      C2    A2    0     0     0     0].    (57)

Then one has the following:

max_{X∈G, Y∈S} r[(X + X^*) − (Y + Y^*)] = min{p, r(H1) − 2r(A1) − 2r(A2), r(H2) − 2r(B1) − 2r(A2), r(H3) − 2r(A2) − r(A1) − r(B1)},

min_{X∈G, Y∈S} r[(X + X^*) − (Y + Y^*)] = 2r(H3) − 2p + max{r(H1) − 2r(H4), r(H2) − 2r(H5), i_+(H1) + i_−(H2) − r(H4) − r(H5), i_−(H1) + i_+(H2) − r(H4) − r(H5)},

max_{X∈G, Y∈S} i_±[(X + X^*) − (Y + Y^*)] = min{i_±(H1) − r(A1) − r(A2), i_±(H2) − r(B1) − r(A2)},

min_{X∈G, Y∈S} i_±[(X + X^*) − (Y + Y^*)] = r(H3) − p + max{i_±(H1) − r(H4), i_±(H2) − r(H5)}.    (58)

Proof. Letting A3 = 0, B3 = −I, C3 = I, and D3 = I in Theorem 7, we obtain the results.

Corollary 10. Let A1, C1, B1, D1, A2, C2, and H_i (i = 1, 2, ..., 5) be as in Theorem 9, and suppose that the systems of matrix equations (13) and (11) are each consistent. Denote the set of all solutions to (13) by S and to (11) by G. Then one has the following:

(a) there exist X ∈ G and Y ∈ S such that (X + X^*) > (Y + Y^*) if and only if
i_+(H1) − r(A1) − r(A2) ≥ p,  i_+(H2) − r(B1) − r(A2) ≥ p;    (59)

(b) there exist X ∈ G and Y ∈ S such that (X + X^*) < (Y + Y^*) if and only if
i_−(H1) − r(A1) − r(A2) ≥ p,  i_−(H2) − r(B1) − r(A2) ≥ p;    (60)

(c) there exist X ∈ G and Y ∈ S such that (X + X^*) ≥ (Y + Y^*) if and only if
r(H3) − p + i_−(H1) − r(H4) ≤ 0,  r(H3) − p + i_−(H2) − r(H5) ≤ 0;    (61)

(d) there exist X ∈ G and Y ∈ S such that (X + X^*) ≤ (Y + Y^*) if and only if
r(H3) − p + i_+(H1) − r(H4) ≤ 0,  r(H3) − p + i_+(H2) − r(H5) ≤ 0;    (62)

(e) (X + X^*) > (Y + Y^*) for all X ∈ G and Y ∈ S if and only if
r(H3) − p + i_+(H1) − r(H4) = p  or  r(H3) − p + i_+(H2) − r(H5) = p;    (63)

(f) (X + X^*) < (Y + Y^*) for all X ∈ G and Y ∈ S if and only if
r(H3) − p + i_−(H1) − r(H4) = p  or  r(H3) − p + i_−(H2) − r(H5) = p;    (64)

(g) (X + X^*) ≥ (Y + Y^*) for all X ∈ G and Y ∈ S if and only if
i_−(H1) − r(A1) − r(A2) ≤ 0  or  i_−(H2) − r(B1) − r(A2) ≤ 0;    (65)

(h) (X + X^*) ≤ (Y + Y^*) for all X ∈ G and Y ∈ S if and only if
i_+(H1) − r(A1) − r(A2) ≤ 0  or  i_+(H2) − r(B1) − r(A2) ≤ 0;    (66)

(i) there exist X ∈ G and Y ∈ S such that (X + X^*) − (Y + Y^*) is nonsingular if and only if
r(H1) − 2r(A1) − 2r(A2) ≥ p,  r(H2) − 2r(B1) − 2r(A2) ≥ p,  r(H3) − 2r(A2) − r(A1) − r(B1) ≥ p.    (67)

4. The Solvability Conditions and the General Solution to System (6)

We now turn our attention to (6). In this section we use the results above to give some necessary and sufficient conditions for the existence of a solution to (6) and to present an expression of the general solution to (6). We begin with a lemma which is used in the latter part of this section.
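The constructions that follow are built entirely from the Moore–Penrose projectors L_A = I − A†A and R_A = I − AA†. A short NumPy sketch (an illustration, not from the paper) of the properties used repeatedly, including the identity (43), L_S = R_{S^*}:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
Sp = np.linalg.pinv(S)

LS = np.eye(6) - Sp @ S                                  # L_S = I - S^dagger S
RS = np.eye(4) - S @ Sp                                  # R_S = I - S S^dagger
Sstar = S.conj().T
RSstar = np.eye(6) - Sstar @ np.linalg.pinv(Sstar)       # R_{S*}

checks = [
    np.allclose(S @ LS, 0),          # S L_S = 0
    np.allclose(RS @ S, 0),          # R_S S = 0
    np.allclose(LS @ LS, LS),        # L_S is idempotent ...
    np.allclose(LS, LS.conj().T),    # ... and Hermitian (orthogonal projector)
    np.allclose(LS, RSstar),         # identity (43): L_S = (I - S^dagger S)^* = R_{S*}
]
print(all(checks))
```

These identities are what allow the rank and inertia reductions above to eliminate the projectors in favor of the original coefficient matrices.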

Lemma 11 (see [14]). Let A1 ∈ C^{m×n1}, B1 ∈ C^{m×n2}, C1 ∈ C^{q×m}, and E1 ∈ C_h^{m×m} be given. Let A = R_{A1}B1, B = C1R_{A1}, E = R_{A1}E1R_{A1}, M = R_A B^*, N = A^*L_B, and S = B^*L_M. Then the following statements are equivalent:

(1) equation (5) is consistent;

(2)
R_M R_A E = 0,  R_A E R_A = 0,  L_B E L_B = 0;    (68)

(3)
r[E1  B1  C1^*  A1; A1^*  0  0  0] = r[B1  C1^*  A1] + r(A1),
r[E1  B1  A1; A1^*  0  0; B1^*  0  0] = 2r[B1  A1],
r[E1  C1^*  A1; A1^*  0  0; C1  0  0] = 2r[C1^*  A1].    (69)

In this case, the general solution of (5) can be expressed as

Y = (1/2)[A†EB† − A†B^*M†EB† − A†S(B†)^*EN†A^*B† + A†E(M†)^* + (N†)^*EB†S†S] + L_A V1 + V2 R_B + U1 L_S L_M + R_N U2^* L_M − A†SU2R_N A^*B†,

X = A1†[E1 − B1YC1 − (B1YC1)^*] − (1/2)A1†[E1 − B1YC1 − (B1YC1)^*]A1A1† − A1†W1A1^* + W1^*A1A1† + L_{A1}W2,    (70)

where U1, U2, V1, V2, W1, and W2 are arbitrary matrices over C with appropriate sizes.

Now we give the main theorem of this section.

Theorem 12. Let A_i, C_i (i = 1, 2, 3), B_j, and D_j (j = 1, 3) be given. Set

A = B3L_{A2},  B = C3L_{A1},  C = R_{B1}D3,
F = R_A B,  G = CR_A,  M = R_F G^*,  N = F^*L_G,  S = G^*L_M,    (71)

D = A3 − B3A2†C2 − (B3A2†C2)^* − C3(A1†C1 + L_{A1}D1B1†)D3 − (C3(A1†C1 + L_{A1}D1B1†)D3)^*,    (72)

E = R_A D R_A.    (73)

Then the following statements are equivalent:

(1) system (6) is consistent;

(2) the equalities in (14) and (17) hold, and
R_M R_F E = 0,  R_F E R_F = 0,  L_G E L_G = 0;    (74)

(3) the equalities in (15) and (18) hold, and

r[A3        C3   D3^*  B3   C2^*;
  B3^*      0    0     0    A2^*;
  C1D3      A1   0     0    0;
  D1^*C3^*  0    B1^*  0    0;
  C2        0    0     A2   0]
 = r[C3  D3^*  B3; A1  0  0; 0  B1^*  0; 0  0  A2] + r[A2; B3],

r[A3        C3   B3   D3^*C1^*  C2^*;
  C3^*      0    0    A1^*      0;
  B3^*      0    0    0         A2^*;
  C1D3      A1   0    0         0;
  C2        0    A2   0         0]
 = 2r[C3  B3; A1  0; 0  A2],

r[A3        D3^*  B3   C3D1  C2^*;
  D3        0     0    B1    0;
  B3^*      0     0    0     A2^*;
  D1^*C3^*  B1^*  0    0     0;
  C2        0     A2   0     0]
 = 2r[D3^*  B3; B1^*  0; 0  A2].    (75)

In this case, the general solution of system (6) can be expressed as

X = A2†C2 + L_{A2}U,
Y = A1†C1 + L_{A1}D1B1† + L_{A1}VR_{B1},    (76)

where

V = (1/2)[F†EG† − F†G^*M†EG† − F†S(G†)^*EN†F^*G† + F†E(M†)^* + (N†)^*EG†S†S] + L_F V1 + V2 R_G + U1 L_S L_M + R_N U2^* L_M − F†SU2R_N F^*G†,

U = A†[D − BVC − (BVC)^*] − (1/2)A†[D − BVC − (BVC)^*]AA† − A†W1A^* + W1^*AA† + L_A W2,    (77)

where U1, U2, V1, V2, W1, and W2 are arbitrary matrices over C with appropriate sizes.
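To illustrate the setting of Theorem 12, the following NumPy sketch (illustrative only; the sizes and rank-deficient factors are arbitrary, and real data are used so that (·)^* is plain transposition) builds a consistent instance of system (6) from planted solutions X0, Y0 and confirms the Lemma 1 solvability tests (14) and (17):

```python
import numpy as np

rng = np.random.default_rng(2)
herm = lambda Z: Z + Z.conj().T
n = 4

# planted solutions and rank-deficient coefficient matrices (hypothetical sizes)
X0 = rng.standard_normal((n, n))
Y0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((3, 2)) @ rng.standard_normal((2, n))   # rank 2
B1 = rng.standard_normal((n, 1)) @ rng.standard_normal((1, 2))   # rank 1
A2 = rng.standard_normal((2, 1)) @ rng.standard_normal((1, n))   # rank 1
B3, C3, D3 = (rng.standard_normal((n, n)) for _ in range(3))

# right-hand sides that make system (6) consistent by construction
C1, D1, C2 = A1 @ Y0, Y0 @ B1, A2 @ X0
A3 = herm(B3 @ X0) + herm(C3 @ Y0 @ D3)

# Lemma 1 tests (14) and (17) for the two linear groups of equations
RA1 = np.eye(3) - A1 @ np.linalg.pinv(A1)
LB1 = np.eye(2) - np.linalg.pinv(B1) @ B1
RA2 = np.eye(2) - A2 @ np.linalg.pinv(A2)
ok14 = (np.allclose(RA1 @ C1, 0) and np.allclose(D1 @ LB1, 0)
        and np.allclose(A1 @ D1, C1 @ B1))
ok17 = np.allclose(RA2 @ C2, 0)
ok_eq = np.allclose(herm(B3 @ X0) + herm(C3 @ Y0 @ D3), A3)
ok_herm = np.allclose(A3, A3.conj().T)
print(ok14, ok17, ok_eq, ok_herm)
```

Since the data are planted, the projector conditions vanish nontrivially (the coefficient matrices are genuinely rank deficient, so the projectors are nonzero), which is the situation Theorem 12 addresses.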

Proof. (2) ⇔ (3): Applying Lemma 3 and Lemma 11 gives

R_M R_F E = 0 ⇔ r(R_M R_F E) = 0 ⇔ r[D  B  C^*  A; A^*  0  0  0] = r[B  C^*  A] + r(A).    (78)

Expanding the projectors in this last equality by Lemma 3 and block Gaussian elimination, and using A1D1 = C1B1, shows that it is equivalent to the first rank equality in (75). By a similar approach, R_F E R_F = 0 is equivalent to the second rank equality in (75), and L_G E L_G = 0 is equivalent to the third.    (79)

(1) ⇔ (2): We separate the four equations in system (6) into three groups:

A1Y = C1,  YB1 = D1,    (80)
A2X = C2,    (81)
B3X + (B3X)^* + C3YD3 + (C3YD3)^* = A3.    (82)

By Lemma 1, system (80) is solvable if and only if (14) holds, and (81) is consistent if and only if (17) holds; their general solutions can be expressed as (16) and (19), respectively. Substituting (16) and (19) into (82) yields

AU + (AU)^* + BVC + (BVC)^* = D.    (83)

Hence, system (6) is consistent if and only if (80), (81), and (83) are all consistent. It follows from Lemma 11 that (83) is solvable if and only if

R_M R_F E = 0,  R_F E R_F = 0,  L_G E L_G = 0.    (84)

We also know by Lemma 11 that the general solution of (83) can be expressed as (77).

In Theorem 12, let A1 and C1 vanish. Then we can obtain the general solution to the following system:

A2X = C2,  YB1 = D1,
B3X + (B3X)^* + C3YD3 + (C3YD3)^* = A3.    (85)

Corollary 13. Let A2, C2, B1, D1, B3, C3, D3, and A3 = A3^* be given. Set

A = B3L_{A2},  C = R_{B1}D3,
F = R_A C3,  G = CR_A,  M = R_F G^*,  N = F^*L_G,  S = G^*L_M,
D = A3 − B3A2†C2 − (B3A2†C2)^* − C3D1B1†D3 − (C3D1B1†D3)^*,
E = R_A D R_A.    (86)

Then the following statements are equivalent:

(1) system (85) is consistent;

(2)
R_{A2}C2 = 0,  D1L_{B1} = 0,  R_M R_F E = 0,    (87)
R_F E R_F = 0,  L_G E L_G = 0;    (88)

(3)
r[A2  C2] = r(A2),  r[D1; B1] = r(B1),

r[A3        C3   D3^*  B3   C2^*;
  B3^*      0    0     0    A2^*;
  D1^*C3^*  0    B1^*  0    0;
  C2        0    0     A2   0]
 = r[C3  D3^*  B3; 0  B1^*  0; 0  0  A2] + r[A2; B3],

r[A3    C3   B3   C2^*;
  C3^*  0    0    0;
  B3^*  0    0    A2^*;
  C2    0    A2   0]
 = 2r[C3  B3; 0  A2],

r[A3        D3^*  B3   C3D1  C2^*;
  D3        0     0    B1    0;
  B3^*      0     0    0     A2^*;
  D1^*C3^*  B1^*  0    0     0;
  C2        0     A2   0     0]
 = 2r[D3^*  B3; B1^*  0; 0  A2].    (89)

In this case, the general solution of system (85) can be expressed as

X = A2†C2 + L_{A2}U,  Y = D1B1† + VR_{B1},    (90)

where

V = (1/2)[F†EG† − F†G^*M†EG† − F†S(G†)^*EN†F^*G† + F†E(M†)^* + (N†)^*EG†S†S] + L_F V1 + V2 R_G + U1 L_S L_M + R_N U2^* L_M − F†SU2R_N F^*G†,

U = A†[D − C3VC − (C3VC)^*] − (1/2)A†[D − C3VC − (C3VC)^*]AA† − A†W1A^* + W1^*AA† + L_A W2,    (91)

where U1, U2, V1, V2, W1, and W2 are arbitrary matrices over C with appropriate sizes.

Acknowledgments

The authors would like to thank Dr. Mohamed, Dr. Sivakumar, and a referee for their valuable suggestions and comments, which resulted in a great improvement of the original paper. This research was supported by grants from the National Natural Science Foundation of China (NSFC 11161008) and the Doctoral Program Fund of the Ministry of Education of P. R. China (20115201110002).

References

[1] H. Zhang, "Rank equalities for Moore-Penrose inverse and Drazin inverse over quaternion," Annals of Functional Analysis, vol. 3, no. 2, pp. 115–127, 2012.
[2] L. Bao, Y. Lin, and Y. Wei, "A new projection method for solving large Sylvester equations," Applied Numerical Mathematics, vol. 57, no. 5–7, pp. 521–532, 2007.
[3] M. Dehghan and M. Hajarian, "On the generalized reflexive and anti-reflexive solutions to a system of matrix equations," Linear Algebra and Its Applications, vol. 437, no. 11, pp. 2793–2812, 2012.
[4] F. O. Farid, M. S. Moslehian, Q.-W. Wang, and Z.-C. Wu, "On the Hermitian solutions to a system of adjointable operator equations," Linear Algebra and Its Applications, vol. 437, no. 7, pp. 1854–1891, 2012.
[5] Z. H. He and Q. W. Wang, "A real quaternion matrix equation with applications," Linear and Multilinear Algebra, vol. 61, pp. 725–740, 2013.
[6] Q.-W. Wang and Z.-C. Wu, "Common Hermitian solutions to some operator equations on Hilbert C*-modules," Linear Algebra and Its Applications, vol. 432, no. 12, pp. 3159–3171, 2010.
[7] Q.-W. Wang and C.-Z. Dong, "Positive solutions to a system of adjointable operator equations over Hilbert C*-modules," Linear Algebra and Its Applications, vol. 433, no. 7, pp. 1481–1489, 2010.
[8] Q.-W. Wang, J. W. van der Woude, and H.-X. Chang, "A system of real quaternion matrix equations with applications," Linear Algebra and Its Applications, vol. 431, no. 12, pp. 2291–2303, 2009.
[9] Q. W. Wang, H.-S. Zhang, and G.-J. Song, "A new solvable condition for a pair of generalized Sylvester equations," Electronic Journal of Linear Algebra, vol. 18, pp. 289–301, 2009.
[10] H. W. Braden, "The equations A^T X ± X^T A = B," SIAM Journal on Matrix Analysis and Applications, vol. 20, no. 2, pp. 295–302, 1999.
[11] D. S. Djordjević, "Explicit solution of the operator equation A^*X + X^*A = B," Journal of Computational and Applied Mathematics, vol. 200, no. 2, pp. 701–704, 2007.
[12] W. Cao, "Solvability of a quaternion matrix equation," Applied Mathematics, vol. 17, no. 4, pp. 490–498, 2002.
[13] Q. Xu, L. Sheng, and Y. Gu, "The solutions to some operator equations," Linear Algebra and Its Applications, vol. 429, no. 8-9, pp. 1997–2024, 2008.
[14] Q.-W. Wang and Z.-H. He, "Some matrix equations with applications," Linear and Multilinear Algebra, vol. 60, no. 11-12, pp. 1327–1353, 2012.
[15] Y. Tian, "Maximization and minimization of the rank and inertia of the Hermitian matrix expression A − BX − (BX)^*," Linear Algebra and Its Applications, vol. 434, no. 10, pp. 2109–2139, 2011.
[16] Z.-H. He and Q.-W. Wang, "Solutions to optimization problems on ranks and inertias of a matrix function with applications," Applied Mathematics and Computation, vol. 219, no. 6, pp. 2989–3001, 2012.
[17] Y. Liu and Y. Tian, "Max-min problems on the ranks and inertias of the matrix expressions A − BXC ± (BXC)^*," Journal of Optimization Theory and Applications, vol. 148, no. 3, pp. 593–622, 2011.
[18] D. Chu, Y. S. Hung, and H. J. Woerdeman, "Inertia and rank characterizations of some matrix expressions," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 3, pp. 1187–1226, 2009.
[19] Y. Liu and Y. Tian, "A simultaneous decomposition of a matrix triplet with applications," Numerical Linear Algebra with Applications, vol. 18, no. 1, pp. 69–85, 2011.
[20] X. Zhang, Q.-W. Wang, and X. Liu, "Inertias and ranks of some Hermitian matrix functions with applications," Central European Journal of Mathematics, vol. 10, no. 1, pp. 329–351, 2012.
[21] Q.-W. Wang, "The general solution to a system of real quaternion matrix equations," Computers & Mathematics with Applications, vol. 49, no. 5-6, pp. 665–675, 2005.
[22] Y. Tian, "Equalities and inequalities for inertias of Hermitian matrices with applications," Linear Algebra and Its Applications, vol. 433, no. 1, pp. 263–296, 2010.
[23] G. Marsaglia and G. P. H. Styan, "Equalities and inequalities for ranks of matrices," Linear and Multilinear Algebra, vol. 2, pp. 269–292, 1974.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 230408, 6 pages, http://dx.doi.org/10.1155/2013/230408

Research Article Left and Right Inverse Eigenpairs Problem for 𝜅-Hermitian Matrices

Fan-Liang Li,1 Xi-Yan Hu,2 and Lei Zhang2

1 Institute of Mathematics and Physics, School of Sciences, Central South University of Forestry and Technology, Changsha 410004, China 2 College of Mathematics and Econometrics, Hunan University, Changsha 410082, China

Correspondence should be addressed to Fan-Liang Li; [email protected]

Received 6 December 2012; Accepted 20 March 2013

Academic Editor: Panayiotis J. Psarrakos

Copyright © 2013 Fan-Liang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Left and right inverse eigenpairs problem for 𝜅-hermitian matrices and its optimal approximate problem are considered. Based on the special properties of 𝜅-hermitian matrices, the equivalent problem is obtained. Combining a new inner product of matrices, the necessary and sufficient conditions for the solvability of the problem and its general solutions are derived. Furthermore, the optimal approximate solution and a calculation procedure to obtain the optimal approximate solution are provided.

1. Introduction

Throughout this paper we use the following notation. Let C^{n×m} be the set of all n × m complex matrices, and let UC^{n×n}, HC^{n×n}, and SHC^{n×n} denote the sets of all n × n unitary, Hermitian, and skew-Hermitian matrices, respectively. Let Ā, A^H, and A^+ be the conjugate, the conjugate transpose, and the Moore-Penrose generalized inverse of A, respectively. For A, B ∈ C^{n×m}, ⟨A, B⟩ = re(tr(B^H A)), where re(tr(B^H A)) denotes the real part of tr(B^H A), is the inner product of the matrices A and B. The induced matrix norm is the Frobenius norm, that is, ‖A‖ = ⟨A, A⟩^{1/2} = (tr(A^H A))^{1/2}.

The left and right inverse eigenpairs problem is a special inverse eigenvalue problem: given partial left and right eigenpairs (eigenvalues and corresponding eigenvectors) (λ_i, x_i), i = 1, ..., h, and (μ_j, y_j), j = 1, ..., l, and a special matrix set S, find a matrix A ∈ S such that

Ax_i = λ_i x_i,  i = 1, ..., h,
y_j^T A = μ_j y_j^T,  j = 1, ..., l.    (1)

This problem, which usually arises in perturbation analysis of matrix eigenvalues and in recursive matters, has a profound application background [1–6]. When the matrix set S varies, different left and right inverse eigenpairs problems are obtained. For example, we studied the left and right inverse eigenpairs problems of skew-centrosymmetric matrices and generalized centrosymmetric matrices, respectively [5, 6]. Based on the special properties of the left and right eigenpairs of those matrices, we derived the solvability conditions of the problem and its general solutions. In this paper, combining the special properties of κ-Hermitian matrices and a new inner product of matrices, we first obtain the equivalent problem and then derive the necessary and sufficient conditions for the solvability of the problem and its general solutions.

Hill and Waters [7] introduced the following matrices.

Definition 1. Let κ be a fixed product of disjoint transpositions, and let K be the associated permutation matrix, that is, K = K̄ = K^H and K² = I_n. A matrix A ∈ C^{n×n} is said to be κ-Hermitian (skew κ-Hermitian) if and only if a_{ij} = ā_{κ(j)κ(i)} (a_{ij} = −ā_{κ(j)κ(i)}), i, j = 1, ..., n. We denote the set of κ-Hermitian matrices (skew κ-Hermitian matrices) by KHC^{n×n} (SKHC^{n×n}).
From Definition 1, it is easy to see that Hermitian matrices and perhermitian matrices are special cases of κ-Hermitian matrices, with κ(i) = i and κ(i) = n − i + 1, respectively. Hermitian matrices and perhermitian matrices, which are two of the twelve symmetry patterns of matrices [8], are applied in engineering, statistics, and so on [9, 10].
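As a quick illustration of Definition 1 (not part of the original article), the following NumPy sketch takes κ to be the flip κ(i) = n − i + 1, so K is the backward identity, and checks the characterizations A1 = KA1^H K and A2 = −KA2^H K of the κ-Hermitian and skew κ-Hermitian parts of a matrix, together with their orthogonality in the inner product ⟨A, B⟩ = re tr(B^H A):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
K = np.eye(n)[::-1]                     # permutation matrix of the flip k(i) = n - i + 1
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

KAHK = K @ A.conj().T @ K
A1 = (A + KAHK) / 2                     # kappa-Hermitian part:       A1 =  K A1^H K
A2 = (A - KAHK) / 2                     # skew kappa-Hermitian part:  A2 = -K A2^H K
inner = np.real(np.trace(A2.conj().T @ A1))   # <A1, A2> = re tr(A2^H A1)

ok = (np.allclose(A1, K @ A1.conj().T @ K)
      and np.allclose(A2, -K @ A2.conj().T @ K)
      and np.allclose(A1 + A2, A)
      and abs(inner) < 1e-9)
print(ok)
```

This is exactly the orthogonal direct-sum decomposition C^{n×n} = KHC^{n×n} ⊕ SKHC^{n×n} developed in the next section.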

From Definition 1, it is also easy to prove the following conclusions:

(1) A ∈ KHC^{n×n} if and only if A = KA^HK.

(2) A ∈ SKHC^{n×n} if and only if A = −KA^HK.

(3) If K is a fixed permutation matrix, then KHC^{n×n} and SKHC^{n×n} are closed linear subspaces of C^{n×n} and satisfy

C^{n×n} = KHC^{n×n} ⊕ SKHC^{n×n}.    (2)

The notation V1 ⊕ V2 stands for the orthogonal direct sum of the linear subspaces V1 and V2.

(4) A ∈ KHC^{n×n} if and only if there is a matrix Ã ∈ HC^{n×n} such that A = KÃ.

(5) A ∈ SKHC^{n×n} if and only if there is a matrix Ã ∈ SHC^{n×n} such that A = KÃ.

Proof. (1) From Definition 1, if A = (a_{ij}) ∈ KHC^{n×n}, then a_{ij} = ā_{κ(j)κ(i)}, which implies A = KA^HK, since KA^HK = (ā_{κ(j)κ(i)}).

(2) This can be proved by the same method, so the proof is omitted.

(3) (a) For any A ∈ C^{n×n}, there exist A1 ∈ KHC^{n×n} and A2 ∈ SKHC^{n×n} such that

A = A1 + A2,    (3)

where A1 = (1/2)(A + KA^HK) and A2 = (1/2)(A − KA^HK).

(b) Suppose there exist another Ā1 ∈ KHC^{n×n} and Ā2 ∈ SKHC^{n×n} such that

A = Ā1 + Ā2.    (4)

Then (3) and (4) yield

A1 − Ā1 = −(A2 − Ā2).    (5)

Multiplying (5) on the left and on the right by K, respectively, and using (1) and (2), we obtain

A1 − Ā1 = A2 − Ā2.    (6)

Combining (5) and (6) gives A1 = Ā1 and A2 = Ā2.

(c) For any A1 ∈ KHC^{n×n} and A2 ∈ SKHC^{n×n}, we have

⟨A1, A2⟩ = re(tr(A2^H A1)) = re(tr(KA2^HK · KA1K)) = re(tr(−A2A1^H)) = −⟨A1, A2⟩.    (7)

This implies ⟨A1, A2⟩ = 0. Combining (a), (b), and (c) gives (3).

(4) Let Ã = KA, so that A = KÃ. If A ∈ KHC^{n×n}, then A = KA^HK, hence KA = A^HK, that is, Ã = Ã^H ∈ HC^{n×n}. Conversely, if Ã = Ã^H ∈ HC^{n×n} and A = KÃ, then KA^HK = K(ÃK)K = KÃ = A, so A ∈ KHC^{n×n}.

(5) This can be proved with the same method, so the proof is omitted.

In this paper, we suppose that K is a fixed permutation matrix and assume that (λ_i, x_i), i = 1, ..., h, are right eigenpairs of A and (μ_j, y_j), j = 1, ..., l, are left eigenpairs of A. If we let X = (x_1, ..., x_h) ∈ C^{n×h}, Λ = diag(λ_1, ..., λ_h) ∈ C^{h×h}, Y = (y_1, ..., y_l) ∈ C^{n×l}, and Γ = diag(μ_1, ..., μ_l) ∈ C^{l×l}, then the problems studied in this paper can be described as follows.

Problem 2. Given X ∈ C^{n×h}, Λ = diag(λ_1, ..., λ_h) ∈ C^{h×h}, Y ∈ C^{n×l}, and Γ = diag(μ_1, ..., μ_l) ∈ C^{l×l}, find A ∈ KHC^{n×n} such that

AX = XΛ,
Y^T A = ΓY^T.    (8)

Problem 3. Given B ∈ C^{n×n}, find Â ∈ S_E such that

‖B − Â‖ = min_{A∈S_E} ‖B − A‖,    (9)

where S_E is the solution set of Problem 2.

This paper is organized as follows. In Section 2, we first obtain the equivalent problem using the properties of KHC^{n×n} and then derive the solvability conditions of Problem 2 and an expression of its general solution. In Section 3, we first establish the existence and uniqueness theorem for Problem 3 and then present the unique approximation solution. Finally, we provide a calculation procedure to compute the unique approximation solution and a numerical experiment to illustrate the results obtained in this paper.
2. Solvability Conditions of Problem 2

We first discuss the properties of KHC^{n×n}.

Lemma 4. Denote M = KEKGE, where E ∈ HC^{n×n}. Then one has the following conclusions:

(1) If G ∈ KHC^{n×n}, then M ∈ KHC^{n×n}.

(2) If G ∈ SKHC^{n×n}, then M ∈ SKHC^{n×n}.

(3) If G = G1 + G2, where G1 ∈ KHC^{n×n} and G2 ∈ SKHC^{n×n}, then M ∈ KHC^{n×n} if and only if KEKG2E = 0. In this case, M = KEKG1E.

Proof. (1) Since M^H = EG^HKEK, we have KM^HK = KEG^HKE. If G ∈ KHC^{n×n}, then G^H = KGK, so KM^HK = KE(KGK)KE = KEKGE = M. Hence M ∈ KHC^{n×n}.

(2) If G ∈ SKHC^{n×n}, then G^H = −KGK, so KM^HK = KE(−KGK)KE = −KEKGE = −M. Hence M ∈ SKHC^{n×n}.

(3) M = KEK(G1 + G2)E = KEKG1E + KEKG2E, and by (1) and (2) we have KEKG1E ∈ KHC^{n×n} and KEKG2E ∈ SKHC^{n×n}. If M ∈ KHC^{n×n}, then M − KEKG1E ∈ KHC^{n×n}, while M − KEKG1E = KEKG2E ∈ SKHC^{n×n}; therefore, by conclusion (3) of Definition 1, KEKG2E = 0, that is, M = KEKG1E. Conversely, if KEKG2E = 0, it is clear that M = KEKG1E ∈ KHC^{n×n}. The proof is completed.

Lemma 5. Let A ∈ KHC^{n×n}. If (λ, x) is a right eigenpair of A, then (λ̄, Kx̄) is a left eigenpair of A.

Proof. If (λ, x) is a right eigenpair of A, then we have

Ax = λx. (10)

From the conclusion (1) of Definition 1, it follows that

K A^H K x = λx. (11)

This implies

(K x̄)^T A = λ (K x̄)^T. (12)

So (λ, K x̄) is a left eigenpair of A.

From Lemma 5, without loss of generality, we may assume that Problem 2 is as follows:

X ∈ C^{n×h}, Λ = diag(λ_1, ..., λ_h) ∈ C^{h×h}, Y = K X̄ ∈ C^{n×h}, Γ = Λ ∈ C^{h×h}. (13)

Combining (13) and the conclusion (4) of Definition 1, it is easy to derive the following lemma.

Lemma 6. If X, Λ, Y, Γ are given by (13), then Problem 2 is equivalent to the following problem: if X, Λ, Y, Γ are given by (13), find KA ∈ HC^{n×n} such that

KAX = KXΛ. (14)

Lemma 7 (see [11]). Given X ∈ C^{n×h} and B ∈ C^{n×h}, the matrix equation ÃX = B has a solution Ã ∈ HC^{n×n} if and only if

B = BX^+X, B^H X = X^H B. (15)

Moreover, the general solution Ã can be expressed as

Ã = BX^+ + (BX^+)^H (I_n − XX^+) + (I_n − XX^+) G̃ (I_n − XX^+), ∀G̃ ∈ HC^{n×n}. (16)

Theorem 8. If X, Λ, Y, Γ are given by (13), then Problem 2 has a solution in KHC^{n×n} if and only if

X^H KXΛ = ΛX^H KX, XΛ = XΛX^+X. (17)

Moreover, the general solution can be expressed as

A = A_0 + KEKGE, ∀G ∈ KHC^{n×n}, (18)

where

A_0 = XΛX^+ + K(XΛX^+)^H KE, E = I_n − XX^+. (19)

Proof. Necessity: If there is a matrix A ∈ KHC^{n×n} such that (AX = XΛ, Y^T A = ΓY^T), then from Lemma 6, there exists a matrix KA ∈ HC^{n×n} such that KAX = KXΛ, and according to Lemma 7, we have

KXΛ = KXΛX^+X, (KXΛ)^H X = X^H (KXΛ). (20)

It is easy to see that (20) is equivalent to (17).

Sufficiency: If (17) holds, then (20) holds. Hence, the matrix equation KAX = KXΛ has a solution KA ∈ HC^{n×n}. Moreover, the general solution can be expressed as follows:

KA = KXΛX^+ + (KXΛX^+)^H (I_n − XX^+) + (I_n − XX^+) G̃ (I_n − XX^+), ∀G̃ ∈ HC^{n×n}. (21)

Let

A_0 = XΛX^+ + K(XΛX^+)^H KE, E = I_n − XX^+. (22)

This implies A = A_0 + KEG̃E. Combining the definition of K, E and the first equation of (17), we have

K A_0^H K = K(XΛX^+)^H K + XΛX^+ − K(XX^+)^H K(XΛX^+)
= XΛX^+ + K(XΛX^+)^H K − K(XΛX^+)^H KXX^+
= A_0. (23)

Hence, A_0 ∈ KHC^{n×n}. Combining the definition of K, E, (13), and (17), we have

A_0 X = XΛX^+X + K(XΛX^+)^H K(I_n − XX^+)X = XΛ,

Y^T A_0 = X^H KXΛX^+ + X^H KK(XΛX^+)^H KE
= ΛX^H KXX^+ + (KXΛX^+X)^H (I_n − XX^+)
= ΛX^H KXX^+ + ΛX^H K(I_n − XX^+)
= ΛX^H K = ΓY^T. (24)

Therefore, A_0 is a special solution of Problem 2. Combining the conclusion (4) of Definition 1, Lemma 4, and E = I_n − XX^+ ∈ HC^{n×n}, it is easy to prove that A = A_0 + KEKGE ∈ KHC^{n×n} if and only if G ∈ KHC^{n×n}. Hence, the solution set of Problem 2 can be expressed as (18).

3. An Expression of the Solution of Problem 3

From (18), it is easy to prove that the solution set S_E of Problem 2 is a nonempty closed convex set if Problem 2 has a solution in KHC^{n×n}. We claim that for any given B ∈ C^{n×n}, there exists a unique optimal approximation for Problem 3.

Theorem 9. Given B ∈ C^{n×n}, if the conditions on X, Y, Λ, Γ are the same as those in Theorem 8, then Problem 3 has a unique solution Â ∈ S_E. Moreover, Â can be expressed as

Â = A_0 + KEKB_1E, (25)

where A_0, E are given by (19) and B_1 = (1/2)(B + KB^H K).

Proof. Denoting E_1 = I_n − E, it is easy to prove that the matrices E and E_1 are orthogonal projection matrices satisfying EE_1 = 0. It is clear that the matrices KEK and KE_1K are also orthogonal projection matrices satisfying (KEK)(KE_1K) = 0. According to the conclusion (3) of Definition 1, for any B ∈ C^{n×n}, there exist unique

B_1 ∈ KHC^{n×n}, B_2 ∈ SKHC^{n×n} (26)

such that

B = B_1 + B_2, ⟨B_1, B_2⟩ = 0, (27)

where

B_1 = (1/2)(B + KB^H K), B_2 = (1/2)(B − KB^H K). (28)

Combining Theorem 8, for any A ∈ S_E, we have

‖B − A‖² = ‖B − A_0 − KEKGE‖²
= ‖B_1 + B_2 − A_0 − KEKGE‖²
= ‖B_1 − A_0 − KEKGE‖² + ‖B_2‖²
= ‖(B_1 − A_0)(E + E_1) − KEKGE‖² + ‖B_2‖²
= ‖(B_1 − A_0)E − KEKGE‖² + ‖(B_1 − A_0)E_1‖² + ‖B_2‖²
= ‖K(E + E_1)K(B_1 − A_0)E − KEKGE‖² + ‖(B_1 − A_0)E_1‖² + ‖B_2‖²
= ‖KEK(B_1 − A_0)E − KEKGE‖² + ‖KE_1K(B_1 − A_0)E‖² + ‖(B_1 − A_0)E_1‖² + ‖B_2‖². (29)

It is easy to prove that KEKA_0E = 0 according to the definitions of A_0, E. So we have

‖B − A‖² = ‖KEKB_1E − KEKGE‖² + ‖KE_1K(B_1 − A_0)E‖² + ‖(B_1 − A_0)E_1‖² + ‖B_2‖². (30)

Obviously, min_{A ∈ S_E} ‖B − A‖ is equivalent to

min_{G ∈ KHC^{n×n}} ‖KEKB_1E − KEKGE‖². (31)

Since EE_1 = 0 and (KEK)(KE_1K) = 0, it is clear that G = B_1 + KE_1KG̃E_1, for any G̃ ∈ KHC^{n×n}, is a solution of (31). Substituting this result into (18), we can obtain (25).

Algorithm 10. (1) Input X, Λ, Y, Γ according to (13). (2) Compute X^H KXΛ, ΛX^H KX, XΛX^+X, XΛ; if (17) holds, then continue; otherwise stop. (3) Compute A_0 according to (19), and compute B_1 according to (28). (4) Calculate Â according to (25).

Example 11 (n = 8, h = l = 4).

X =
(  0.5661             −0.2014 − 0.1422i    0.1446 + 0.2138i    0.5240
  −0.2627 + 0.1875i    0.5336             −0.2110 − 0.4370i   −0.0897 + 0.3467i
  −0.4132 + 0.2409i    0.0226 − 0.0271i   −0.1095 + 0.2115i   −0.3531 − 0.0642i
  −0.0306 + 0.2109i   −0.3887 − 0.0425i    0.2531 + 0.2542i    0.0094 + 0.2991i
   0.0842 − 0.1778i   −0.0004 − 0.3733i    0.3228 − 0.1113i    0.1669 + 0.1952i
   0.0139 − 0.3757i   −0.2363 + 0.3856i    0.2583 + 0.0721i    0.1841 − 0.2202i
   0.0460 + 0.3276i   −0.1114 + 0.0654i   −0.0521 − 0.2556i   −0.2351 + 0.3002i
   0.0085 − 0.1079i    0.0974 + 0.3610i    0.5060             −0.2901 − 0.0268i ),

Λ =
( −0.3967 − 0.4050i    0                   0        0
   0                  −0.3967 + 0.4050i    0        0
   0                   0                   0.0001   0
   0                   0                   0       −0.0001i ),

K =
( 0 0 0 0 0 0 0 1
  0 0 0 0 0 0 1 0
  0 0 0 0 0 1 0 0
  0 0 0 1 0 0 0 0
  0 0 0 0 1 0 0 0
  0 0 1 0 0 0 0 0
  0 1 0 0 0 0 0 0
  1 0 0 0 0 0 0 0 ),

Y = KX̄, Γ = Λ. (32)

B =

From the first column to the fourth column:

( −0.5218 + 0.0406i    0.2267 − 0.0560i   −0.1202 + 0.0820i   −0.0072 − 0.3362i
   0.3909 − 0.3288i    0.2823 − 0.2064i   −0.0438 − 0.0403i    0.2707 + 0.0547i
   0.2162 − 0.1144i   −0.4307 + 0.2474i   −0.0010 − 0.0412i    0.2164 − 0.1314i
  −0.1872 − 0.0599i   −0.0061 + 0.4698i    0.3605 − 0.0247i    0.4251 + 0.1869i
  −0.1227 − 0.0194i    0.2477 − 0.0606i    0.3918 + 0.6340i    0.1226 + 0.0636i
  −0.0893 + 0.4335i    0.0662 + 0.0199i   −0.0177 − 0.1412i    0.4047 + 0.2288i
   0.1040 − 0.2015i    0.1840 + 0.2276i    0.2681 − 0.3526i   −0.5252 + 0.1022i
   0.1808 + 0.2669i    0.2264 + 0.3860i   −0.1791 + 0.1976i   −0.0961 − 0.0117i ) (33)

From the fifth column to the eighth column

( −0.2638 − 0.4952i   −0.0863 − 0.1664i    0.2687 + 0.1958i   −0.2544 − 0.1099i
  −0.2741 − 0.1656i   −0.0227 + 0.2684i    0.1846 + 0.2456i   −0.0298 + 0.5163i
  −0.1495 − 0.3205i    0.1391 + 0.2434i    0.1942 − 0.5211i   −0.3052 − 0.1468i
  −0.2554 + 0.2690i   −0.4222 − 0.1080i    0.2232 + 0.0774i    0.0965 − 0.0421i
   0.3856 − 0.0619i    0.1217 − 0.0270i    0.1106 − 0.3090i   −0.1122 + 0.2379i
  −0.1130 + 0.0766i    0.7102 − 0.0901i    0.1017 + 0.1397i   −0.0445 + 0.0038i
   0.1216 + 0.0076i    0.2343 − 0.1772i    0.5242 − 0.0089i   −0.0613 + 0.0258i
  −0.0750 − 0.3581i    0.0125 + 0.0964i    0.0779 − 0.1074i    0.6735 − 0.0266i ). (34)

It is easy to see that the matrices X, Λ, Y, Γ satisfy (17). Hence, there exists a unique solution for Problem 3. Using the software MATLAB, we obtain the unique solution Â of Problem 3.

From the first column to the fourth column:

( −0.1983 + 0.0491i    0.1648 + 0.0032i    0.0002 + 0.1065i    0.1308 + 0.2690i
   0.1071 − 0.2992i    0.2106 + 0.2381i   −0.0533 − 0.3856i   −0.0946 + 0.0488i
   0.1935 − 0.1724i   −0.0855 − 0.0370i    0.0200 − 0.0665i   −0.2155 − 0.0636i
   0.0085 − 0.2373i   −0.0843 − 0.1920i    0.0136 + 0.0382i   −0.0328 − 0.0000i
  −0.0529 + 0.1703i    0.1948 − 0.0719i    0.1266 + 0.1752i    0.0232 − 0.2351i
   0.0855 + 0.1065i    0.0325 − 0.2068i    0.2624 + 0.0000i    0.0136 − 0.0382i
   0.1283 − 0.1463i   −0.0467 + 0.0000i    0.0325 + 0.2067i   −0.0843 + 0.1920i
   0.2498 + 0.0000i    0.1283 + 0.1463i    0.0855 − 0.1065i    0.0086 + 0.2373i ) (35)

From the fifth column to the eighth column

(  0.2399 − 0.1019i    0.1928 − 0.1488i   −0.3480 − 0.2574i    0.1017 − 0.0000i
  −0.1955 + 0.0644i    0.2925 + 0.1872i    0.3869 + 0.0000i   −0.3481 + 0.2574i
   0.0074 + 0.0339i   −0.3132 − 0.0000i    0.2926 − 0.1872i    0.1928 + 0.1488i
   0.0232 + 0.2351i   −0.2154 + 0.0636i   −0.0946 − 0.0489i    0.1309 − 0.2691i
  −0.0545 − 0.0000i    0.0074 − 0.0339i   −0.1955 − 0.0643i    0.2399 + 0.1019i
   0.1266 − 0.1752i    0.0200 + 0.0665i   −0.0533 + 0.3857i    0.0002 − 0.1065i
   0.1949 + 0.0719i   −0.0855 + 0.0370i    0.2106 − 0.2381i    0.1648 − 0.0032i
  −0.0529 − 0.1703i    0.1935 + 0.1724i    0.1071 + 0.2992i   −0.1983 − 0.0491i ). (36)
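Algorithm 10 can be prototyped directly with NumPy. The sketch below is illustrative only and makes two assumptions not taken from the paper: K is chosen as the 8×8 backward identity (a symmetric involutory permutation, unlike the K of Example 11), and the input data are a full set of right eigenpairs (h = n) of a randomly generated K-Hermitian matrix, so that the solvability conditions (17) hold by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
K = np.eye(n)[::-1]                      # symmetric involutory permutation (K = K^T, K^2 = I)

# Random K-Hermitian matrix: A = C + K C^H K satisfies K A^H K = A.
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = C + K @ C.conj().T @ K

# Input data for Problem 2: a full set of right eigenpairs (h = n).
lam, X = np.linalg.eig(A)
L = np.diag(lam)
Xp = np.linalg.pinv(X)

# Step (3) of Algorithm 10: special solution A0 from (19), with E = I - X X^+.
E = np.eye(n) - X @ Xp
P = X @ L @ Xp
A0 = P + K @ P.conj().T @ K @ E

# A0 reproduces the prescribed right eigenpairs and is K-Hermitian.
assert np.allclose(A0 @ X, X @ L)
assert np.allclose(K @ A0.conj().T @ K, A0)
```

With h = n the projector E vanishes and A_0 coincides with the generating matrix; smaller h exercises the correction term K(XΛX^+)^H KE.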

Conflict of Interests

The authors declare that there is no conflict of interests.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (31170532). The authors are very grateful to the referees for their valuable comments and also thank the editor for his helpful suggestions.

References

[1] J. H. Wilkinson, The Algebraic Eigenvalue Problem, Oxford University Press, Oxford, UK, 1965.
[2] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, vol. 9 of Classics in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
[3] M. Arav, D. Hershkowitz, V. Mehrmann, and H. Schneider, "The recursive inverse eigenvalue problem," SIAM Journal on Matrix Analysis and Applications, vol. 22, no. 2, pp. 392–412, 2000.
[4] R. Loewy and V. Mehrmann, "A note on the symmetric recursive inverse eigenvalue problem," SIAM Journal on Matrix Analysis and Applications, vol. 25, no. 1, pp. 180–187, 2003.
[5] F. L. Li, X. Y. Hu, and L. Zhang, "Left and right inverse eigenpairs problem of skew-centrosymmetric matrices," Applied Mathematics and Computation, vol. 177, no. 1, pp. 105–110, 2006.
[6] F. L. Li, X. Y. Hu, and L. Zhang, "Left and right inverse eigenpairs problem of generalized centrosymmetric matrices and its optimal approximation problem," Applied Mathematics and Computation, vol. 212, no. 2, pp. 481–487, 2009.
[7] R. D. Hill and S. R. Waters, "On κ-real and κ-Hermitian matrices," Linear Algebra and its Applications, vol. 169, pp. 17–29, 1992.
[8] S. P. Irwin, "Matrices with multiple symmetry properties: applications of centro-Hermitian and per-Hermitian matrices," Linear Algebra and its Applications, vol. 284, no. 1–3, pp. 239–258, 1998.
[9] L. C. Biedenharn and J. D. Louck, Angular Momentum in Quantum Physics, vol. 8 of Encyclopedia of Mathematics and its Applications, Addison-Wesley, Reading, Mass, USA, 1981.
[10] W. M. Gibson and B. R. Pollard, Symmetry Principles in Elementary Particle Physics, Cambridge University Press, Cambridge, UK, 1976.
[11] W. F. Trench, "Hermitian, Hermitian R-symmetric, and Hermitian R-skew symmetric Procrustes problems," Linear Algebra and its Applications, vol. 387, pp. 83–98, 2004.

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 271978, 5 pages
http://dx.doi.org/10.1155/2013/271978

Research Article

Completing a 2×2 Block Matrix of Real Quaternions with a Partial Specified Inverse

Yong Lin1,2 and Qing-Wen Wang1

1 Department of Mathematics, Shanghai University, Shanghai 200444, China 2 School of Mathematics and Statistics, Suzhou University, Suzhou 234000, China

Correspondence should be addressed to Qing-Wen Wang; [email protected]

Received 4 December 2012; Revised 23 February 2013; Accepted 20 March 2013

Academic Editor: K. Sivakumar

Copyright © 2013 Y. Lin and Q.-W. Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper considers a completion problem of a nonsingular 2×2 block matrix over the real quaternion algebra H: Let m_1, m_2, n_1, n_2 be nonnegative integers, m_1 + m_2 = n_1 + n_2 = n > 0, and let A_12 ∈ H^{m_1×n_2}, A_21 ∈ H^{m_2×n_1}, A_22 ∈ H^{m_2×n_2}, and B_11 ∈ H^{n_1×m_1} be given. We determine necessary and sufficient conditions so that there exists a variant block entry matrix A_11 ∈ H^{m_1×n_1} such that A = (A_11 A_12; A_21 A_22) ∈ H^{n×n} is nonsingular and B_11 is the upper left block of a partitioning of A^{-1}. The general expression for A_11 is also obtained. Finally, a numerical example is presented to verify the theoretical findings.

1. Introduction

The problem of completing a block-partitioned matrix of a specified type with some of its blocks given has been studied by many authors. Fiedler and Markham [1] considered the following completion problem over the real number field R. Suppose m_1, m_2, n_1, n_2 are nonnegative integers, m_1 + m_2 = n_1 + n_2 = n > 0, A_11 ∈ R^{m_1×n_1}, A_12 ∈ R^{m_1×n_2}, A_21 ∈ R^{m_2×n_1}, and B_22 ∈ R^{n_2×m_2}. Determine a matrix A_22 ∈ R^{m_2×n_2} such that

A = (A_11 A_12; A_21 A_22) (1)

is nonsingular and B_22 is the lower right block of a partitioning of A^{-1}. This problem has the form of

(A_11 A_12; A_21 ?)^{-1} = (? ?; ? B_22), (2)

and the solution and the expression for A_22 were obtained in [1]. Dai [2] considered this form of completion problems with symmetric and symmetric positive definite matrices over R. Some other particular forms for 2×2 block matrices over R have also been examined (see, e.g., [3]), such as

(A_11 A_12; A_21 ?)^{-1} = (B_11 ?; ? ?),
(A_11 ?; ? ?)^{-1} = (? ?; ? B_22), (3)
(A_11 ?; ? A_22)^{-1} = (? B_12; B_21 ?).

The real quaternion matrices play a role in computer science, quantum physics, and so on (e.g., [4–6]). Quaternion matrices are receiving much attention as witnessed recently (e.g., [7–9]). Motivated by the work of [1, 10] and keeping such applications of quaternion matrices in view, in this paper we consider the following completion problem over the real quaternion algebra

H = {a_0 + a_1 i + a_2 j + a_3 k | i² = j² = k² = ijk = −1 and a_0, a_1, a_2, a_3 ∈ R}. (4)

Problem 1. Suppose m_1, m_2, n_1, n_2 are nonnegative integers, m_1 + m_2 = n_1 + n_2 = n > 0, and A_12 ∈ H^{m_1×n_2},

A_21 ∈ H^{m_2×n_1}, A_22 ∈ H^{m_2×n_2}, B_11 ∈ H^{n_1×m_1}. Find a matrix A_11 ∈ H^{m_1×n_1} such that

A = (A_11 A_12; A_21 A_22) ∈ H^{n×n} (5)

is nonsingular, and B_11 is the upper left block of a partitioning of A^{-1}. That is,

(? A_12; A_21 A_22)^{-1} = (B_11 ?; ? ?), (6)

where H^{m×n} denotes the set of all m×n matrices over H and A^{-1} denotes the inverse matrix of A.

Throughout, over the real quaternion algebra H, we denote the identity matrix with the appropriate size by I, the transpose of A by A^T, the rank of A by r(A), the conjugate transpose of A by A* = (Ā)^T, and a reflexive inverse of a matrix A over H by A^+, which satisfies simultaneously AA^+A = A and A^+AA^+ = A^+. Moreover, L_A = I − A^+A and R_A = I − AA^+, where A^+ is an arbitrary but fixed reflexive inverse of A. Clearly, L_A and R_A are idempotent, and each is a reflexive inverse of itself. R(A) denotes the right column space of the matrix A.

The rest of this paper is organized as follows. In Section 2, we establish some necessary and sufficient conditions to solve Problem 1 over H, and the general expression for A_11 is also obtained. In Section 3, we present a numerical example to illustrate the developed theory.

2. Main Results

In this section, we begin with the following lemmas.

Lemma 1 (singular-value decomposition [9]). Let A ∈ H^{m×n} be of rank r. Then there exist unitary quaternion matrices U ∈ H^{m×m} and V ∈ H^{n×n} such that

UAV = (D_r 0; 0 0), (7)

where D_r = diag(d_1, ..., d_r) and the d_j's are the positive singular values of A.

Let H_c^n denote the collection of column vectors with n components of quaternions and let A be an m×n quaternion matrix. Then the solutions of Ax = 0 form a subspace of H_c^n of dimension n(A). We have the following lemma.

Lemma 2. Let

(A_11 A_12; A_21 A_22) (8)

be a partitioning of a nonsingular matrix A ∈ H^{n×n}, and let

(B_11 B_12; B_21 B_22) (9)

be the corresponding (i.e., transpose) partitioning of A^{-1}. Then n(A_11) = n(B_22).

Proof. It is readily seen that

(B_22 B_21; B_12 B_11), (A_22 A_21; A_12 A_11) (10)

are inverse to each other, so it suffices to show that n(A_11) ≥ n(B_22).

If n(B_22) = 0, necessarily n(A_11) = 0 and we are finished. Let n(B_22) = c > 0; then there exists a matrix F with c right linearly independent columns such that B_22F = 0. Then, using

A_11B_12 + A_12B_22 = 0, (11)

we have

A_11B_12F = 0. (12)

From

A_21B_12 + A_22B_22 = I, (13)

we have

A_21B_12F = F. (14)

It follows that the rank r(B_12F) ≥ c. In view of (12), this implies

n(A_11) ≥ r(B_12F) ≥ c = n(B_22). (15)

Thus

n(A_11) = n(B_22). (16)

Lemma 3 (see [10]). Let A ∈ H^{m×n}, B ∈ H^{p×q}, and D ∈ H^{m×q} be known and X ∈ H^{n×p} unknown. Then the matrix equation

AXB = D (17)

is consistent if and only if

AA^+DB^+B = D. (18)

In that case, the general solution is

X = A^+DB^+ + L_A Y_1 + Y_2 R_B, (19)

where Y_1, Y_2 are any matrices with compatible dimensions over H.

By Lemma 1, let the singular value decompositions of the matrices A_22 and B_11 in Problem 1 be

A_22 = Q (Λ 0; 0 0) R*, (20)

B_11 = U (Σ 0; 0 0) V*, (21)

where Λ = diag(λ_1, λ_2, ..., λ_s) is a positive diagonal matrix, λ_i ≠ 0 (i = 1, ..., s) are the singular values of A_22, s = r(A_22); Σ = diag(σ_1, σ_2, ..., σ_r) is a positive diagonal matrix, σ_i ≠ 0 (i = 1, ..., r) are the singular values of B_11, and r = r(B_11); Q = (Q_1 Q_2) ∈ H^{m_2×m_2}, R = (R_1 R_2) ∈ H^{n_2×n_2}, U = (U_1 U_2) ∈ H^{n_1×n_1}, and V = (V_1 V_2) ∈ H^{m_1×m_1} are unitary quaternion matrices, where Q_1 ∈ H^{m_2×s}, R_1 ∈ H^{n_2×s}, U_1 ∈ H^{n_1×r}, and V_1 ∈ H^{m_1×r}.

Theorem 4. Problem 1 has a solution if and only if the following conditions are satisfied:

(a) r(A_12; A_22) = n_2,
(b) n_2 − r(A_22) = m_1 − r(B_11), that is, n_2 − s = m_1 − r,
(c) R(A_21B_11) ⊂ R(A_22),
(d) R(A_12* B_11*) ⊂ R(A_22*).

In that case, the general solution has the form of

A_11 = B_11^+ + A_12R ( Λ^{-1}Q_1*A_21U_1Σ  0;  H  −(V_2*A_12R_2)^{-1} ) V*B_11^+ + Y − YB_11B_11^+, (22)

where H is an arbitrary matrix in H^{(n_2−s)×r} and Y is an arbitrary matrix in H^{m_1×n_1}.

Proof. If there exists an m_1×n_1 matrix A_11 such that A = (A_11 A_12; A_21 A_22) is nonsingular and B_11 is the corresponding block of A^{-1}, then (a) is satisfied. From AB = BA = I, we have

A_21B_11 + A_22B_21 = 0, B_11A_12 + B_12A_22 = 0, (23)

so that (c) and (d) are satisfied. We also have

r(A_22) + n(A_22) = n_2, r(B_11) + n(B_11) = m_1. (24)

From Lemma 2, noticing that (A_11 A_12; A_21 A_22) is the corresponding partitioning of B^{-1}, we have

n(B_11) = n(A_22), (25)

implying that (b) is satisfied.

Conversely, from (c), we know that there exists a matrix K ∈ H^{n_2×m_1} such that

A_21B_11 = A_22K. (26)

Let

B_21 = −K. (27)

From (20), (21), and (26), we have

A_21 U (Σ 0; 0 0) V* = Q (Λ 0; 0 0) R* K. (28)

It follows that

Q* A_21 U (Σ 0; 0 0) V* V = Q* Q (Λ 0; 0 0) R* K V. (29)

This implies that

(Q_1*A_21U_1  Q_1*A_21U_2; Q_2*A_21U_1  Q_2*A_21U_2) (Σ 0; 0 0) = (Λ 0; 0 0) (R_1*KV_1  R_1*KV_2; R_2*KV_1  R_2*KV_2). (30)

Comparing corresponding blocks in (30), we obtain

Q_2* A_21 U_1 = 0. (31)

Let R*KV = K̂. From (29), (30), we have

K̂ = ( Λ^{-1}Q_1*A_21U_1Σ  0;  H  K_22 ), H ∈ H^{(n_2−s)×r}, K_22 ∈ H^{(n_2−s)×(m_1−r)}. (32)

In the same way, from (d), we can obtain

V_1* A_12 R_2 = 0. (33)

Notice that (A_12; A_22) in (a) is a full column rank matrix. By (20), (21), and (33), we have

(0 Q*; V* 0) (A_12; A_22) R = ( Λ 0; 0 0; V_1*A_12R_1  V_1*A_12R_2; V_2*A_12R_1  V_2*A_12R_2 ), (34)

so that

n_2 = r(A_12; A_22) = r((0 Q*; V* 0)(A_12; A_22)R) = r( Λ 0; 0 0; V_1*A_12R_1  V_1*A_12R_2; V_2*A_12R_1  V_2*A_12R_2 ) = r(Λ) + r(V_2*A_12R_2) = s + r(V_2*A_12R_2). (35)

It follows from (b) and (35) that V_2*A_12R_2 is a full column rank matrix, so it is nonsingular.

From AB = I, we have the following matrix equation:

A_11B_11 + A_12B_21 = I, (36)

that is,

A_11B_11 = I − A_12B_21, I ∈ H^{m_1×m_1}, (37)

where B_11, A_12 were given and B_21 = −K (from (27)). By Lemma 3, the matrix equation (37) has a solution if and only if

(I − A_12B_21) B_11^+ B_11 = I − A_12B_21. (38)

By (21), (27), (32), and (33), we have that (38) is equivalent to

(I + A_12K) V (Σ^{-1} 0; 0 0) U* U (Σ 0; 0 0) V* = I + A_12K. (39)

We simplify the equation above. The left hand side reduces to (I + A_12K)V_1V_1*, and so we have

A_12KV_1V_1* − A_12K = I − V_1V_1*. (40)

So,

A_12RK̂V* V_1V_1* − A_12RK̂V* = (V_1 V_2)(V_1*; V_2*) − V_1V_1*. (41)

This implies that

A_12RK̂(V_1*V_1; V_2*V_1)V_1* − A_12RK̂(V_1*; V_2*) = V_2V_2*, (42)

so that

A_12RK̂(I; 0)V_1* − A_12RK̂(V_1*; V_2*) = V_2V_2*. (43)

So,

−A_12RK̂(0; V_2*) = V_2V_2*, (44)

and hence,

−(A_12R_1  A_12R_2) ( Λ^{-1}Q_1*A_21U_1Σ  0;  H  K_22 ) (0; V_2*) = V_2V_2*. (45)

Finally, we obtain

A_12R_2K_22V_2* = −V_2V_2*. (46)

Multiplying both sides of (46) by V_2* from the left, considering (33) and the fact that V_2*A_12R_2 is nonsingular, we have

K_22 = −(V_2*A_12R_2)^{-1}. (47)

From Lemma 3, (38), and (47), Problem 1 has a solution and the general solution is

A_11 = B_11^+ + A_12R ( Λ^{-1}Q_1*A_21U_1Σ  0;  H  −(V_2*A_12R_2)^{-1} ) V*B_11^+ + Y − YB_11B_11^+, (48)

where H is an arbitrary matrix in H^{(n_2−s)×r} and Y is an arbitrary matrix in H^{m_1×n_1}.

3. An Example

In this section, we give a numerical example to illustrate the theoretical results.

Example 5. Consider Problem 1 with the parameter matrices as follows:

A_12 = ( 2 + j   (1/2)k;  −k   1 + (1/2)j ),

A_21 = ( 3/2 + (1/2)i − j − (1/2)k   1/2;  (1/2)j + (1/2)k   3/2 + (1/2)i ), (49)

A_22 = ( 2   i;  2j   k ), B_11 = ( 1   i;  j   k ).

It is easy to show that (c), (d) are satisfied, and that

n_2 = r(A_12; A_22) = 2, n_2 − r(A_22) = m_1 − r(B_11) = 0, (50)

so (a), (b) are satisfied too. Therefore, we have

B_11^+ = ( 1/2   −(1/2)j;  −(1/2)i   −(1/2)k ), (51)

A_22 = Q (Λ 0; 0 0) R*, B_11 = U (Σ 0; 0 0) V*,

where

Q = (1/√2)( 1  i;  j  k ), Λ = ( 2√2  0;  0  √2 ), R = ( 1  0;  0  1 ), U = (1/√2)( 1  i;  j  k ), Σ = ( √2  0;  0  √2 ), V = ( 1  0;  0  1 ). (52)

We also have

Q_1 = (1/√2)( 1  i;  j  k ), R_1 = ( 1  0;  0  1 ), U_1 = (1/√2)( 1  i;  j  k ), V_1 = ( 1  0;  0  1 ). (53)
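The quaternion arithmetic in this example can be spot-checked numerically. The sketch below (an illustration, not part of the paper) uses the standard representation of a quaternion a + bi + cj + dk as the 2×2 complex matrix ((a+bi, c+di), (−c+di, a−bi)), under which conjugate transposition of a quaternion matrix corresponds to conjugate transposition of its complex image; it verifies B_11 B_11* = 2I, which gives B_11^+ = B_11*/2 as in (51):

```python
import numpy as np

def q(a, b, c, d):
    """Complex 2x2 image of the quaternion a + b*i + c*j + d*k."""
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

# Sanity check of the defining relations i^2 = j^2 = k^2 = ijk = -1.
i, j, k = q(0, 1, 0, 0), q(0, 0, 1, 0), q(0, 0, 0, 1)
one = q(1, 0, 0, 0)
assert np.allclose(i @ i, -one) and np.allclose(j @ j, -one)
assert np.allclose(k @ k, -one) and np.allclose(i @ j @ k, -one)

# B11 = (1 i; j k) as a 4x4 complex block matrix.
B11 = np.block([[one, i],
                [j,   k]])

# B11 * B11^* = 2I, so B11 is nonsingular with B11^{-1} = B11^*/2, as in (51).
assert np.allclose(B11 @ B11.conj().T, 2 * np.eye(4))
```

The same representation can be used to multiply out any of the quaternion matrix products appearing in the proof of Theorem 4.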

By Theorem 4, for an arbitrary matrix Y ∈ H^{2×2}, we have

A_11 = B_11^+ + A_12R(Λ^{-1}Q_1*A_21U_1Σ)V*B_11^+ + Y − YB_11B_11^+
     = ( 3/2 + (1/4)j + (1/4)k      3/4 + (1/4)i − (3/2)j;
         1/2 − i + (1/4)j − (1/4)k  −1/4 − (3/4)i − (1/2)j − k ), (54)

it follows that

A = ( 3/2 + (1/4)j + (1/4)k      3/4 + (1/4)i − (3/2)j       2 + j   (1/2)k;
      1/2 − i + (1/4)j − (1/4)k  −1/4 − (3/4)i − (1/2)j − k  −k      1 + (1/2)j;
      3/2 + (1/2)i − j − (1/2)k  1/2                          2       i;
      (1/2)j + (1/2)k            3/2 + (1/2)i                 2j      k ),

A^{-1} = ( 1    i    −1             −1;
           j    k    0              −1;
           −1   0    3/4 − (1/2)j   3/4;
           −1   −1   −(1/2)i        −(1/2)i − (1/2)j − (1/2)k ). (55)

The results verify the theoretical findings of Theorem 4.

Acknowledgments

The authors would like to give many thanks to the referees and Professor K. C. Sivakumar for their valuable suggestions and comments, which resulted in a great improvement of the paper. This research was supported by Grants from the Key Project of Scientific Research Innovation Foundation of Shanghai Municipal Education Commission (13ZZ080), the National Natural Science Foundation of China (11171205), the Natural Science Foundation of Shanghai (11ZR1412500), the Discipline Project at the corresponding level of Shanghai (A. 13-0101-12-005), and Shanghai Leading Academic Discipline Project (J50101).

References

[1] M. Fiedler and T. L. Markham, "Completing a matrix when certain entries of its inverse are specified," Linear Algebra and Its Applications, vol. 74, pp. 225–237, 1986.
[2] H. Dai, "Completing a symmetric 2×2 block matrix and its inverse," Linear Algebra and Its Applications, vol. 235, pp. 235–245, 1996.
[3] W. W. Barrett, M. E. Lundquist, C. R. Johnson, and H. J. Woerdeman, "Completing a block diagonal matrix with a partially prescribed inverse," Linear Algebra and Its Applications, vol. 223/224, pp. 73–87, 1995.
[4] S. L. Adler, Quaternionic Quantum Mechanics and Quantum Fields, Oxford University Press, New York, NY, USA, 1994.
[5] A. Razon and L. P. Horwitz, "Uniqueness of the scalar product in the tensor product of quaternion Hilbert modules," Journal of Mathematical Physics, vol. 33, no. 9, pp. 3098–3104, 1992.
[6] J. W. Shuai, "The quaternion NN model: the identification of colour images," Acta Comput. Sinica, vol. 18, no. 5, pp. 372–379, 1995 (Chinese).
[7] F. Zhang, "On numerical range of normal matrices of quaternions," Journal of Mathematical and Physical Sciences, vol. 29, no. 6, pp. 235–251, 1995.
[8] F. Z. Zhang, Permanent Inequalities and Quaternion Matrices [Ph.D. thesis], University of California, Santa Barbara, Calif, USA, 1993.
[9] F. Zhang, "Quaternions and matrices of quaternions," Linear Algebra and Its Applications, vol. 251, pp. 21–57, 1997.
[10] Q.-W. Wang, "The general solution to a system of real quaternion matrix equations," Computers & Mathematics with Applications, vol. 49, no. 5-6, pp. 665–675, 2005.

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 178209, 7 pages
http://dx.doi.org/10.1155/2013/178209

Research Article

Fuzzy Approximate Solution of Positive Fully Fuzzy Linear Matrix Equations

Xiaobin Guo1 and Dequan Shang2

1 College of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, China 2 Department of Public Courses, Gansu College of Traditional Chinese Medicine, Lanzhou 730000, China

Correspondence should be addressed to Dequan Shang; [email protected]

Received 22 November 2012; Accepted 19 January 2013

Academic Editor: Panayiotis J. Psarrakos

Copyright © 2013 X. Guo and D. Shang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The fuzzy matrix equations Ã⊗X̃⊗B̃ = C̃, in which Ã, B̃, and C̃ are m×m, n×n, and m×n nonnegative LR fuzzy number matrices, respectively, are investigated. The fuzzy matrix system is extended into three crisp systems of linear matrix equations according to the arithmetic operations of LR fuzzy numbers. Based on the pseudoinverse of a matrix, the fuzzy approximate solution of the original fuzzy system is obtained by solving the crisp linear matrix systems. In addition, the existence condition of a nonnegative fuzzy solution is discussed. Two examples are calculated to illustrate the proposed method.

1. Introduction

Since many real-world engineering systems are too complex to be defined in precise terms, imprecision is often involved in any engineering design process. Fuzzy systems have an essential role in this fuzzy modeling, which can formulate uncertainty in the actual environment. In many matrix equations, some or all of the system parameters are vague or imprecise, and fuzzy mathematics is a better tool than crisp mathematics for modeling these problems; hence, solving a fuzzy matrix equation is becoming more important.

The concept of fuzzy numbers and arithmetic operations with these numbers were first introduced and investigated by Zadeh [1, 2], Dubois and Prade [3], and Nahmias [4]. A different approach to fuzzy numbers and the structure of fuzzy number spaces was given by Puri and Ralescu [5], Goetschel, Jr. and Voxman [6], and Wu and Ma [7, 8].

In the past decades, many researchers have studied fuzzy linear equations such as fuzzy linear systems (FLS), dual fuzzy linear systems (DFLS), general fuzzy linear systems (GFLS), fully fuzzy linear systems (FFLS), dual fully fuzzy linear systems (DFFLS), and general dual fuzzy linear systems (GDFLS). These works were performed mainly by Friedman et al. [9, 10], Allahviranloo et al. [11–17], Abbasbandy et al. [18–21], Wang and Zheng [22, 23], and Dehghan et al. [24, 25]. The general method they applied is that the fuzzy linear equations were converted to a crisp function system of linear equations with high order according to the embedding principles and algebraic operations of fuzzy numbers. Then the fuzzy solution of the original fuzzy linear system was derived from solving the crisp function linear systems. However, for a fuzzy matrix equation, which always has wide use in control theory and control engineering, few works have been done in the past. In 2009, Allahviranloo et al. [26] discussed the fuzzy linear matrix equations (FLME) of the form AX̃B = C̃, in which the matrices A and B are known m×m and n×n real matrices, respectively, and C̃ is a given m×n fuzzy matrix. By using the parametric form of fuzzy numbers, they derived necessary and sufficient conditions for the existence of fuzzy solutions and designed a numerical procedure for calculating the solutions of the fuzzy matrix equations. In 2011, Guo et al. [27–29] investigated a class of fuzzy matrix equations Ax̃ = B̃ by means of the block Gaussian elimination method, studied the least squares solutions of the inconsistent fuzzy matrix equation Ax̃ = B̃ by using generalized inverses of the matrix, and discussed fuzzy symmetric solutions of fuzzy matrix equations AX̃ = B̃. What is more, there are two shortcomings in the above fuzzy systems. The first is that in these fuzzy linear systems and fuzzy matrix systems the fuzzy elements were denoted by triangular fuzzy numbers, so the extended model equations always contain a parameter r, 0 ≤ r ≤ 1, which makes their computation, especially numerical implementation, inconvenient in some sense. The other one is that the weak fuzzy solution of fuzzy linear systems Ãx̃ = b̃ sometimes does not exist; see [30].

To make the multiplication of fuzzy numbers easy and to handle fully fuzzy systems, Dubois and Prade [3] introduced the LR fuzzy number in 1978. We know that triangular fuzzy numbers and trapezoidal fuzzy numbers [31] are just special cases of LR fuzzy numbers. In 2006, Dehghan et al. [25] first discussed computational methods for fully fuzzy linear systems Ãx̃ = b̃ whose coefficient matrix and right-hand side vector are LR fuzzy numbers. In 2012, Allahviranloo et al. studied LR fuzzy linear systems [32] by linear programming with equality constraints. In the same year, Otadi and Mosleh considered the nonnegative fuzzy solution [33] of fully fuzzy matrix equations ÃX̃ = B̃ by employing LR linear programming with equality constraints. Recently, Guo and Shang [34] investigated the fuzzy approximate solution of LR fuzzy Sylvester matrix equations ÃX + XB̃ = C̃. In this paper we propose a general model for solving the fuzzy linear matrix equation Ã⊗X̃⊗B̃ = C̃, where Ã, B̃, and C̃ are m×m, n×n, and m×n nonnegative LR fuzzy number matrices, respectively. The model is proposed in this way: we extend the fuzzy linear matrix system into a system of linear matrix equations according to the arithmetic operations of LR fuzzy numbers. The LR fuzzy solution of the original matrix equation is derived from solving the crisp systems of linear matrix equations. The structure of this paper is organized as follows.

In Section 2, we recall LR fuzzy numbers and present the concept of the fully fuzzy linear matrix equation. The computing model for the positive fully fuzzy linear matrix equation is proposed in detail, and the fuzzy approximate solution of the fuzzy linear matrix equation is obtained by using the pseudoinverse in Section 3. Some examples are given to illustrate our method in Section 4, and the conclusion is drawn in Section 5.

2. Preliminaries

2.1. The LR Fuzzy Number

Definition 1 (see [1]). A fuzzy number is a fuzzy set like u: R → I = [0, 1] which satisfies the following:

(1) u is upper semicontinuous;
(2) u is fuzzy convex, that is, u(λx + (1 − λ)y) ≥ min{u(x), u(y)} for all x, y ∈ R, λ ∈ [0, 1];
(3) u is normal, that is, there exists x_0 ∈ R such that u(x_0) = 1;
(4) supp u = {x ∈ R | u(x) > 0} is the support of u, and its closure cl(supp u) is compact.

Let E^1 be the set of all fuzzy numbers on R.

Definition 2 (see [3]). A fuzzy number M̃ is said to be an LR fuzzy number if

μ_{M̃}(x) = { L((m − x)/α), x ≤ m, α > 0;  R((x − m)/β), x ≥ m, β > 0, } (1)

where m, α, and β are called the mean value, left spread, and right spread of M̃, respectively. The function L(·), which is called the left shape function, satisfies

(1) L(x) = L(−x);
(2) L(0) = 1 and L(1) = 0;
(3) L(x) is nonincreasing on [0, ∞).

The definition of a right shape function R(·) is similar to that of L(·).

Clearly, M̃ = (m, α, β)_LR is positive (negative) if and only if m − α > 0 (m + β < 0).

Also, two LR fuzzy numbers M̃ = (m, α, β)_LR and Ñ = (n, γ, δ)_LR are said to be equal if and only if m = n, α = γ, and β = δ.

Definition 3 (see [5]). For arbitrary LR fuzzy numbers M̃ = (m, α, β)_LR and Ñ = (n, γ, δ)_LR, we have the following.

(1) Addition:

M̃ ⊕ Ñ = (m, α, β)_LR ⊕ (n, γ, δ)_LR = (m + n, α + γ, β + δ)_LR. (2)

(2) Multiplication:

(i) If M̃ > 0 and Ñ > 0, then

M̃ ⊗ Ñ = (m, α, β)_LR ⊗ (n, γ, δ)_LR ≅ (mn, mγ + nα, mδ + nβ)_LR. (3)

(ii) If M̃ < 0 and Ñ > 0, then

M̃ ⊗ Ñ = (m, α, β)_RL ⊗ (n, γ, δ)_LR ≅ (mn, nα − mδ, nβ − mγ)_RL. (4)

(iii) If M̃ < 0 and Ñ < 0, then

M̃ ⊗ Ñ = (m, α, β)_RL ⊗ (n, γ, δ)_LR ≅ (mn, −mδ − nβ, −mγ − nα)_RL. (5)

(3) Scalar multiplication:

λ × M̃ = λ × (m, α, β)_LR = { (λm, λα, λβ)_LR, λ ≥ 0;  (λm, −λβ, −λα)_RL, λ < 0. } (6)
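The LR arithmetic of Definition 3 is mechanical and easy to encode. A minimal sketch (the class name and representation are illustrative, not from the paper), restricted to the positive-number product formula (3):

```python
from dataclasses import dataclass

@dataclass
class LR:
    m: float      # mean value
    alpha: float  # left spread
    beta: float   # right spread

    def __add__(self, other):            # formula (2)
        return LR(self.m + other.m, self.alpha + other.alpha, self.beta + other.beta)

    def __mul__(self, other):            # formula (3), valid for positive LR numbers only
        assert self.m - self.alpha > 0 and other.m - other.alpha > 0
        return LR(self.m * other.m,
                  self.m * other.alpha + other.m * self.alpha,
                  self.m * other.beta + other.m * self.beta)

a, b = LR(2, 1, 1), LR(3, 1, 2)
print(a + b)   # LR(m=5, alpha=2, beta=3)
print(a * b)   # LR(m=6, alpha=5, beta=7)
```

Note that the product in (3) is only an approximation (≅), which is what makes a direct crisp extension of the fuzzy system tractable.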

2.2. The LR Fuzzy Matrix where 𝐴̃ = (𝐴, 𝑀,, 𝑁)≥0 𝐵̃ = (𝐵, 𝐸,,and 𝐹)≥0 𝐶=̃ (𝐶, 𝐺, 𝐻) ≥0 ̃ . Definition 4. Amatrix𝐴=(𝑎̃𝑖𝑗 ) is called an LR fuzzy matrix, 𝑎̃ 𝐴̃ if each element 𝑖𝑗 of is an LR fuzzy number. 3. Method for Solving FFLME 𝐴̃ will be positive (negative) and denoted by 𝐴>0(̃ 𝐴<̃ ̃ First, we extend the fuzzy linear matrix system (9)intothree 0) if each element 𝑎̃𝑖𝑗 of 𝐴 is positive (negative), where 𝑎̃𝑖𝑗 = (𝑎 ,𝑎𝑙 ,𝑎𝑟 ) systems of linear matrix equations according to the LR fuzzy 𝑖𝑗 𝑖𝑗 𝑖𝑗 LR.Uptotherestofthispaper,weusepositive number and its arithmetic operations. LR fuzzy numbers and formulas given in Definition 3.For ̃ example, we represent 𝑚×𝑛LR fuzzy matrix 𝐴=(𝑎̃𝑖𝑗 ),that Theorem 7. The fuzzy linear matrix system (9) can be 𝑎̃ =(𝑎 ,𝛼 ,𝛽 ) 𝐴̃ = (𝐴, 𝑀, 𝑁) extended into the following model: 𝑖𝑗 𝑖𝑗 𝑖𝑗 𝑖𝑗 LR with new notation ,where 𝐴=(𝑎) 𝑀=(𝛼) 𝑁=(𝛽) 𝑚×𝑛 𝑖𝑗 , 𝑖𝑗 and 𝑖𝑗 are three crisp 𝐴𝑋𝐵 =𝐶, matrices. In particular, an 𝑛 dimensions LR fuzzy numbers 𝑙 𝑟 𝑙 vector 𝑥̃ can be denoted by (𝑥, 𝑥 ,𝑥 ),where𝑥=(𝑥𝑖), 𝑥 = 𝐴𝑋𝐸 + 𝐴𝑌𝐵 + 𝑀𝑋𝐵=𝐺, (11) (𝑥𝑙) 𝑥𝑟 =(𝑥𝑟) 𝑛 𝑖 and 𝑖 are three dimensions crisp vectors. 𝐴𝑋𝐹 + 𝐴𝑍𝐵 + 𝑁𝑋𝐵=𝐻. 𝐴=(̃ 𝑎̃ ) 𝐵=(̃ ̃𝑏 ) 𝑚×𝑛 Definition 5. Let 𝑖𝑗 and 𝑖𝑗 be two and Proof. We denote 𝐴̃ = (𝐴, 𝑀, 𝑁), 𝐵̃ = (𝐵, 𝐸, 𝐹) and 𝐶=̃ ̃ ̃ ̃ ̃ ̃ 𝑛×𝑝fuzzy matrices; we define 𝐴⊗𝐵=𝐶=(𝑐𝑖𝑗 ) which is an (𝐶, 𝐺, 𝐻), and assume 𝑋 = (𝑋, 𝑌,𝑍),then ≥0 𝑚×𝑝fuzzy matrix, where 𝐴̃𝑋̃𝐵=̃ (𝐴, 𝑀, 𝑁) ⊗ (𝑋, 𝑌, 𝑍) ⊗ (𝐵, 𝐸, 𝐹) ⊕ ̃ ̃𝑐𝑖𝑗 = ∑ 𝑎̃𝑖𝑘 ⊗ 𝑏𝑘𝑗 . (7) = (𝐴𝑋,𝐴𝑌+𝑀𝑋,𝐴𝑍+𝑁𝑋) ⊗ (𝐵, 𝐸, 𝐹) 𝑘=1,2,...,𝑛 = (𝐴𝑋𝐵, 𝐴𝑋𝐸 + 𝐴𝑌𝐵 + 𝑀𝑋𝐵,) 𝐴𝑋𝐹+𝐴𝑍𝐵𝑁𝑋𝐵 2.3. The Fully Fuzzy Linear Matrix Equation = (𝐶, 𝐺, 𝐻) , Definition 6. The matrix system: (12) according to multiplication of nonnegative LR fuzzy numbers 𝑎̃11 𝑎̃12 ⋅⋅⋅ 𝑎̃1𝑚 𝑥̃11 𝑥̃12 ⋅⋅⋅ 𝑥̃1𝑛 of Definition 2. Thus we obtain a model for solving FFLME 𝑎̃21 𝑎̃22 ⋅⋅⋅ 𝑎̃2𝑚 𝑥̃21 𝑥̃22 ⋅⋅⋅ 𝑥̃2𝑛 ( )⊗( ) (9) as follows: ...... 𝐴𝑋𝐵 =𝐶, 𝑎̃𝑚1 𝑎̃𝑚2 ⋅⋅⋅ 𝑎̃𝑚𝑚 𝑥̃𝑚1 𝑥̃𝑚2 ⋅⋅⋅ 𝑥̃𝑚𝑛 ̃ ̃ ̃ 𝐴𝑋𝐸 + 𝐴𝑌𝐵 + 𝑀𝑋𝐵=𝐺, (13) 𝑏11 𝑏12 ⋅⋅⋅ 𝑏1𝑛 ̃ ̃ ̃ 𝑏21 𝑏22 ⋅⋅⋅ 𝑏2𝑛 𝐴𝑋𝐹 + 𝐴𝑍𝐵 + 𝑁𝑋𝐵=𝐻. ⊗( . . . . 
) (8) . . . . ̃𝑏 ̃𝑏 ⋅⋅⋅ ̃𝑏 𝑛1 𝑛2 𝑛𝑛 Secondly, in order to solve the fuzzy linear matrix equa- tion (9), we need to consider the crisp systems of linear ̃𝑐11 ̃𝑐12 ⋅⋅⋅ ̃𝑐1𝑛 matrix equation (11). Supposing 𝐴 and 𝐵 are nonsingular crisp ̃𝑐21 ̃𝑐22 ⋅⋅⋅ ̃𝑐2𝑛 =( ), matrices, we have ...... 𝑋=𝐴−1𝐶𝐵−1, ̃𝑐𝑚1 ̃𝑐𝑚2 ⋅⋅⋅ ̃𝑐𝑚𝑛 −1 −1 −1 ̃ 𝑌=𝐴 𝐺−𝑋𝐸𝐵 −𝐴 𝑀𝑋, (14) where 𝑎̃𝑖𝑗 , 1≤𝑖, 𝑗≤𝑚, 𝑏𝑖𝑗 , 1≤𝑖, 𝑗≤𝑛,and̃𝑐𝑖𝑗 , 1≤𝑖≤𝑚, 1≤𝑗≤𝑛 are LR fuzzy numbers, is called an LR fully fuzzy 𝑍=𝐴−1𝐻−𝑋𝐹𝐵−1 −𝐴−1𝑁𝑋, linear matrix equation (FFLME). Using matrix notation, we have that is, −1 −1 𝐴⊗̃ 𝑋⊗̃ 𝐵=̃ 𝐶.̃ (9) 𝑋=𝐴 𝐶𝐵 ,

−1 −1 −1 −1 −1 −1 −1 A fuzzy numbers matrix 𝑌=𝐴 𝐺−𝐴 𝐶𝐵 𝐸𝐵 −𝐴 𝑀𝐴 𝐶𝐵 , (15)

̃ −1 −1 −1 −1 −1 −1 −1 𝑋=(𝑥𝑖𝑗 ,𝑦𝑖𝑗 ,𝑧𝑖𝑗 ) , 1≤𝑖≤𝑚,1≤𝑗≤𝑛 𝑍=𝐴 𝐻−𝐴 𝐶𝐵 𝐹𝐵 −𝐴 𝑁𝐴 𝐶𝐵 . LR (10) is called an LR fuzzy approximate solution of fully fuzzy linear Definition 8. Let 𝑋=(𝑋,𝑌,𝑍)̃ be an LR fuzzy matrix. If ̃ matrix equation (8)if𝑋 satisfies (9). (𝑋, 𝑌,𝑍) is an exact solution of (11)suchthat𝑋≥0, 𝑌≥0, Up to the rest of this paper, we will discuss the nonnega- 𝑍≥0,and𝑋−𝑌≥0;wecall𝑋̃ = (𝑋, 𝑌,𝑍) anonnegative tive solution 𝑋̃ = (𝑋, 𝑌,𝑍) ≥0 of FFLME 𝐴⊗̃ 𝑋⊗̃ 𝐵=̃ 𝐶̃, LR fuzzy approximate solution of (9). 4 Journal of Applied Mathematics
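The extended model (11) can be exercised numerically. The sketch below (with illustrative data, not from the paper) forward-computes (C, G, H) from a chosen crisp triple (X, Y, Z) and then recovers that triple by direct inversion; the Y and Z recovery lines are algebraic rearrangements of the second and third equations of (11), assuming A and B are nonsingular:

```python
import numpy as np

# Crisp parts of A~ = (A, M, N) and B~ = (B, E, F); A and B nonsingular.
A = np.array([[2., 1.], [1., 2.]]); M = np.array([[1., 0.], [0., 1.]]); N = np.array([[1., 1.], [0., 0.]])
B = np.array([[1., 2.], [2., 1.]]); E = np.array([[0., 1.], [1., 0.]]); F = np.array([[1., 1.], [0., 1.]])

# A chosen nonnegative solution triple (X, Y, Z).
X = np.array([[1., 2.], [2., 1.]]); Y = np.array([[1., 0.], [1., 1.]]); Z = np.array([[0., 1.], [1., 0.]])

# Forward model (11): build the right-hand sides C, G, H.
C = A @ X @ B
G = A @ X @ E + A @ Y @ B + M @ X @ B
H = A @ X @ F + A @ Z @ B + N @ X @ B

# Recovery by direct inversion (the nonsingular case, cf. (14)).
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
X2 = Ai @ C @ Bi
Y2 = Ai @ (G - A @ X2 @ E - M @ X2 @ B) @ Bi
Z2 = Ai @ (H - A @ X2 @ F - N @ X2 @ B) @ Bi

assert np.allclose(X2, X) and np.allclose(Y2, Y) and np.allclose(Z2, Z)
```

When A or B is singular, the inverses are replaced by pseudoinverses, which is the minimal norm least squares route taken in Theorem 10 below.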

Theorem 9. Let Ã = (A, M, N), B̃ = (B, E, F), and C̃ = (C, G, H) be three nonnegative fuzzy matrices, and let A and B each be the product of a permutation matrix by a diagonal matrix with positive diagonal entries. Moreover, let GB⁻¹ ≥ CB⁻¹E + MA⁻¹C, HB⁻¹ ≥ CB⁻¹F + NA⁻¹C, and C + CB⁻¹E + MA⁻¹C ≥ GB⁻¹. Then the system Ã ⊗ X̃ ⊗ B̃ = C̃ has nonnegative fuzzy solutions.

Proof. Our hypotheses on A and B imply that A⁻¹ and B⁻¹ exist and are nonnegative matrices. Thus X = A⁻¹CB⁻¹ ≥ 0. On the other hand, because GB⁻¹ ≥ CB⁻¹E + MA⁻¹C and HB⁻¹ ≥ CB⁻¹F + NA⁻¹C, with Y = A⁻¹(GB⁻¹ − CB⁻¹E − MA⁻¹C)B⁻¹ and Z = A⁻¹(HB⁻¹ − CB⁻¹F − NA⁻¹C)B⁻¹ we have Y ≥ 0 and Z ≥ 0. Thus X̃ = (X, Y, Z) is a fuzzy matrix which satisfies Ã ⊗ X̃ ⊗ B̃ = C̃. Since X − Y = A⁻¹(C − GB⁻¹ + CB⁻¹E + MA⁻¹C)B⁻¹, the positivity property of X̃ can be obtained from the condition C + CB⁻¹E + MA⁻¹C ≥ GB⁻¹.

When A or B is a singular crisp matrix, the following result applies.

Theorem 10 (see [35]). For the linear matrix equation AXB = C, where A ∈ R^{m×m}, B ∈ R^{n×n}, and C ∈ R^{m×n},

X = A†CB† (16)

is its minimal norm least squares solution.

By the pseudoinverse of matrices, we solve model (11) and obtain its minimal norm least squares solution as follows:

X = A†CB†, Y = A†G − XEB† − A†MX, Z = A†H − XFB† − A†NX, (17)

that is,

X = A†CB†, Y = A†G − A†CB†EB† − A†MA†CB†, Z = A†H − A†CB†FB† − A†NA†CB†. (18)

Definition 11. Let X̃ = (X, Y, Z) be an LR fuzzy matrix. If (X, Y, Z) is a minimal norm least squares solution of (11) such that X ≥ 0, Y ≥ 0, Z ≥ 0, and X − Y ≥ 0, we call X̃ = (X, Y, Z) a nonnegative LR fuzzy minimal norm least squares solution of (9).

At last, we give a sufficient condition for a nonnegative LR fuzzy minimal norm least squares solution of FFLME (9) in the same way.

Theorem 12. Let A† and B† be nonnegative matrices. Moreover, let GB† ≥ CB†E + MA†C, HB† ≥ CB†F + NA†C, and C + CB†E + MA†C ≥ GB†. Then the system Ã ⊗ X̃ ⊗ B̃ = C̃ has nonnegative fuzzy minimal norm least squares solutions.

Proof. Since A† and B† are nonnegative matrices, we have X = A†CB† ≥ 0. Now that GB† ≥ CB†E + MA†C and HB† ≥ CB†F + NA†C, with Y = A†(GB† − CB†E − MA†C)B† and Z = A†(HB† − CB†F − NA†C)B†, we have Y ≥ 0 and Z ≥ 0. Thus X̃ = (X, Y, Z) is a fuzzy matrix which satisfies Ã ⊗ X̃ ⊗ B̃ = C̃. Since X − Y = A†(C − GB† + CB†E + MA†C)B†, the nonnegativity property of X̃ can be obtained from the condition C + CB†E + MA†C ≥ GB†.

The following theorems give some results for such S and S† to be nonnegative. As usual, (⋅)⊤ denotes the transpose of a matrix.

Theorem 13 (see [36]). The inverse of a nonnegative matrix A is nonnegative if and only if A is a generalized permutation matrix.

Theorem 14 (see [37]). Let A be an m×m nonnegative matrix with rank r. Then the following assertions are equivalent:

(a) A† ≥ 0.

(b) There exists a permutation matrix P such that PA has the form

PA = [S₁; S₂; ⋮ ; Sᵣ; O], (19)

where each Sᵢ has rank 1, the rows of Sᵢ are orthogonal to the rows of Sⱼ whenever i ≠ j, and the zero matrix may be absent.

(c) A† = [KP⊤ KQ⊤; KQ⊤ KP⊤] for some positive diagonal matrix K. In this case,

(P + Q)† = K(P + Q)⊤, (P − Q)† = K(P − Q)⊤. (20)

4. Numerical Examples

Example 15. Consider the fully fuzzy linear matrix system

[(2,1,1)_LR (1,0,1)_LR; (1,0,0)_LR (2,1,0)_LR] ⊗ [x̃11 x̃12 x̃13; x̃21 x̃22 x̃23] ⊗ [(1,0,1)_LR (2,1,1)_LR (1,1,0)_LR; (2,1,0)_LR (1,0,0)_LR (2,1,1)_LR; (1,0,0)_LR (2,1,1)_LR (1,0,1)_LR] = [(17,12,23)_LR (22,22,29)_LR (17,16,28)_LR; (19,15,13)_LR (23,23,16)_LR (19,16,17)_LR]. (21)

By Theorem 7, the model of the above fuzzy linear matrix system is made of the following three crisp systems of linear matrix equations:

[2 1; 1 2] [x11 x12 x13; x21 x22 x23] [1 2 1; 2 1 2; 1 2 1] = [17 22 17; 19 23 19],

[2 1; 1 2] [y11 y12 y13; y21 y22 y23] [1 2 1; 2 1 2; 1 2 1] + [2 1; 1 2] [x11 x12 x13; x21 x22 x23] [0 1 1; 1 0 1; 0 1 0] + [1 0; 0 1] [x11 x12 x13; x21 x22 x23] [1 2 1; 2 1 2; 1 2 1] = [12 22 16; 15 23 20],

[2 1; 1 2] [z11 z12 z13; z21 z22 z23] [1 2 1; 2 1 2; 1 2 1] + [2 1; 1 2] [x11 x12 x13; x21 x22 x23] [1 1 0; 0 0 1; 0 1 1] + [1 1; 0 0] [x11 x12 x13; x21 x22 x23] [1 2 1; 2 1 2; 1 2 1] = [23 29 28; 13 16 17]. (22)

Now that the matrix B is singular, according to formula (18) the solutions of the above three systems of linear matrix equations are as follows:

X = A†CB† = [2 1; 1 2]† [17 22 17; 19 23 19] [1 2 1; 2 1 2; 1 2 1]† = [1.5000 1.0000 1.5000; 1.5000 2.0000 1.5000],

Y = A†G − XEB† − A†MX = [1.0333 0.6667 0.8725; 1.1250 1.6553 1.2578], (23)

Z = A†H − XFB† − A†NX = [8.3333 11.6667 9.3333; 1.4167 1.3333 2.4167].

By Definition 11, we know that the original fuzzy linear matrix equations have a nonnegative LR fuzzy solution

X̃ = [x̃11 x̃12 x̃13; x̃21 x̃22 x̃23] = [(1.5000, 1.0333, 8.3333)_LR (1.0000, 0.6667, 11.6667)_LR (1.5000, 0.8725, 9.3333)_LR; (1.5000, 1.1250, 1.4167)_LR (2.0000, 1.6553, 1.3333)_LR (1.5000, 1.2578, 2.4167)_LR], (24)

since X ≥ 0, Y ≥ 0, Z ≥ 0, and X − Y ≥ 0.
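The mean-value computation of Example 15 can be reproduced numerically. The sketch below (an illustration over the reals; `numpy.linalg.pinv` computes the Moore–Penrose inverse, so `pinv(A) @ C @ pinv(B)` is the minimal norm least squares solution of Theorem 10) uses the crisp matrices read off (21)–(22):

```python
import numpy as np

# Crisp mean-value system of Example 15: A X B = C, with B singular (rank 2),
# so formula (16) gives the minimal norm least squares solution X = A† C B†.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 2.0], [1.0, 2.0, 1.0]])  # singular
C = np.array([[17.0, 22.0, 17.0], [19.0, 23.0, 19.0]])

X = np.linalg.pinv(A) @ C @ np.linalg.pinv(B)
print(np.round(X, 4))               # [[1.5 1.  1.5] [1.5 2.  1.5]]
print(np.allclose(A @ X @ B, C))    # True: the mean system is consistent
```

This confirms that X = [1.5 1 1.5; 1.5 2 1.5] not only is the minimal norm least squares solution but actually satisfies AXB = C exactly, even though B has no inverse.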

Example 16. Consider the following fuzzy matrix system:

[(1,1,0)_LR (2,0,1)_LR; (2,0,1)_LR (1,0,0)_LR] ⊗ [x̃11 x̃12; x̃21 x̃22] ⊗ [(1,0,1)_LR (2,1,0)_LR; (2,1,1)_LR (3,1,2)_LR] = [(15,14,18)_LR (25,24,24)_LR; (12,10,12)_LR (20,17,15)_LR]. (25)

By Theorem 7, the model of the above fuzzy linear matrix system is made of the following three crisp systems of linear matrix equations:

[1 2; 2 1] [x11 x12; x21 x22] [1 2; 2 3] = [15 25; 12 20],

[1 2; 2 1] [y11 y12; y21 y22] [1 2; 2 3] + [1 2; 2 1] [x11 x12; x21 x22] [0 1; 1 1] + [1 0; 0 0] [x11 x12; x21 x22] [1 2; 2 3] = [14 24; 10 17],

[1 2; 2 1] [z11 z12; z21 z22] [1 2; 2 3] + [1 2; 2 1] [x11 x12; x21 x22] [1 0; 1 2] + [0 1; 1 0] [x11 x12; x21 x22] [1 2; 2 3] = [18 24; 12 15]. (26)
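Since A and B are both invertible here, the three crisp systems can be checked numerically by direct elimination. The sketch below assumes the mean matrices A, B, C and spread matrices M, N, E, F, G, H read off (25)–(26), and solves each system in closed form:

```python
import numpy as np

# Mean matrices and spread matrices of Example 16 (read off the LR display).
A = np.array([[1.0, 2.0], [2.0, 1.0]]); M = np.array([[1.0, 0.0], [0.0, 0.0]]); N = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[1.0, 2.0], [2.0, 3.0]]); E = np.array([[0.0, 1.0], [1.0, 1.0]]); F = np.array([[1.0, 0.0], [1.0, 2.0]])
C = np.array([[15.0, 25.0], [12.0, 20.0]])
G = np.array([[14.0, 24.0], [10.0, 17.0]])
H = np.array([[18.0, 24.0], [12.0, 15.0]])

Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
X = Ai @ C @ Bi                             # mean system:        A X B = C
Y = Ai @ (G - A @ X @ E - M @ X @ B) @ Bi   # left-spread system  (second of (26))
Z = Ai @ (H - A @ X @ F - N @ X @ B) @ Bi   # right-spread system (third of (26))
print(np.round(X), np.round(Y), np.round(Z))
```

The computed matrices agree with the crisp solutions reported in (27): X = [1 1; 2 2], Y = [0 1; 0 1], Z = [0 0; 1 0].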

By the same way, we obtain that the solutions of the above three systems of linear matrix equations are as follows:

X = A⁻¹CB⁻¹ = [1 1; 2 2],

Y = A⁻¹G − XEB⁻¹ − A⁻¹MX = [0 1; 0 1], (27)

Z = A⁻¹H − XFB⁻¹ − A⁻¹NX = [0 0; 1 0].

Since X ≥ 0, Y ≥ 0, Z ≥ 0, and X − Y ≥ 0, we know that the original fuzzy linear matrix equations have a nonnegative LR fuzzy solution given by

X̃ = [x̃11 x̃12; x̃21 x̃22] = [(1.000, 0.000, 0.000)_LR (1.000, 1.000, 0.000)_LR; (2.000, 0.000, 1.000)_LR (2.000, 1.000, 0.000)_LR]. (28)

5. Conclusion

In this work we presented a model for solving fuzzy linear matrix equations Ã ⊗ X̃ ⊗ B̃ = C̃ in which Ã and B̃ are m×m and n×n fuzzy matrices, respectively, and C̃ is an m×n arbitrary LR fuzzy numbers matrix. The model was made of three crisp systems of linear equations which determine the mean value and the left and right spreads of the solution. The LR fuzzy approximate solution of the fuzzy linear matrix equation was derived by solving the crisp systems of linear matrix equations. In addition, the existence condition of a strong LR fuzzy solution was studied. Numerical examples showed that our method is feasible for solving this type of fuzzy matrix equations. Based on LR fuzzy numbers and their operations, we can investigate all kinds of fully fuzzy matrix equations in the future.

Acknowledgments

The work is supported by the Natural Scientific Funds of China (no. 71061013) and the Youth Research Ability Project of Northwest Normal University (NWNU-LKQN-1120).

References

[1] L. A. Zadeh, "Fuzzy sets," Information and Computation, vol. 8, pp. 338–353, 1965.
[2] L. A. Zadeh, "The concept of a linguistic variable and its application to approximate reasoning. I," Information Science, vol. 8, pp. 199–249, 1975.
[3] D. Dubois and H. Prade, "Operations on fuzzy numbers," International Journal of Systems Science, vol. 9, no. 6, pp. 613–626, 1978.
[4] S. Nahmias, "Fuzzy variables," Fuzzy Sets and Systems, vol. 1, no. 2, pp. 97–110, 1978.
[5] M. L. Puri and D. A. Ralescu, "Differentials of fuzzy functions," Journal of Mathematical Analysis and Applications, vol. 91, no. 2, pp. 552–558, 1983.
[6] R. Goetschel, Jr. and W. Voxman, "Elementary fuzzy calculus," Fuzzy Sets and Systems, vol. 18, no. 1, pp. 31–43, 1986.
[7] C. X. Wu and M. Ma, "Embedding problem of fuzzy number space. I," Fuzzy Sets and Systems, vol. 44, no. 1, pp. 33–38, 1991.
[8] C. X. Wu and M. Ma, "Embedding problem of fuzzy number space. III," Fuzzy Sets and Systems, vol. 46, no. 2, pp. 281–286, 1992.
[9] M. Friedman, M. Ming, and A. Kandel, "Fuzzy linear systems," Fuzzy Sets and Systems, vol. 96, no. 2, pp. 201–209, 1998.
[10] M. Ma, M. Friedman, and A. Kandel, "Duality in fuzzy linear systems," Fuzzy Sets and Systems, vol. 109, no. 1, pp. 55–58, 2000.
[11] T. Allahviranloo and M. Ghanbari, "On the algebraic solution of fuzzy linear systems based on interval theory," Applied Mathematical Modelling, vol. 36, no. 11, pp. 5360–5379, 2012.
[12] T. Allahviranloo and S. Salahshour, "Fuzzy symmetric solutions of fuzzy linear systems," Journal of Computational and Applied Mathematics, vol. 235, no. 16, pp. 4545–4553, 2011.
[13] T. Allahviranloo and M. Ghanbari, "A new approach to obtain algebraic solution of interval linear systems," Soft Computing, vol. 16, no. 1, pp. 121–133, 2012.
[14] T. Allahviranloo, "Numerical methods for fuzzy system of linear equations," Applied Mathematics and Computation, vol. 155, no. 2, pp. 493–502, 2004.
[15] T. Allahviranloo, "Successive over relaxation iterative method for fuzzy system of linear equations," Applied Mathematics and Computation, vol. 162, no. 1, pp. 189–196, 2005.
[16] T. Allahviranloo, "The Adomian decomposition method for fuzzy system of linear equations," Applied Mathematics and Computation, vol. 163, no. 2, pp. 553–563, 2005.
[17] R. Nuraei, T. Allahviranloo, and M. Ghanbari, "Finding an inner estimation of the solution set of a fuzzy linear system," Applied Mathematical Modelling, vol. 37, no. 7, pp. 5148–5161, 2013.
[18] S. Abbasbandy, R. Ezzati, and A. Jafarian, "LU decomposition method for solving fuzzy system of linear equations," Applied Mathematics and Computation, vol. 172, no. 1, pp. 633–643, 2006.
[19] S. Abbasbandy, A. Jafarian, and R. Ezzati, "Conjugate gradient method for fuzzy symmetric positive definite system of linear equations," Applied Mathematics and Computation, vol. 171, no. 2, pp. 1184–1191, 2005.
[20] S. Abbasbandy, M. Otadi, and M. Mosleh, "Minimal solution of general dual fuzzy linear systems," Chaos, Solitons & Fractals, vol. 37, no. 4, pp. 1113–1124, 2008.
[21] B. Asady, S. Abbasbandy, and M. Alavi, "Fuzzy general linear systems," Applied Mathematics and Computation, vol. 169, no. 1, pp. 34–40, 2005.
[22] K. Wang and B. Zheng, "Inconsistent fuzzy linear systems," Applied Mathematics and Computation, vol. 181, no. 2, pp. 973–981, 2006.
[23] B. Zheng and K. Wang, "General fuzzy linear systems," Applied Mathematics and Computation, vol. 181, no. 2, pp. 1276–1286, 2006.
[24] M. Dehghan and B. Hashemi, "Iterative solution of fuzzy linear systems," Applied Mathematics and Computation, vol. 175, no. 1, pp. 645–674, 2006.
[25] M. Dehghan, B. Hashemi, and M. Ghatee, "Solution of the fully fuzzy linear systems using iterative techniques," Chaos, Solitons & Fractals, vol. 34, no. 2, pp. 316–336, 2007.

[26] T. Allahviranloo, N. Mikaeilvand, and M. Barkhordary, "Fuzzy linear matrix equation," Fuzzy Optimization and Decision Making, vol. 8, no. 2, pp. 165–177, 2009.
[27] X. Guo and Z. Gong, "Block Gaussian elimination methods for fuzzy matrix equations," International Journal of Pure and Applied Mathematics, vol. 58, no. 2, pp. 157–168, 2010.
[28] Z. Gong and X. Guo, "Inconsistent fuzzy matrix equations and its fuzzy least squares solutions," Applied Mathematical Modelling, vol. 35, no. 3, pp. 1456–1469, 2011.
[29] X. Guo and D. Shang, "Fuzzy symmetric solutions of fuzzy matrix equations," Advances in Fuzzy Systems, vol. 2012, Article ID 318069, 9 pages, 2012.
[30] T. Allahviranloo, M. Ghanbari, A. A. Hosseinzadeh, E. Haghi, and R. Nuraei, "A note on 'Fuzzy linear systems'," Fuzzy Sets and Systems, vol. 177, pp. 87–92, 2011.
[31] C.-T. Yeh, "Weighted semi-trapezoidal approximations of fuzzy numbers," Fuzzy Sets and Systems, vol. 165, pp. 61–80, 2011.
[32] T. Allahviranloo, F. H. Lotfi, M. K. Kiasari, and M. Khezerloo, "On the fuzzy solution of LR fuzzy linear systems," Applied Mathematical Modelling, vol. 37, no. 3, pp. 1170–1176, 2013.
[33] M. Otadi and M. Mosleh, "Solving fully fuzzy matrix equations," Applied Mathematical Modelling, vol. 36, no. 12, pp. 6114–6121, 2012.
[34] X. B. Guo and D. Q. Shang, "Approximate solutions of LR fuzzy Sylvester matrix equations," Journal of Applied Mathematics. In press.
[35] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Springer-Verlag, New York, NY, USA, 2nd edition, 2003.
[36] A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, NY, USA, 1979.
[37] R. J. Plemmons, "Regular nonnegative matrices," Proceedings of the American Mathematical Society, vol. 39, pp. 26–32, 1973.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 514984, 9 pages, http://dx.doi.org/10.1155/2013/514984

Research Article Ranks of a Constrained Hermitian Matrix Expression with Applications

Shao-Wen Yu

Department of Mathematics, East China University of Science and Technology, Shanghai 200237, China

Correspondence should be addressed to Shao-Wen Yu; [email protected]

Received 12 November 2012; Accepted 4 January 2013

Academic Editor: Yang Zhang

Copyright © 2013 Shao-Wen Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We establish the formulas of the maximal and minimal ranks of the quaternion Hermitian matrix expression C₄ − A₄XA₄*, where X is a Hermitian solution to the quaternion matrix equations A₁X = C₁, XB₁ = C₂, and A₃XA₃* = C₃. As applications, we give a new necessary and sufficient condition for the existence of a Hermitian solution to the system of matrix equations A₁X = C₁, XB₁ = C₂, A₃XA₃* = C₃, and A₄XA₄* = C₄, which was investigated by Wang and Wu, 2010, by rank equalities. In addition, extremal ranks of the generalized Hermitian Schur complement C₄ − A₄A₃∼A₄* with respect to a Hermitian g-inverse A₃∼ of A₃, which is a common solution to the quaternion matrix equations A₁X = C₁ and XB₁ = C₂, are also considered.

1. Introduction

Throughout this paper, we denote the real number field by R, the complex number field by C, and the set of all m×n matrices over the quaternion algebra

H = {a₀ + a₁i + a₂j + a₃k | i² = j² = k² = ijk = −1, a₀, a₁, a₂, a₃ ∈ R} (1)

by H^{m×n}. The identity matrix with the appropriate size is denoted by I; the column right space and the row left space of a matrix A over H by R(A) and N(A), respectively; and the dimension of R(A) by dim R(A). A Hermitian g-inverse of a matrix A is a matrix X = A∼ which satisfies AA∼A = A and X = X*. The Moore–Penrose inverse of a matrix A over H is the matrix A† which satisfies the four Penrose equations AA†A = A, A†AA† = A†, (AA†)* = AA†, and (A†A)* = A†A. In this case A† is unique and (A†)* = (A*)†. Moreover, R_A and L_A stand for the two projectors L_A = I − A†A and R_A = I − AA† induced by A. Clearly, R_A and L_A are idempotent and Hermitian, and R_A = L_{A*}. By [1], for a quaternion matrix A, dim R(A) = dim N(A); dim R(A) is called the rank of the quaternion matrix A and denoted by r(A).

Mitra [2] investigated the system of matrix equations

A₁X = C₁, XB₁ = C₂. (2)

Khatri and Mitra [3] gave necessary and sufficient conditions for the existence of the common Hermitian solution to (2) and presented an explicit expression for the general Hermitian solution to (2) by generalized inverses. Using the singular value decomposition (SVD), Yuan [4] investigated the general symmetric solution of (2) over the real number field R. By the SVD, Dai and Lancaster [5] considered the symmetric solution of the equation

AXA* = C (3)

over R, which was motivated and illustrated by an inverse problem of vibration theory. Groß [6] and Tian and Liu [7] gave the solvability conditions for a Hermitian solution and its expressions for (3) over C in terms of generalized inverses, respectively. Liu, Tian, and Takane [8] investigated ranks of Hermitian and skew-Hermitian solutions to the matrix equation (3). By using the generalized SVD, Chang and Wang [9] examined the symmetric solution to the matrix equations

A₃XA₃* = C₃, A₄XA₄* = C₄ (4)

over R. Note that all the matrix equations mentioned above are special cases of

A₁X = C₁, XB₁ = C₂, A₃XA₃* = C₃, A₄XA₄* = C₄. (5)

Wang and Wu [10] gave some necessary and sufficient conditions for the existence of the common Hermitian solution to (5) for operators between Hilbert C*-modules by generalized inverses and range inclusion of matrices. In view of the complicated computations of the generalized inverses of matrices, we naturally hope to establish a more practical necessary and sufficient condition for system (5) over the quaternion algebra to have a Hermitian solution by rank equalities.

As is known to us, solutions to matrix equations and ranks of solutions to matrix equations have been considered previously by many authors [10–34], and extremal ranks of matrix expressions can be used to characterize their rank invariance, nonsingularity, range inclusion, and solvability conditions of matrix equations. Tian and Cheng [35] investigated the maximal and minimal ranks of A − BXC with respect to X with applications; Tian [36] gave the maximal and minimal ranks of A₁ − B₁XC₁ subject to a consistent matrix equation B₂XC₂ = A₂. Tian and Liu [7] established the solvability conditions for (4) to have a Hermitian solution over C by the ranks of coefficient matrices. Wang and Jiang [20] derived extreme ranks of (skew-)Hermitian solutions to a quaternion matrix equation AXA* + BYB* = C. Wang, Yu, and Lin [31] derived the extremal ranks of C₄ − A₄XB₄ subject to a consistent system of matrix equations

A₁X = C₁, XB₁ = C₂, A₃XB₃ = C₃ (6)

over H and gave a new solvability condition for the system

A₁X = C₁, XB₁ = C₂, A₃XB₃ = C₃, A₄XB₄ = C₄. (7)

In matrix theory and its applications, there are many matrix expressions that have symmetric patterns or involve Hermitian (skew-Hermitian) matrices. For example,

A − BXB*, A − BX ± X*B*, A − BXB* − CYC*, A − BXC ± (BXC)*, (8)

where A = ±A*, B, and C are given and X and Y are variable matrices. In recent papers [7, 8, 37, 38], Liu and Tian considered some maximization and minimization problems on the ranks of the Hermitian matrix expressions (8).

Define a Hermitian matrix expression

f(X) = C₄ − A₄XA₄*, (9)

where C₄ = C₄*. We observe that, by investigating extremal ranks of (9), where X is a Hermitian solution to the system of matrix equations

A₁X = C₁, XB₁ = C₂, A₃XA₃* = C₃, (10)

a new necessary and sufficient condition for system (5) to have a Hermitian solution can be given by rank equalities, which is more practical than the one given by generalized inverses and range inclusion of matrices.

It is well known that the Schur complement is one of the most important matrix expressions in matrix theory; there have been many results in the literature on Schur complements and their applications [39–41]. Tian [36, 42] has investigated the maximal and minimal ranks of Schur complements with applications.

Motivated by the work mentioned above, in this paper we investigate the extremal ranks of the quaternion Hermitian matrix expression (9) subject to the consistent system of quaternion matrix equations (10) and its applications. In Section 2, we derive the formulas of extremal ranks of (9) with respect to a Hermitian solution of (10). As applications, in Section 3 we give a new necessary and sufficient condition for the existence of a Hermitian solution to system (5) by rank equalities. In Section 4, we derive extremal ranks of a generalized Hermitian Schur complement subject to (2). We also consider the rank invariance problem in Section 5.

2. Extremal Ranks of (9) Subject to System (10)

Corollary 8 in [10] over Hilbert C*-modules can be changed into the following lemma over H.

Lemma 1. Let A₁, C₁ ∈ H^{m×n}, B₁, C₂ ∈ H^{n×s}, A₃ ∈ H^{r×n}, C₃ ∈ H^{r×r} be given, and F = B₁*L_{A₁}, M = SL_F, S = A₃L_{A₁}, D = C₂* − B₁*A₁†C₁, J = A₁†C₁ + F†D, G = C₃ − A₃(J + L_{A₁}L_F J*)A₃*; then the following statements are equivalent:

(1) the system (10) has a Hermitian solution;

(2) C₃ = C₃*,

A₁C₂ = C₁B₁, A₁C₁* = C₁A₁*, B₁*C₂ = C₂*B₁, (11)

R_{A₁}C₁ = 0, R_F D = 0, R_M G = 0; (12)

(3) C₃ = C₃*, the equalities in (11) hold, and

r[A₁ C₁] = r(A₁), r[A₁ C₁; B₁* C₂*] = r[A₁; B₁*], r[A₁ C₁A₃*; B₁* C₂*A₃*; A₃ C₃] = r[A₁; B₁*; A₃]. (13)

In that case, the general Hermitian solution of (10) can be expressed as

X = J + L_{A₁}L_F J* + L_{A₁}L_F M†G(M†)* L_F L_{A₁} + L_{A₁}L_F L_M V L_F L_{A₁} + L_{A₁}L_F V* L_M L_F L_{A₁}, (14)

where V is a Hermitian matrix over H with compatible size.
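The four Penrose equations and the projectors L_A, R_A defined above can be illustrated numerically. The sketch below works over R rather than H (numpy has no quaternion dtype, so this is only a real-matrix illustration of the definitions, not the paper's quaternion setting):

```python
import numpy as np

# A rectangular matrix and its Moore-Penrose inverse over R.
A = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 2.0]])
P = np.linalg.pinv(A)

# The four Penrose equations characterizing A†:
assert np.allclose(A @ P @ A, A)        # A A† A  = A
assert np.allclose(P @ A @ P, P)        # A† A A† = A†
assert np.allclose((A @ P).T, A @ P)    # (A A†)* = A A†
assert np.allclose((P @ A).T, P @ A)    # (A† A)* = A† A

# The induced projectors L_A = I - A†A and R_A = I - AA† are idempotent:
L = np.eye(3) - P @ A
R = np.eye(2) - A @ P
assert np.allclose(L @ L, L) and np.allclose(R @ R, R)
print("Penrose equations and projector identities verified")
```

The same identities hold verbatim over H with the conjugate transpose in place of `.T`.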

Lemma 2 (see Lemma 2.4 in [24]). Let A ∈ H^{m×n}, B ∈ H^{m×k}, C ∈ H^{l×n}, D ∈ H^{j×k}, and E ∈ H^{l×i}. Then the following rank equalities hold:

(a) r(CL_A) = r[A; C] − r(A),
(b) r[B AL_C] = r[B A; 0 C] − r(C),
(c) r(R_B A) = r[A B] − r(B),
(d) r[A BL_D; R_E C 0] = r[A B 0; C 0 E; 0 D 0] − r(D) − r(E).

Lemma 2 plays an important role in simplifying the ranks of various block matrices.

Liu and Tian [38] have given the following lemma over a field; the result can be generalized to H.

Lemma 3. Let A = ±A* ∈ H^{m×m}, B ∈ H^{m×n}, and C ∈ H^{p×m} be given; then

max_{X ∈ H^{n×p}} r[A − BXC ∓ (BXC)*] = min{ r[A B C*], r[A B; B* 0], r[A C*; C 0] }, (15)

min_{X ∈ H^{n×p}} r[A − BXC ∓ (BXC)*] = 2r[A B C*] + max{s₁, s₂}, where

s₁ = r[A B; B* 0] − 2r[A B C*; B* 0 0], s₂ = r[A C*; C 0] − 2r[A B C*; C 0 0]. (16)

For Theorem 4 below, the quantities a and b are

a = r[C₄ A₄; C₂*A₄* B₁*; C₁A₄* A₁] − r[B₁*; A₁],

b = r[0 A₄* A₃* B₁ A₁*; A₄ C₄ 0 0 0; A₃ 0 −C₃ −A₃C₂ −A₃C₁*; B₁* 0 −C₂*A₃* −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A₃* −C₁B₁ −C₁A₁*] − 2r[A₃; B₁*; A₁], (19)

and the corresponding minimal rank is

min r[f(X)] = 2r[C₄ A₄; C₂*A₄* B₁*; C₁A₄* A₁] + r[0 A₄* A₃* B₁ A₁*; A₄ C₄ 0 0 0; A₃ 0 −C₃ −A₃C₂ −A₃C₁*; B₁* 0 −C₂*A₃* −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A₃* −C₁B₁ −C₁A₁*] − 2r[0 A₄* B₁ A₁*; A₄ C₄ 0 0; A₃ 0 −A₃C₂ −A₃C₁*; B₁* 0 −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁B₁ −C₁A₁*]. (20)
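The rank equalities of Lemma 2 are classical (Marsaglia–Styan type) identities. They can be checked numerically over R (an illustration only; the lemma itself is stated over H):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # 4x5, rank 3
B = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 5))
rank = np.linalg.matrix_rank

# Lemma 2(a): r(C L_A) = r([A; C]) - r(A), with L_A = I - A†A.
LA = np.eye(5) - np.linalg.pinv(A) @ A
lhs_a = rank(C @ LA)
rhs_a = rank(np.vstack([A, C])) - rank(A)

# Lemma 2(c): r(R_B A) = r([A  B]) - r(B), with R_B = I - B B†.
RB = np.eye(4) - B @ np.linalg.pinv(B)
lhs_c = rank(RB @ A)
rhs_c = rank(np.hstack([A, B])) - rank(B)

print(lhs_a == rhs_a, lhs_c == rhs_c)   # True True
```

These are exactly the reductions used repeatedly in the proof of Theorem 4 to strip the projectors L and R off block matrices.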

If R(B) ⊆ R(C*), then

max_X r[A − BXC − (BXC)*] = min{ r[A C*], r[A B; B* 0] },
min_X r[A − BXC − (BXC)*] = 2r[A C*] + r[A B; B* 0] − 2r[A B; C 0]. (17)

Now we consider the extremal ranks of the matrix expression (9) subject to the consistent system (10).

Theorem 4. Let A₁, C₁, B₁, C₂, A₃, and C₃ be defined as in Lemma 1, and let C₄ ∈ H^{t×t} and A₄ ∈ H^{t×n}. Then the extremal ranks of the quaternion matrix expression f(X) defined by (9) subject to system (10) are the following:

max r[f(X)] = min{a, b}, (18)

with a and b given in (19) and the minimal rank given in (20).

Proof. By Lemma 1, the general Hermitian solution of the system (10) can be expressed as

X = J + L_{A₁}L_F J* + L_{A₁}L_F M†G(M†)* L_F L_{A₁} + L_{A₁}L_F L_M V L_F L_{A₁} + L_{A₁}L_F V* L_M L_F L_{A₁}, (21)

where V is a Hermitian matrix over H with appropriate size. Substituting (21) into (9) yields

f(X) = C₄ − A₄(J + L_{A₁}L_F J* + L_{A₁}L_F M†G(M†)* L_F L_{A₁})A₄* − A₄L_{A₁}L_F L_M V L_F L_{A₁}A₄* − A₄L_{A₁}L_F V* L_M L_F L_{A₁}A₄*. (22)

Put

C₄ − A₄(J + L_{A₁}L_F J* + L_{A₁}L_F M†G(M†)* L_F L_{A₁})A₄* = A, J + L_{A₁}L_F J* + L_{A₁}L_F M†G(M†)* L_F L_{A₁} = J′, A₄L_{A₁}L_F L_M = N, L_F L_{A₁}A₄* = P; (23)

then

f(X) = A − NVP − (NVP)*. (24)

Note that A = A* and R(N) ⊆ R(P*). Thus, applying (17) to (24), we get

max_V r[f(X)] = min{ r[A P*], r[A N; N* 0] },
min_V r[f(X)] = 2r[A P*] + r[A N; N* 0] − 2r[A N; P 0]. (25)

Now we simplify the ranks of the block matrices in (25). In view of Lemma 2, block Gaussian elimination, (11), (12), and (23), we have

r(F) = r(B₁*L_{A₁}) = r[B₁*; A₁] − r(A₁),

r(M) = r(SL_F) = r[A₃L_{A₁}; B₁*L_{A₁}] − r(F) = r[A₃; B₁*; A₁] − r(A₁) − r(F),

r[A P*] = r[C₄ − A₄J′A₄* P*] = ⋯ = r[C₄ A₄; C₂*A₄* B₁*; C₁A₄* A₁] − r[B₁*; A₁],

r[A N; N* 0] = r[C₄ − A₄J′A₄* A₄L_{A₁}L_F L_M; (A₄L_{A₁}L_F L_M)* 0] = ⋯ = r[0 A₄* A₃* B₁ A₁*; A₄ C₄ 0 0 0; A₃ 0 −C₃ −A₃C₂ −A₃C₁*; B₁* 0 −C₂*A₃* −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A₃* −C₁B₁ −C₁A₁*] − 2r[A₃; B₁*; A₁].

Similarly,

r[A N; P 0] = r[0 A₄* B₁ A₁*; A₄ C₄ 0 0; A₃ 0 −A₃C₂ −A₃C₁*; B₁* 0 −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁B₁ −C₁A₁*] − r[A₃; B₁*; A₁] − r[B₁*; A₁]. (26)

Corollary 6. Suppose that the matrix equation A₃XA₃* = C₃ is consistent; then the extremal ranks of the quaternion matrix expression f(X) defined by (9) subject to A₃XA₃* = C₃ are the following:

max r[f(X)] = min{ r[C₄ A₄], r[0 A₄* A₃*; A₄ C₄ 0; A₃ 0 −C₃] − 2r(A₃) },

min r[f(X)] = 2r[C₄ A₄] + r[0 A₄* A₃*; A₄ C₄ 0; A₃ 0 −C₃] − 2r[0 A₄*; A₄ C₄; A₃ 0]. (29)

Substituting (26) into (25) yields (18) and (20).

In Theorem 4, letting C₄ vanish and A₄ be I with appropriate size, respectively, we have the following.

3. A Practical Solvability Condition for

Hermitian Solution to System (5)

In this section, we use Theorem 4 to give a necessary and sufficient condition for the existence of a Hermitian solution to system (5) by rank equalities.

Corollary 5. Assume that A₁, C₁ ∈ H^{m×n}, B₁, C₂ ∈ H^{n×s}, A₃ ∈ H^{r×n}, and C₃ ∈ H^{r×r} are given; then the maximal and minimal ranks of the Hermitian solution X to the system (10) can be expressed as

max r(X) = min{a, b}, (27)

where a, b, and the corresponding minimal rank are given in (28).

Theorem 7. Let A₁, C₁ ∈ H^{m×n}, B₁, C₂ ∈ H^{n×s}, A₃ ∈ H^{r×n}, C₃ ∈ H^{r×r}, A₄ ∈ H^{t×n}, and C₄ ∈ H^{t×t} be given; then the system (5) has a Hermitian solution if and only if C₃ = C₃*, (11), (13) hold, and the following equalities are all satisfied:

r[A₄ C₄] = r(A₄), (30)

r[C₄ A₄; C₂*A₄* B₁*; C₁A₄* A₁] = r[A₄; B₁*; A₁], (31)

r[0 A₄* A₃* B₁ A₁*; A₄ C₄ 0 0 0; A₃ 0 −C₃ −A₃C₂ −A₃C₁*; B₁* 0 −C₂*A₃* −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A₃* −C₁B₁ −C₁A₁*] = 2r[A₄; A₃; B₁*; A₁]. (32)

The quantities in Corollary 5 are

a = n + r[C₂*; C₁] − r[B₁*; A₁],

b = 2n + r[C₃ A₃C₂ A₃C₁*; C₂*A₃* C₂*B₁ C₂*A₁*; C₁A₃* C₁B₁ C₁A₁*] − 2r[B₁*; A₁],

min r(X) = 2r[C₂*; C₁] + r[C₃ A₃C₂ A₃C₁*; C₂*A₃* C₂*B₁ C₂*A₁*; C₁A₃* C₁B₁ C₁A₁*] − 2r[A₃C₂ A₃C₁*; C₂*B₁ C₂*A₁*; C₁B₁ C₁A₁*]. (28)

In Theorem 4, assuming that A₁, B₁, C₁, and C₂ vanish, we obtain Corollary 6 above.

Proof. It is obvious that the system (5) has a Hermitian solution if and only if the system (10) has a Hermitian solution and

min r[f(X)] = 0, (33)

where f(X) is defined by (9) subject to system (10). Let X₀ be a Hermitian solution to the system (5); then X₀ is a Hermitian solution to system (10) and X₀ satisfies A₄X₀A₄* = C₄. Hence Lemma 1 yields C₃ = C₃*, (11), (13), and (30). It follows from
Extremal Ranks of Schur Complement Conversely, assume that 𝐶3 =𝐶3 ,(11), (13) hold; then by Lemma 1,system(10) have Hermitian solution. By (20), (31)- Subject to (2) (32), and As is well known, for a given block matrix ∗ ∗ 0𝐴4 𝐵1 𝐴1 [𝐴4 𝐶4 00] 𝐴4 [ ∗] 𝐴4 [𝐴3 0−𝐴3𝐶2 −𝐴3𝐶 ] [𝐴 ] 𝑟 [ 1 ] ≥𝑟[ 3] +𝑟[𝐵∗ ] 𝐴𝐵 [ ∗ ∗ ∗ ∗] [𝐵∗ ] 1 (35) 𝑀=[ ∗ ], (41) [𝐵1 0−𝐶2 𝐵1 −𝐶2 𝐴1] 1 𝐴 𝐵 𝐷 [ ] 𝐴 [ 1] 𝐴 0−𝐶𝐵 −𝐶 𝐴∗ [ 1] [ 1 1 1 1 1 ] we can get where 𝐴 and 𝐷 are Hermitian quaternion matrices with appropriate sizes, then the Hermitian Schur complement of 𝑟 [𝑓 (𝑋)] ≤0. min (36) 𝐴 in 𝑀 is defined as However,

S_A = D − B*A∼B, (42)

where A∼ is a Hermitian g-inverse of A, that is, A∼ ∈ {X | AXA = A, X = X*}.

However,

min r[f(X)] ≥ 0. (37)

Hence (33) holds, implying that the system (5) has a Hermitian solution.

By Theorem 7, we can also get the following.

Corollary 8. Suppose that A₃, C₃, A₄, and C₄ are those in Theorem 7; then the quaternion matrix equations A₃XA₃* = C₃ and A₄XA₄* = C₄ have a common Hermitian solution if and only if (30) holds and the following equalities are satisfied:

r[A₃ C₃] = r(A₃), r[0 A₄* A₃*; A₄ C₄ 0; A₃ 0 −C₃] = 2r[A₃; A₄]. (38)

Corollary 9. Suppose that A₁, C₁ ∈ H^{m×n}, B₁, C₂ ∈ H^{n×s}, and A, B ∈ H^{n×n} are Hermitian. Then A and B have a common Hermitian g-inverse which is a solution to the system (2) if and only if (11) holds and the rank equalities (39)-(40) are all satisfied.

Now we use Theorem 4 to establish the extremal ranks of S_A given by (42) with respect to A∼ which is a solution to system (2).

Theorem 10. Suppose A₁, C₁ ∈ H^{m×n}, B₁, C₂ ∈ H^{n×s}, D ∈ H^{t×t}, B ∈ H^{n×t}, and A ∈ H^{n×n} are given and system (2) is consistent; then the extreme ranks of S_A given by (42) with respect to A∼ satisfying A₁A∼ = C₁, A∼B₁ = C₂ are the following:

max r(S_A) = min{a, b}, (43)

where

a = r[D B*; C₂*B B₁*; C₁B A₁] − r[B₁*; A₁],

b = r[0 B A B₁ A₁*; B* D 0 0 0; A 0 −A −AC₂ −AC₁*; B₁* 0 −C₂*A −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A −C₁B₁ −C₁A₁*] − 2r[A; B₁*; A₁],

min r(S_A) = 2r[D B*; C₂*B B₁*; C₁B A₁] + r[0 B A B₁ A₁*; B* D 0 0 0; A 0 −A −AC₂ −AC₁*; B₁* 0 −C₂*A −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A −C₁B₁ −C₁A₁*] − 2r[0 B B₁ A₁*; B* D 0 0; A 0 −AC₂ −AC₁*; B₁* 0 −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁B₁ −C₁A₁*]. (44)

Proof. It is obvious that

max (min) of r(D − B*A∼B) over {A∼ | A₁A∼ = C₁, A∼B₁ = C₂} = max (min) of r(D − B*XB) over {X | A₁X = C₁, XB₁ = C₂, AXA = A}. (45)

Thus, in Theorem 4 and its proof, letting A₃ = A₃* = C₃ = A, A₄ = B*, and C₄ = D, we can easily get the proof.

In Theorem 10, let A₁, C₁, B₁, and C₂ vanish. Then we can easily get the following.

Corollary 11. The extreme ranks of S_A given by (42) with respect to A∼ are the following:

max r(S_A) = min{ r[D B*], r[0 B A; B* D 0; A 0 −A] − 2r(A) },

min r(S_A) = 2r[D B*] + r[0 B A; B* D 0; A 0 −A] − 2r[0 B; B* D; A 0]. (46)

5. The Rank Invariance of (9)

As another application of Theorem 4, in this section we consider the rank invariance of the matrix expression (9) with respect to the Hermitian solution of system (10).

Theorem 12. Suppose that (10) has a Hermitian solution; then the rank of f(X) defined by (9) with respect to the Hermitian solution of (10) is invariant if and only if

r[0 A₄* B₁ A₁*; A₄ C₄ 0 0; A₃ 0 −A₃C₂ −A₃C₁*; B₁* 0 −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁B₁ −C₁A₁*] + r[A₃; B₁*; A₁] = r[0 A₄* A₃* B₁ A₁*; A₄ C₄ 0 0 0; A₃ 0 −C₃ −A₃C₂ −A₃C₁*; B₁* 0 −C₂*A₃* −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A₃* −C₁B₁ −C₁A₁*] + r[B₁*; A₁], (47)

or

r[0 A₄* B₁ A₁*; A₄ C₄ 0 0; A₃ 0 −A₃C₂ −A₃C₁*; B₁* 0 −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁B₁ −C₁A₁*] = r[C₄ A₄; C₂*A₄* B₁*; C₁A₄* A₁] + r[A₃; B₁*; A₁]. (48)

Proof. It is obvious that the rank of f(X) with respect to the Hermitian solution of system (10) is invariant if and only if

max r[f(X)] − min r[f(X)] = 0. (49)

By (49), Theorem 4, and simplifications, we can get (47) and (48).
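For intuition, the Hermitian Schur complement (42) and the classical rank additivity r(M) = r(A) + r(S_A) for invertible A can be checked numerically. The sketch below works over R with A∼ = pinv(A) (for a Hermitian A, the Moore–Penrose inverse is one admissible Hermitian g-inverse); the paper's setting of H and general g-inverses is not reproduced here:

```python
import numpy as np

# Block matrix M = [A B; B* D] with Hermitian (here: real symmetric) A and D.
A = np.array([[2.0, 1.0], [1.0, 2.0]])      # Hermitian and invertible
B = np.array([[1.0, 0.0], [2.0, 1.0]])
D = np.array([[5.0, 2.0], [2.0, 3.0]])
M = np.block([[A, B], [B.T, D]])

# Hermitian Schur complement S_A = D - B* A~ B, taking A~ = pinv(A).
S = D - B.T @ np.linalg.pinv(A) @ B
assert np.allclose(S, S.T)                  # S_A is again Hermitian

# For invertible A, rank is additive on the Schur complement:
assert np.linalg.matrix_rank(M) == np.linalg.matrix_rank(A) + np.linalg.matrix_rank(S)
print("Schur complement is Hermitian; rank additivity holds")
```

When A is singular, S_A depends on the choice of A∼, which is exactly why the extremal-rank formulas of Theorem 10 and Corollary 11 are nontrivial.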

Corollary 13. The rank of S_A defined by (42) with respect to A∼ which is a solution to system (2) is invariant if and only if

r[0 B B₁ A₁*; B* D 0 0; A 0 −AC₂ −AC₁*; B₁* 0 −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁B₁ −C₁A₁*] + r[A; B₁*; A₁] = r[0 B A B₁ A₁*; B* D 0 0 0; A 0 −A −AC₂ −AC₁*; B₁* 0 −C₂*A −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁A −C₁B₁ −C₁A₁*] + r[B₁*; A₁], (50)

or

r[0 B B₁ A₁*; B* D 0 0; A 0 −AC₂ −AC₁*; B₁* 0 −C₂*B₁ −C₂*A₁*; A₁ 0 −C₁B₁ −C₁A₁*] = r[D B*; C₂*B B₁*; C₁B A₁] + r[A; B₁*; A₁]. (51)

Acknowledgments

This research was supported by the National Natural Science Foundation of China, Tian Yuan Foundation (11226067), the Fundamental Research Funds for the Central Universities (WM1214063), and the China Postdoctoral Science Foundation (2012M511014).

References

[1] T. W. Hungerford, Algebra, Springer, New York, NY, USA, 1980.
[2] S. K. Mitra, "The matrix equations AX = C, XB = D," Linear Algebra and its Applications, vol. 59, pp. 171–181, 1984.
[6] J. Groß, "A note on the general Hermitian solution to AXA* = B," Malaysian Mathematical Society, vol. 21, no. 2, pp. 57–62, 1998.
[7] Y. G. Tian and Y. H. Liu, "Extremal ranks of some symmetric matrix expressions with applications," SIAM Journal on Matrix Analysis and Applications, vol. 28, no. 3, pp. 890–905, 2006.
[8] Y. H. Liu, Y. G. Tian, and Y. Takane, "Ranks of Hermitian and skew-Hermitian solutions to the matrix equation AXA* = B," Linear Algebra and its Applications, vol. 431, no. 12, pp. 2359–2372, 2009.
[9] X. W. Chang and J. S. Wang, "The symmetric solution of the matrix equations AX + YA = C, AXA^T + BYB^T = C and (A^T XA, B^T XB) = (C, D)," Linear Algebra and its Applications, vol. 179, pp. 171–189, 1993.
[10] Q.-W. Wang and Z.-C. Wu, "Common Hermitian solutions to some operator equations on Hilbert C*-modules," Linear Algebra and its Applications, vol. 432, no. 12, pp. 3159–3171, 2010.
[11] F. O. Farid, M. S. Moslehian, Q.-W. Wang, and Z.-C. Wu, "On the Hermitian solutions to a system of adjointable operator equations," Linear Algebra and its Applications, vol. 437, no. 7, pp. 1854–1891, 2012.
[12] Z.-H. He and Q.-W. Wang, "Solutions to optimization problems on ranks and inertias of a matrix function with applications," Applied Mathematics and Computation, vol. 219, no. 6, pp. 2989–3001, 2012.
[13] Q.-W. Wang, "The general solution to a system of real quaternion matrix equations," Computers & Mathematics with Applications, vol. 49, no. 5-6, pp. 665–675, 2005.
[14] Q.-W. Wang, "Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations," Computers & Mathematics with Applications, vol. 49, no. 5-6, pp. 641–650, 2005.
[15] Q. W. Wang, "A system of four matrix equations over von Neumann regular rings and its applications," Acta Mathematica Sinica, vol. 21, no. 2, pp. 323–334, 2005.
[16] Q.-W. Wang, "A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity," Linear Algebra and its Applications, vol. 384, pp. 43–54, 2004.
[17] Q.-W. Wang, H.-X. Chang, and Q. Ning, "The common solution to six quaternion matrix equations with applications," Applied Mathematics and Computation, vol. 198, no. 1, pp. 209–226, 2008.
[18] Q.-W. Wang, H.-X. Chang, and C.-Y. Lin, "P-(skew)symmetric common solutions to a pair of quaternion matrix equations," Applied Mathematics and Computation, vol. 195, no. 2, pp. 721–732, 2008.
[19] Q. W. Wang and Z. H. He, "Some matrix equations with applications," Linear and Multilinear Algebra, vol. 60, no. 11-12, pp. 1327–1353, 2012.
[20] Q. W. Wang and J. Jiang, "Extreme ranks of (skew-)Hermitian solutions to a quaternion matrix equation," Electronic Journal of Linear Algebra, vol. 20, pp. 552–573, 2010.
[3] C. G. Khatri and S. K.
Wang, “The general solution to a system of real quater- 0𝐵 𝐵 𝐴∗ nion matrix equations,” Computers & Mathematics with Appli- 1 1 cations,vol.49,no.5-6,pp.665–675,2005. [𝐵∗ 𝐷0] 0 [ ] [ 𝐴0−𝐴𝐶 −𝐴𝐶∗ ] [14] Q.-W. Wang, “Bisymmetric and centrosymmetric solutions to 𝑟 [ 2 1 ] systems of real quaternion matrix equations,” Computers & [ ∗ ∗ ∗ ∗] [𝐵1 0−𝐶2 𝐵1 −𝐶2 𝐴1] Mathematics with Applications,vol.49,no.5-6,pp.641–650, ∗ (51) 2005. [𝐴1 0−𝐶1𝐵1 −𝐶1𝐴 ] 1 [15] Q. W. Wang, “A system of four matrix equations over von 𝐷𝐵∗ 𝐴 Neumann regular rings and its applications,” Acta Mathematica [ ∗ ∗ ] [ ∗ ] Sinica,vol.21,no.2,pp.323–334,2005. =𝑟 𝐶2 𝐵𝐵1 +𝑟 𝐵1 . [16] Q.-W. Wang, “A system of matrix equations and a linear matrix [𝐶1𝐵𝐴1] [𝐴1] equation over arbitrary regular rings with identity,” Linear Algebra and its Applications,vol.384,pp.43–54,2004. Acknowledgments [17] Q.-W.Wang, H.-X. Chang, and Q. Ning, “The common solution to six quaternion matrix equations with applications,” Applied ThisresearchwassupportedbytheNationalNaturalScience Mathematics and Computation, vol. 198, no. 1, pp. 209–226, Foundation of China, Tian Yuan Foundation (11226067), the 2008. Fundamental Research Funds for the Central Universities [18] Q.-W. Wang, H.-X. Chang, and C.-Y. Lin, “P-(skew)symmetric (WM1214063), and China Postdoctoral Science Foundation common solutions to a pair of quaternion matrix equations,” (2012M511014). Applied Mathematics and Computation,vol.195,no.2,pp.721– 732, 2008. References [19]Q.W.WangandZ.H.He,“Somematrixequationswith applications,” Linear and Multilinear Algebra,vol.60,no.11-12, [1] T. W.Hungerford, Algebra, Springer, New York, NY, USA, 1980. pp. 1327–1353, 2012. [2]S.K.Mitra,“Thematrixequations𝐴𝑋 =𝐶, 𝑋𝐵 =𝐷,” Linear [20] Q. W. Wang and J. Jiang, “Extreme ranks of (skew-)Hermitian Algebra and its Applications,vol.59,pp.171–181,1984. solutions to a quaternion matrix equation,” Electronic Journal of [3] C. G. Khatri and S. K. 
Mitra, “Hermitian and nonnegative Linear Algebra,vol.20,pp.552–573,2010. definite solutions of linear matrix equations,” SIAM Journal on [21] Q.-W. Wang and C.-K. Li, “Ranks and the least-norm of the Applied Mathematics,vol.31,no.4,pp.579–585,1976. general solution to a system of quaternion matrix equations,” [4] Y. X. Yuan, “On the symmetric solutions of matrix equation Linear Algebra and its Applications,vol.430,no.5-6,pp.1626– (𝐴𝑋, 𝑋𝐶) = (𝐵, 𝐷),” Journal of East China Shipbuilding Institute, 1640, 2009. vol.15,no.4,pp.82–85,2001(Chinese). [22] Q.-W. Wang, X. Liu, and S.-W. Yu, “The common bisymmetric [5] H. Dai and P. Lancaster, “Linear matrix equations from an nonnegative definite solutions with extreme ranks and inertias inverse problem of vibration theory,” Linear Algebra and its to a pair of matrix equations,” Applied Mathematics and Com- Applications,vol.246,pp.31–47,1996. putation,vol.218,no.6,pp.2761–2771,2011. Journal of Applied Mathematics 9

Hindawi Publishing Corporation
Journal of Applied Mathematics
Volume 2013, Article ID 217540, 14 pages
http://dx.doi.org/10.1155/2013/217540

Research Article
An Efficient Algorithm for the Reflexive Solution of the Quaternion Matrix Equation $AXB + CX^{H}D = F$

Ning Li,1,2 Qing-Wen Wang,1 and Jing Jiang3

1 Department of Mathematics, Shanghai University, Shanghai 200444, China 2 School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan 250002, China 3 Department of Mathematics, Qilu Normal University, Jinan 250013, China

Correspondence should be addressed to Qing-Wen Wang; [email protected]

Received 3 October 2012; Accepted 4 December 2012

Academic Editor: P. N. Shivakumar

Copyright © 2013 Ning Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose an iterative algorithm for computing the reflexive solution of the quaternion matrix equation $AXB + CX^{H}D = F$. When the matrix equation is consistent over a reflexive matrix $X$, a reflexive solution can be obtained within finitely many iteration steps in the absence of roundoff errors. With the proposed iterative algorithm, the least Frobenius norm reflexive solution of the matrix equation can be derived when an appropriate initial iterative matrix is chosen. Furthermore, the optimal approximate reflexive solution to a given reflexive matrix $X_{0}$ can be derived by finding the least Frobenius norm reflexive solution of a new corresponding quaternion matrix equation. Finally, two numerical examples are given to illustrate the efficiency of the proposed methods.

1. Introduction

Throughout the paper, the notations $\mathbb{R}^{m\times n}$ and $\mathbb{H}^{m\times n}$ represent the set of all $m\times n$ real matrices and the set of all $m\times n$ matrices over the quaternion algebra $\mathbb{H} = \{a_{1} + a_{2}i + a_{3}j + a_{4}k \mid i^{2} = j^{2} = k^{2} = ijk = -1,\ a_{1}, a_{2}, a_{3}, a_{4} \in \mathbb{R}\}$. We denote the identity matrix of appropriate size by $I$. We denote the conjugate transpose, the transpose, the conjugate, the trace, the column space, the real part, and the $mn\times 1$ vector formed by the vertical concatenation of the columns of a matrix $A$ by $A^{H}$, $A^{T}$, $\bar{A}$, $\mathrm{tr}(A)$, $R(A)$, $\mathrm{Re}(A)$, and $\mathrm{vec}(A)$, respectively. The Frobenius norm of $A$ is denoted by $\|A\|$; that is, $\|A\| = \sqrt{\mathrm{tr}(A^{H}A)}$. Moreover, $A\otimes B$ and $A\odot B$ stand for the Kronecker product and the Hadamard product of the matrices $A$ and $B$.

Let $Q \in \mathbb{H}^{n\times n}$ be a generalized reflection matrix; that is, $Q^{2} = I$ and $Q^{H} = Q$. A matrix $A$ is called reflexive with respect to the generalized reflection matrix $Q$ if $A = QAQ$. It is obvious that any matrix is reflexive with respect to $I$. Let $\mathrm{RH}^{n\times n}(Q)$ denote the set of order-$n$ reflexive matrices with respect to $Q$. Reflexive matrices with respect to a generalized reflection matrix $Q$ have been widely used in engineering and scientific computations [1, 2].

In the field of matrix algebra, quaternion matrix equations have received much attention. Wang et al. [3] gave necessary and sufficient conditions for the existence, and the representations, of P-symmetric and P-skew-symmetric solutions of the quaternion matrix equations $A_{a}X = C_{a}$ and $A_{b}XB_{b} = C_{b}$. Yuan and Wang [4] derived the expressions of the least squares $\eta$-Hermitian solution with the least norm and of the least squares anti-$\eta$-Hermitian solution with the least norm for the quaternion matrix equation $AXB + CXD = E$. Jiang and Wei [5] derived the explicit solution of the quaternion matrix equation $X - A\tilde{X}B = C$. Li and Wu [6] gave the expressions of symmetric and skew-antisymmetric solutions of the quaternion matrix equations $A_{1}X = C_{1}$ and $XB_{3} = C_{3}$. Feng and Cheng [7] gave a clear description of the solution set of the quaternion matrix equation $AX - XB = 0$.

The iterative method is a very important approach to solving matrix equations. Peng [8] constructed a finite iteration method to solve the least squares symmetric solutions of the linear matrix equation $AXB = C$. Peng [9–11] also presented several efficient iteration methods to solve the constrained least squares solutions of the linear matrix equations $AXB = C$ and $AXB + CYD = E$, by using Paige's algorithm [12] as the frame method. Duan et al. [13–17] proposed iterative algorithms for the (Hermitian) positive definite solutions of some nonlinear matrix equations. Ding et al. proposed the hierarchical gradient-based iterative algorithms [18] and hierarchical least squares iterative algorithms [19] for solving general (coupled) matrix equations, based on the hierarchical identification principle [20]. Wang et al. [21] proposed an iterative method for the least squares minimum-norm symmetric solution of $AXB = E$. Dehghan and Hajarian constructed finite iterative algorithms to solve several linear matrix equations over (anti)reflexive [22–24], generalized centrosymmetric [25, 26], and generalized bisymmetric [27, 28] matrices. Recently, Wu et al. [29–31] proposed iterative algorithms for solving various complex matrix equations.

However, to the best of our knowledge, there has been little information on iterative methods for finding a solution of a quaternion matrix equation. Due to the noncommutative multiplication of quaternions, the study of quaternion matrix equations is more complex than that of real and complex equations. Motivated by the work mentioned above, and keeping the interests and wide applications of quaternion matrices in view (e.g., [32–45]), we consider in this paper an iterative algorithm for the following two problems.

Problem 1. For given matrices $A, C \in \mathbb{H}^{m\times n}$, $B, D \in \mathbb{H}^{n\times p}$, $F \in \mathbb{H}^{m\times p}$, and the generalized reflection matrix $Q$, find $X \in \mathrm{RH}^{n\times n}(Q)$ such that
\[
AXB + CX^{H}D = F. \tag{1}
\]

Problem 2. When Problem 1 is consistent, let its solution set be denoted by $S_{H}$. For a given reflexive matrix $X_{0} \in \mathrm{RH}^{n\times n}(Q)$, find $\breve{X} \in \mathrm{RH}^{n\times n}(Q)$ such that
\[
\|\breve{X} - X_{0}\| = \min_{X \in S_{H}} \|X - X_{0}\|. \tag{2}
\]

The remainder of this paper is organized as follows. In Section 2, we give some preliminaries. In Section 3, we introduce an iterative algorithm for solving Problem 1. We then prove that the given algorithm yields a reflexive solution for any initial matrix within finitely many steps in the absence of roundoff errors. We also prove that the least Frobenius norm reflexive solution can be obtained by choosing a special kind of initial matrix. In addition, the optimal approximate reflexive solution of Problem 2 is obtained by finding the least Frobenius norm reflexive solution of a new matrix equation. In Section 4, we give two numerical examples to illustrate our results. In Section 5, we give some conclusions to end this paper.

2. Preliminary

In this section, we provide some results which will play important roles in this paper. First, we give a real inner product for the space $\mathbb{H}^{m\times n}$ over the real field $\mathbb{R}$.

Theorem 3. In the space $\mathbb{H}^{m\times n}$ over the field $\mathbb{R}$, a real inner product can be defined as
\[
\langle A, B\rangle = \mathrm{Re}[\mathrm{tr}(B^{H}A)] \tag{3}
\]
for $A, B \in \mathbb{H}^{m\times n}$. This real inner product space is denoted as $(\mathbb{H}^{m\times n}, \mathbb{R}, \langle\cdot,\cdot\rangle)$.

Proof. (1) For $A \in \mathbb{H}^{m\times n}$, let $A = A_{1} + A_{2}i + A_{3}j + A_{4}k$; then
\[
\langle A, A\rangle = \mathrm{Re}[\mathrm{tr}(A^{H}A)] = \mathrm{tr}(A_{1}^{T}A_{1} + A_{2}^{T}A_{2} + A_{3}^{T}A_{3} + A_{4}^{T}A_{4}). \tag{4}
\]
It is obvious that $\langle A, A\rangle > 0$ for $A \ne 0$ and $\langle A, A\rangle = 0 \Leftrightarrow A = 0$.

(2) For $A, B \in \mathbb{H}^{m\times n}$, let $A = A_{1} + A_{2}i + A_{3}j + A_{4}k$ and $B = B_{1} + B_{2}i + B_{3}j + B_{4}k$; then we have
\begin{align*}
\langle A, B\rangle &= \mathrm{Re}[\mathrm{tr}(B^{H}A)] = \mathrm{tr}(B_{1}^{T}A_{1} + B_{2}^{T}A_{2} + B_{3}^{T}A_{3} + B_{4}^{T}A_{4}) \tag{5}\\
&= \mathrm{tr}(A_{1}B_{1}^{T} + A_{2}B_{2}^{T} + A_{3}B_{3}^{T} + A_{4}B_{4}^{T}) = \mathrm{Re}[\mathrm{tr}(A^{H}B)] = \langle B, A\rangle.
\end{align*}

(3) For $A, B, C \in \mathbb{H}^{m\times n}$,
\begin{align*}
\langle A+B, C\rangle &= \mathrm{Re}\{\mathrm{tr}[C^{H}(A+B)]\} = \mathrm{Re}[\mathrm{tr}(C^{H}A + C^{H}B)] \tag{6}\\
&= \mathrm{Re}[\mathrm{tr}(C^{H}A)] + \mathrm{Re}[\mathrm{tr}(C^{H}B)] = \langle A, C\rangle + \langle B, C\rangle.
\end{align*}

(4) For $A, B \in \mathbb{H}^{m\times n}$ and $a \in \mathbb{R}$,
\[
\langle aA, B\rangle = \mathrm{Re}\{\mathrm{tr}[B^{H}(aA)]\} = \mathrm{Re}[\mathrm{tr}(aB^{H}A)] = a\,\mathrm{Re}[\mathrm{tr}(B^{H}A)] = a\langle A, B\rangle. \tag{7}
\]

All the above arguments reveal that the space $\mathbb{H}^{m\times n}$ over the field $\mathbb{R}$ with the inner product defined in (3) is an inner product space.

Let $\|\cdot\|_{\Delta}$ represent the matrix norm induced by the inner product $\langle\cdot,\cdot\rangle$. For an arbitrary quaternion matrix $A \in \mathbb{H}^{m\times n}$, it is obvious that the following equalities hold:
\[
\|A\|_{\Delta} = \sqrt{\langle A, A\rangle} = \sqrt{\mathrm{Re}[\mathrm{tr}(A^{H}A)]} = \sqrt{\mathrm{tr}(A^{H}A)} = \|A\|, \tag{8}
\]
which reveals that the induced matrix norm is exactly the Frobenius norm. For convenience, we still use $\|\cdot\|$ to denote the induced matrix norm.

Let $E_{ij}$ denote the $m\times n$ quaternion matrix whose $(i,j)$ entry is $1$ and whose other elements are zeros. In the inner product space $(\mathbb{H}^{m\times n}, \mathbb{R}, \langle\cdot,\cdot\rangle)$, it is easy to verify that $E_{ij}, E_{ij}i, E_{ij}j, E_{ij}k$, $i = 1,2,\ldots,m$, $j = 1,2,\ldots,n$, is an orthonormal basis, which reveals that the dimension of the inner product space $(\mathbb{H}^{m\times n}, \mathbb{R}, \langle\cdot,\cdot\rangle)$ is $4mn$.

Next, we introduce a real representation of a quaternion matrix.
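The four axioms checked in the proof of Theorem 3 are easy to confirm numerically by representing a quaternion matrix through its four real component matrices. The following is a minimal illustrative sketch (the helper names `qrand` and `inner` are ours, not from the paper):

```python
import numpy as np

def qrand(m, n, rng):
    """A quaternion matrix A = A1 + A2*i + A3*j + A4*k as four real parts."""
    return tuple(rng.standard_normal((m, n)) for _ in range(4))

def inner(A, B):
    """<A, B> = Re[tr(B^H A)] = tr(B1^T A1 + B2^T A2 + B3^T A3 + B4^T A4)."""
    return sum(np.trace(Bl.T @ Al) for Al, Bl in zip(A, B))

rng = np.random.default_rng(0)
A, B, C = (qrand(3, 2, rng) for _ in range(3))

# the four properties verified in the proof of Theorem 3
assert np.isclose(inner(A, B), inner(B, A))                  # symmetry
ApB = tuple(Al + Bl for Al, Bl in zip(A, B))
assert np.isclose(inner(ApB, C), inner(A, C) + inner(B, C))  # additivity
aA = tuple(1.7 * Al for Al in A)
assert np.isclose(inner(aA, B), 1.7 * inner(A, B))           # homogeneity
assert inner(A, A) > 0                                       # positivity
```

Note that `inner(A, A)` equals the squared Frobenius norm of the quaternion matrix, which is the content of equality (8).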

For an arbitrary quaternion matrix $M = M_{1} + M_{2}i + M_{3}j + M_{4}k$, a map $\phi(\cdot)$ from $\mathbb{H}^{m\times n}$ to $\mathbb{R}^{4m\times 4n}$ can be defined as
\[
\phi(M) = \begin{bmatrix}
M_{1} & -M_{2} & -M_{3} & -M_{4}\\
M_{2} & M_{1} & -M_{4} & M_{3}\\
M_{3} & M_{4} & M_{1} & -M_{2}\\
M_{4} & -M_{3} & M_{2} & M_{1}
\end{bmatrix}. \tag{9}
\]

Lemma 4 (see [41]). Let $M$ and $N$ be two arbitrary quaternion matrices with appropriate sizes. The map $\phi(\cdot)$ defined by (9) satisfies the following properties:
(a) $M = N \Leftrightarrow \phi(M) = \phi(N)$;
(b) $\phi(M+N) = \phi(M) + \phi(N)$, $\phi(MN) = \phi(M)\phi(N)$, $\phi(kM) = k\phi(M)$, $k \in \mathbb{R}$;
(c) $\phi(M^{H}) = \phi^{T}(M)$;
(d) $\phi(M) = T_{m}^{-1}\phi(M)T_{n} = R_{m}^{-1}\phi(M)R_{n} = S_{m}^{-1}\phi(M)S_{n}$, where
\[
T_{t} = \begin{bmatrix} 0 & -I_{t} & 0 & 0\\ I_{t} & 0 & 0 & 0\\ 0 & 0 & 0 & I_{t}\\ 0 & 0 & -I_{t} & 0 \end{bmatrix}, \quad
R_{t} = \begin{bmatrix} 0 & -I_{2t}\\ I_{2t} & 0 \end{bmatrix}, \quad
S_{t} = \begin{bmatrix} 0 & 0 & 0 & -I_{t}\\ 0 & 0 & I_{t} & 0\\ 0 & -I_{t} & 0 & 0\\ I_{t} & 0 & 0 & 0 \end{bmatrix}, \qquad t = m, n. \tag{10}
\]

By (9), it is easy to verify that
\[
\|\phi(M)\| = 2\|M\|. \tag{11}
\]

Finally, we introduce the commutation matrix. A commutation matrix $P(m,n)$ is an $mn\times mn$ matrix which has the following explicit form:
\[
P(m,n) = \sum_{i=1}^{m}\sum_{j=1}^{n} E_{ij}\otimes E_{ij}^{T} = [E_{ij}^{T}]_{\substack{i=1,\ldots,m\\ j=1,\ldots,n}}. \tag{12}
\]
Moreover, $P(m,n)$ is a permutation matrix and $P(m,n) = P^{T}(n,m) = P^{-1}(n,m)$. We have the following lemmas on the commutation matrix.

Lemma 5 (see [46]). Let $A$ be an $m\times n$ matrix. There is a commutation matrix $P(m,n)$ such that
\[
\mathrm{vec}(A^{T}) = P(m,n)\,\mathrm{vec}(A). \tag{13}
\]

Lemma 6 (see [46]). Let $A$ be an $m\times n$ matrix and $B$ a $p\times q$ matrix. There exist two commutation matrices $P(m,p)$ and $P(n,q)$ such that
\[
B\otimes A = P^{T}(m,p)\,(A\otimes B)\,P(n,q). \tag{14}
\]

3. Main Results

3.1. The Solution of Problem 1. In this subsection, we construct an algorithm for solving Problem 1. Some lemmas are then given to analyse the properties of the proposed algorithm. Using these lemmas, we prove that the proposed algorithm is convergent.

Algorithm 7 (iterative algorithm for Problem 1).
(1) Choose an initial matrix $X(1) \in \mathrm{RH}^{n\times n}(Q)$.
(2) Calculate
\[
R(1) = F - AX(1)B - CX^{H}(1)D, \qquad
P(1) = \tfrac{1}{2}\big(A^{H}R(1)B^{H} + DR^{H}(1)C + QA^{H}R(1)B^{H}Q + QDR^{H}(1)CQ\big), \qquad k := 1. \tag{15}
\]
(3) If $R(k) = 0$, then stop, and $X(k)$ is the solution of Problem 1; else, if $R(k) \ne 0$ and $P(k) = 0$, then stop, and Problem 1 is not consistent; else, $k := k+1$.
(4) Calculate
\begin{align*}
X(k) &= X(k-1) + \frac{\|R(k-1)\|^{2}}{\|P(k-1)\|^{2}}\,P(k-1),\\
R(k) &= R(k-1) - \frac{\|R(k-1)\|^{2}}{\|P(k-1)\|^{2}}\big(AP(k-1)B + CP^{H}(k-1)D\big), \tag{16}\\
P(k) &= \tfrac{1}{2}\big(A^{H}R(k)B^{H} + DR^{H}(k)C + QA^{H}R(k)B^{H}Q + QDR^{H}(k)CQ\big) + \frac{\|R(k)\|^{2}}{\|R(k-1)\|^{2}}\,P(k-1).
\end{align*}
(5) Go to Step (3).

Lemma 8. Assume that the sequences $\{R(i)\}$ and $\{P(i)\}$ are generated by Algorithm 7; then $\langle R(i), R(j)\rangle = 0$ and $\langle P(i), P(j)\rangle = 0$ for $i,j = 1,2,\ldots$, $i \ne j$.

Proof. Since $\langle R(i), R(j)\rangle = \langle R(j), R(i)\rangle$ and $\langle P(i), P(j)\rangle = \langle P(j), P(i)\rangle$ for $i,j = 1,2,\ldots$, we only need to prove that $\langle R(i), R(j)\rangle = 0$ and $\langle P(i), P(j)\rangle = 0$ for $1 \le i < j$. We prove this conclusion by induction.

Step 1. We show that
\[
\langle R(i), R(i+1)\rangle = 0, \qquad \langle P(i), P(i+1)\rangle = 0 \qquad \text{for } i = 1,2,\ldots. \tag{17}
\]

We also prove (17) by induction. When $i = 1$, using $P(1) = QP(1)Q$, we have
\begin{align*}
\langle R(1), R(2)\rangle
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(2)R(1)]\}
= \mathrm{Re}\Big\{\mathrm{tr}\Big[\Big(R(1)-\frac{\|R(1)\|^{2}}{\|P(1)\|^{2}}\big(AP(1)B+CP^{H}(1)D\big)\Big)^{H}R(1)\Big]\Big\}\\
&= \|R(1)\|^{2}-\frac{\|R(1)\|^{2}}{\|P(1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(1)(A^{H}R(1)B^{H}+DR^{H}(1)C)]\}\\
&= \|R(1)\|^{2}-\frac{\|R(1)\|^{2}}{\|P(1)\|^{2}}\,\mathrm{Re}\big\{\mathrm{tr}\big[P^{H}(1)\tfrac{1}{2}\big(A^{H}R(1)B^{H}+DR^{H}(1)C+QA^{H}R(1)B^{H}Q+QDR^{H}(1)CQ\big)\big]\big\}\\
&= \|R(1)\|^{2}-\frac{\|R(1)\|^{2}}{\|P(1)\|^{2}}\,\|P(1)\|^{2} = 0. \tag{18}
\end{align*}
Also we can write
\begin{align*}
\langle P(1), P(2)\rangle
&= \mathrm{Re}\{\mathrm{tr}[P^{H}(2)P(1)]\}
= \mathrm{Re}\{\mathrm{tr}[(A^{H}R(2)B^{H}+DR^{H}(2)C)^{H}P(1)]\}+\frac{\|R(2)\|^{2}}{\|R(1)\|^{2}}\,\|P(1)\|^{2}\\
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(2)(AP(1)B+CP^{H}(1)D)]\}+\frac{\|R(2)\|^{2}}{\|R(1)\|^{2}}\,\|P(1)\|^{2}\\
&= \frac{\|P(1)\|^{2}}{\|R(1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[R^{H}(2)(R(1)-R(2))]\}+\frac{\|R(2)\|^{2}}{\|R(1)\|^{2}}\,\|P(1)\|^{2}\\
&= -\frac{\|P(1)\|^{2}}{\|R(1)\|^{2}}\,\|R(2)\|^{2}+\frac{\|R(2)\|^{2}}{\|R(1)\|^{2}}\,\|P(1)\|^{2} = 0. \tag{19}
\end{align*}

Now assume that conclusion (17) holds for $1 \le i \le s-1$. Then, using $\langle P(s), P(s-1)\rangle = 0$,
\begin{align*}
\langle R(s), R(s+1)\rangle
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(s+1)R(s)]\}
= \|R(s)\|^{2}-\frac{\|R(s)\|^{2}}{\|P(s)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(s)(A^{H}R(s)B^{H}+DR^{H}(s)C)]\}\\
&= \|R(s)\|^{2}-\frac{\|R(s)\|^{2}}{\|P(s)\|^{2}}\,\mathrm{Re}\Big\{\mathrm{tr}\Big[P^{H}(s)\Big(P(s)-\frac{\|R(s)\|^{2}}{\|R(s-1)\|^{2}}P(s-1)\Big)\Big]\Big\} = 0. \tag{20}
\end{align*}
And it can also be obtained that
\begin{align*}
\langle P(s), P(s+1)\rangle
&= \mathrm{Re}\{\mathrm{tr}[P^{H}(s+1)P(s)]\}
= \mathrm{Re}\{\mathrm{tr}[R^{H}(s+1)(AP(s)B+CP^{H}(s)D)]\}+\frac{\|R(s+1)\|^{2}}{\|R(s)\|^{2}}\,\|P(s)\|^{2}\\
&= \frac{\|P(s)\|^{2}}{\|R(s)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[R^{H}(s+1)(R(s)-R(s+1))]\}+\frac{\|R(s+1)\|^{2}}{\|R(s)\|^{2}}\,\|P(s)\|^{2} = 0, \tag{21}
\end{align*}
where the last step uses $\langle R(s), R(s+1)\rangle = 0$. Therefore, the conclusion (17) holds for $i = 1,2,\ldots$.

Step 2. Assume that $\langle R(i), R(i+r)\rangle = 0$ and $\langle P(i), P(i+r)\rangle = 0$ for $i \ge 1$ and $r \ge 1$. We will show that
\[
\langle R(i), R(i+r+1)\rangle = 0, \qquad \langle P(i), P(i+r+1)\rangle = 0. \tag{22}
\]
We prove the conclusion (22) in two substeps.

Substep 2.1. In this substep, we show that
\[
\langle R(1), R(r+2)\rangle = 0, \qquad \langle P(1), P(r+2)\rangle = 0. \tag{23}
\]
It follows from Algorithm 7 that
\begin{align*}
\langle R(1), R(r+2)\rangle
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(r+1)R(1)]\}-\frac{\|R(r+1)\|^{2}}{\|P(r+1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(r+1)(A^{H}R(1)B^{H}+DR^{H}(1)C)]\}\\
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(r+1)R(1)]\}-\frac{\|R(r+1)\|^{2}}{\|P(r+1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(r+1)P(1)]\} = 0. \tag{24}
\end{align*}
Also we can write
\begin{align*}
\langle P(1), P(r+2)\rangle
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(r+2)(AP(1)B+CP^{H}(1)D)]\}+\frac{\|R(r+2)\|^{2}}{\|R(r+1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(r+1)P(1)]\}\\
&= \frac{\|P(1)\|^{2}}{\|R(1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[R^{H}(r+2)(R(1)-R(2))]\}+\frac{\|R(r+2)\|^{2}}{\|R(r+1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(r+1)P(1)]\} = 0. \tag{25}
\end{align*}

Substep 2.2. In this substep, we prove the conclusion (22) in Step 2. It follows from Algorithm 7 that
\begin{align*}
\langle R(i), R(i+r+1)\rangle
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(i+r)R(i)]\}-\frac{\|R(i+r)\|^{2}}{\|P(i+r)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(i+r)(A^{H}R(i)B^{H}+DR^{H}(i)C)]\}\\
&= -\frac{\|R(i+r)\|^{2}}{\|P(i+r)\|^{2}}\,\mathrm{Re}\Big\{\mathrm{tr}\Big[P^{H}(i+r)\Big(P(i)-\frac{\|R(i)\|^{2}}{\|R(i-1)\|^{2}}P(i-1)\Big)\Big]\Big\}\\
&= \frac{\|R(i+r)\|^{2}\|R(i)\|^{2}}{\|P(i+r)\|^{2}\|R(i-1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(i+r)P(i-1)]\}, \tag{26}
\end{align*}
and, in the same way,
\[
\langle P(i), P(i+r+1)\rangle
= \frac{\|P(i)\|^{2}\|R(i+r)\|^{2}\|R(i)\|^{2}}{\|R(i)\|^{2}\|P(i+r)\|^{2}\|R(i-1)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(i+r)P(i-1)]\}.
\]
Repeating the above process (26), we can obtain
\[
\langle R(i), R(i+r+1)\rangle = \cdots = \alpha\,\mathrm{Re}\{\mathrm{tr}[P^{H}(r+2)P(1)]\}, \qquad
\langle P(i), P(i+r+1)\rangle = \cdots = \beta\,\mathrm{Re}\{\mathrm{tr}[P^{H}(r+2)P(1)]\}. \tag{27}
\]
Combining these two relations with (24) and (25), it implies that (22) holds. So, by the principle of induction, we know that Lemma 8 holds.

Lemma 9. Assume that Problem 1 is consistent, and let $\tilde{X} \in \mathrm{RH}^{n\times n}(Q)$ be its solution. Then, for any initial matrix $X(1) \in \mathrm{RH}^{n\times n}(Q)$, the sequences $\{R(i)\}$, $\{P(i)\}$, and $\{X(i)\}$ generated by Algorithm 7 satisfy
\[
\langle P(i), \tilde{X}-X(i)\rangle = \|R(i)\|^{2}, \qquad i = 1,2,\ldots. \tag{28}
\]

Proof. We also prove this conclusion by induction. When $i = 1$, it follows from Algorithm 7 that
\begin{align*}
\langle P(1), \tilde{X}-X(1)\rangle
&= \mathrm{Re}\{\mathrm{tr}[(\tilde{X}-X(1))^{H}P(1)]\}\\
&= \mathrm{Re}\big\{\mathrm{tr}\big[(\tilde{X}-X(1))^{H}\tfrac{1}{2}\big(A^{H}R(1)B^{H}+DR^{H}(1)C+QA^{H}R(1)B^{H}Q+QDR^{H}(1)CQ\big)\big]\big\}\\
&= \mathrm{Re}\{\mathrm{tr}[(\tilde{X}-X(1))^{H}(A^{H}R(1)B^{H}+DR^{H}(1)C)]\}\\
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(1)(A(\tilde{X}-X(1))B+C(\tilde{X}-X(1))^{H}D)]\}\\
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(1)R(1)]\} = \|R(1)\|^{2}, \tag{29}
\end{align*}
where the third equality holds because $\tilde{X}-X(1)$ is reflexive with respect to $Q$. This implies that (28) holds for $i = 1$.

Now it is assumed that (28) holds for $i = s$, that is,
\[
\langle P(s), \tilde{X}-X(s)\rangle = \|R(s)\|^{2}. \tag{30}
\]
Then, when $i = s+1$,
\begin{align*}
\langle P(s+1), \tilde{X}-X(s+1)\rangle
&= \mathrm{Re}\{\mathrm{tr}[(\tilde{X}-X(s+1))^{H}P(s+1)]\}\\
&= \mathrm{Re}\{\mathrm{tr}[(\tilde{X}-X(s+1))^{H}(A^{H}R(s+1)B^{H}+DR^{H}(s+1)C)]\}
+\frac{\|R(s+1)\|^{2}}{\|R(s)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[(\tilde{X}-X(s+1))^{H}P(s)]\}\\
&= \mathrm{Re}\{\mathrm{tr}[R^{H}(s+1)(A(\tilde{X}-X(s+1))B+C(\tilde{X}-X(s+1))^{H}D)]\}\\
&\quad+\frac{\|R(s+1)\|^{2}}{\|R(s)\|^{2}}\Big(\mathrm{Re}\{\mathrm{tr}[(\tilde{X}-X(s))^{H}P(s)]\}-\mathrm{Re}\{\mathrm{tr}[(X(s+1)-X(s))^{H}P(s)]\}\Big)\\
&= \|R(s+1)\|^{2}+\frac{\|R(s+1)\|^{2}}{\|R(s)\|^{2}}\Big(\|R(s)\|^{2}-\frac{\|R(s)\|^{2}}{\|P(s)\|^{2}}\,\mathrm{Re}\{\mathrm{tr}[P^{H}(s)P(s)]\}\Big)\\
&= \|R(s+1)\|^{2}. \tag{31}
\end{align*}
Therefore, Lemma 9 holds by the principle of induction.

From the above two lemmas, we have the following conclusions.

Remark 10. If there exists a positive number $i$ such that $R(i) \ne 0$ and $P(i) = 0$, then we can get from Lemma 9 that Problem 1 is not consistent. Hence, the solvability of Problem 1 can be determined by Algorithm 7 automatically in the absence of roundoff errors.

Theorem 11. Suppose that Problem 1 is consistent. Then, for any initial matrix $X(1) \in \mathrm{RH}^{n\times n}(Q)$, a solution of Problem 1 can be obtained within finite iteration steps in the absence of roundoff errors.

Proof. In Section 2, it is known that the inner product space $(\mathbb{H}^{m\times p}, \mathbb{R}, \langle\cdot,\cdot\rangle)$ is $4mp$-dimensional. According to Lemma 9, if $R(i) \ne 0$, $i = 1,2,\ldots,4mp$, then we have $P(i) \ne 0$, $i = 1,2,\ldots,4mp$. Hence $R(4mp+1)$ and $P(4mp+1)$ can be computed. From Lemma 8, it is not difficult to get
\[
\langle R(i), R(j)\rangle = 0 \qquad \text{for } i,j = 1,2,\ldots,4mp,\ i \ne j. \tag{32}
\]
Then $R(1), R(2), \ldots, R(4mp)$ is an orthogonal basis of the inner product space $(\mathbb{H}^{m\times p}, \mathbb{R}, \langle\cdot,\cdot\rangle)$. In addition, we can get from Lemma 8 that
\[
\langle R(i), R(4mp+1)\rangle = 0 \qquad \text{for } i = 1,2,\ldots,4mp. \tag{33}
\]
It follows that $R(4mp+1) = 0$, which implies that $X(4mp+1)$ is a solution of Problem 1.

3.2. The Solution of Problem 2. In this subsection, firstly we introduce some lemmas. Then, we will prove that the least Frobenius norm reflexive solution of (1) can be derived by choosing a suitable initial iterative matrix. Finally, we solve Problem 2 by finding the least Frobenius norm reflexive solution of a new-constructed quaternion matrix equation.

Lemma 12 (see [47]). Assume that the consistent system of linear equations $My = b$ has a solution $y_{0} \in R(M^{T})$; then $y_{0}$ is the unique least Frobenius norm solution of the system of linear equations.

Lemma 13. Problem 1 is consistent if and only if the system of quaternion matrix equations
\[
AXB + CX^{H}D = F, \qquad AQXQB + CQX^{H}QD = F \tag{34}
\]
is consistent. Furthermore, if the solution sets of Problem 1 and (34) are denoted by $S_{H}$ and $S_{H}^{1}$, respectively, then we have $S_{H} \subseteq S_{H}^{1}$.

Proof. First, we assume that Problem 1 has a solution $X$. By $AXB + CX^{H}D = F$ and $QXQ = X$, we can obtain $AXB + CX^{H}D = F$ and $AQXQB + CQX^{H}QD = F$, which implies that $X$ is a solution of the quaternion matrix equations (34), and $S_{H} \subseteq S_{H}^{1}$.

Conversely, suppose (34) is consistent. Let $X$ be a solution of (34). Set $X_{a} = (X + QXQ)/2$. It is obvious that $X_{a} \in \mathrm{RH}^{n\times n}(Q)$. Now we can write
\begin{align*}
AX_{a}B + CX_{a}^{H}D
&= \tfrac{1}{2}\big[A(X+QXQ)B + C(X+QXQ)^{H}D\big]\\
&= \tfrac{1}{2}\big[AXB + CX^{H}D + AQXQB + CQX^{H}QD\big] \tag{35}\\
&= \tfrac{1}{2}\,[F + F] = F.
\end{align*}
Hence $X_{a}$ is a solution of Problem 1. The proof is completed.

Lemma 14. The system of quaternion matrix equations (34) is consistent if and only if the system of real matrix equations
\[
\phi(A)\,[X_{ij}]_{4\times4}\,\phi(B) + \phi(C)\,[X_{ij}]_{4\times4}^{T}\,\phi(D) = \phi(F), \qquad
\phi(A)\phi(Q)\,[X_{ij}]_{4\times4}\,\phi(Q)\phi(B) + \phi(C)\phi(Q)\,[X_{ij}]_{4\times4}^{T}\,\phi(Q)\phi(D) = \phi(F) \tag{36}
\]
is consistent, where the $X_{ij}$, $i,j = 1,2,3,4$, are $n\times n$ submatrices of the unknown matrix. Furthermore, if the solution sets of (34) and (36) are denoted by $S_{H}^{1}$ and $S_{R}^{2}$, respectively, then we have $\phi(S_{H}^{1}) \subseteq S_{R}^{2}$.

Proof. Suppose that (34) has a solution
\[
X = X_{1} + X_{2}i + X_{3}j + X_{4}k. \tag{37}
\]
Applying (a), (b), and (c) in Lemma 4 to (34) yields
\[
\phi(A)\phi(X)\phi(B) + \phi(C)\phi^{T}(X)\phi(D) = \phi(F), \qquad
\phi(A)\phi(Q)\phi(X)\phi(Q)\phi(B) + \phi(C)\phi(Q)\phi^{T}(X)\phi(Q)\phi(D) = \phi(F), \tag{38}
\]
which implies that $\phi(X)$ is a solution of (36) and $\phi(S_{H}^{1}) \subseteq S_{R}^{2}$.

Conversely, suppose that (36) has a solution
\[
\breve{X} = [\breve{X}_{ij}]_{4\times4}. \tag{39}
\]
By (d) in Lemma 4, we have, for each $W \in \{T, R, S\}$,
\[
W_{m}^{-1}\phi(A)\,W_{n}\breve{X}W_{n}^{-1}\,\phi(B)W_{p} + W_{m}^{-1}\phi(C)\,W_{n}\breve{X}^{T}W_{n}^{-1}\,\phi(D)W_{p} = W_{m}^{-1}\phi(F)W_{p}, \tag{40}
\]
together with the corresponding equalities in which $\phi(A)$, $\phi(B)$, $\phi(C)$, $\phi(D)$ are replaced by $\phi(A)\phi(Q)$, $\phi(Q)\phi(B)$, $\phi(C)\phi(Q)$, $\phi(Q)\phi(D)$. Hence
\[
\phi(A)\,W_{n}\breve{X}W_{n}^{-1}\,\phi(B) + \phi(C)\,(W_{n}\breve{X}W_{n}^{-1})^{T}\,\phi(D) = \phi(F), \qquad W \in \{T, R, S\}, \tag{41}
\]
and similarly for the second equation in (36), which implies that $T_{n}\breve{X}T_{n}^{-1}$, $R_{n}\breve{X}R_{n}^{-1}$, and $S_{n}\breve{X}S_{n}^{-1}$ are also solutions of (36). Thus,
\[
\tfrac{1}{4}\,\big(\breve{X} + T_{n}\breve{X}T_{n}^{-1} + R_{n}\breve{X}R_{n}^{-1} + S_{n}\breve{X}S_{n}^{-1}\big) \tag{42}
\]
is also a solution of (36), where

\[
\breve{X} + T_{n}\breve{X}T_{n}^{-1} + R_{n}\breve{X}R_{n}^{-1} + S_{n}\breve{X}S_{n}^{-1} = [\tilde{X}_{ij}]_{4\times4}, \qquad i,j = 1,2,3,4,
\]
with
\begin{align*}
\tilde{X}_{11} &= \tilde{X}_{22} = \tilde{X}_{33} = \tilde{X}_{44} = X_{11}+X_{22}+X_{33}+X_{44},\\
\tilde{X}_{12} &= \tilde{X}_{34} = X_{12}-X_{21}+X_{34}-X_{43}, &
\tilde{X}_{21} &= \tilde{X}_{43} = -X_{12}+X_{21}-X_{34}+X_{43},\\
\tilde{X}_{13} &= \tilde{X}_{42} = X_{13}-X_{24}-X_{31}+X_{42}, &
\tilde{X}_{24} &= \tilde{X}_{31} = -X_{13}+X_{24}+X_{31}-X_{42}, \tag{43}\\
\tilde{X}_{14} &= \tilde{X}_{23} = X_{14}+X_{23}-X_{32}-X_{41}, &
\tilde{X}_{32} &= \tilde{X}_{41} = -X_{14}-X_{23}+X_{32}+X_{41}.
\end{align*}
Let
\begin{align*}
\hat{X} = \tfrac{1}{4}\,(X_{11}+X_{22}+X_{33}+X_{44})
&+ \tfrac{1}{4}\,(-X_{12}+X_{21}-X_{34}+X_{43})\,i\\
&+ \tfrac{1}{4}\,(-X_{13}+X_{24}+X_{31}-X_{42})\,j
+ \tfrac{1}{4}\,(-X_{14}-X_{23}+X_{32}+X_{41})\,k. \tag{44}
\end{align*}
Then it is not difficult to verify that
\[
\phi(\hat{X}) = \tfrac{1}{4}\,\big(\breve{X} + T_{n}\breve{X}T_{n}^{-1} + R_{n}\breve{X}R_{n}^{-1} + S_{n}\breve{X}S_{n}^{-1}\big). \tag{45}
\]
We have that $\hat{X}$ is a solution of (34) by (a) in Lemma 4. The proof is completed.

Lemma 15. There exists a permutation matrix $P(4n,4n)$ such that (36) is equivalent to
\[
\begin{bmatrix}
\phi^{T}(B)\otimes\phi(A) + \big(\phi^{T}(D)\otimes\phi(C)\big)P(4n,4n)\\
\phi^{T}(B)\phi(Q)\otimes\phi(A)\phi(Q) + \big(\phi^{T}(D)\phi(Q)\otimes\phi(C)\phi(Q)\big)P(4n,4n)
\end{bmatrix}
\mathrm{vec}\big([X_{ij}]_{4\times4}\big)
=
\begin{bmatrix}
\mathrm{vec}(\phi(F))\\
\mathrm{vec}(\phi(F))
\end{bmatrix}. \tag{46}
\]
Lemma 15 is easily proven through Lemma 5, so we omit it here.

[Figure 1: The convergence curve for the Frobenius norm of the residuals from Example 18.]
[Figure 2: The convergence curve for the Frobenius norm of the residuals from Example 19.]
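The $\phi$-representation machinery behind Lemmas 4, 5, 6, and 15 can be sanity-checked numerically. The sketch below builds $\phi(M)$ from (9) and the commutation matrix $P(m,n)$ from (12), then verifies the multiplicativity of $\phi$, the norm identity (11), Lemma 5, and Lemma 6 on random data (helper names such as `phi`, `qmul`, and `commutation` are ours, not from the paper):

```python
import numpy as np

def phi(M):
    """Real representation (9) of M = M1 + M2*i + M3*j + M4*k."""
    M1, M2, M3, M4 = M
    return np.block([[M1, -M2, -M3, -M4],
                     [M2,  M1, -M4,  M3],
                     [M3,  M4,  M1, -M2],
                     [M4, -M3,  M2,  M1]])

def qmul(M, N):
    """Quaternion matrix product, using i*j = k, j*k = i, k*i = j."""
    M1, M2, M3, M4 = M
    N1, N2, N3, N4 = N
    return (M1 @ N1 - M2 @ N2 - M3 @ N3 - M4 @ N4,
            M1 @ N2 + M2 @ N1 + M3 @ N4 - M4 @ N3,
            M1 @ N3 - M2 @ N4 + M3 @ N1 + M4 @ N2,
            M1 @ N4 + M2 @ N3 - M3 @ N2 + M4 @ N1)

def vec(A):
    """Column-stacking vec operator."""
    return A.reshape(-1, 1, order='F')

def commutation(m, n):
    """P(m, n) = sum_ij E_ij (x) E_ij^T, so P(m, n) vec(A) = vec(A^T)."""
    P = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n))
            E[i, j] = 1.0
            P += np.kron(E, E.T)
    return P

rng = np.random.default_rng(0)
M = tuple(rng.standard_normal((2, 3)) for _ in range(4))
N = tuple(rng.standard_normal((3, 2)) for _ in range(4))

# Lemma 4(b): phi is multiplicative; Lemma 4(c): phi(M^H) = phi(M)^T; equality (11)
assert np.allclose(phi(M) @ phi(N), phi(qmul(M, N)))
MH = (M[0].T, -M[1].T, -M[2].T, -M[3].T)
assert np.allclose(phi(MH), phi(M).T)
qnorm = np.sqrt(sum(np.sum(Ml ** 2) for Ml in M))
assert np.isclose(np.linalg.norm(phi(M)), 2 * qnorm)

# Lemmas 5 and 6 for real matrices A (m x n) and B (p x q)
m, n, p, q = 2, 3, 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
assert np.allclose(commutation(m, n) @ vec(A), vec(A.T))
assert np.allclose(np.kron(B, A),
                   commutation(m, p).T @ np.kron(A, B) @ commutation(n, q))
```

The last assertion is exactly (14); the vectorization step underlying Lemma 15 is the standard identity $\mathrm{vec}(AXB) = (B^{T}\otimes A)\,\mathrm{vec}(X)$ combined with Lemma 5 for the transposed term.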

Theorem 16. When Problem 1 is consistent, let its solution set be denoted by $S_{H}$. If $\mathring{X} \in S_{H}$ and $\mathring{X}$ can be expressed as
\[
\mathring{X} = A^{H}GB^{H} + DG^{H}C + QA^{H}GB^{H}Q + QDG^{H}CQ, \qquad G \in \mathbb{H}^{m\times p}, \tag{47}
\]
then $\mathring{X}$ is the least Frobenius norm solution of Problem 1.

Proof. By (a), (b), and (c) in Lemmas 4, 5, and 6, we have that
\begin{align*}
\mathrm{vec}(\phi(\mathring{X}))
&= \mathrm{vec}\big(\phi^{T}(A)\phi(G)\phi^{T}(B) + \phi(D)\phi^{T}(G)\phi(C)
+ \phi(Q)\phi^{T}(A)\phi(G)\phi^{T}(B)\phi(Q) + \phi(Q)\phi(D)\phi^{T}(G)\phi(C)\phi(Q)\big)\\
&= \big[\phi(B)\otimes\phi^{T}(A) + \big(\phi^{T}(C)\otimes\phi(D)\big)P(4m,4p),\;\;
\phi(Q)\phi(B)\otimes\phi(Q)\phi^{T}(A) + \big(\phi(Q)\phi^{T}(C)\otimes\phi(Q)\phi(D)\big)P(4m,4p)\big]
\begin{bmatrix}\mathrm{vec}(\phi(G))\\ \mathrm{vec}(\phi(G))\end{bmatrix} \tag{48}\\
&\in R\left(
\begin{bmatrix}
\phi^{T}(B)\otimes\phi(A) + \big(\phi^{T}(D)\otimes\phi(C)\big)P(4n,4n)\\
\phi^{T}(B)\phi(Q)\otimes\phi(A)\phi(Q) + \big(\phi^{T}(D)\phi(Q)\otimes\phi(C)\phi(Q)\big)P(4n,4n)
\end{bmatrix}^{T}\right).
\end{align*}
By Lemma 12, $\phi(\mathring{X})$ is the least Frobenius norm solution of the matrix equations (46). Noting (11), we derive from Lemmas 13, 14, and 15 that $\mathring{X}$ is the least Frobenius norm solution of Problem 1.

From Algorithm 7, it is obvious that, if we consider
\[
X(1) = A^{H}GB^{H} + DG^{H}C + QA^{H}GB^{H}Q + QDG^{H}CQ, \qquad G \in \mathbb{H}^{m\times p}, \tag{49}
\]
then all $X(k)$ generated by Algorithm 7 can be expressed as
\[
X(k) = A^{H}G_{k}B^{H} + DG_{k}^{H}C + QA^{H}G_{k}B^{H}Q + QDG_{k}^{H}CQ, \qquad G_{k} \in \mathbb{H}^{m\times p}. \tag{50}
\]
Using the above conclusion and considering Theorem 16, we propose the following theorem.

Theorem 17. Suppose that Problem 1 is consistent. Let the initial iteration matrix be
\[
X(1) = A^{H}GB^{H} + DG^{H}C + QA^{H}GB^{H}Q + QDG^{H}CQ, \tag{51}
\]
where $G$ is an arbitrary quaternion matrix, or, especially, $X(1) = 0$; then the solution $X^{*}$ generated by Algorithm 7 is the least Frobenius norm solution of Problem 1.

Now we study Problem 2. When Problem 1 is consistent, the solution set $S_{H}$ of Problem 1 is not empty. Then, for a given reflexive matrix $X_{0} \in \mathrm{RH}^{n\times n}(Q)$,
\[
AXB + CX^{H}D = F \Longleftrightarrow A(X-X_{0})B + C(X-X_{0})^{H}D = F - AX_{0}B - CX_{0}^{H}D. \tag{52}
\]
Let $\dot{X} = X - X_{0}$ and $\dot{F} = F - AX_{0}B - CX_{0}^{H}D$; then Problem 2 is equivalent to finding the least Frobenius norm reflexive solution of the quaternion matrix equation
\[
A\dot{X}B + C\dot{X}^{H}D = \dot{F}. \tag{53}
\]
By using Algorithm 7, let the initial iteration matrix be $\dot{X}(1) = A^{H}GB^{H} + DG^{H}C + QA^{H}GB^{H}Q + QDG^{H}CQ$, where $G$ is an arbitrary quaternion matrix in $\mathbb{H}^{m\times p}$, or, especially, $\dot{X}(1) = 0$; we can then obtain the least Frobenius norm reflexive solution $\dot{X}^{*}$ of (53). The solution of Problem 2 is then
\[
\breve{X} = \dot{X}^{*} + X_{0}. \tag{54}
\]
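For a quick prototype, Algorithm 7 and the reduction (52)–(54) can be exercised in the real-matrix analogue of (1), where the conjugate transpose $X^{H}$ becomes the ordinary transpose $X^{T}$; the quaternion case runs identically after applying the representation $\phi$. The function below is our illustrative sketch under these assumptions, not the authors' code, and the test problem is randomly generated to be consistent:

```python
import numpy as np

def solve_reflexive(A, B, C, D, F, Q, tol=1e-12, maxit=500):
    """Algorithm 7 for A X B + C X^T D = F with X = Q X Q (real analogue)."""
    n = Q.shape[0]
    X = np.zeros((n, n))                 # initial matrix X(1) = 0 (least-norm choice)
    R = F - A @ X @ B - C @ X.T @ D
    P = 0.5 * (A.T @ R @ B.T + D @ R.T @ C
               + Q @ A.T @ R @ B.T @ Q + Q @ D @ R.T @ C @ Q)
    for _ in range(maxit):
        if np.linalg.norm(R) < tol:
            break
        alpha = np.linalg.norm(R) ** 2 / np.linalg.norm(P) ** 2
        X = X + alpha * P
        Rnew = R - alpha * (A @ P @ B + C @ P.T @ D)
        P = (0.5 * (A.T @ Rnew @ B.T + D @ Rnew.T @ C
                    + Q @ A.T @ Rnew @ B.T @ Q + Q @ D @ Rnew.T @ C @ Q)
             + (np.linalg.norm(Rnew) ** 2 / np.linalg.norm(R) ** 2) * P)
        R = Rnew
    return X, np.linalg.norm(R)

# build a consistent test problem from a known reflexive X_true
rng = np.random.default_rng(3)
n, m, p = 4, 3, 3
Q = np.diag([1.0, 1.0, -1.0, -1.0])      # a generalized reflection matrix: Q^2 = I, Q^T = Q
A, C = rng.standard_normal((m, n)), rng.standard_normal((m, n))
B, D = rng.standard_normal((n, p)), rng.standard_normal((n, p))
X_true = rng.standard_normal((n, n))
X_true = 0.5 * (X_true + Q @ X_true @ Q)  # project onto the reflexive matrices
F = A @ X_true @ B + C @ X_true.T @ D

X, res = solve_reflexive(A, B, C, D, F, Q)
assert res < 1e-8                         # residual vanishes for a consistent system
assert np.allclose(X, Q @ X @ Q)          # every iterate stays reflexive
```

Problem 2 then reduces to the same routine: solve for $\dot{X}$ with the shifted right-hand side $\dot{F} = F - AX_{0}B - CX_{0}^{T}D$ and recover $\breve{X} = \dot{X}^{*} + X_{0}$, as in (52)–(54).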

4. Examples

In this section, we give two examples to illustrate the efficiency of the theoretical results.

Example 18. Consider the quaternion matrix equation

A X B + C X^H D = F,   (55)

with

A = [ 1+i+j+k       2+2j+2k       i+3j+3k      2+i+4j+4k
      3+2i+2j+k     3+3i+j        1+7i+3j+k    6−7i+2j−4k ],

B = [ −2+i+3j+k     3+3i+4j+k
      5−4i−6j−5k    −1−5i+4j3k
      −1+2i+j+k     2+2j+6k
      3−2i+j+k      4+i−2j−4k ],

C = [ 1+i+j+k       2+2i+2j       k            2+2i+2j+k
      −1+i+3j+3k    −2+3i+4j+2k   6−2i+6j+4k   3+i+j+5k ],

D = [ −1+4j+2k      1+2i+3j3k
      7+6i+5j+6k    3+7i−j+9k
      4+i+6j+k      7+2i+9jk
      1+i+3j−3k     1+3i+2j+2k ],

F = [ 1+3i+2j+k     2−2i+3j+3k
      −1+4i+2j+k    −2+2i−1j ].   (56)

We apply Algorithm 7 to find the reflexive solution with respect to the generalized reflection matrix

Q = [ 0.28     0     0.96k    0
      0       −1     0        0
      −0.96k   0    −0.28     0
      0        0     0       −1 ].   (57)

For the initial matrix X(1) = Q, we obtain a solution, that is,
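The stated properties of Q (a generalized reflection matrix, i.e., Q^H = Q and Q² = I) can be checked numerically. One convenient encoding, assumed here purely for illustration, represents a quaternion matrix as a pair (Q₁, Q₂) of complex matrices with Q = Q₁ + Q₂ j:

```python
import numpy as np

def qmul(A, B):
    # Product of quaternion matrices encoded as complex pairs:
    # (A1 + A2 j)(B1 + B2 j) = (A1 B1 - A2 conj(B2)) + (A1 B2 + A2 conj(B1)) j
    A1, A2 = A
    B1, B2 = B
    return (A1 @ B1 - A2 @ np.conj(B2), A1 @ B2 + A2 @ np.conj(B1))

def qherm(A):
    # Conjugate transpose: (Q1 + Q2 j)^H = Q1^H - Q2^T j
    A1, A2 = A
    return (A1.conj().T, -A2.T)

# Q from (57): real entries go in Q1; an entry x*k encodes as (x*1j) in Q2
Q1 = np.diag([0.28, -1.0, -0.28, -1.0]).astype(complex)
Q2 = np.zeros((4, 4), dtype=complex)
Q2[0, 2] = 0.96j   # 0.96k
Q2[2, 0] = -0.96j  # -0.96k
Q = (Q1, Q2)

# Q is Hermitian and involutory, hence a generalized reflection matrix
QH = qherm(Q)
assert np.allclose(QH[0], Q1) and np.allclose(QH[1], Q2)
Qsq = qmul(Q, Q)
assert np.allclose(Qsq[0], np.eye(4)) and np.allclose(Qsq[1], 0)
```

Note that 0.28² + 0.96² = 1, which is exactly what makes the 2x2 block in rows/columns 1 and 3 square to the identity.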

𝑋∗ =𝑋(20)

= [ 0.27257−0.23500i+0.034789j+0.054677k    0.085841−0.064349i+0.10387j−0.26871k    −0.043349+0.079178i+0.085167j+0.84221k    −0.014337−0.14868i−0.011146j−0.00036397k
    0.028151−0.013246i−0.091097j+0.073137k  −0.46775+0.048363i−0.015746j+0.21642k   0.097516+0.12146i−0.017662j−0.037534k     −0.064483+0.070885i−0.089331j+0.074969k
    0.043349+0.079178i+0.085167j−0.84221k   0.35828−0.13849i−0.085799j+0.11446k     −0.21872+0.18532i+0.011399j+0.029390k     0.00048529+0.014861i−0.19824j−0.019117k
    0.13454−0.028417i−0.063010j−0.10574k    0.10491−0.039417i+0.18066j−0.066963k    −0.14099+0.084013i−0.037890j−0.17938k     −0.63928−0.067488i+0.042030j+0.10106k ],   (58)

with corresponding residual ‖R(20)‖ = 2.2775 × 10^−11. The convergence curve for the Frobenius norm of the residuals R(k) is given in Figure 1, where r(k) = ‖R(k)‖.

Example 19. In this example, we choose the matrices A, B, C, D, F, and Q the same as in Example 18. Let

X_0 = [ 1        −0.36i−0.75k    0        −0.5625i−0.48k
        0.36i     1.28           0.48j    −0.96j
        0         1−0.48j        1         0.64−0.75j
        0.48k     0.96j          0.64      0.72 ]  ∈ RH^{n×n}(Q).   (59)

In order to find the optimal approximation reflexive solution to the given matrix X_0, let Ẋ = X − X_0 and Ḟ = F − A X_0 B − C X_0^H D. Now we can obtain the least Frobenius norm reflexive solution Ẋ* of the quaternion matrix equation A Ẋ B + C Ẋ^H D = Ḟ by choosing the initial matrix Ẋ(1) = 0, that is,

𝑋̇∗ = 𝑋̇(20)

= [ −0.47933+0.021111i+0.11703j−0.066934k        −0.52020−0.19377i+0.17420j−0.59403k     0.24134−0.025941i−0.31930j+0.26961k        0.089010−0.67960i+0.43044j−0.045772k
    −0.31132−0.11705i−0.33980j+0.11991k          −0.86390−0.21715i−0.037089j+0.40609k    −0.089933−0.25485i+0.087788j−0.23349k      −0.17004−0.37655i+0.80481j+0.37636k
    −0.24134−0.025941i−0.31930j−0.26961k         −0.44552+0.13065i+0.14533j+0.39015k     −0.63660+0.16515i−0.13216j+0.073845k       −0.034329+0.32283i+0.50970j−0.066757k
    −0.11693−0.27043i+0.22718j−0.0000012004k     −0.44790−0.31260i−1.0271j+0.27275k      0.00000090029+0.17038i+0.20282j−0.087697k  −0.44735+0.21846i−0.21469j−0.15347k ],   (60)

with corresponding residual ‖Ṙ(20)‖ = 4.3455 × 10^−11. The convergence curve for the Frobenius norm of the residuals Ṙ(k) is given in Figure 2, where r(k) = ‖Ṙ(k)‖. Therefore, the optimal approximation reflexive solution to the given matrix X_0 is

X̆ = Ẋ* + X_0
  = [ 0.52067+0.021111i+0.11703j−0.066934k    −0.52020−0.55377i+0.17420j−1.3440k     0.24134−0.025941i−0.31930j+0.26961k    0.089010−1.2421i+0.43044j−0.52577k
      −0.31132+0.24295i−0.33980j+0.11991k     0.41610−0.21715i−0.037089j+0.40609k    −0.089933−0.25485i+0.56779j−0.23349k   −0.17004−0.37655i−0.15519j+0.37636k
      −0.24134−0.025941i−0.31930j−0.26961k    0.55448+0.13065i−0.33467j+0.39015k     0.36340+0.16515i−0.13216j+0.073845k    0.60567+0.32283i−0.24030j−0.066757k
      −0.11693−0.27043i+0.22718j+0.48000k     −0.44790−0.31260i−0.067117j+0.27275k   0.64000+0.17038i+0.20282j−0.087697k    0.27265+0.21846i−0.21469j−0.15347k ].   (61)

The results show that Algorithm 7 is quite efficient.

5. Conclusions

In this paper, an algorithm has been presented for solving the reflexive solution of the quaternion matrix equation A X B + C X^H D = F. By this algorithm, the solvability of the problem can be determined automatically. Also, when the quaternion matrix equation A X B + C X^H D = F is consistent over reflexive matrices X, a reflexive solution can be obtained within finitely many iteration steps for any reflexive initial iterative matrix, in the absence of roundoff errors. It has been proven that, by choosing a suitable initial iterative matrix, the least Frobenius norm reflexive solution of the quaternion matrix equation A X B + C X^H D = F can be derived through Algorithm 7. Furthermore, by using Algorithm 7, we solved Problem 2. Finally, two numerical examples were given to show the efficiency of the presented algorithm.

Acknowledgments

This research was supported by grants from the Innovation Program of Shanghai Municipal Education Commission (13ZZ080), the National Natural Science Foundation of China (11171205), the Natural Science Foundation of Shanghai (11ZR1412500), the Discipline Project at the corresponding level of Shanghai (A. 13-0101-12-005), and the Shanghai Leading Academic Discipline Project (J50101).

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2013, Article ID 961568, 6 pages. http://dx.doi.org/10.1155/2013/961568

Research Article The Optimization on Ranks and Inertias of a Quadratic Hermitian Matrix Function and Its Applications

Yirong Yao

Department of Mathematics, Shanghai University, Shanghai 200444, China

Correspondence should be addressed to Yirong Yao; [email protected]

Received 3 December 2012; Accepted 9 January 2013

Academic Editor: Yang Zhang

Copyright © 2013 Yirong Yao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We solve optimization problems on the ranks and inertias of the quadratic Hermitian matrix function Q − X P X* subject to a consistent system of matrix equations A X = C and X B = D. As applications, we derive necessary and sufficient conditions for the systems of matrix equations and matrix inequalities A X = C, X B = D, X P X* = (>, <, ≥, ≤) Q, in the Löwner partial ordering, to be feasible, respectively. The findings of this paper widely extend the known results in the literature.

1. Introduction

Throughout this paper, we denote the complex number field by C. The notations C^{m×n} and C_h^{m×m} stand for the sets of all m × n complex matrices and all m × m complex Hermitian matrices, respectively. The identity matrix with an appropriate size is denoted by I. For a complex matrix A, the symbols A* and r(A) stand for the conjugate transpose and the rank of A, respectively. The Moore-Penrose inverse of A ∈ C^{m×n}, denoted by A†, is defined to be the unique solution X to the following four matrix equations:

(1) A X A = A,  (2) X A X = X,  (3) (A X)* = A X,  (4) (X A)* = X A.   (1)

Furthermore, L_A and R_A stand for the two projectors L_A = I − A†A and R_A = I − A A† induced by A, respectively. It is known that L_A = L_A* and R_A = R_A*. For A ∈ C_h^{m×m}, its inertia

I_n(A) = (i_+(A), i_−(A), i_0(A))   (2)

is the triple consisting of the numbers of the positive, negative, and zero eigenvalues of A, counted with multiplicities, respectively. It is easy to see that i_+(A) + i_−(A) = r(A). For two Hermitian matrices A and B of the same sizes, we say A > B (A ≥ B) in the Löwner partial ordering if A − B is positive (nonnegative) definite.

The investigation of maximal and minimal ranks and inertias of linear and quadratic matrix functions has been active in recent years (see, e.g., [1–24]). Tian [21] considered the maximal and minimal ranks and inertias of the Hermitian quadratic matrix function

h(X) = A X B X* A* + A X C + C* X* A* + D,   (3)

where B and D are Hermitian matrices. Moreover, Tian [22] investigated the maximal and minimal ranks and inertias of the quadratic Hermitian matrix function

f(X) = Q − X P X*   (4)

such that A X = C.

The goal of this paper is to give the maximal and minimal ranks and inertias of the matrix function (4) subject to the consistent system of matrix equations

A X = C,  X B = D,   (5)

where Q ∈ C_h^{n×n} and P ∈ C_h^{p×p} are given complex matrices. As applications, we consider the necessary and sufficient
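The notation just introduced (the Moore-Penrose inverse A†, the projectors L_A and R_A, and the inertia triple) is easy to reproduce numerically. A small NumPy sketch for real matrices; the helper `inertia` is ours, with an explicit tolerance deciding which eigenvalues count as zero:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))

Ad = np.linalg.pinv(A)       # Moore-Penrose inverse A^dagger
LA = np.eye(6) - Ad @ A      # L_A = I - A^dagger A
RA = np.eye(4) - A @ Ad      # R_A = I - A A^dagger

# The four Penrose conditions (1)
assert np.allclose(A @ Ad @ A, A)
assert np.allclose(Ad @ A @ Ad, Ad)
assert np.allclose((A @ Ad).T, A @ Ad)
assert np.allclose((Ad @ A).T, Ad @ A)

# L_A and R_A are symmetric idempotent projectors with A L_A = 0 and R_A A = 0
assert np.allclose(LA, LA.T) and np.allclose(LA @ LA, LA)
assert np.allclose(A @ LA, 0) and np.allclose(RA @ A, 0)

def inertia(H, tol=1e-10):
    # (i_+, i_-, i_0): counts of positive, negative, zero eigenvalues of Hermitian H
    w = np.linalg.eigvalsh(H)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

H = np.diag([3.0, -1.0, 0.0, 2.0])
assert inertia(H) == (2, 1, 1)
# i_+(H) + i_-(H) = r(H)
assert inertia(H)[0] + inertia(H)[1] == np.linalg.matrix_rank(H)
```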

conditions for the systems of matrix equations and inequalities

A X = C,  X B = D,  X P X* = Q,
A X = C,  X B = D,  X P X* > Q,
A X = C,  X B = D,  X P X* < Q,   (6)
A X = C,  X B = D,  X P X* ≥ Q,
A X = C,  X B = D,  X P X* ≤ Q,

in the Löwner partial ordering to be feasible, respectively.

2. The Optimization on Ranks and Inertias of (4) Subject to (5)

In this section, we consider the maximal and minimal ranks and inertias of the quadratic Hermitian matrix function (4) subject to (5). We begin with the following lemmas.

Lemma 1 (see [3]). Let A ∈ C_h^{m×m}, B ∈ C^{m×p}, and C ∈ C^{q×m} be given and denote

P_1 = [A, B; B*, 0],  P_2 = [A, C*; C, 0],  P_3 = [A, B, C*; B*, 0, 0],  P_4 = [A, B, C*; C, 0, 0].   (7)

Then

max_{Y ∈ C^{p×q}} r[A − B Y C − (B Y C)*] = min { r[A, B, C*], r(P_1), r(P_2) },

min_{Y ∈ C^{p×q}} r[A − B Y C − (B Y C)*] = 2 r[A, B, C*] + max { w_+ + w_−, g_+ + g_−, w_+ + g_−, w_− + g_+ },

max_{Y ∈ C^{p×q}} i_±[A − B Y C − (B Y C)*] = min { i_±(P_1), i_±(P_2) },

min_{Y ∈ C^{p×q}} i_±[A − B Y C − (B Y C)*] = r[A, B, C*] + max { i_±(P_1) − r(P_3), i_±(P_2) − r(P_4) },   (8)

where

w_± = i_±(P_1) − r(P_3),  g_± = i_±(P_2) − r(P_4).   (9)

Lemma 2 (see [4]). Let A ∈ C^{m×n}, B ∈ C^{m×k}, C ∈ C^{l×n}, D ∈ C^{m×p}, E ∈ C^{q×n}, Q ∈ C^{m_1×k}, and P ∈ C^{l×n_1} be given. Then

(1) r(A) + r(R_A B) = r(B) + r(R_B A) = r[A, B],

(2) r(A) + r(C L_A) = r(C) + r(A L_C) = r[A; C],

(3) r(B) + r(C) + r(R_B A L_C) = r[A, B; C, 0],

(4) r(P) + r(Q) + r[A, B L_Q; R_P C, 0] = r[A, B, 0; C, 0, P; 0, Q, 0],

(5) r[R_B A L_C, R_B D; E L_C, 0] + r(B) + r(C) = r[A, D, B; E, 0, 0; C, 0, 0].   (10)

Lemma 3 (see [23]). Let A ∈ C_h^{m×m}, B ∈ C^{m×n}, C ∈ C_h^{n×n}, Q ∈ C^{m×n}, and P ∈ C^{p×n} be given, and let T ∈ C^{m×m} be nonsingular. Then

(1) i_±(T A T*) = i_±(A),

(2) i_±[A, 0; 0, C] = i_±(A) + i_±(C),

(3) i_±[0, Q; Q*, 0] = r(Q),

(4) i_±[A, B L_P; L_P B*, 0] + r(P) = i_±[A, B, 0; B*, 0, P*; 0, P, 0].   (11)

Lemma 4. Let A, C, B, and D be given. Then the following statements are equivalent.

(1) System (5) is consistent.

(2)

r[A, C] = r(A),  r[D; B] = r(B),  A D = C B.   (12)

In this case, the general solution can be written as

X = A† C + L_A D B† + L_A V R_B,   (13)

where V is an arbitrary matrix over C with appropriate size.

Now we give the fundamental theorem of this paper.
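Lemma 4 can be exercised numerically: building a consistent pair AX = C, XB = D from a known common solution, the three consistency tests in (12) hold, and X = A†C + L_A D B† + L_A V R_B solves both equations for every V. A NumPy sketch with real matrices; the sizes are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, m, k = 4, 3, 2, 2

# Build a consistent pair AX = C, XB = D from a known common solution
X_true = rng.standard_normal((n, p))
A = rng.standard_normal((m, n)); C = A @ X_true
B = rng.standard_normal((p, k)); D = X_true @ B

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
LA = np.eye(n) - Ad @ A      # L_A = I - A^dagger A
RB = np.eye(p) - B @ Bd      # R_B = I - B B^dagger

# Consistency tests of (12): r[A, C] = r(A), r[D; B] = r(B), AD = CB
assert np.linalg.matrix_rank(np.hstack([A, C])) == np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(np.vstack([D, B])) == np.linalg.matrix_rank(B)
assert np.allclose(A @ D, C @ B)

# General solution (13) solves both equations for any V,
# since A L_A = 0 and R_B B = 0
V = rng.standard_normal((n, p))
X = Ad @ C + LA @ D @ Bd + LA @ V @ RB
assert np.allclose(A @ X, C)
assert np.allclose(X @ B, D)
```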

Theorem 5. Let f(X) be as given in (4) and assume that A X = C and X B = D in (5) are consistent. Then

max_{AX=C, XB=D} r(Q − X P X*)
  = min { n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − r(P),
          2n + r(A Q A* − C P C*) − 2 r(A),
          r[Q, 0, D; 0, −P, B; D*, B*, 0] − 2 r(B) },

min_{AX=C, XB=D} r(Q − X P X*)
  = 2n + 2 r[0, P, P; AQ, CP, 0; −D*, 0, B*] − 2 r(A) − 2 r(B) − r(P)
    + max { s_+ + s_−, t_+ + t_−, s_+ + t_−, s_− + t_+ },   (14)

max_{AX=C, XB=D} i_±(Q − X P X*)
  = min { n + i_±(A Q A* − C P C*) − r(A),  i_±[Q, 0, D; 0, −P, B; D*, B*, 0] − r(B) },

min_{AX=C, XB=D} i_±(Q − X P X*)
  = n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_±(P) + max { s_±, t_± },   (15)

where

s_± = −n + r(A) + r(B) − i_∓(P) + i_±(A Q A* − C P C*) − r[CP, A Q A*; B*, D* A*],

t_± = −n + r(A) + r(B) − i_∓(P) + i_±[Q, 0, D; 0, −P, B; D*, B*, 0] − r[0, P, B; AQ, CP, 0; D*, B*, 0].   (16)

Proof. It follows from Lemma 4 that the general solution of (5) can be expressed as

X = X_0 + L_A V R_B,   (17)

where V is an arbitrary matrix over C and X_0 is a special solution of (5). Then

Q − X P X* = Q − (X_0 + L_A V R_B) P (X_0 + L_A V R_B)*.   (18)

Note that

r[Q − (X_0 + L_A V R_B) P (X_0 + L_A V R_B)*]
  = r[Q, (X_0 + L_A V R_B) P; P (X_0 + L_A V R_B)*, P] − r(P)
  = r( [Q, X_0 P; P X_0*, P] + [L_A; 0] V [0, R_B P] + ([L_A; 0] V [0, R_B P])* ) − r(P),   (19)

i_±[Q − (X_0 + L_A V R_B) P (X_0 + L_A V R_B)*]
  = i_±[Q, (X_0 + L_A V R_B) P; P (X_0 + L_A V R_B)*, P] − i_±(P)
  = i_±( [Q, X_0 P; P X_0*, P] + [L_A; 0] V [0, R_B P] + ([L_A; 0] V [0, R_B P])* ) − i_±(P).   (20)

Let

q(V) = [Q, X_0 P; P X_0*, P] + [L_A; 0] V [0, R_B P] + ([L_A; 0] V [0, R_B P])*.   (21)

Applying Lemma 1 to (19) and (20) yields

max_V r[q(V)] = min { r(M), r(M_1), r(M_2) },
min_V r[q(V)] = 2 r(M) + max { s_+ + s_−, t_+ + t_−, s_+ + t_−, s_− + t_+ },
max_V i_±[q(V)] = min { i_±(M_1), i_±(M_2) },
min_V i_±[q(V)] = r(M) + max { s_±, t_± },   (22)

where

M = [Q, X_0 P, L_A, 0; P X_0*, P, 0, P R_B],

M_1 = [Q, X_0 P, L_A; P X_0*, P, 0; L_A, 0, 0],   M_2 = [Q, X_0 P, 0; P X_0*, P, P R_B; 0, R_B P, 0],

M_3 = [Q, X_0 P, L_A, 0; P X_0*, P, 0, P R_B; L_A, 0, 0, 0],   M_4 = [Q, X_0 P, L_A, 0; P X_0*, P, 0, P R_B; 0, R_B P, 0, 0],

s_± = i_±(M_1) − r(M_3),   t_± = i_±(M_2) − r(M_4).   (23)

Applying Lemmas 2 and 3, elementary matrix operations, and congruence matrix operations, we obtain

r(M) = n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B),

r(M_1) = 2n + r(A Q A* − C P C*) − 2 r(A) + r(P),

i_±(M_1) = n + i_±(A Q A* − C P C*) − r(A) + i_±(P),

r(M_2) = r[Q, 0, D; 0, −P, B; D*, B*, 0] − 2 r(B) + r(P),

i_±(M_2) = i_±[Q, 0, D; 0, −P, B; D*, B*, 0] − r(B) + i_±(P),

r(M_3) = 2n + r(P) − 2 r(A) − r(B) + r[CP, A Q A*; B*, D* A*],

r(M_4) = n + r(P) + r[0, P, B; AQ, CP, 0; D*, B*, 0] − 2 r(B) − r(A).   (24)

Substituting (24) into (22), we obtain the results.

Using Theorem 5, we can easily get the following.

Theorem 6. Let f(X) be as given in (4), let s_± and t_± be as given in Theorem 5, and assume that A X = C and X B = D in (5) are consistent. Then we have the following.

(a) A X = C and X B = D have a common solution such that Q − X P X* ≥ 0 if and only if

n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_−(P) + s_− ≤ 0,
n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_−(P) + t_− ≤ 0.   (25)

(b) A X = C and X B = D have a common solution such that Q − X P X* ≤ 0 if and only if

n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_+(P) + s_+ ≤ 0,
n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_+(P) + t_+ ≤ 0.   (26)

(c) A X = C and X B = D have a common solution such that Q − X P X* > 0 if and only if

i_+(A Q A* − C P C*) − r(A) ≥ 0,   i_+[Q, 0, D; 0, −P, B; D*, B*, 0] − r(B) ≥ n.   (27)

(d) A X = C and X B = D have a common solution such that Q − X P X* < 0 if and only if

i_−(A Q A* − C P C*) − r(A) ≥ 0,   i_−[Q, 0, D; 0, −P, B; D*, B*, 0] − r(B) ≥ n.   (28)

(e) All common solutions of A X = C and X B = D satisfy Q − X P X* ≥ 0 if and only if

n + i_−(A Q A* − C P C*) − r(A) = 0,  or  i_−[Q, 0, D; 0, −P, B; D*, B*, 0] − r(B) = 0.   (29)

(f) All common solutions of A X = C and X B = D satisfy Q − X P X* ≤ 0 if and only if

n + i_+(A Q A* − C P C*) − r(A) = 0,  or  i_+[Q, 0, D; 0, −P, B; D*, B*, 0] − r(B) = 0.   (30)

(g) All common solutions of A X = C and X B = D satisfy Q − X P X* > 0 if and only if

n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_+(P) + s_+ = n,   (31)

or

n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_+(P) + t_+ = n.   (32)

(h) All common solutions of A X = C and X B = D satisfy Q − X P X* < 0 if and only if

n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_−(P) + s_− = n,   (33)

or

n + r[0, P, P; AQ, CP, 0; −D*, 0, B*] − r(A) − r(B) − i_−(P) + t_− = n.   (34)

(i) A X = C, X B = D, and Q = X P X* have a common solution if and only if

2n + 2 r[0, P, P; AQ, CP, 0; −D*, 0, B*] − 2 r(A) − 2 r(B) − r(P) + s_+ + s_− ≤ 0,
2n + 2 r[0, P, P; AQ, CP, 0; −D*, 0, B*] − 2 r(A) − 2 r(B) − r(P) + t_+ + t_− ≤ 0,
2n + 2 r[0, P, P; AQ, CP, 0; −D*, 0, B*] − 2 r(A) − 2 r(B) − r(P) + s_+ + t_− ≤ 0,
2n + 2 r[0, P, P; AQ, CP, 0; −D*, 0, B*] − 2 r(A) − 2 r(B) − r(P) + s_− + t_+ ≤ 0.   (35)

Let P = I in Theorem 5; we get the following corollary.

Corollary 7. Let Q ∈ C^{n×n} and A, B, C, D be given. Assume that (5) is consistent. Denote

T_1 = [C, AQ; B*, D*],   T_2 = A Q A* − C C*,   T_3 = [Q, D; D*, B* B],
T_4 = [C, A Q A*; B*, D* A*],   T_5 = [C B, AQ; B* B, D*].   (36)

Then

max_{AX=C, XB=D} r(Q − X X*) = min { n + r(T_1) − r(A) − r(B), 2n + r(T_2) − 2 r(A), n + r(T_3) − 2 r(B) },

min_{AX=C, XB=D} r(Q − X X*) = 2 r(T_1) + max { r(T_2) − 2 r(T_4), −n + r(T_3) − 2 r(T_5),
    i_+(T_2) + i_−(T_3) − r(T_4) − r(T_5), −n + i_−(T_2) + i_+(T_3) − r(T_4) − r(T_5) },

max_{AX=C, XB=D} i_+(Q − X X*) = min { n + i_+(T_2) − r(A), i_+(T_3) − r(B) },

max_{AX=C, XB=D} i_−(Q − X X*) = min { n + i_−(T_2) − r(A), n + i_−(T_3) − r(B) },

min_{AX=C, XB=D} i_+(Q − X X*) = r(T_1) + max { i_+(T_2) − r(T_4), i_+(T_3) − n − r(T_5) },

min_{AX=C, XB=D} i_−(Q − X X*) = r(T_1) + max { i_−(T_2) − r(T_4), i_−(T_3) − r(T_5) }.   (37)

Remark 8. Corollary 7 is one of the results in [24].

Let B and D vanish in Theorem 5; then we can obtain the maximal and minimal ranks and inertias of (4) subject to A X = C.

Corollary 9. Let f(X) be as given in (4) and assume that A X = C is consistent. Then

max_{AX=C} r(Q − X P X*) = min { n + r[AQ, CP] − r(A), 2n + r(A Q A* − C P C*) − 2 r(A), r(Q) + r(P) },

min_{AX=C} r(Q − X P X*) = 2n + 2 r[AQ, CP] − 2 r(A) + max { s_+ + s_−, t_+ + t_−, s_+ + t_−, s_− + t_+ },

max_{AX=C} i_±(Q − X P X*) = min { n + i_±(A Q A* − C P C*) − r(A), i_±(Q) + i_∓(P) },

min_{AX=C} i_±(Q − X P X*) = n + r[AQ, CP] − r(A) + i_∓(P) + max { s_±, t_± },   (38)

where

s_± = −n + r(A) − i_∓(P) + i_±(A Q A* − C P C*) − r[CP, A Q A*],
t_± = −n + r(A) + i_±(Q) − r(P) − r[AQ, CP].   (39)

Remark 10. Corollary 9 is one of the results in [22].

References

[1] D. L. Chu, Y. S. Hung, and H. J. Woerdeman, "Inertia and rank characterizations of some matrix expressions," SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 3, pp. 1187–1226, 2009.
[2] Z. H. He and Q. W. Wang, "Solutions to optimization problems on ranks and inertias of a matrix function with applications," Applied Mathematics and Computation, vol. 219, no. 6, pp. 2989–3001, 2012.
[3] Y. Liu and Y. Tian, "Max-min problems on the ranks and inertias of the matrix expressions A − BXC ± (BXC)* with applications," Journal of Optimization Theory and Applications, vol. 148, no. 3, pp. 593–622, 2011.
[4] G. Marsaglia and G. P. H. Styan, "Equalities and inequalities for ranks of matrices," Linear and Multilinear Algebra, vol. 2, pp. 269–292, 1974/75.
[5] X. Zhang, Q. W. Wang, and X. Liu, "Inertias and ranks of some Hermitian matrix functions with applications," Central European Journal of Mathematics, vol. 10, no. 1, pp. 329–351, 2012.
[6] Q. W. Wang, Y. Zhou, and Q. Zhang, "Ranks of the common solution to six quaternion matrix equations," Acta Mathematicae Applicatae Sinica, vol. 27, no. 3, pp. 443–462, 2011.
[7] Q. Zhang and Q. W. Wang, "The (P, Q)-(skew)symmetric extremal rank solutions to a system of quaternion matrix equations," Applied Mathematics and Computation, vol. 217, no. 22, pp. 9286–9296, 2011.
[8] D. Z. Lian, Q. W. Wang, and Y. Tang, "Extreme ranks of a partial banded block quaternion matrix expression subject to some matrix equations with applications," Algebra Colloquium, vol. 18, no. 2, pp. 333–346, 2011.
[9] H. S. Zhang and Q. W. Wang, "Ranks of submatrices in a general solution to a quaternion system with applications," Bulletin of the Korean Mathematical Society, vol. 48, no. 5, pp. 969–990, 2011.
[10] Q. W. Wang and J. Jiang, "Extreme ranks of (skew-)Hermitian solutions to a quaternion matrix equation," Electronic Journal of Linear Algebra, vol. 20, pp. 552–573, 2010.
[11] I. A. Khan, Q. W. Wang, and G. J. Song, "Minimal ranks of some quaternion matrix expressions with applications," Applied Mathematics and Computation, vol. 217, no. 5, pp. 2031–2040, 2010.
[12] Q. Wang, S. Yu, and W. Xie, "Extreme ranks of real matrices in solution of the quaternion matrix equation AXB = C with applications," Algebra Colloquium, vol. 17, no. 2, pp. 345–360, 2010.
[13] Q. W. Wang, H. S. Zhang, and G. J. Song, "A new solvable condition for a pair of generalized Sylvester equations," Electronic Journal of Linear Algebra, vol. 18, pp. 289–301, 2009.
[14] Q. W. Wang, S. W. Yu, and Q. Zhang, "The real solutions to a system of quaternion matrix equations with applications," Communications in Algebra, vol. 37, no. 6, pp. 2060–2079, 2009.
[15] Q. W. Wang and C. K. Li, "Ranks and the least-norm of the general solution to a system of quaternion matrix equations," Linear Algebra and its Applications, vol. 430, no. 5-6, pp. 1626–1640, 2009.
[16] Q. W. Wang, H. S. Zhang, and S. W. Yu, "On solutions to the quaternion matrix equation AXB + CYD = E," Electronic Journal of Linear Algebra, vol. 17, pp. 343–358, 2008.
[17] Q. W. Wang, G. J. Song, and X. Liu, "Maximal and minimal ranks of the common solution of some linear matrix equations over an arbitrary division ring with applications," Algebra Colloquium, vol. 16, no. 2, pp. 293–308, 2009.
[18] Q. W. Wang, G. J. Song, and C. Y. Lin, "Rank equalities related to the generalized inverse A_{T,S}^{(2)} with applications," Applied Mathematics and Computation, vol. 205, no. 1, pp. 370–382, 2008.
[19] Q. W. Wang, S. W. Yu, and C. Y. Lin, "Extreme ranks of a linear quaternion matrix expression subject to triple quaternion matrix equations with applications," Applied Mathematics and Computation, vol. 195, no. 2, pp. 733–744, 2008.
[20] Q. W. Wang, G. J. Song, and C. Y. Lin, "Extreme ranks of the solution to a consistent system of linear quaternion matrix equations with an application," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1517–1532, 2007.
[21] Y. Tian, "Formulas for calculating the extremum ranks and inertias of a four-term quadratic matrix-valued function and their applications," Linear Algebra and its Applications, vol. 437, no. 3, pp. 835–859, 2012.
[22] Y. Tian, "Solving optimization problems on ranks and inertias of some constrained nonlinear matrix functions via an algebraic linearization method," Nonlinear Analysis: Theory, Methods & Applications, vol. 75, no. 2, pp. 717–734, 2012.
[23] Y. Tian, "Maximization and minimization of the rank and inertia of the Hermitian matrix expression A − BX − (BX)* with applications," Linear Algebra and its Applications, vol. 434, no. 10, pp. 2109–2139, 2011.
[24] Q. W. Wang, X. Zhang, and Z. H. He, "On the Hermitian structures of the solution to a pair of matrix equations," Linear and Multilinear Algebra, vol. 61, no. 1, pp. 73–90, 2013.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2012, Article ID 471573, 19 pages. doi:10.1155/2012/471573

Research Article Constrained Solutions of a System of Matrix Equations

Qing-Wen Wang1 and Juan Yu1, 2

1 Department of Mathematics, Shanghai University, 99 Shangda Road, Shanghai 200444, China 2 Department of Basic Mathematics, China University of Petroleum, Qingdao 266580, China

Correspondence should be addressed to Qing-Wen Wang, [email protected]

Received 26 September 2012; Accepted 7 December 2012

Academic Editor: Panayiotis J. Psarrakos

Copyright © 2012 Q.-W. Wang and J. Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We derive the necessary and sufficient conditions for, and the expressions of, the orthogonal solutions, the symmetric orthogonal solutions, and the skew-symmetric orthogonal solutions of the system of matrix equations AX = B and XC = D, respectively. When the matrix equations are not consistent, the least squares symmetric orthogonal solutions and the least squares skew-symmetric orthogonal solutions are respectively given. As an auxiliary, an algorithm is provided to compute the least squares symmetric orthogonal solutions, and meanwhile an example is presented to show that it is reasonable.

1. Introduction

Throughout this paper, the following notations will be used. R^{m×n}, OR^{n×n}, SR^{n×n}, and ASR^{n×n} denote the set of all m × n real matrices, the set of all n × n orthogonal matrices, the set of all n × n symmetric matrices, and the set of all n × n skew-symmetric matrices, respectively. I_n is the identity matrix of order n. (·)^T and tr(·) represent the transpose and the trace of a real matrix, respectively. ‖·‖ stands for the Frobenius norm induced by the inner product. The following two definitions will also be used.

Definition 1.1 (see [1]). A real matrix X ∈ R^{n×n} is said to be a symmetric orthogonal matrix if X^T = X and X^T X = I_n.

Definition 1.2 (see [2]). A real matrix X ∈ R^{2m×2m} is called a skew-symmetric orthogonal matrix if X^T = −X and X^T X = I_{2m}.
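Two standard families illustrate these definitions (our own examples, not drawn from the paper): any Householder reflector is a symmetric orthogonal matrix, and the block matrix J = [0, −I_m; I_m, 0] is a skew-symmetric orthogonal matrix of even order:

```python
import numpy as np

# Symmetric orthogonal: a Householder reflector H = I - 2 v v^T / (v^T v)
v = np.array([1.0, 2.0, 2.0])
H = np.eye(3) - 2.0 * np.outer(v, v) / (v @ v)
assert np.allclose(H, H.T)               # X^T = X
assert np.allclose(H.T @ H, np.eye(3))   # X^T X = I_n

# Skew-symmetric orthogonal (even order): J = [[0, -I_m], [I_m, 0]]
m = 2
J = np.block([[np.zeros((m, m)), -np.eye(m)],
              [np.eye(m), np.zeros((m, m))]])
assert np.allclose(J, -J.T)                  # X^T = -X
assert np.allclose(J.T @ J, np.eye(2 * m))   # X^T X = I_{2m}
```

Note that a symmetric orthogonal matrix is necessarily an involution (X² = I), while a skew-symmetric orthogonal matrix satisfies X² = −I, which is why the latter can only exist in even dimension.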

The set of all n × n symmetric orthogonal matrices and the set of all 2m × 2m skew- n×n m× m symmetric orthogonal matrices are, respectively, denoted by SOR and SSOR2 2 . Since 2 Journal of Applied Mathematics the linear matrix equations and its optimal approximation problem have great applications in structural design, biology, control theory, and linear optimal control, and so forth, see, for example, 3–5, there has been much attention paid to the linear matrix equations.The well-known system of matrix equations

$$AX = B, \qquad XC = D, \tag{1.1}$$

as one kind of linear matrix equation, has been investigated by many authors, and a series of important and useful results have been obtained. For instance, the system (1.1) with the unknown matrix X being bisymmetric, centrosymmetric, bisymmetric nonnegative definite, Hermitian and nonnegative definite, and (P, Q)-skew symmetric has been investigated by Wang et al. [6, 7], Khatri and Mitra [8], and Zhang and Wang [9], respectively. Of course, if the solvability conditions of system (1.1) are not satisfied, we may consider its least squares solution. For example, Li et al. [10] presented the least squares mirrorsymmetric solution. Yuan [11] obtained the least squares solution. Some results concerning the system (1.1) can also be found in [12–18]. Symmetric orthogonal matrices and skew-symmetric orthogonal matrices play important roles in numerical analysis and in the numerical solution of partial differential equations. Papers [1] and [2], respectively, derived the symmetric orthogonal solution X of the matrix equation XC = D and the skew-symmetric orthogonal solution X of the matrix equation AX = B. Motivated by the work mentioned above, in this paper we study the orthogonal solutions, symmetric orthogonal solutions, and skew-symmetric orthogonal solutions of the system (1.1), respectively. Furthermore, if the solvability conditions are not satisfied, the least squares skew-symmetric orthogonal solutions and the least squares symmetric orthogonal solutions of the system (1.1) are also given. The remainder of this paper is arranged as follows. In Section 2, some lemmas are provided that are used to establish the main results of this paper. In Sections 3, 4, and 5, the necessary and sufficient conditions for, and the expressions of, the orthogonal, the symmetric orthogonal, and the skew-symmetric orthogonal solutions of the system (1.1) are obtained, respectively.
In Section 6, the least squares skew-symmetric orthogonal solutions and the least squares symmetric orthogonal solutions of the system (1.1) are presented, respectively. In addition, an algorithm is provided to compute the least squares symmetric orthogonal solutions, and an example is presented to show that it is reasonable. Finally, in Section 7, some concluding remarks are given.

2. Preliminaries

In this section, we will recall some lemmas and the special C-S decomposition which will be used to get the main results of this paper.

Lemma 2.1 (see [1, Lemmas 1 and 2]). Given C ∈ R^{2m×n}, D ∈ R^{2m×n}. The matrix equation YC = D has a solution Y ∈ OR^{2m×2m} if and only if D^T D = C^T C. Let the singular value decompositions of C and D be, respectively,

$$C = U \begin{pmatrix} \Pi & 0 \\ 0 & 0 \end{pmatrix} V^T, \qquad D = W \begin{pmatrix} \Pi & 0 \\ 0 & 0 \end{pmatrix} V^T, \tag{2.1}$$

where

$$\Pi = \operatorname{diag}(\sigma_1, \ldots, \sigma_k), \quad \sigma_i > 0, \quad k = \operatorname{rank} C = \operatorname{rank} D,$$
$$U = (U_1 \;\; U_2) \in OR^{2m\times 2m}, \quad U_1 \in R^{2m\times k}, \tag{2.2}$$
$$W = (W_1 \;\; W_2) \in OR^{2m\times 2m}, \quad W_1 \in R^{2m\times k}, \qquad V = (V_1 \;\; V_2) \in OR^{n\times n}, \quad V_1 \in R^{n\times k}.$$

Then the orthogonal solutions of YC = D can be described as

$$Y = W \begin{pmatrix} I_k & 0 \\ 0 & P \end{pmatrix} U^T, \tag{2.3}$$

where P ∈ OR^{(2m−k)×(2m−k)} is arbitrary.

Lemma 2.2 (see [2, Lemmas 1 and 2]). Given A ∈ R^{n×m}, B ∈ R^{n×m}. The matrix equation AX = B has a solution X ∈ OR^{m×m} if and only if AA^T = BB^T. Let the singular value decompositions of A and B be, respectively,

$$A = U \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} V^T, \qquad B = U \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} Q^T, \tag{2.4}$$

where

$$\Sigma = \operatorname{diag}(\delta_1, \ldots, \delta_l), \quad \delta_i > 0, \quad l = \operatorname{rank} A = \operatorname{rank} B, \qquad U = (U_1 \;\; U_2) \in OR^{n\times n}, \quad U_1 \in R^{n\times l},$$
$$V = (V_1 \;\; V_2) \in OR^{m\times m}, \quad V_1 \in R^{m\times l}, \qquad Q = (Q_1 \;\; Q_2) \in OR^{m\times m}, \quad Q_1 \in R^{m\times l}. \tag{2.5}$$

Then the orthogonal solutions of AX = B can be described as

$$X = V \begin{pmatrix} I_l & 0 \\ 0 & W \end{pmatrix} Q^T, \tag{2.6}$$

where W ∈ OR^{(m−l)×(m−l)} is arbitrary.
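Lemma 2.2 is constructive. The following NumPy sketch builds one orthogonal solution of AX = B; fixing the free block W = I and completing Q_1 to a full orthogonal Q via a second SVD are our implementation choices, not part of the lemma:

```python
import numpy as np

def orthogonal_solution(A, B, tol=1e-10):
    """One orthogonal solution X of AX = B, following Lemma 2.2.

    Requires the solvability condition A A^T = B B^T.
    """
    if not np.allclose(A @ A.T, B @ B.T):
        raise ValueError("AX = B has no orthogonal solution: AA^T != BB^T")
    U, s, Vt = np.linalg.svd(A)                # A = U [Sigma 0; 0 0] V^T
    l = int(np.sum(s > tol))                   # l = rank A = rank B
    # From B = U [Sigma 0; 0 0] Q^T we get Q_1 = B^T U_1 Sigma^{-1};
    # its columns are orthonormal because AA^T = BB^T.
    Q1 = B.T @ U[:, :l] / s[:l]
    # Complete Q_1 to an orthogonal Q = (Q_1, Q_2): the trailing left
    # singular vectors of Q_1 span the complement of its column space.
    Uq = np.linalg.svd(Q1)[0]
    Q = np.hstack([Q1, Uq[:, l:]])
    return Vt.T @ Q.T                          # X = V diag(I_l, W) Q^T, W = I

# Consistent data: B = A X0 with X0 orthogonal.
rng = np.random.default_rng(0)
n, m = 4, 5
X0 = np.linalg.qr(rng.standard_normal((m, m)))[0]
A = rng.standard_normal((n, m))
B = A @ X0
X = orthogonal_solution(A, B)
assert np.allclose(A @ X, B)                   # solves the equation
assert np.allclose(X.T @ X, np.eye(m))         # and is orthogonal
```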

Lemma 2.3 (see [2, Theorem 1]). If

$$X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} \in OR^{2m\times 2m}, \qquad X_{11} \in ASR^{k\times k}, \tag{2.7}$$

then the C-S decomposition of X can be expressed as

$$\begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix}^T \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} \begin{pmatrix} D_1 & 0 \\ 0 & R_2 \end{pmatrix} = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}, \tag{2.8}$$

where D_1 ∈ OR^{k×k}, D_2, R_2 ∈ OR^{(2m−k)×(2m−k)},

$$\Sigma_{11} = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \Sigma_{12} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix}, \quad \Sigma_{21} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix}, \quad \Sigma_{22} = \begin{pmatrix} I & 0 & 0 \\ 0 & -C^T & 0 \\ 0 & 0 & 0 \end{pmatrix}, \tag{2.9}$$

$$I = \operatorname{diag}(I_1, \ldots, I_{r_1}), \quad I_1 = \cdots = I_{r_1} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad C = \operatorname{diag}(C_1, \ldots, C_{l_1}), \quad C_i = \begin{pmatrix} 0 & c_i \\ -c_i & 0 \end{pmatrix}, \quad i = 1, \ldots, l_1;$$
$$S = \operatorname{diag}(S_1, \ldots, S_{l_1}), \quad S_i = \begin{pmatrix} s_i & 0 \\ 0 & s_i \end{pmatrix}, \quad S_i^T S_i + C_i^T C_i = I_2, \quad i = 1, \ldots, l_1;$$
$$2r_1 + 2l_1 = \operatorname{rank} X_{11}, \qquad \Sigma_{21} = \Sigma_{12}^T, \qquad k - 2r_1 = \operatorname{rank} X_{21}.$$

Lemma 2.4 (see [1, Theorem 1]). If

$$K = \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix} \in OR^{m\times m}, \qquad K_{11} \in SR^{l\times l}, \tag{2.10}$$

then the C-S decomposition of K can be described as

$$\begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix}^T \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix} \begin{pmatrix} D_1 & 0 \\ 0 & R_2 \end{pmatrix} = \begin{pmatrix} \Pi_{11} & \Pi_{12} \\ \Pi_{21} & \Pi_{22} \end{pmatrix}, \tag{2.11}$$

where D_1 ∈ OR^{l×l}, D_2, R_2 ∈ OR^{(m−l)×(m−l)},

$$\Pi_{11} = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad \Pi_{12} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix}, \quad \Pi_{21} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix}, \quad \Pi_{22} = \begin{pmatrix} I & 0 & 0 \\ 0 & -C & 0 \\ 0 & 0 & 0 \end{pmatrix}, \tag{2.12}$$

$$I = \operatorname{diag}(i_1, \ldots, i_r), \quad i_j = \pm 1, \quad j = 1, \ldots, r;$$
$$S = \operatorname{diag}(s_1, \ldots, s_l), \quad C = \operatorname{diag}(c_1, \ldots, c_l), \quad s_i^2 = 1 - c_i^2, \quad i = 1, \ldots, l; \qquad r + l = \operatorname{rank} K_{11}, \quad \Pi_{21} = \Pi_{12}^T.$$

Remark 2.5. To understand in detail the C-S decomposition of an orthogonal matrix with a k × k leading skew-symmetric submatrix, one can study the proof of Theorem 1 in [1] and [2].

Lemma 2.6. Given A ∈ R^{n×m}, B ∈ R^{n×m}. Then the matrix equation AX = B has a solution X ∈ SOR^{m×m} if and only if AA^T = BB^T and AB^T = BA^T. When these conditions are satisfied, the general symmetric orthogonal solutions can be expressed as

$$X = \bar{V} \begin{pmatrix} I_{2l-r} & 0 \\ 0 & G \end{pmatrix} \bar{Q}^T, \tag{2.13}$$

where

$$\bar{Q} = JQ \operatorname{diag}(I, R_2) \in OR^{m\times m}, \qquad \bar{V} = JV \operatorname{diag}(I, D_2) \in OR^{m\times m}, \qquad J = \begin{pmatrix} I & 0 & 0 \\ 0 & 0 & I \\ 0 & I & 0 \end{pmatrix}, \tag{2.14}$$

and G ∈ SOR^{(m−2l+r)×(m−2l+r)} is arbitrary.

Proof. Necessity. Assume X ∈ SOR^{m×m} is a solution of the matrix equation AX = B; then we have

$$BB^T = AXX^T A^T = AA^T, \qquad BA^T = AXA^T = AX^T A^T = AB^T. \tag{2.15}$$

Sufficiency. Since the equality AA^T = BB^T holds, by Lemma 2.2 the singular value decompositions of A and B can be expressed as (2.4). Moreover, the condition AB^T = BA^T means

$$U \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} V^T Q \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}^T U^T = U \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} Q^T V \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}^T U^T, \tag{2.16}$$

which can be written as

$$V_1^T Q_1 = Q_1^T V_1. \tag{2.17}$$

From Lemma 2.2, the orthogonal solutions of the matrix equation AX = B can be described as (2.6). We now determine when X in (2.6) is also symmetric. Suppose that X is symmetric; then we have

$$V \begin{pmatrix} I_l & 0 \\ 0 & W \end{pmatrix} Q^T = Q \begin{pmatrix} I_l & 0 \\ 0 & W^T \end{pmatrix} V^T; \tag{2.18}$$

together with the partitions of the matrices Q and V in Lemma 2.2, we get

$$V_1^T Q_1 = Q_1^T V_1, \tag{2.19}$$
$$Q_1^T V_2 W = V_1^T Q_2, \tag{2.20}$$
$$Q_2^T V_2 W = W^T V_2^T Q_2. \tag{2.21}$$

By (2.17), we get (2.19). Now we aim to find the orthogonal solutions of the system of matrix equations (2.20) and (2.21). Firstly, we obtain from (2.20) that (Q_1^T V_2)(Q_1^T V_2)^T = (V_1^T Q_2)(V_1^T Q_2)^T; then by Lemma 2.2, (2.20) has an orthogonal solution W. By (2.17), the l × l leading principal submatrix of the orthogonal matrix V^T Q is symmetric. Then we have, from Lemma 2.4,

$$V_1^T Q_2 = D_1 \Pi_{12} R_2^T, \tag{2.22}$$
$$Q_1^T V_2 = D_1 \Pi_{12} D_2^T, \tag{2.23}$$
$$V_2^T Q_2 = D_2 \Pi_{22} R_2^T. \tag{2.24}$$

From (2.20), (2.22), and (2.23), the orthogonal solution W of (2.20) is

$$W = D_2 \begin{pmatrix} G & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix} R_2^T, \tag{2.25}$$

where G ∈ OR^{(m−2l+r)×(m−2l+r)} is arbitrary. Combining (2.21), (2.24), and (2.25) yields G^T = G; that is, G is a symmetric orthogonal matrix. Denote

$$\hat{V} = V \operatorname{diag}(I, D_2), \qquad \hat{Q} = Q \operatorname{diag}(I, R_2); \tag{2.26}$$

then the symmetric orthogonal solutions of the matrix equation AX = B can be expressed as

$$X = \hat{V} \begin{pmatrix} I & 0 & 0 \\ 0 & G & 0 \\ 0 & 0 & I \end{pmatrix} \hat{Q}^T. \tag{2.27}$$

Let the partition matrix J be

$$J = \begin{pmatrix} I & 0 & 0 \\ 0 & 0 & I \\ 0 & I & 0 \end{pmatrix}, \tag{2.28}$$

compatible with the block matrix

$$\begin{pmatrix} I & 0 & 0 \\ 0 & G & 0 \\ 0 & 0 & I \end{pmatrix}. \tag{2.29}$$

Put

$$\bar{V} = J\hat{V}, \qquad \bar{Q} = J\hat{Q}; \tag{2.30}$$

then the symmetric orthogonal solutions of the matrix equation AX = B can be described as (2.13).

Setting A = C^T, B = D^T, and X = Y^T in [2, Theorem 2], and then applying Lemmas 2.1 and 2.3, we have the following result.

Lemma 2.7. Given C ∈ R^{2m×n}, D ∈ R^{2m×n}. Then the equation YC = D has a solution Y ∈ SSOR^{2m×2m} if and only if D^T D = C^T C and D^T C = −C^T D. When these conditions are satisfied, the skew-symmetric orthogonal solutions of the matrix equation YC = D can be described as

$$Y = \bar{W} \begin{pmatrix} I & 0 \\ 0 & H \end{pmatrix} \bar{U}^T, \tag{2.31}$$

where

$$\bar{W} = \tilde{J}W \operatorname{diag}(I, -D_2) \in OR^{2m\times 2m}, \qquad \bar{U} = \tilde{J}U \operatorname{diag}(I, R_2) \in OR^{2m\times 2m}, \qquad \tilde{J} = \begin{pmatrix} I & 0 & 0 \\ 0 & 0 & I \\ 0 & I & 0 \end{pmatrix}, \tag{2.32}$$

and H ∈ SSOR^{2r×2r} is arbitrary.

3. The Orthogonal Solutions of the System (1.1)

The following theorems give the orthogonal solutions of the system (1.1).

Theorem 3.1. Given A, B ∈ R^{n×m} and C, D ∈ R^{m×n}, suppose the singular value decompositions of A and B are, respectively, as in (2.4). Denote

$$Q^T C = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}, \qquad V^T D = \begin{pmatrix} D_1 \\ D_2 \end{pmatrix}, \tag{3.1}$$

where C_1, D_1 ∈ R^{l×n} and C_2, D_2 ∈ R^{(m−l)×n}. Let the singular value decompositions of C_2 and D_2 be, respectively,

$$C_2 = \tilde{U} \begin{pmatrix} \Pi & 0 \\ 0 & 0 \end{pmatrix} \tilde{V}^T, \qquad D_2 = \tilde{W} \begin{pmatrix} \Pi & 0 \\ 0 & 0 \end{pmatrix} \tilde{V}^T, \tag{3.2}$$

where \tilde{U}, \tilde{W} ∈ OR^{(m−l)×(m−l)}, \tilde{V} ∈ OR^{n×n}, and Π ∈ R^{k×k} is diagonal, whose diagonal elements are the nonzero singular values of C_2 or D_2. Then the system (1.1) has orthogonal solutions if and only if

$$AA^T = BB^T, \qquad C_1 = D_1, \qquad D_2^T D_2 = C_2^T C_2. \tag{3.3}$$

In this case, the orthogonal solutions can be expressed as

$$X = \hat{V} \begin{pmatrix} I_{k+l} & 0 \\ 0 & \tilde{G} \end{pmatrix} \hat{Q}^T, \tag{3.4}$$

where

$$\hat{V} = V \begin{pmatrix} I_l & 0 \\ 0 & \tilde{W} \end{pmatrix} \in OR^{m\times m}, \qquad \hat{Q} = Q \begin{pmatrix} I_l & 0 \\ 0 & \tilde{U} \end{pmatrix} \in OR^{m\times m}, \tag{3.5}$$

and \tilde{G} ∈ OR^{(m−k−l)×(m−k−l)} is arbitrary.

Proof. Let the singular value decompositions of A and B be, respectively, as in (2.4). The matrix equation AX = B has orthogonal solutions if and only if

$$AA^T = BB^T; \tag{3.6}$$

then by Lemma 2.2, its orthogonal solutions can be expressed as (2.6). Substituting (2.6) and (3.1) into the matrix equation XC = D, we have C_1 = D_1 and WC_2 = D_2. By Lemma 2.1, the matrix equation WC_2 = D_2 has an orthogonal solution W if and only if

$$D_2^T D_2 = C_2^T C_2. \tag{3.7}$$

Let the singular value decompositions of C_2 and D_2 be, respectively,

$$C_2 = \tilde{U} \begin{pmatrix} \Pi & 0 \\ 0 & 0 \end{pmatrix} \tilde{V}^T, \qquad D_2 = \tilde{W} \begin{pmatrix} \Pi & 0 \\ 0 & 0 \end{pmatrix} \tilde{V}^T, \tag{3.8}$$

where \tilde{U}, \tilde{W} ∈ OR^{(m−l)×(m−l)}, \tilde{V} ∈ OR^{n×n}, and Π ∈ R^{k×k} is diagonal, whose diagonal elements are the nonzero singular values of C_2 or D_2. Then the orthogonal solutions can be described as

$$W = \tilde{W} \begin{pmatrix} I_k & 0 \\ 0 & \tilde{G} \end{pmatrix} \tilde{U}^T, \tag{3.9}$$

where \tilde{G} ∈ OR^{(m−k−l)×(m−k−l)} is arbitrary. Therefore, the common orthogonal solutions of the system (1.1) can be expressed as

$$X = V \begin{pmatrix} I_l & 0 \\ 0 & W \end{pmatrix} Q^T = V \begin{pmatrix} I_l & 0 \\ 0 & \tilde{W} \end{pmatrix} \begin{pmatrix} I_l & 0 & 0 \\ 0 & I_k & 0 \\ 0 & 0 & \tilde{G} \end{pmatrix} \begin{pmatrix} I_l & 0 \\ 0 & \tilde{U} \end{pmatrix}^T Q^T = \hat{V} \begin{pmatrix} I_{k+l} & 0 \\ 0 & \tilde{G} \end{pmatrix} \hat{Q}^T, \tag{3.10}$$

where

$$\hat{V} = V \begin{pmatrix} I_l & 0 \\ 0 & \tilde{W} \end{pmatrix} \in OR^{m\times m}, \qquad \hat{Q} = Q \begin{pmatrix} I_l & 0 \\ 0 & \tilde{U} \end{pmatrix} \in OR^{m\times m}, \tag{3.11}$$

and \tilde{G} ∈ OR^{(m−k−l)×(m−k−l)} is arbitrary.

The following theorem can be shown similarly.

Theorem 3.2. Given A, B ∈ R^{n×m} and C, D ∈ R^{m×n}, let the singular value decompositions of C and D be, respectively, as in (2.1). Partition

$$AW = (A_1 \;\; A_2), \qquad BU = (B_1 \;\; B_2), \tag{3.12}$$

where A_1, B_1 ∈ R^{n×k} and A_2, B_2 ∈ R^{n×(m−k)}. Assume the singular value decompositions of A_2 and B_2 are, respectively,

$$A_2 = \tilde{U} \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} \tilde{V}^T, \qquad B_2 = \tilde{U} \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} \tilde{Q}^T, \tag{3.13}$$

where \tilde{V}, \tilde{Q} ∈ OR^{(m−k)×(m−k)}, \tilde{U} ∈ OR^{n×n}, and Σ ∈ R^{l×l} is diagonal, whose diagonal elements are the nonzero singular values of A_2 or B_2. Then the system (1.1) has orthogonal solutions if and only if

$$D^T D = C^T C, \qquad A_1 = B_1, \qquad A_2 A_2^T = B_2 B_2^T. \tag{3.14}$$

In this case, the orthogonal solutions can be expressed as

$$X = \hat{W} \begin{pmatrix} I_{k+l} & 0 \\ 0 & \tilde{H} \end{pmatrix} \hat{U}^T, \tag{3.15}$$

where

$$\hat{W} = W \begin{pmatrix} I_k & 0 \\ 0 & \tilde{V} \end{pmatrix} \in OR^{m\times m}, \qquad \hat{U} = U \begin{pmatrix} I_k & 0 \\ 0 & \tilde{Q} \end{pmatrix} \in OR^{m\times m}, \tag{3.16}$$

and \tilde{H} ∈ OR^{(m−k−l)×(m−k−l)} is arbitrary.

4. The Symmetric Orthogonal Solutions of the System (1.1)

We now present the symmetric orthogonal solutions of the system (1.1).

Theorem 4.1. Given A, B ∈ R^{n×m}, C, D ∈ R^{m×n}. Let the symmetric orthogonal solutions of the matrix equation AX = B be described as in Lemma 2.6. Partition

$$\bar{Q}^T C = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}, \qquad \bar{V}^T D = \begin{pmatrix} D_1 \\ D_2 \end{pmatrix}, \tag{4.1}$$

where C_1, D_1 ∈ R^{(2l−r)×n} and C_2, D_2 ∈ R^{(m−2l+r)×n}. Then the system (1.1) has symmetric orthogonal solutions if and only if

$$AA^T = BB^T, \quad AB^T = BA^T, \quad C_1 = D_1, \quad D_2^T D_2 = C_2^T C_2, \quad D_2^T C_2 = C_2^T D_2. \tag{4.2}$$

In this case, the solutions can be expressed as

$$X = \hat{V} \begin{pmatrix} I & 0 \\ 0 & G' \end{pmatrix} \hat{Q}^T, \tag{4.3}$$

where

$$\hat{V} = \bar{V} \begin{pmatrix} I_{2l-r} & 0 \\ 0 & \tilde{W} \end{pmatrix} \in OR^{m\times m}, \qquad \hat{Q} = \bar{Q} \begin{pmatrix} I_{2l-r} & 0 \\ 0 & \tilde{U} \end{pmatrix} \in OR^{m\times m}, \tag{4.4}$$

and G' ∈ SOR^{(m−2l+r−2l'+r')×(m−2l+r−2l'+r')} is arbitrary.

Proof. From Lemma 2.6, we obtain that the matrix equation AX = B has symmetric orthogonal solutions if and only if AA^T = BB^T and AB^T = BA^T. When these conditions are satisfied, the general symmetric orthogonal solutions can be expressed as

$$X = \bar{V} \begin{pmatrix} I_{2l-r} & 0 \\ 0 & G \end{pmatrix} \bar{Q}^T, \tag{4.5}$$

where G ∈ SOR^{(m−2l+r)×(m−2l+r)} is arbitrary and \bar{Q}, \bar{V} ∈ OR^{m×m}. Inserting (4.1) and (4.5) into the matrix equation XC = D, we get C_1 = D_1 and GC_2 = D_2. By [1, Theorem 2], the matrix equation GC_2 = D_2 has a symmetric orthogonal solution if and only if

$$D_2^T D_2 = C_2^T C_2, \qquad D_2^T C_2 = C_2^T D_2. \tag{4.6}$$

In this case, the solutions can be described as

$$G = \tilde{W} \begin{pmatrix} I_{2l'-r'} & 0 \\ 0 & G' \end{pmatrix} \tilde{U}^T, \tag{4.7}$$

where G' ∈ SOR^{(m−2l+r−2l'+r')×(m−2l+r−2l'+r')} is arbitrary and \tilde{W}, \tilde{U} ∈ OR^{(m−2l+r)×(m−2l+r)}. Hence the system (1.1) has symmetric orthogonal solutions if and only if all equalities in (4.2) hold. In this case, the solutions can be expressed as

$$X = \bar{V} \begin{pmatrix} I_{2l-r} & 0 \\ 0 & G \end{pmatrix} \bar{Q}^T = \bar{V} \begin{pmatrix} I_{2l-r} & 0 \\ 0 & \tilde{W} \end{pmatrix} \begin{pmatrix} I_{2l-r} & 0 & 0 \\ 0 & I_{2l'-r'} & 0 \\ 0 & 0 & G' \end{pmatrix} \begin{pmatrix} I_{2l-r} & 0 \\ 0 & \tilde{U} \end{pmatrix}^T \bar{Q}^T, \tag{4.8}$$

that is, the expression in (4.3).

The following theorem can also be obtained by the method used in the proof of Theorem 4.1.

Theorem 4.2. Given A, B ∈ R^{n×m}, C, D ∈ R^{m×n}. Let the symmetric orthogonal solutions of the matrix equation XC = D be described as

$$X = M \begin{pmatrix} I_{2k-r} & 0 \\ 0 & G \end{pmatrix} N^T, \tag{4.9}$$

where M, N ∈ OR^{m×m} and G ∈ SOR^{(m−2k+r)×(m−2k+r)}. Partition

$$AM = (M_1 \;\; M_2), \qquad BN = (N_1 \;\; N_2), \qquad M_1, N_1 \in R^{n\times(2k-r)}, \quad M_2, N_2 \in R^{n\times(m-2k+r)}. \tag{4.10}$$

Then the system (1.1) has symmetric orthogonal solutions if and only if

$$D^T D = C^T C, \quad D^T C = C^T D, \quad M_1 = N_1, \quad M_2 M_2^T = N_2 N_2^T, \quad M_2 N_2^T = N_2 M_2^T. \tag{4.11}$$

In this case, the solutions can be expressed as

$$X = \hat{M} \begin{pmatrix} I & 0 \\ 0 & H \end{pmatrix} \hat{N}^T, \tag{4.12}$$

where

$$\hat{M} = M \begin{pmatrix} I_{2k-r} & 0 \\ 0 & W_1 \end{pmatrix} \in OR^{m\times m}, \qquad \hat{N} = N \begin{pmatrix} I_{2k-r} & 0 \\ 0 & U_1 \end{pmatrix} \in OR^{m\times m}, \tag{4.13}$$

and H ∈ SOR^{(m−2k+r−2k'+r')×(m−2k+r−2k'+r')} is arbitrary.

5. The Skew-Symmetric Orthogonal Solutions of the System (1.1)

In this section, we show the skew-symmetric orthogonal solutions of the system (1.1).

Theorem 5.1. Given A, B ∈ R^{n×2m}, C, D ∈ R^{2m×n}. Suppose the matrix equation AX = B has skew-symmetric orthogonal solutions of the form

$$X = V_1 \begin{pmatrix} I & 0 \\ 0 & H \end{pmatrix} Q_1^T, \tag{5.1}$$

where H ∈ SSOR^{2r×2r} is arbitrary and V_1, Q_1 ∈ OR^{2m×2m}. Partition

$$Q_1^T C = \begin{pmatrix} \hat{Q}_1 \\ \hat{Q}_2 \end{pmatrix}, \qquad V_1^T D = \begin{pmatrix} \hat{V}_1 \\ \hat{V}_2 \end{pmatrix}, \tag{5.2}$$

where \hat{Q}_1, \hat{V}_1 ∈ R^{(2m−2r)×n} and \hat{Q}_2, \hat{V}_2 ∈ R^{2r×n}. Then the system (1.1) has skew-symmetric orthogonal solutions if and only if

$$AA^T = BB^T, \quad AB^T = -BA^T, \quad \hat{Q}_1 = \hat{V}_1, \quad \hat{Q}_2^T \hat{Q}_2 = \hat{V}_2^T \hat{V}_2, \quad \hat{Q}_2^T \hat{V}_2 = -\hat{V}_2^T \hat{Q}_2. \tag{5.3}$$

In this case, the solutions can be expressed as

$$X = \bar{V}_1 \begin{pmatrix} I & 0 \\ 0 & J \end{pmatrix} \bar{Q}_1^T, \tag{5.4}$$

where

$$\bar{V}_1 = V_1 \begin{pmatrix} I_{2m-2r} & 0 \\ 0 & W \end{pmatrix} \in OR^{2m\times 2m}, \qquad \bar{Q}_1 = Q_1 \begin{pmatrix} I_{2m-2r} & 0 \\ 0 & U \end{pmatrix} \in OR^{2m\times 2m}, \tag{5.5}$$

and J ∈ SSOR^{2k'×2k'} is arbitrary.

Proof. By [2, Theorem 2], the matrix equation AX = B has skew-symmetric orthogonal solutions if and only if AA^T = BB^T and AB^T = −BA^T. When these conditions are satisfied, the general skew-symmetric orthogonal solutions can be expressed as (5.1). Substituting (5.1) and (5.2) into the matrix equation XC = D, we get \hat{Q}_1 = \hat{V}_1 and H\hat{Q}_2 = \hat{V}_2. From Lemma 2.7, the equation H\hat{Q}_2 = \hat{V}_2 has a skew-symmetric orthogonal solution H if and only if

$$\hat{Q}_2^T \hat{Q}_2 = \hat{V}_2^T \hat{V}_2, \qquad \hat{Q}_2^T \hat{V}_2 = -\hat{V}_2^T \hat{Q}_2. \tag{5.6}$$

When these conditions are satisfied, the solution can be described as

$$H = W \begin{pmatrix} I & 0 \\ 0 & J \end{pmatrix} U^T, \tag{5.7}$$

where J ∈ SSOR^{2k'×2k'} is arbitrary and W, U ∈ OR^{2r×2r}. Inserting (5.7) into (5.1) yields that the system (1.1) has skew-symmetric orthogonal solutions if and only if all equalities in (5.3) hold. In this case, the solutions can be expressed as (5.4).

Similarly, the following theorem holds.

Theorem 5.2. Given A, B ∈ R^{n×2m}, C, D ∈ R^{2m×n}. Suppose the matrix equation XC = D has skew-symmetric orthogonal solutions of the form

$$X = W \begin{pmatrix} I & 0 \\ 0 & K \end{pmatrix} U^T, \tag{5.8}$$

where K ∈ SSOR^{2p×2p} is arbitrary and W, U ∈ OR^{2m×2m}. Partition

$$AW = (W_1 \;\; W_2), \qquad BU = (U_1 \;\; U_2), \tag{5.9}$$

where W_1, U_1 ∈ R^{n×(2m−2p)} and W_2, U_2 ∈ R^{n×2p}. Then the system (1.1) has skew-symmetric orthogonal solutions if and only if

$$D^T D = C^T C, \quad D^T C = -C^T D, \quad W_1 = U_1, \quad W_2 W_2^T = U_2 U_2^T, \quad U_2 W_2^T = -W_2 U_2^T. \tag{5.10}$$

In this case, the solutions can be expressed as

$$X = \bar{W} \begin{pmatrix} I & 0 \\ 0 & J \end{pmatrix} \bar{U}^T, \tag{5.11}$$

where

$$\bar{W} = W \begin{pmatrix} I_{2m-2p} & 0 \\ 0 & \tilde{W} \end{pmatrix} \in OR^{2m\times 2m}, \qquad \bar{U} = U \begin{pmatrix} I_{2m-2p} & 0 \\ 0 & \tilde{U} \end{pmatrix} \in OR^{2m\times 2m}, \tag{5.12}$$

with \tilde{W}, \tilde{U} ∈ OR^{2p×2p}, and J ∈ SSOR^{2q×2q} is arbitrary.

6. The Least Squares (Skew-) Symmetric Orthogonal Solutions of the System (1.1)

If the solvability conditions of a system of matrix equations are not satisfied, it is natural to consider its least squares solution. In this section, we first derive the least squares skew-symmetric orthogonal solutions of the system (1.1); that is, we seek X ∈ SSOR^{2m×2m} such that

$$\min_{X \in SSOR^{2m\times 2m}} \left( \|AX - B\|^2 + \|XC - D\|^2 \right). \tag{6.1}$$

With the help of the definition of the Frobenius norm and the properties of the skew-symmetric orthogonal matrix, we get that

$$\|AX - B\|^2 + \|XC - D\|^2 = \|A\|^2 + \|B\|^2 + \|C\|^2 + \|D\|^2 - 2\operatorname{tr}\left(X^T\left(A^T B + DC^T\right)\right). \tag{6.2}$$
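Identity (6.2) only uses XXᵀ = I and the trace form of the Frobenius norm, so it can be spot-checked for any orthogonal X; a sketch with arbitrary data and a skew-symmetric orthogonal X:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m2 = 3, 4                                   # m2 plays the role of 2m
A = rng.standard_normal((n, m2))
B = rng.standard_normal((n, m2))
C = rng.standard_normal((m2, n))
D = rng.standard_normal((m2, n))
# Any skew-symmetric orthogonal X, e.g. 2x2 rotations by pi/2 on the diagonal:
X = np.kron(np.eye(m2 // 2), np.array([[0.0, -1.0], [1.0, 0.0]]))

lhs = np.linalg.norm(A @ X - B) ** 2 + np.linalg.norm(X @ C - D) ** 2
rhs = (np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2
       + np.linalg.norm(C) ** 2 + np.linalg.norm(D) ** 2
       - 2.0 * np.trace(X.T @ (A.T @ B + D @ C.T)))
assert np.isclose(lhs, rhs)                    # identity (6.2)
```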

Let

$$A^T B + DC^T = K, \qquad \frac{K^T - K}{2} = T. \tag{6.3}$$

Then, it follows from the skew-symmetry of X that

$$\operatorname{tr}\left(X^T\left(A^T B + DC^T\right)\right) = \operatorname{tr}\left(X^T K\right) = \operatorname{tr}(-XK) = \operatorname{tr}\left(X \frac{K^T - K}{2}\right) = \operatorname{tr}(XT). \tag{6.4}$$

Therefore, (6.1) holds if and only if (6.4) reaches its maximum. Now, we turn our attention to finding the maximum value of (6.4). Assume the eigenvalue decomposition of T is

$$T = E \begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix} E^T \tag{6.5}$$

with

$$\Lambda = \operatorname{diag}(\Lambda_1, \ldots, \Lambda_l), \qquad \Lambda_i = \begin{pmatrix} 0 & \alpha_i \\ -\alpha_i & 0 \end{pmatrix}, \quad \alpha_i > 0, \quad i = 1, \ldots, l; \qquad 2l = \operatorname{rank} T. \tag{6.6}$$

Denote

$$E^T X E = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}, \tag{6.7}$$

partitioned according to

$$\begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix}; \tag{6.8}$$

then (6.4) has the following form:

$$\operatorname{tr}(XT) = \operatorname{tr}\left(E^T X E \begin{pmatrix} \Lambda & 0 \\ 0 & 0 \end{pmatrix}\right) = \operatorname{tr}(X_{11}\Lambda). \tag{6.9}$$

Thus, at

$$X_{11} = \tilde{I} = \operatorname{diag}(\tilde{I}_1, \ldots, \tilde{I}_l), \tag{6.10}$$

where

$$\tilde{I}_i = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad i = 1, \ldots, l, \tag{6.11}$$

equation (6.9) attains its maximum. Since E^T X E is skew-symmetric, it follows that with

$$X = E \begin{pmatrix} \tilde{I} & 0 \\ 0 & \tilde{G} \end{pmatrix} E^T, \tag{6.12}$$

where \tilde{G} ∈ SSOR^{(2m−2l)×(2m−2l)} is arbitrary, (6.1) attains its minimum. Hence we have the following theorem.

Theorem 6.1. Given A, B ∈ R^{n×2m} and C, D ∈ R^{2m×n}, denote

$$A^T B + DC^T = K, \qquad \frac{K^T - K}{2} = T, \tag{6.13}$$

and let the spectral decomposition of T be (6.5). Then the least squares skew-symmetric orthogonal solutions of the system (1.1) can be expressed as (6.12).

If X in (6.1) is a symmetric orthogonal matrix, then by the definition of the Frobenius norm and the properties of the symmetric orthogonal matrix, (6.2) holds. Let

$$A^T B + DC^T = H, \qquad \frac{H^T + H}{2} = N. \tag{6.14}$$

Then we get that

$$\min_{X \in SOR^{m\times m}} \left( \|AX - B\|^2 + \|XC - D\|^2 \right) = \min_{X \in SOR^{m\times m}} \left( \|A\|^2 + \|B\|^2 + \|C\|^2 + \|D\|^2 - 2\operatorname{tr}(XN) \right). \tag{6.15}$$

Thus (6.15) reaches its minimum if and only if tr(XN) attains its maximum. Now, we focus on finding the maximum value of tr(XN). Let the spectral decomposition of the symmetric matrix N be

$$N = M \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix} M^T, \tag{6.16}$$

where

$$\Sigma = \begin{pmatrix} \Sigma_+ & 0 \\ 0 & \Sigma_- \end{pmatrix}, \qquad \Sigma_+ = \operatorname{diag}(\lambda_1, \ldots, \lambda_s), \quad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_s > 0,$$
$$\Sigma_- = \operatorname{diag}(\lambda_{s+1}, \ldots, \lambda_t), \quad \lambda_t \le \lambda_{t-1} \le \cdots \le \lambda_{s+1} < 0, \qquad t = \operatorname{rank} N. \tag{6.17}$$

Denote

$$M^T X M = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}, \tag{6.18}$$

compatible with

$$\begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}. \tag{6.19}$$

Then

$$\operatorname{tr}(XN) = \operatorname{tr}\left(M^T X M \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}\right) = \operatorname{tr}(X_{11}\Sigma). \tag{6.20}$$

Therefore, it follows from

$$X_{11} = \tilde{I} = \begin{pmatrix} I_s & 0 \\ 0 & -I_{t-s} \end{pmatrix} \tag{6.21}$$

that (6.20) reaches its maximum. Since M^T X M is a symmetric orthogonal matrix, when X has the form

$$X = M \begin{pmatrix} \tilde{I} & 0 \\ 0 & L \end{pmatrix} M^T, \tag{6.22}$$

where L ∈ SOR^{(m−t)×(m−t)} is arbitrary, (6.15) attains its minimum. Thus we obtain the following theorem.

Theorem 6.2. Given A, B ∈ R^{n×m}, C, D ∈ R^{m×n}, denote

$$A^T B + DC^T = H, \qquad \frac{H^T + H}{2} = N, \tag{6.23}$$

and let the eigenvalue decomposition of N be (6.16). Then the least squares symmetric orthogonal solutions of the system (1.1) can be described as (6.22).

Algorithm 6.3. Consider the following.

Step 1. Input A, B ∈ R^{n×m} and C, D ∈ R^{m×n}.

Step 2. Compute

$$A^T B + DC^T = H, \qquad N = \frac{H^T + H}{2}. \tag{6.24}$$

Step 3. Compute the spectral decomposition of N with the form (6.16).

Step 4. Compute the least squares symmetric orthogonal solutions of (1.1) according to (6.22).

Example 6.4. Assume

$$A = \begin{pmatrix} 12.2 & 8.4 & -5.6 & 6.3 & 9.4 & 10.7 \\ 11.8 & 2.9 & 8.5 & 6.9 & 9.6 & -7.8 \\ 10.6 & 2.3 & 11.5 & 7.8 & 6.7 & 8.9 \\ 3.6 & 7.8 & 4.9 & 11.9 & 9.4 & 5.9 \\ 4.5 & 6.7 & 7.8 & 3.1 & 5.6 & 11.6 \end{pmatrix}, \qquad B = \begin{pmatrix} 8.5 & 9.4 & 3.6 & 7.8 & 6.3 & 4.7 \\ 2.7 & 3.6 & 7.9 & 9.4 & 5.6 & 7.8 \\ 3.7 & 6.7 & 8.6 & 9.8 & 3.4 & 2.9 \\ -4.3 & 6.2 & 5.7 & 7.4 & 5.4 & 9.5 \\ 2.9 & 3.9 & -5.2 & 6.3 & 7.8 & 4.6 \end{pmatrix},$$

$$C = \begin{pmatrix} 5.4 & 6.4 & 3.7 & 5.6 & 9.7 \\ 3.6 & 4.2 & 7.8 & -6.3 & 7.8 \\ 6.7 & 3.5 & -4.6 & 2.9 & 2.8 \\ -2.7 & 7.2 & 10.8 & 3.7 & 3.8 \\ 1.9 & 3.9 & 8.2 & 5.6 & 11.2 \\ 8.9 & 7.8 & 9.4 & 7.9 & 5.6 \end{pmatrix}, \qquad D = \begin{pmatrix} 7.9 & 9.5 & 5.4 & 2.8 & 8.6 \\ 8.7 & 2.6 & 6.7 & 8.4 & 8.1 \\ 5.7 & 3.9 & -2.9 & 5.2 & 1.9 \\ 4.8 & 5.8 & 1.8 & -7.2 & 5.8 \\ 2.8 & 7.9 & 4.5 & 6.7 & 9.6 \\ 9.5 & 4.1 & 3.4 & 9.8 & 3.9 \end{pmatrix}. \tag{6.25}$$

It can be verified that the given matrices A, B, C, and D do not satisfy the solvability conditions in Theorem 4.1 or Theorem 4.2. So we derive the least squares symmetric orthogonal solutions of the system (1.1). By Algorithm 6.3, we have the following results:

(1) the least squares symmetric orthogonal solution

$$X = \begin{pmatrix} 0.01200 & 0.06621 & -0.24978 & 0.27047 & 0.91302 & -0.16218 \\ 0.06621 & 0.65601 & 0.22702 & -0.55769 & 0.24587 & 0.37715 \\ -0.24978 & 0.22702 & 0.80661 & 0.40254 & 0.04066 & -0.26784 \\ 0.27047 & -0.55769 & 0.40254 & 0.06853 & 0.23799 & 0.62645 \\ 0.91302 & 0.24587 & 0.04066 & 0.23799 & -0.12142 & -0.18135 \\ -0.16218 & 0.37715 & -0.26784 & 0.62645 & -0.18135 & 0.57825 \end{pmatrix}; \tag{6.26}$$

(2) the residual and accuracy checks

$$\min_{X \in SOR^{6\times 6}} \left( \|AX - B\|^2 + \|XC - D\|^2 \right) = 1.98366, \qquad \left\|X^T X - I_6\right\| = 2.84882 \times 10^{-15}, \qquad \left\|X^T - X\right\| = 0.00000. \tag{6.27}$$

Remark 6.5. (1) There exists a unique symmetric orthogonal solution such that (6.1) holds if and only if the matrix

$$N = \frac{H^T + H}{2}, \tag{6.28}$$

where

$$H = A^T B + DC^T, \tag{6.29}$$

is invertible. Example 6.4 illustrates this. (2) The algorithm for computing the least squares skew-symmetric orthogonal solutions of the system (1.1) can be given similarly; we omit it here.

7. Conclusions

This paper has given the solvability conditions for, and the expressions of, the orthogonal solutions, the symmetric orthogonal solutions, and the skew-symmetric orthogonal solutions of the system (1.1), respectively, and has obtained the least squares symmetric orthogonal and skew-symmetric orthogonal solutions of the system (1.1). In addition, an algorithm and an example have been provided to compute its least squares symmetric orthogonal solutions.

Acknowledgments

This research was supported by grants from the Innovation Program of Shanghai Municipal Education Commission (13ZZ080), the National Natural Science Foundation of China (11171205), the Natural Science Foundation of Shanghai (11ZR1412500), the Discipline Project at the corresponding level of Shanghai (A.13-0101-12-005), and the Shanghai Leading Academic Discipline Project (J50101).

References

[1] C. J. Meng and X. Y. Hu, "The inverse problem of symmetric orthogonal matrices and its optimal approximation," Mathematica Numerica Sinica, vol. 28, pp. 269–280, 2006 (Chinese).
[2] C. J. Meng, X. Y. Hu, and L. Zhang, "The skew-symmetric orthogonal solutions of the matrix equation AX = B," Linear Algebra and Its Applications, vol. 402, no. 1–3, pp. 303–318, 2005.
[3] D. L. Chu, H. C. Chan, and D. W. C. Ho, "Regularization of singular systems by derivative and proportional output feedback," SIAM Journal on Matrix Analysis and Applications, vol. 19, no. 1, pp. 21–38, 1998.
[4] K. T. Joseph, "Inverse eigenvalue problem in structural design," AIAA Journal, vol. 30, no. 12, pp. 2890–2896, 1992.
[5] A. Jameson and E. Kreindler, "Inverse problem of linear optimal control," SIAM Journal on Control, vol. 11, no. 1, pp. 1–19, 1973.
[6] Q. W. Wang, "Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations," Computers and Mathematics with Applications, vol. 49, no. 5-6, pp. 641–650, 2005.
[7] Q. W. Wang, X. Liu, and S. W. Yu, "The common bisymmetric nonnegative definite solutions with extreme ranks and inertias to a pair of matrix equations," Applied Mathematics and Computation, vol. 218, pp. 2761–2771, 2011.
[8] C. G. Khatri and S. K. Mitra, "Hermitian and nonnegative definite solutions of linear matrix equations," SIAM Journal on Applied Mathematics, vol. 31, pp. 578–585, 1976.
[9] Q. Zhang and Q. W. Wang, "The (P, Q)-skewsymmetric extremal rank solutions to a system of quaternion matrix equations," Applied Mathematics and Computation, vol. 217, no. 22, pp. 9286–9296, 2011.
[10] F. L. Li, X. Y. Hu, and L. Zhang, "Least-squares mirrorsymmetric solution for matrix equations (AX = B, XC = D)," Numerical Mathematics, vol. 15, pp. 217–226, 2006.
[11] Y. X. Yuan, "Least-squares solutions to the matrix equations AX = B and XC = D," Applied Mathematics and Computation, vol. 216, no. 10, pp. 3120–3125, 2010.
[12] A. Dajić and J. J. Koliha, "Positive solutions to the equations AX = C and XB = D for Hilbert space operators," Journal of Mathematical Analysis and Applications, vol. 333, no. 2, pp. 567–576, 2007.
[13] F. L. Li, X. Y. Hu, and L. Zhang, "The generalized reflexive solution for a class of matrix equations (AX = B, XC = D)," Acta Mathematica Scientia, vol. 28, no. 1, pp. 185–193, 2008.
[14] S. K. Mitra, "The matrix equations AX = C, XB = D," Linear Algebra and Its Applications, vol. 59, pp. 171–181, 1984.
[15] Y. Qiu, Z. Zhang, and J. Lu, "The matrix equations AX = B, XC = D with PX = sXP constraint," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1428–1434, 2007.
[16] Y. Y. Qiu and A. D. Wang, "Least squares solutions to the equations AX = B, XC = B with some constraints," Applied Mathematics and Computation, vol. 204, no. 2, pp. 872–880, 2008.
[17] Q. W. Wang, X. Zhang, and Z. H. He, "On the Hermitian structures of the solution to a pair of matrix equations," Linear and Multilinear Algebra, vol. 61, no. 1, pp. 73–90, 2013.
[18] Q. X. Xu, "Common Hermitian and positive solutions to the adjointable operator equations AX = C, XB = D," Linear Algebra and Its Applications, vol. 429, no. 1, pp. 1–11, 2008.
Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2012, Article ID 398085, 14 pages doi:10.1155/2012/398085

Research Article On the Hermitian R-Conjugate Solution of a System of Matrix Equations

Chang-Zhou Dong,1 Qing-Wen Wang,2 and Yu-Ping Zhang3

1 School of Mathematics and Science, Shijiazhuang University of Economics, Shijiazhuang, Hebei 050031, China 2 Department of Mathematics, Shanghai University, Shanghai, Shanghai 200444, China 3 Department of Mathematics, Ordnance Engineering College, Shijiazhuang, Hebei 050003, China

Correspondence should be addressed to Qing-Wen Wang, [email protected]

Received 3 October 2012; Accepted 26 November 2012

Academic Editor: Yang Zhang

Copyright © 2012 Chang-Zhou Dong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Let R be an n × n nontrivial real symmetric involution matrix; that is, R = R^{−1} = R^T and R ≠ I_n. An n × n complex matrix A is termed R-conjugate if Ā = RAR, where Ā denotes the conjugate of A. We give necessary and sufficient conditions for the existence of the Hermitian R-conjugate solution to the system of complex matrix equations AX = C and XB = D and present an expression of the Hermitian R-conjugate solution to this system when the solvability conditions are satisfied. In addition, the solution to an optimal approximation problem is obtained. Furthermore, the least squares Hermitian R-conjugate solution with the least norm to this system mentioned above is considered. The representation of such a solution is also derived. Finally, an algorithm and numerical examples are given.

1. Introduction

Throughout, we denote the complex m × n matrix space by C^{m×n}, the real m × n matrix space by R^{m×n}, and the set of all matrices in R^{m×n} with rank r by R_r^{m×n}. The symbols I, Ā, A^T, A^*, A^†, and ‖A‖ stand for the identity matrix with the appropriate size, the conjugate, the transpose, the conjugate transpose, the Moore-Penrose generalized inverse, and the Frobenius norm of A ∈ C^{m×n}, respectively. We use V_n to denote the n × n backward matrix having the elements 1 along the southwest diagonal and with the remaining elements being zeros. Recall that an n × n complex matrix A is centrohermitian if A = V_n Ā V_n. Centrohermitian matrices and related matrices, such as k-Hermitian matrices, Hermitian Toeplitz matrices, and generalized centrohermitian matrices, appear in digital signal processing and other areas (see [1–4]). As a generalization of a centrohermitian matrix and related matrices,

Trench [5] gave the definition of an R-conjugate matrix. A matrix A ∈ C^{n×n} is R-conjugate if Ā = RAR, where R is a nontrivial real symmetric involution matrix; that is, R = R^{−1} = R^T and R ≠ I_n. At the same time, Trench studied the linear equation Az = w for R-conjugate matrices in [5], where z, w are known column vectors. Investigating the matrix equation

$$AX = B \tag{1.1}$$

with the unknown matrix X being symmetric, reflexive, Hermitian-generalized Hamiltonian, or repositive definite is a very active research topic [6–14]. As a generalization of (1.1), the classical system of matrix equations

$$AX = C, \qquad XB = D \tag{1.2}$$

has attracted many authors' attention. For instance, [15] gave the necessary and sufficient conditions for the consistency of (1.2), and [16, 17] derived an expression for the general solution by using the singular value decomposition of a matrix and generalized inverses of matrices, respectively. Moreover, many results have been obtained about the system (1.2) with various constraints, such as bisymmetric, Hermitian, positive semidefinite, reflexive, and generalized reflexive solutions (see [18–28]). To our knowledge, so far there has been little investigation of the Hermitian R-conjugate solution to (1.2). Motivated by the work mentioned above, we investigate Hermitian R-conjugate solutions to (1.2). We also consider the optimal approximation problem

$$\left\|\hat{X} - E\right\| = \min_{X \in S_X} \|X - E\|, \tag{1.3}$$

where E is a given matrix in C^{n×n} and S_X is the set of all Hermitian R-conjugate solutions to (1.2). In many cases the system (1.2) has no Hermitian R-conjugate solution. Hence, we need to further study its least squares solution, which can be described as follows. Let RHC^{n×n} denote the set of all Hermitian R-conjugate matrices in C^{n×n}:

$$S_L = \left\{ X \;\Big|\; \min_{X \in RHC^{n\times n}} \left( \|AX - C\|^2 + \|XB - D\|^2 \right) \right\}. \tag{1.4}$$

Find \hat{X} ∈ C^{n×n} such that

$$\left\|\hat{X}\right\| = \min_{X \in S_L} \|X\|. \tag{1.5}$$

In Section 2, we present necessary and sufficient conditions for the existence of the Hermitian R-conjugate solution to (1.2) and give an expression of this solution when the solvability conditions are met. In Section 3, we derive an optimal approximation solution to (1.3). In Section 4, we provide the least squares Hermitian R-conjugate solution to (1.5). In Section 5, we give an algorithm and a numerical example to illustrate our results.

2. R-Conjugate Hermitian Solution to (1.2)

In this section, we establish the solvability conditions and the general expression for the Hermitian R-conjugate solution to (1.2). We denote by RC^{n×n} and HRC^{n×n} the set of all R-conjugate matrices and the set of all Hermitian R-conjugate matrices, respectively; that is,

$$RC^{n\times n} = \left\{ A \mid \bar{A} = RAR \right\}, \qquad HRC^{n\times n} = \left\{ A \mid \bar{A} = RAR, \; A = A^* \right\}, \tag{2.1}$$

where R is an n × n nontrivial real symmetric involution matrix. Chang et al. [29] mentioned that for a nontrivial symmetric involution matrix R ∈ R^{n×n}, there exist a positive integer r and an n × n real orthogonal matrix (P, Q) such that

$$R = (P, Q) \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} \begin{pmatrix} P^T \\ Q^T \end{pmatrix}, \tag{2.2}$$

where P ∈ R^{n×r} and Q ∈ R^{n×(n−r)}. By (2.2),

$$RP = P, \quad RQ = -Q, \quad P^T P = I_r, \quad Q^T Q = I_{n-r}, \quad P^T Q = 0, \quad Q^T P = 0. \tag{2.3}$$

Throughout this paper, we always assume that the nontrivial symmetric involution matrix R is fixed and given by (2.2) and (2.3). Now, we give a criterion for judging whether a matrix is an R-conjugate Hermitian matrix.

Theorem 2.1. A matrix K ∈ HRC^{n×n} if and only if there exists a symmetric matrix H ∈ R^{n×n} such that K = ΓHΓ^*, where

$$\Gamma = (P, iQ), \tag{2.4}$$

with P, Q being the same as in (2.2).

Proof. If K ∈ HRC^{n×n}, then K̄ = RKR. By (2.2),

$$\bar{K} = RKR = (P, Q) \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} \begin{pmatrix} P^T \\ Q^T \end{pmatrix} K (P, Q) \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} \begin{pmatrix} P^T \\ Q^T \end{pmatrix}, \tag{2.5}$$

which is equivalent to

$$\begin{pmatrix} P^T \\ Q^T \end{pmatrix} \bar{K} (P, Q) = \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} \begin{pmatrix} P^T \\ Q^T \end{pmatrix} K (P, Q) \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix}. \tag{2.6}$$

Suppose that

$$\begin{pmatrix} P^T \\ Q^T \end{pmatrix} K (P, Q) = \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix}. \tag{2.7}$$

Substituting (2.7) into (2.6), we obtain

$$\begin{pmatrix} \bar{K}_{11} & \bar{K}_{12} \\ \bar{K}_{21} & \bar{K}_{22} \end{pmatrix} = \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix} \begin{pmatrix} I_r & 0 \\ 0 & -I_{n-r} \end{pmatrix} = \begin{pmatrix} K_{11} & -K_{12} \\ -K_{21} & K_{22} \end{pmatrix}. \tag{2.8}$$

Hence, K̄_{11} = K_{11}, K̄_{12} = −K_{12}, K̄_{21} = −K_{21}, K̄_{22} = K_{22}; that is, K_{11}, iK_{12}, iK_{21}, K_{22} are real matrices. If we denote M = iK_{12}, N = −iK_{21}, then by (2.7),

$$K = (P, Q) \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix} \begin{pmatrix} P^T \\ Q^T \end{pmatrix} = (P, iQ) \begin{pmatrix} K_{11} & M \\ N & K_{22} \end{pmatrix} \begin{pmatrix} P^T \\ -iQ^T \end{pmatrix}. \tag{2.9}$$

Let Γ = (P, iQ) and

$$H = \begin{pmatrix} K_{11} & M \\ N & K_{22} \end{pmatrix}. \tag{2.10}$$

Then, K can be expressed as ΓHΓ^*, where Γ is a unitary matrix and H is a real matrix. By K = K^*,

$$\Gamma H^T \Gamma^* = K^* = K = \Gamma H \Gamma^*, \tag{2.11}$$

we obtain H = H^T.

Conversely, if there exists a symmetric matrix $H\in\mathbb{R}^{n\times n}$ such that $K = \Gamma H\Gamma^{*}$, then it follows from (2.3) that
$$R\bar{K}R = R\,\overline{\Gamma H\Gamma^{*}}\,R = R\,[P,\;-iQ]\,H\begin{bmatrix} P^{T} \\ iQ^{T} \end{bmatrix}R = [P,\;iQ]\,H\begin{bmatrix} P^{T} \\ -iQ^{T} \end{bmatrix} = \Gamma H\Gamma^{*} = K, \qquad K^{*} = \Gamma H^{T}\Gamma^{*} = \Gamma H\Gamma^{*} = K, \tag{2.12}$$

that is, $K\in HRC^{n\times n}$.
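Theorem 2.1 is easy to verify numerically. The sketch below (Python/NumPy, our own illustration, not part of the paper) uses the smallest nontrivial involution $R = \operatorname{diag}(1,-1)$, for which $P = e_1$, $Q = e_2$, and $\Gamma = [P,\;iQ]$, builds $K = \Gamma H\Gamma^{*}$ from a real symmetric $H$, and checks both defining properties of $HRC^{n\times n}$:

```python
import numpy as np

# Check of Theorem 2.1 for the involution R = diag(1, -1):
# here P = e1, Q = e2, so Gamma = [P, iQ].
R = np.diag([1.0, -1.0])
Gamma = np.array([[1.0, 0.0], [0.0, 1j]])

# Start from a real symmetric H and form K = Gamma H Gamma*.
H = np.array([[2.0, 3.0], [3.0, -1.0]])
K = Gamma @ H @ Gamma.conj().T

# K is Hermitian R-conjugate: K = R conj(K) R and K = K*.
assert np.allclose(K, R @ K.conj() @ R)
assert np.allclose(K, K.conj().T)

# Conversely, Gamma* K Gamma recovers a real symmetric matrix.
H_back = Gamma.conj().T @ K @ Gamma
assert np.allclose(H_back.imag, 0)
assert np.allclose(H_back.real, H_back.real.T)
```

The same check works for any $R$ satisfying (2.2), since only the properties (2.3) enter.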

Theorem 2.1 implies that an arbitrary complex Hermitian R-conjugate matrix is equivalent to a real symmetric matrix.

Lemma 2.2. For any matrix $A\in\mathbb{C}^{m\times n}$, $A = A_1 + iA_2$, where

$$A_1 = \frac{A + \bar{A}}{2}, \qquad A_2 = \frac{A - \bar{A}}{2i}. \tag{2.13}$$

Proof. For any matrix $A\in\mathbb{C}^{m\times n}$, it is obvious that $A = A_1 + iA_2$, where $A_1$, $A_2$ are defined by (2.13). Now we prove that the decomposition $A = A_1 + iA_2$ is unique. If there exist $B_1$, $B_2$ such that $A = B_1 + iB_2$, then

$$\left(A_1 - B_1\right) + i\left(A_2 - B_2\right) = 0. \tag{2.14}$$

It follows from the fact that $A_1$, $A_2$, $B_1$, and $B_2$ are real matrices that

$$A_1 = B_1, \qquad A_2 = B_2. \tag{2.15}$$

Hence, $A = A_1 + iA_2$ holds, where $A_1$, $A_2$ are defined by (2.13).

By Theorem 2.1, for $X\in HRC^{n\times n}$ we may assume that

$$X = \Gamma Y\Gamma^{*}, \tag{2.16}$$

where $\Gamma$ is defined by (2.4) and $Y\in\mathbb{R}^{n\times n}$ is a symmetric matrix. Suppose that $A\Gamma = A_1 + iA_2\in\mathbb{C}^{m\times n}$, $C\Gamma = C_1 + iC_2\in\mathbb{C}^{m\times n}$, $\Gamma^{*}B = B_1 + iB_2\in\mathbb{C}^{n\times l}$, and $\Gamma^{*}D = D_1 + iD_2\in\mathbb{C}^{n\times l}$, where

$$A_1 = \frac{A\Gamma + \overline{A\Gamma}}{2},\quad A_2 = \frac{A\Gamma - \overline{A\Gamma}}{2i},\quad C_1 = \frac{C\Gamma + \overline{C\Gamma}}{2},\quad C_2 = \frac{C\Gamma - \overline{C\Gamma}}{2i},$$
$$B_1 = \frac{\Gamma^{*}B + \overline{\Gamma^{*}B}}{2},\quad B_2 = \frac{\Gamma^{*}B - \overline{\Gamma^{*}B}}{2i},\quad D_1 = \frac{\Gamma^{*}D + \overline{\Gamma^{*}D}}{2},\quad D_2 = \frac{\Gamma^{*}D - \overline{\Gamma^{*}D}}{2i}. \tag{2.17}$$

Then system (1.2) can be reduced to

$$\left(A_1 + iA_2\right)Y = C_1 + iC_2, \qquad Y\left(B_1 + iB_2\right) = D_1 + iD_2, \tag{2.18}$$
which implies that
$$\begin{bmatrix} A_1 \\ A_2 \end{bmatrix} Y = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}, \qquad Y\,[B_1,\;B_2] = [D_1,\;D_2]. \tag{2.19}$$

Let
$$F = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix},\quad G = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix},\quad K = [B_1,\;B_2],\quad L = [D_1,\;D_2],\quad M = \begin{bmatrix} F \\ K^{T} \end{bmatrix},\quad N = \begin{bmatrix} G \\ L^{T} \end{bmatrix}. \tag{2.20}$$

Then system (1.2) has a solution $X$ in $HRC^{n\times n}$ if and only if the real system

$$MY = N \tag{2.21}$$

has a symmetric solution $Y$ in $\mathbb{R}^{n\times n}$.
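This reduction is mechanical: by (2.17), $A_1 = \operatorname{Re}(A\Gamma)$ and $A_2 = \operatorname{Im}(A\Gamma)$, and similarly for the other blocks, which (2.20) then stacks. The sketch below (Python with NumPy rather than the paper's MATLAB; the helper name `real_system` is our own) assembles $M$ and $N$ and checks that a constructed solution of $AX = C$, $XB = D$ indeed yields $MY = N$:

```python
import numpy as np

def real_system(A, B, C, D, Gamma):
    """Assemble the real system M Y = N of (2.21) from AX = C, XB = D.

    By (2.17), A1 + i A2 = A Gamma, so A1 = Re(A Gamma), A2 = Im(A Gamma),
    and analogously for C Gamma, Gamma* B, Gamma* D; (2.20) stacks the blocks.
    """
    AG, CG = A @ Gamma, C @ Gamma
    GB, GD = Gamma.conj().T @ B, Gamma.conj().T @ D
    F = np.vstack([AG.real, AG.imag])    # F = [A1; A2]
    G = np.vstack([CG.real, CG.imag])    # G = [C1; C2]
    K = np.hstack([GB.real, GB.imag])    # K = [B1, B2]
    L = np.hstack([GD.real, GD.imag])    # L = [D1, D2]
    return np.vstack([F, K.T]), np.vstack([G, L.T])   # M, N

# Consistency check: if X = Gamma Y Gamma* with Y real symmetric solves
# AX = C and XB = D, then Y must solve M Y = N.
rng = np.random.default_rng(0)
Gamma = np.array([[1, 0, 0, 0],
                  [0, 0, 1j, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1j]], dtype=complex)   # the Gamma of (5.2)
Y = rng.standard_normal((4, 4))
Y = Y + Y.T                                         # real symmetric
X = Gamma @ Y @ Gamma.conj().T
A = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
B = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
M, N = real_system(A, B, A @ X, X @ B, Gamma)
assert np.allclose(M @ Y, N)
```

Here $M\in\mathbb{R}^{(2m+2l)\times n}$ and $N\in\mathbb{R}^{(2m+2l)\times n}$, matching the dimensions used in Theorem 2.4.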

Lemma 2.3 (Theorem 1 in [7]). Let $A\in\mathbb{R}^{m\times n}$. The SVD of the matrix $A$ is
$$A = U\begin{bmatrix} \Sigma & 0 \\ 0 & 0 \end{bmatrix}V^{T}, \tag{2.22}$$

where $U = [U_1,\;U_2]\in\mathbb{R}^{m\times m}$ and $V = [V_1,\;V_2]\in\mathbb{R}^{n\times n}$ are orthogonal matrices, $\Sigma = \operatorname{diag}(\sigma_1,\ldots,\sigma_r)$, $\sigma_i > 0$, $i = 1,\ldots,r$, $r = \operatorname{rank}A$, $U_1\in\mathbb{R}^{m\times r}$, $V_1\in\mathbb{R}^{n\times r}$. Then (1.1) has a symmetric solution if and only if

$$AB^{T} = BA^{T}, \qquad U_2^{T}B = 0. \tag{2.23}$$

In that case, it has the general solution

$$X = V_1\Sigma^{-1}U_1^{T}B + V_2V_2^{T}B^{T}U_1\Sigma^{-1}V_1^{T} + V_2GV_2^{T}, \tag{2.24}$$
where $G$ is an arbitrary $(n-r)\times(n-r)$ symmetric matrix.
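Lemma 2.3 translates directly into code: test the conditions (2.23), then evaluate (2.24). The following is a sketch in Python/NumPy (the paper's own computations use MATLAB; the function name `sym_solution` and the tolerance handling are our assumptions):

```python
import numpy as np

def sym_solution(A, B, G=None, tol=1e-10):
    """Symmetric solution of A X = B via the SVD formula (2.24).

    Returns None when the solvability conditions (2.23) fail:
    A B^T = B A^T and U2^T B = 0.
    """
    m, n = A.shape
    U, s, Vh = np.linalg.svd(A)                  # A = U [Sigma 0; 0 0] V^T
    r = int(np.sum(s > tol * (s[0] if s.size else 1.0)))
    U1, U2 = U[:, :r], U[:, r:]
    V1, V2 = Vh[:r, :].T, Vh[r:, :].T
    if not (np.allclose(A @ B.T, B @ A.T, atol=tol)
            and np.allclose(U2.T @ B, 0, atol=tol)):
        return None
    Sinv = np.diag(1.0 / s[:r])
    if G is None:
        G = np.zeros((n - r, n - r))             # free symmetric block
    return (V1 @ Sinv @ U1.T @ B                 # the three terms of (2.24)
            + V2 @ V2.T @ B.T @ U1 @ Sinv @ V1.T
            + V2 @ G @ V2.T)

rng = np.random.default_rng(1)
X0 = rng.standard_normal((4, 4))
X0 = X0 + X0.T                     # a symmetric "ground truth"
A = rng.standard_normal((3, 4))
B = A @ X0                         # consistent right-hand side
X = sym_solution(A, B)
assert X is not None
assert np.allclose(A @ X, B) and np.allclose(X, X.T)
```

Note that `X` need not equal `X0`: the block $G$ parametrizes the whole solution set, as stated in the lemma.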

By Lemma 2.3, we have the following theorem.

Theorem 2.4. Given $A\in\mathbb{C}^{m\times n}$, $C\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{n\times l}$, and $D\in\mathbb{C}^{n\times l}$, let $A_1$, $A_2$, $C_1$, $C_2$, $B_1$, $B_2$, $D_1$, $D_2$, $F$, $G$, $K$, $L$, $M$, and $N$ be defined by (2.17) and (2.20), respectively. Assume that the SVD of $M\in\mathbb{R}^{(2m+2l)\times n}$ is

$$M = U\begin{bmatrix} M_1 & 0 \\ 0 & 0 \end{bmatrix}V^{T}, \tag{2.25}$$

where $U = [U_1,\;U_2]\in\mathbb{R}^{(2m+2l)\times(2m+2l)}$ and $V = [V_1,\;V_2]\in\mathbb{R}^{n\times n}$ are orthogonal matrices, $M_1 = \operatorname{diag}(\sigma_1,\ldots,\sigma_r)$, $\sigma_i > 0$, $i = 1,\ldots,r$, $r = \operatorname{rank}M$, $U_1\in\mathbb{R}^{(2m+2l)\times r}$, $V_1\in\mathbb{R}^{n\times r}$. Then system (1.2) has a solution in $HRC^{n\times n}$ if and only if

$$MN^{T} = NM^{T}, \qquad U_2^{T}N = 0. \tag{2.26}$$

In that case, it has the general solution

$$X = \Gamma\left(V_1M_1^{-1}U_1^{T}N + V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} + V_2GV_2^{T}\right)\Gamma^{*}, \tag{2.27}$$
where $G$ is an arbitrary $(n-r)\times(n-r)$ symmetric matrix.

3. The Solution of Optimal Approximation Problem 1.3

When the set $S_X$ of all Hermitian R-conjugate solutions to (1.2) is nonempty, it is easy to verify that $S_X$ is a closed set. Therefore, the optimal approximation problem (1.3) has a unique solution by [30].

Theorem 3.1. Given $A\in\mathbb{C}^{m\times n}$, $C\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{n\times l}$, $D\in\mathbb{C}^{n\times l}$, and $E\in\mathbb{C}^{n\times n}$, let $E_1 = \tfrac{1}{2}\left(\Gamma^{*}E\Gamma + \overline{\Gamma^{*}E\Gamma}\right)$. Assume that $S_X$ is nonempty. Then the optimal approximation problem (1.3) has a unique solution $\widehat{X}$, and
$$\widehat{X} = \Gamma\left(V_1M_1^{-1}U_1^{T}N + V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} + V_2V_2^{T}E_1V_2V_2^{T}\right)\Gamma^{*}. \tag{3.1}$$

Proof. Since $S_X$ is nonempty, every $X\in S_X$ has the form (2.27). By Lemma 2.2, $\Gamma^{*}E\Gamma$ can be written as

$$\Gamma^{*}E\Gamma = E_1 + iE_2, \tag{3.2}$$
where

$$E_1 = \frac{\Gamma^{*}E\Gamma + \overline{\Gamma^{*}E\Gamma}}{2}, \qquad E_2 = \frac{\Gamma^{*}E\Gamma - \overline{\Gamma^{*}E\Gamma}}{2i}. \tag{3.3}$$

According to (3.2) and the unitary invariance of the Frobenius norm,
$$\left\|X - E\right\| = \left\|\Gamma\left(V_1M_1^{-1}U_1^{T}N + V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} + V_2GV_2^{T}\right)\Gamma^{*} - E\right\| = \left\|E_1 - V_1M_1^{-1}U_1^{T}N - V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} - V_2GV_2^{T} + iE_2\right\|. \tag{3.4}$$

We get
$$\left\|X - E\right\|^2 = \left\|E_1 - V_1M_1^{-1}U_1^{T}N - V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} - V_2GV_2^{T}\right\|^2 + \left\|E_2\right\|^2. \tag{3.5}$$

Then $\min_{X\in S_X}\|X - E\|$ is attained if and only if there exists $G\in\mathbb{R}^{(n-r)\times(n-r)}$ such that
$$\left\|E_1 - V_1M_1^{-1}U_1^{T}N - V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} - V_2GV_2^{T}\right\| = \min. \tag{3.6}$$

For the orthogonal matrix $V$,
$$\begin{aligned}
&\left\|E_1 - V_1M_1^{-1}U_1^{T}N - V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} - V_2GV_2^{T}\right\|^2 \\
&\quad = \left\|V^{T}\left(E_1 - V_1M_1^{-1}U_1^{T}N - V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} - V_2GV_2^{T}\right)V\right\|^2 \\
&\quad = \left\|V_1^{T}\left(E_1 - V_1M_1^{-1}U_1^{T}N\right)V_1\right\|^2 + \left\|V_1^{T}\left(E_1 - V_1M_1^{-1}U_1^{T}N\right)V_2\right\|^2 \\
&\qquad + \left\|V_2^{T}\left(E_1 - V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T}\right)V_1\right\|^2 + \left\|V_2^{T}\left(E_1 - V_2GV_2^{T}\right)V_2\right\|^2.
\end{aligned} \tag{3.7}$$

Therefore,
$$\left\|E_1 - V_1M_1^{-1}U_1^{T}N - V_2V_2^{T}N^{T}U_1M_1^{-1}V_1^{T} - V_2GV_2^{T}\right\| = \min \tag{3.8}$$
is equivalent to

$$G = V_2^{T}E_1V_2. \tag{3.9}$$

Substituting (3.9) into (2.27), we obtain (3.1).

4. The Solution of Problem 1.5

In this section, we give the explicit expression of the solution to (1.5).

Theorem 4.1. Given $A\in\mathbb{C}^{m\times n}$, $C\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{n\times l}$, and $D\in\mathbb{C}^{n\times l}$, let $A_1$, $A_2$, $C_1$, $C_2$, $B_1$, $B_2$, $D_1$, $D_2$, $F$, $G$, $K$, $L$, $M$, and $N$ be defined by (2.17) and (2.20), respectively. Assume that the

SVD of $M\in\mathbb{R}^{(2m+2l)\times n}$ is as in (2.25) and that system (1.2) has no solution in $HRC^{n\times n}$. Then $X\in S_L$ can be expressed as
$$X = \Gamma V\begin{bmatrix} M_1^{-1}U_1^{T}NV_1 & M_1^{-1}U_1^{T}NV_2 \\ V_2^{T}N^{T}U_1M_1^{-1} & Y_{22} \end{bmatrix}V^{T}\Gamma^{*}, \tag{4.1}$$

where $Y_{22}\in\mathbb{R}^{(n-r)\times(n-r)}$ is an arbitrary symmetric matrix.

Proof. It follows from (2.17)–(2.21) and (2.25) that

$$\|AX - C\|^2 + \|XB - D\|^2 = \|MY - N\|^2 = \left\|U\begin{bmatrix} M_1 & 0 \\ 0 & 0 \end{bmatrix}V^{T}Y - N\right\|^2 = \left\|\begin{bmatrix} M_1 & 0 \\ 0 & 0 \end{bmatrix}V^{T}YV - U^{T}NV\right\|^2. \tag{4.2}$$

Assume that
$$V^{T}YV = \begin{bmatrix} Y_{11} & Y_{12} \\ Y_{21} & Y_{22} \end{bmatrix}, \qquad Y_{11}\in\mathbb{R}^{r\times r},\quad Y_{22}\in\mathbb{R}^{(n-r)\times(n-r)}. \tag{4.3}$$

Then, we have

$$\|AX - C\|^2 + \|XB - D\|^2 = \left\|M_1Y_{11} - U_1^{T}NV_1\right\|^2 + \left\|M_1Y_{12} - U_1^{T}NV_2\right\|^2 + \left\|U_2^{T}NV_1\right\|^2 + \left\|U_2^{T}NV_2\right\|^2. \tag{4.4}$$

Hence,
$$\min\left(\|AX - C\|^2 + \|XB - D\|^2\right) \tag{4.5}$$

is attained if and only if there exist $Y_{11}$, $Y_{12}$ such that
$$\left\|M_1Y_{11} - U_1^{T}NV_1\right\|^2 = \min, \qquad \left\|M_1Y_{12} - U_1^{T}NV_2\right\|^2 = \min. \tag{4.6}$$

It follows from 4.6 that

$$Y_{11} = M_1^{-1}U_1^{T}NV_1, \qquad Y_{12} = M_1^{-1}U_1^{T}NV_2. \tag{4.7}$$

Substituting (4.7) into (4.3) and then into (2.16), we obtain that the elements of $S_L$ have the form (4.1).

Theorem 4.2. Assume that the notation and conditions are the same as in Theorem 4.1. Then
$$\|X\| = \min_{X\in S_L}\|X\| \tag{4.8}$$
if and only if
$$X = \Gamma V\begin{bmatrix} M_1^{-1}U_1^{T}NV_1 & M_1^{-1}U_1^{T}NV_2 \\ V_2^{T}N^{T}U_1M_1^{-1} & 0 \end{bmatrix}V^{T}\Gamma^{*}. \tag{4.9}$$

Proof. It follows from (4.1) in Theorem 4.1 that $\min_{X\in S_L}\|X\|$ is attained if and only if $X$ has the expression (4.1) with $Y_{22} = 0$. Hence, (4.9) holds.

5. An Algorithm and Numerical Example

Based on the main results of this paper, we propose in this section an algorithm for finding the solution of the approximation problem (1.3) and of the least squares problem with least norm (1.5). All the tests are performed in MATLAB 6.5, which has a machine precision of around $10^{-16}$.

Algorithm 5.1. (1) Input $A\in\mathbb{C}^{m\times n}$, $C\in\mathbb{C}^{m\times n}$, $B\in\mathbb{C}^{n\times l}$, $D\in\mathbb{C}^{n\times l}$.
(2) Compute $A_1$, $A_2$, $C_1$, $C_2$, $B_1$, $B_2$, $D_1$, $D_2$, $F$, $G$, $K$, $L$, $M$, and $N$ by (2.17) and (2.20).
(3) Compute the singular value decomposition of $M$ with the form of (2.25).
(4) If (2.26) holds, then input $E\in\mathbb{C}^{n\times n}$ and compute the solution $\widehat{X}$ of problem (1.3) according to (3.1); otherwise compute the solution $X$ of problem (1.5) by (4.9).

To show that our algorithm is feasible, we give two numerical examples. Let a nontrivial symmetric involution be
$$R = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix}. \tag{5.1}$$

We obtain $P$, $Q$ in (2.2) by using the spectral decomposition of $R$; then by (2.4),

$$\Gamma = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & i & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & i \end{bmatrix}. \tag{5.2}$$
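The step from $R$ to $P$, $Q$, and $\Gamma$ can be sketched as follows in Python (our own illustration; note that `numpy.linalg.eigh` may order the eigenvector columns differently from (5.2), which only changes the real parametrization, not the properties (2.3)):

```python
import numpy as np

def gamma_from_R(R, tol=1e-12):
    """Build P, Q of (2.2) and Gamma = [P, iQ] of (2.4) from a real
    symmetric involution R via its spectral decomposition."""
    w, V = np.linalg.eigh(R)          # eigenvalues of an involution are +-1
    Q = V[:, w < -tol]                # eigenvectors for eigenvalue -1
    P = V[:, w > tol]                 # eigenvectors for eigenvalue +1
    return P, Q, np.hstack([P, 1j * Q])

R = np.diag([1.0, -1.0, 1.0, -1.0])   # the involution of (5.1)
P, Q, Gamma = gamma_from_R(R)

# Properties (2.3) and unitarity of Gamma
assert np.allclose(R @ P, P) and np.allclose(R @ Q, -Q)
assert np.allclose(P.T @ P, np.eye(P.shape[1]))
assert np.allclose(P.T @ Q, 0)
assert np.allclose(Gamma.conj().T @ Gamma, np.eye(4))
```

Any orthonormal bases of the $\pm1$ eigenspaces are admissible here, since only (2.3) is used in the proofs.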

Example 5.2. Suppose $A\in\mathbb{C}^{2\times4}$, $C\in\mathbb{C}^{2\times4}$, $B\in\mathbb{C}^{4\times3}$, $D\in\mathbb{C}^{4\times3}$, and

$$A = \begin{bmatrix} 3.33-5.987i & 4+5i & 7.21 & -i \\ 0 & -0.66i & 7.694 & 1.123i \end{bmatrix},$$
$$C = \begin{bmatrix} 0.2679-0.0934i & 0.0012+4.0762i & -0.0777-0.1718i & -1.2801i \\ 0.2207 & -0.1197i & 0.0877 & 0.7058i \end{bmatrix},$$
$$B = \begin{bmatrix} 4+12i & 2.369i & 4.256-5.111i \\ 4i & 4.66i & 8.21-5i \\ 0 & 4.83i & 56+i \\ 2.22i & -4.666 & 7i \end{bmatrix}, \tag{5.3}$$
$$D = \begin{bmatrix} 0.0616+0.1872i & -0.0009+0.1756i & 1.6746-0.0494i \\ 0.0024+0.2704i & 0.1775+0.4194i & 0.7359-0.6189i \\ -0.0548+0.3444i & 0.0093-0.3075i & -0.4731-0.1636i \\ 0.0337i & 0.1209-0.1864i & -0.2484-3.8817i \end{bmatrix}.$$

We can verify that (2.26) holds. Hence, system (1.2) has a Hermitian R-conjugate solution. Given

$$E = \begin{bmatrix} 7.35i & 8.389i & 99.256-6.51i & -4.6i \\ 1.55 & 4.56i & 7.71-7.5i & i \\ 5i & 0 & -4.556i & -7.99 \\ 4.22i & 0 & 5.1i & 0 \end{bmatrix}. \tag{5.4}$$

Applying Algorithm 5.1, we obtain
$$\widehat{X} = \begin{bmatrix} 1.5597 & 0.0194i & 2.8705 & 0.0002i \\ -0.0194i & 9.0001 & 0.2005i & -3.9997 \\ 2.8705 & -0.2005i & -0.0452 & 7.9993i \\ -0.0002i & -3.9997 & -7.9993i & 5.6846 \end{bmatrix}. \tag{5.5}$$

Example 5.2 illustrates that we can solve the optimal approximation problem with Algorithm 5.1 when system (1.2) has Hermitian R-conjugate solutions.

Example 5.3. Let $A$, $B$, and $C$ be the same as in Example 5.2, and let $D$ in Example 5.2 be changed into
$$D = \begin{bmatrix} 0.0616+0.1872i & -0.0009+0.1756i & 1.6746+0.0494i \\ 0.0024+0.2704i & 0.1775+0.4194i & 0.7359-0.6189i \\ -0.0548+0.3444i & 0.0093-0.3075i & -0.4731-0.1636i \\ 0.0337i & 0.1209-0.1864i & -0.2484-3.8817i \end{bmatrix}. \tag{5.6}$$

We can verify that (2.26) does not hold. By Algorithm 5.1, we get
$$X = \begin{bmatrix} 0.52 & 2.2417i & 0.4914 & 0.3991i \\ -2.2417i & 8.6634 & 0.1921i & -2.8232 \\ 0.4914 & -0.1921i & 0.1406 & 1.3154i \\ -0.3991i & -2.8232 & -1.3154i & 6.3974 \end{bmatrix}. \tag{5.7}$$

Example 5.3 demonstrates that we can obtain the least squares solution with Algorithm 5.1 when system (1.2) has no Hermitian R-conjugate solution.

Acknowledgments

This research was supported by the grants from the Key Project of Scientific Research Innovation Foundation of Shanghai Municipal Education Commission (13ZZ080), the National Natural Science Foundation of China (11171205), the Natural Science Foundation of Shanghai (11ZR1412500), the Ph.D. Programs Foundation of Ministry of Education of China (20093108110001), the Discipline Project at the corresponding level of Shanghai (A.13-0101-12-005), Shanghai Leading Academic Discipline Project (J50101), the Natural Science Foundation of Hebei Province (A2012403013), and the Natural Science Foundation of Hebei Province (A2012205028). The authors are grateful to the anonymous referees for their helpful comments and constructive suggestions.

References

[1] R. D. Hill, R. G. Bates, and S. R. Waters, "On centro-Hermitian matrices," SIAM Journal on Matrix Analysis and Applications, vol. 11, no. 1, pp. 128–133, 1990.
[2] R. Kouassi, P. Gouton, and M. Paindavoine, "Approximation of the Karhunen-Loève transformation and its application to colour images," Signal Processing: Image Communication, vol. 16, pp. 541–551, 2001.
[3] A. Lee, "Centro-Hermitian and skew-centro-Hermitian matrices," Linear Algebra and Its Applications, vol. 29, pp. 205–210, 1980.
[4] Z.-Y. Liu, H.-D. Cao, and H.-J. Chen, "A note on computing matrix-vector products with generalized centrosymmetric (centrohermitian) matrices," Applied Mathematics and Computation, vol. 169, no. 2, pp. 1332–1345, 2005.
[5] W. F. Trench, "Characterization and properties of matrices with generalized symmetry or skew symmetry," Linear Algebra and Its Applications, vol. 377, pp. 207–218, 2004.
[6] K.-W. E. Chu, "Symmetric solutions of linear matrix equations by matrix decompositions," Linear Algebra and Its Applications, vol. 119, pp. 35–50, 1989.
[7] H. Dai, "On the symmetric solutions of linear matrix equations," Linear Algebra and Its Applications, vol. 131, pp. 1–7, 1990.
[8] F. J. H. Don, "On the symmetric solutions of a linear matrix equation," Linear Algebra and Its Applications, vol. 93, pp. 1–7, 1987.
[9] I. Kyrchei, "Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations," Linear Algebra and Its Applications, vol. 438, no. 1, pp. 136–152, 2013.
[10] Y. Li, Y. Gao, and W. Guo, "A Hermitian least squares solution of the matrix equation AXB = C subject to inequality restrictions," Computers & Mathematics with Applications, vol. 64, no. 6, pp. 1752–1760, 2012.
[11] Z.-Y. Peng and X.-Y. Hu, "The reflexive and anti-reflexive solutions of the matrix equation AX = B," Linear Algebra and Its Applications, vol. 375, pp. 147–155, 2003.
[12] L. Wu, "The re-positive definite solutions to the matrix inverse problem AX = B," Linear Algebra and Its Applications, vol. 174, pp. 145–151, 1992.
[13] S. F. Yuan, Q. W. Wang, and X. Zhang, "Least-squares problem for the quaternion matrix equation AXB + CYD = E over different constrained matrices," International Journal of Computer Mathematics, Article ID 722626, in press.
[14] Z.-Z. Zhang, X.-Y. Hu, and L. Zhang, "On the Hermitian-generalized Hamiltonian solutions of linear matrix equations," SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 1, pp. 294–303, 2005.
[15] F. Cecioni, "Sopra operazioni algebriche," Annali della Scuola Normale Superiore di Pisa, vol. 11, pp. 17–20, 1910.
[16] K.-W. E. Chu, "Singular value and generalized singular value decompositions and the solution of linear matrix equations," Linear Algebra and Its Applications, vol. 88-89, pp. 83–98, 1987.
[17] S. K. Mitra, "A pair of simultaneous linear matrix equations A1XB1 = C1, A2XB2 = C2 and a matrix programming problem," Linear Algebra and Its Applications, vol. 131, pp. 107–123, 1990.
[18] H.-X. Chang and Q.-W. Wang, "Reflexive solution to a system of matrix equations," Journal of Shanghai University, vol. 11, no. 4, pp. 355–358, 2007.
[19] A. Dajić and J. J. Koliha, "Equations ax = c and xb = d in rings and rings with involution with applications to Hilbert space operators," Linear Algebra and Its Applications, vol. 429, no. 7, pp. 1779–1809, 2008.
[20] A. Dajić and J. J. Koliha, "Positive solutions to the equations AX = C and XB = D for Hilbert space operators," Journal of Mathematical Analysis and Applications, vol. 333, no. 2, pp. 567–576, 2007.
[21] C.-Z. Dong, Q.-W. Wang, and Y.-P. Zhang, "The common positive solution to adjointable operator equations with an application," Journal of Mathematical Analysis and Applications, vol. 396, no. 2, pp. 670–679, 2012.
[22] X. Fang, J. Yu, and H. Yao, "Solutions to operator equations on Hilbert C*-modules," Linear Algebra and Its Applications, vol. 431, no. 11, pp. 2142–2153, 2009.
[23] C. G. Khatri and S. K. Mitra, "Hermitian and nonnegative definite solutions of linear matrix equations," SIAM Journal on Applied Mathematics, vol. 31, no. 4, pp. 579–585, 1976.
[24] I. Kyrchei, "Analogs of Cramer's rule for the minimum norm least squares solutions of some matrix equations," Applied Mathematics and Computation, vol. 218, no. 11, pp. 6375–6384, 2012.
[25] F. L. Li, X. Y. Hu, and L. Zhang, "The generalized reflexive solution for a class of matrix equations AX = C, XB = D," Acta Mathematica Scientia, vol. 28B, no. 1, pp. 185–193, 2008.

[26] Q.-W. Wang and C.-Z. Dong, "Positive solutions to a system of adjointable operator equations over Hilbert C*-modules," Linear Algebra and Its Applications, vol. 433, no. 7, pp. 1481–1489, 2010.
[27] Q. Xu, "Common Hermitian and positive solutions to the adjointable operator equations AX = C, XB = D," Linear Algebra and Its Applications, vol. 429, no. 1, pp. 1–11, 2008.
[28] F. Zhang, Y. Li, and J. Zhao, "Common Hermitian least squares solutions of matrix equations A1XA1* = B1 and A2XA2* = B2 subject to inequality restrictions," Computers & Mathematics with Applications, vol. 62, no. 6, pp. 2424–2433, 2011.
[29] H.-X. Chang, Q.-W. Wang, and G.-J. Song, "(R, S)-conjugate solution to a pair of linear matrix equations," Applied Mathematics and Computation, vol. 217, no. 1, pp. 73–82, 2010.
[30] E. W. Cheney, Introduction to Approximation Theory, McGraw-Hill, New York, NY, USA, 1966.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2012, Article ID 784620, 13 pages, doi:10.1155/2012/784620

Research Article
Perturbation Analysis for the Matrix Equation $X - \sum_{i=1}^{m} A_i^{*}XA_i + \sum_{j=1}^{n} B_j^{*}XB_j = I$

Xue-Feng Duan1, 2 and Qing-Wen Wang3

1 School of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541004, China 2 Research Center on Data Science and Social Computing, Guilin University of Electronic Technology, Guilin 541004, China 3 Department of Mathematics, Shanghai University, Shanghai 200444, China

Correspondence should be addressed to Qing-Wen Wang, [email protected]

Received 18 September 2012; Accepted 19 November 2012

Academic Editor: Yang Zhang

Copyright © 2012 X.-F. Duan and Q.-W. Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider the perturbation analysis of the matrix equation $X - \sum_{i=1}^{m} A_i^{*}XA_i + \sum_{j=1}^{n} B_j^{*}XB_j = I$. Based on matrix differentiation, we first give a precise perturbation bound for the positive definite solution. A numerical example is presented to illustrate the sharpness of the perturbation bound.

1. Introduction

In this paper, we consider the matrix equation

$$X - \sum_{i=1}^{m} A_i^{*}XA_i + \sum_{j=1}^{n} B_j^{*}XB_j = I, \tag{1.1}$$

where $A_1,A_2,\ldots,A_m,B_1,B_2,\ldots,B_n$ are $n\times n$ complex matrices, $I$ is the $n\times n$ identity matrix, $m$, $n$ are nonnegative integers, and the positive definite solution $X$ is of practical interest. Here $A_i^{*}$ and $B_j^{*}$ denote the conjugate transposes of the matrices $A_i$ and $B_j$, respectively. Equation (1.1) arises in solving some nonlinear matrix equations with Newton's method; see, for example, the nonlinear matrix equation which appears in Sakhnovich [1]. Solving these nonlinear matrix equations gives rise to (1.1). On the other hand, (1.1) is the general case of the generalized Lyapunov equation

$$MYS^{*} + SYM^{*} + \sum_{k=1}^{t} N_kYN_k^{*} + CC^{*} = 0, \tag{1.2}$$
whose positive definite solution is the controllability Gramian of the bilinear control system (see [2, 3] for more details)

$$M\dot{x}(t) = Sx(t) + \sum_{k=1}^{t} N_kx(t)u_k(t) + Cu(t). \tag{1.3}$$

Set

$$E = \frac{1}{\sqrt{2}}\left(M - S + I\right), \quad F = \frac{1}{\sqrt{2}}\left(M + S + I\right), \quad G = S - I, \quad Q = CC^{*},$$
$$X = Q^{-1/2}YQ^{-1/2}, \quad A_i = Q^{-1/2}N_iQ^{1/2},\ i = 1,2,\ldots,t, \tag{1.4}$$
$$A_{t+1} = Q^{-1/2}FQ^{1/2}, \quad A_{t+2} = Q^{-1/2}GQ^{1/2}, \quad B_1 = Q^{-1/2}EQ^{1/2}, \quad B_2 = Q^{-1/2}CQ^{1/2};$$
then (1.2) can be equivalently written as (1.1) with $m = t + 2$ and $n = 2$.

Some special cases of (1.1) have been studied. Based on the Kronecker product and fixed point theorems in partially ordered sets, Reurings [4] and Ran and Reurings [5, 6] gave some sufficient conditions for the existence of a unique positive definite solution of the linear matrix equations $X - \sum_{i=1}^{m} A_i^{*}XA_i = I$ and $X + \sum_{j=1}^{n} B_j^{*}XB_j = I$. The expressions for these unique positive definite solutions were also derived under some constraint conditions. For the general linear matrix equation (1.1), Reurings [4, page 61] pointed out that it is hard to find sufficient conditions for the existence of a positive definite solution, because the map

$$G(X) = I + \sum_{i=1}^{m} A_i^{*}XA_i - \sum_{j=1}^{n} B_j^{*}XB_j \tag{1.5}$$
is not monotone and does not map the set of $n\times n$ positive definite matrices into itself. Recently, Berzig [7] overcame these difficulties by making use of the Bhaskar-Lakshmikantham coupled fixed point theorem and gave a sufficient condition for (1.1) to have a unique positive definite solution. An iterative method was constructed to compute the unique positive definite solution, and an error estimate was given too. Recently, matrix equations of the form (1.1) have been studied by many authors (see [8–14]). Some numerical methods for solving the well-known Lyapunov equation $X = A^{T}XA + Q$, such as the Bartels-Stewart method and the Hessenberg-Schur method, have been proposed in [12]. Based on the fixed point theorem, the sufficient and necessary conditions for the existence of a positive definite solution of the matrix equation $X^{s} \pm A^{*}X^{-t}A = Q$, $s,t\in\mathbb{N}$, have been given in [8, 9]. The fixed point iterative method and inversion-free iterative method

were developed for solving the matrix equations $X \pm A^{*}X^{-\alpha}A = Q$, $\alpha > 0$, in [13, 14]. By making use of the fixed point theorem for mixed monotone operators, the matrix equation $X - \sum_{i=1}^{m} A_i^{*}X^{\delta_i}A_i = Q$, $0 < |\delta_i| < 1$, was studied in [10], and a sufficient condition for the existence of a positive definite solution was derived. Assuming that $F$ maps positive definite matrices either into positive definite matrices or into negative definite matrices, the general nonlinear matrix equation $X + A^{*}F(X)A = Q$ was studied in [11], and a fixed point iterative method was constructed to compute the positive definite solution under some additional conditions. Motivated by the works and applications in [2–6], we continue to study the matrix equation (1.1). Based on a new mathematical tool (i.e., matrix differentiation), we first give a differential bound for the unique positive definite solution of (1.1), and then use it to derive a precise perturbation bound for the unique positive definite solution. A numerical example is used to show that the perturbation bound is very sharp.

Throughout this paper, we write $B > 0$ ($B \geq 0$) if the matrix $B$ is positive definite (semidefinite). If $B - C$ is positive definite (semidefinite), then we write $B > C$ ($B \geq C$). If a positive definite matrix $X$ satisfies $B \leq X \leq C$, we denote $X \in [B, C]$. The symbols $\lambda_1(B)$ and $\lambda_n(B)$ denote the maximal and minimal eigenvalues of an $n\times n$ Hermitian matrix $B$, respectively. The symbol $\mathbb{H}^{n\times n}$ stands for the set of $n\times n$ Hermitian matrices. The symbol $\|B\|$ denotes the spectral norm of the matrix $B$.

2. Perturbation Analysis for the Matrix Equation 1.1

Based on matrix differentiation, we first give a differential bound for the unique positive definite solution $X_U$ of (1.1) and then use it to derive a precise perturbation bound for $X_U$ in this section.

Definition 2.1 ([15, Definition 3.6]). Let $F = (f_{ij})_{m\times n}$; then the matrix differentiation of $F$ is $dF = (df_{ij})_{m\times n}$. For example, let
$$F = \begin{bmatrix} s+t & s^2-2t \\ 2s+t^3 & t^2 \end{bmatrix}. \tag{2.1}$$

Then
$$dF = \begin{bmatrix} ds+dt & 2s\,ds-2\,dt \\ 2\,ds+3t^2\,dt & 2t\,dt \end{bmatrix}. \tag{2.2}$$

Lemma 2.2 ([15, Theorem 3.2]). The matrix differentiation has the following properties:
(1) $d(F_1 \pm F_2) = dF_1 \pm dF_2$;
(2) $d(kF) = k\,dF$, where $k$ is a complex number;
(3) $d(F^{*}) = (dF)^{*}$;
(4) $d(F_1F_2F_3) = (dF_1)F_2F_3 + F_1(dF_2)F_3 + F_1F_2(dF_3)$;
(5) $dF^{-1} = -F^{-1}(dF)F^{-1}$;
(6) $dF = 0$, where $F$ is a constant matrix.
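Property (5), $dF^{-1} = -F^{-1}(dF)F^{-1}$, is just the first-order expansion of the matrix inverse, which a quick numerical experiment confirms (a Python illustration of ours, not part of the paper):

```python
import numpy as np

# First-order check of property (5): d(F^{-1}) = -F^{-1} (dF) F^{-1}.
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # well-conditioned
dF = 1e-6 * rng.standard_normal((4, 4))             # small perturbation

Finv = np.linalg.inv(F)
actual = np.linalg.inv(F + dF) - Finv               # exact change
predicted = -Finv @ dF @ Finv                       # first-order term

# The difference between them is second order in ||dF||.
assert np.linalg.norm(actual - predicted) < 1e-9
```

The discrepancy shrinks quadratically as the perturbation shrinks, which is exactly what the differential relation asserts.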

Lemma 2.3 7, Theorem 3.1. If

$$\sum_{i=1}^{m} A_i^{*}A_i < \frac{1}{2}I, \qquad \sum_{j=1}^{n} B_j^{*}B_j < \frac{1}{2}I, \tag{2.3}$$
then (1.1) has a unique positive definite solution $X_U$ and
$$X_U \in \left[I - 2\sum_{j=1}^{n} B_j^{*}B_j,\; I + 2\sum_{i=1}^{m} A_i^{*}A_i\right]. \tag{2.4}$$

Theorem 2.4. If

$$\sum_{i=1}^{m} \|A_i\|^2 < \frac{1}{2}, \qquad \sum_{j=1}^{n} \|B_j\|^2 < \frac{1}{2}, \tag{2.5}$$
then (1.1) has a unique positive definite solution $X_U$, and it satisfies

$$\|dX_U\| \leq \frac{2\left(1 + 2\sum_{i=1}^{m}\|A_i\|^2\right)\left(\sum_{i=1}^{m}\|A_i\|\,\|dA_i\| + \sum_{j=1}^{n}\|B_j\|\,\|dB_j\|\right)}{1 - \sum_{i=1}^{m}\|A_i\|^2 - \sum_{j=1}^{n}\|B_j\|^2}. \tag{2.6}$$

Proof. Since
$$\lambda_1\!\left(A_i^{*}A_i\right) \leq \left\|A_i^{*}A_i\right\| \leq \|A_i\|^2,\ i = 1,2,\ldots,m, \qquad \lambda_1\!\left(B_j^{*}B_j\right) \leq \left\|B_j^{*}B_j\right\| \leq \|B_j\|^2,\ j = 1,2,\ldots,n, \tag{2.7}$$
we have
$$A_i^{*}A_i \leq \lambda_1\!\left(A_i^{*}A_i\right)I \leq \left\|A_i^{*}A_i\right\|I \leq \|A_i\|^2I,\ i = 1,2,\ldots,m, \qquad B_j^{*}B_j \leq \lambda_1\!\left(B_j^{*}B_j\right)I \leq \left\|B_j^{*}B_j\right\|I \leq \|B_j\|^2I,\ j = 1,2,\ldots,n, \tag{2.8}$$
and consequently,

$$\sum_{i=1}^{m} A_i^{*}A_i \leq \sum_{i=1}^{m} \|A_i\|^2 I, \qquad \sum_{j=1}^{n} B_j^{*}B_j \leq \sum_{j=1}^{n} \|B_j\|^2 I. \tag{2.9}$$

Combining (2.5)–(2.9), we have

$$\sum_{i=1}^{m} A_i^{*}A_i \leq \sum_{i=1}^{m} \|A_i\|^2 I < \frac{1}{2}I, \qquad \sum_{j=1}^{n} B_j^{*}B_j \leq \sum_{j=1}^{n} \|B_j\|^2 I < \frac{1}{2}I. \tag{2.10}$$

Then by Lemma 2.3 we obtain that (1.1) has a unique positive definite solution $X_U$, which satisfies

$$X_U \in \left[I - 2\sum_{j=1}^{n} B_j^{*}B_j,\; I + 2\sum_{i=1}^{m} A_i^{*}A_i\right]. \tag{2.11}$$

Noting that $X_U$ is the unique positive definite solution of (1.1), we have

$$X_U - \sum_{i=1}^{m} A_i^{*}X_UA_i + \sum_{j=1}^{n} B_j^{*}X_UB_j = I. \tag{2.12}$$

It is known that the elements of $X_U$ are differentiable functions of the elements of $A_i$ and $B_j$. Differentiating (2.12) and using Lemma 2.2, we have

$$dX_U - \sum_{i=1}^{m}\left[(dA_i)^{*}X_UA_i + A_i^{*}\,dX_U\,A_i + A_i^{*}X_U\,dA_i\right] + \sum_{j=1}^{n}\left[(dB_j)^{*}X_UB_j + B_j^{*}\,dX_U\,B_j + B_j^{*}X_U\,dB_j\right] = 0, \tag{2.13}$$

which implies that

$$dX_U - \sum_{i=1}^{m} A_i^{*}\,dX_U\,A_i + \sum_{j=1}^{n} B_j^{*}\,dX_U\,B_j = \sum_{i=1}^{m}\left[(dA_i)^{*}X_UA_i + A_i^{*}X_U\,dA_i\right] - \sum_{j=1}^{n}\left[(dB_j)^{*}X_UB_j + B_j^{*}X_U\,dB_j\right]. \tag{2.14}$$

By taking the spectral norm on both sides of (2.14), we obtain
$$\begin{aligned}
\left\|dX_U - \sum_{i=1}^{m} A_i^{*}\,dX_U\,A_i + \sum_{j=1}^{n} B_j^{*}\,dX_U\,B_j\right\|
&= \left\|\sum_{i=1}^{m}\left[(dA_i)^{*}X_UA_i + A_i^{*}X_U\,dA_i\right] - \sum_{j=1}^{n}\left[(dB_j)^{*}X_UB_j + B_j^{*}X_U\,dB_j\right]\right\| \\
&\leq \sum_{i=1}^{m}\left\|(dA_i)^{*}X_UA_i + A_i^{*}X_U\,dA_i\right\| + \sum_{j=1}^{n}\left\|(dB_j)^{*}X_UB_j + B_j^{*}X_U\,dB_j\right\| \\
&\leq \sum_{i=1}^{m}\left(\left\|(dA_i)^{*}X_UA_i\right\| + \left\|A_i^{*}X_U\,dA_i\right\|\right) + \sum_{j=1}^{n}\left(\left\|(dB_j)^{*}X_UB_j\right\| + \left\|B_j^{*}X_U\,dB_j\right\|\right) \\
&\leq 2\sum_{i=1}^{m}\|A_i\|\,\|X_U\|\,\|dA_i\| + 2\sum_{j=1}^{n}\|B_j\|\,\|X_U\|\,\|dB_j\| \\
&= 2\left[\sum_{i=1}^{m}\|A_i\|\,\|dA_i\| + \sum_{j=1}^{n}\|B_j\|\,\|dB_j\|\right]\|X_U\|, 
\end{aligned} \tag{2.15}$$
and noting (2.11) we obtain
$$\|X_U\| \leq \left\|I + 2\sum_{i=1}^{m} A_i^{*}A_i\right\| \leq 1 + 2\sum_{i=1}^{m}\|A_i\|^2. \tag{2.16}$$

Then
$$\left\|dX_U - \sum_{i=1}^{m} A_i^{*}\,dX_U\,A_i + \sum_{j=1}^{n} B_j^{*}\,dX_U\,B_j\right\| \leq 2\left[\sum_{i=1}^{m}\|A_i\|\,\|dA_i\| + \sum_{j=1}^{n}\|B_j\|\,\|dB_j\|\right]\|X_U\| \leq 2\left(1 + 2\sum_{i=1}^{m}\|A_i\|^2\right)\left[\sum_{i=1}^{m}\|A_i\|\,\|dA_i\| + \sum_{j=1}^{n}\|B_j\|\,\|dB_j\|\right], \tag{2.17}$$

$$\begin{aligned}
\left\|dX_U - \sum_{i=1}^{m} A_i^{*}\,dX_U\,A_i + \sum_{j=1}^{n} B_j^{*}\,dX_U\,B_j\right\|
&\geq \|dX_U\| - \left\|\sum_{i=1}^{m} A_i^{*}\,dX_U\,A_i\right\| - \left\|\sum_{j=1}^{n} B_j^{*}\,dX_U\,B_j\right\| \\
&\geq \|dX_U\| - \sum_{i=1}^{m}\left\|A_i^{*}\,dX_U\,A_i\right\| - \sum_{j=1}^{n}\left\|B_j^{*}\,dX_U\,B_j\right\| \\
&\geq \|dX_U\| - \sum_{i=1}^{m}\|A_i\|^2\|dX_U\| - \sum_{j=1}^{n}\|B_j\|^2\|dX_U\| \\
&= \left(1 - \sum_{i=1}^{m}\|A_i\|^2 - \sum_{j=1}^{n}\|B_j\|^2\right)\|dX_U\|.
\end{aligned} \tag{2.18}$$

Due to (2.5) we have
$$1 - \sum_{i=1}^{m}\|A_i\|^2 - \sum_{j=1}^{n}\|B_j\|^2 > 0. \tag{2.19}$$

Combining (2.17) and (2.18) and noting (2.19), we have
$$\left(1 - \sum_{i=1}^{m}\|A_i\|^2 - \sum_{j=1}^{n}\|B_j\|^2\right)\|dX_U\| \leq \left\|dX_U - \sum_{i=1}^{m} A_i^{*}\,dX_U\,A_i + \sum_{j=1}^{n} B_j^{*}\,dX_U\,B_j\right\| \leq 2\left(1 + 2\sum_{i=1}^{m}\|A_i\|^2\right)\left[\sum_{i=1}^{m}\|A_i\|\,\|dA_i\| + \sum_{j=1}^{n}\|B_j\|\,\|dB_j\|\right], \tag{2.20}$$

which implies that
$$\|dX_U\| \leq \frac{2\left(1 + 2\sum_{i=1}^{m}\|A_i\|^2\right)\left(\sum_{i=1}^{m}\|A_i\|\,\|dA_i\| + \sum_{j=1}^{n}\|B_j\|\,\|dB_j\|\right)}{1 - \sum_{i=1}^{m}\|A_i\|^2 - \sum_{j=1}^{n}\|B_j\|^2}. \tag{2.21}$$

Theorem 2.5. Let $\widetilde{A}_1,\widetilde{A}_2,\ldots,\widetilde{A}_m,\widetilde{B}_1,\widetilde{B}_2,\ldots,\widetilde{B}_n$ be perturbed matrices of $A_1,A_2,\ldots,A_m,B_1,B_2,\ldots,B_n$ in (1.1), and let $\Delta_i = \widetilde{A}_i - A_i$, $i = 1,2,\ldots,m$, $\widetilde{\Delta}_j = \widetilde{B}_j - B_j$, $j = 1,2,\ldots,n$. If
$$\sum_{i=1}^{m}\|A_i\|^2 < \frac{1}{2}, \qquad \sum_{j=1}^{n}\|B_j\|^2 < \frac{1}{2}, \tag{2.22}$$
$$2\sum_{i=1}^{m}\|A_i\|\,\|\Delta_i\| + \sum_{i=1}^{m}\|\Delta_i\|^2 < \frac{1}{2} - \sum_{i=1}^{m}\|A_i\|^2, \tag{2.23}$$
$$2\sum_{j=1}^{n}\|B_j\|\,\|\widetilde{\Delta}_j\| + \sum_{j=1}^{n}\|\widetilde{\Delta}_j\|^2 < \frac{1}{2} - \sum_{j=1}^{n}\|B_j\|^2, \tag{2.24}$$
then (1.1) and its perturbed equation
$$X - \sum_{i=1}^{m} \widetilde{A}_i^{*}X\widetilde{A}_i + \sum_{j=1}^{n} \widetilde{B}_j^{*}X\widetilde{B}_j = I \tag{2.25}$$
have unique positive definite solutions $X_U$ and $\widetilde{X}_U$, respectively, which satisfy
$$\left\|\widetilde{X}_U - X_U\right\| \leq S_{\mathrm{err}}, \tag{2.26}$$
where
$$S_{\mathrm{err}} = \frac{2\left(1 + 2\sum_{i=1}^{m}\left(\|A_i\| + \|\Delta_i\|\right)^2\right)\left(\sum_{i=1}^{m}\left(\|A_i\| + \|\Delta_i\|\right)\|\Delta_i\| + \sum_{j=1}^{n}\left(\|B_j\| + \|\widetilde{\Delta}_j\|\right)\|\widetilde{\Delta}_j\|\right)}{1 - \sum_{i=1}^{m}\left(\|A_i\| + \|\Delta_i\|\right)^2 - \sum_{j=1}^{n}\left(\|B_j\| + \|\widetilde{\Delta}_j\|\right)^2}. \tag{2.27}$$

Proof. By (2.22) and Theorem 2.4, we know that (1.1) has a unique positive definite solution $X_U$. And by (2.23) we have
$$\sum_{i=1}^{m}\left\|\widetilde{A}_i\right\|^2 = \sum_{i=1}^{m}\left\|A_i + \Delta_i\right\|^2 \leq \sum_{i=1}^{m}\left(\|A_i\| + \|\Delta_i\|\right)^2 = \sum_{i=1}^{m}\|A_i\|^2 + 2\sum_{i=1}^{m}\|A_i\|\,\|\Delta_i\| + \sum_{i=1}^{m}\|\Delta_i\|^2 < \sum_{i=1}^{m}\|A_i\|^2 + \frac{1}{2} - \sum_{i=1}^{m}\|A_i\|^2 = \frac{1}{2}; \tag{2.28}$$
similarly, by (2.24) we have

$$\sum_{j=1}^{n}\left\|\widetilde{B}_j\right\|^2 < \frac{1}{2}. \tag{2.29}$$

By (2.28), (2.29), and Theorem 2.4 we obtain that the perturbed equation (2.25) has a unique positive definite solution $\widetilde{X}_U$.

Set

$$A_i(t) = A_i + t\Delta_i, \qquad B_j(t) = B_j + t\widetilde{\Delta}_j, \qquad t \in [0,1]; \tag{2.30}$$
then by (2.23) we have

$$\sum_{i=1}^{m}\|A_i(t)\|^2 = \sum_{i=1}^{m}\|A_i + t\Delta_i\|^2 \leq \sum_{i=1}^{m}\left(\|A_i\| + t\|\Delta_i\|\right)^2 = \sum_{i=1}^{m}\left(\|A_i\|^2 + 2t\|A_i\|\,\|\Delta_i\| + t^2\|\Delta_i\|^2\right) \leq \sum_{i=1}^{m}\left(\|A_i\|^2 + 2\|A_i\|\,\|\Delta_i\| + \|\Delta_i\|^2\right) < \frac{1}{2}; \tag{2.31}$$
similarly, by (2.24) we have

$$\sum_{j=1}^{n}\|B_j(t)\|^2 = \sum_{j=1}^{n}\left\|B_j + t\widetilde{\Delta}_j\right\|^2 < \frac{1}{2}. \tag{2.32}$$

Therefore, by (2.31), (2.32), and Theorem 2.4 we derive that for arbitrary $t\in[0,1]$ the matrix equation
$$X - \sum_{i=1}^{m} A_i^{*}(t)XA_i(t) + \sum_{j=1}^{n} B_j^{*}(t)XB_j(t) = I \tag{2.33}$$
has a unique positive definite solution $X_U(t)$; in particular,

$$X_U(0) = X_U, \qquad X_U(1) = \widetilde{X}_U. \tag{2.34}$$

From Theorem 2.4 it follows that
$$\left\|\widetilde{X}_U - X_U\right\| = \left\|X_U(1) - X_U(0)\right\| = \left\|\int_0^1 dX_U(t)\right\| \leq \int_0^1 \left\|dX_U(t)\right\| \leq \int_0^1 \frac{2\left(1 + 2\sum_{i=1}^{m}\|A_i(t)\|^2\right)\left(\sum_{i=1}^{m}\|A_i(t)\|\,\|\Delta_i\| + \sum_{j=1}^{n}\|B_j(t)\|\,\|\widetilde{\Delta}_j\|\right)}{1 - \sum_{i=1}^{m}\|A_i(t)\|^2 - \sum_{j=1}^{n}\|B_j(t)\|^2}\,dt. \tag{2.35}$$

Noting that
$$\|A_i(t)\| = \|A_i + t\Delta_i\| \leq \|A_i\| + t\|\Delta_i\|,\ i = 1,2,\ldots,m, \qquad \|B_j(t)\| = \left\|B_j + t\widetilde{\Delta}_j\right\| \leq \|B_j\| + t\|\widetilde{\Delta}_j\|,\ j = 1,2,\ldots,n, \tag{2.36}$$
and applying the Mean Value Theorem for integrals, we have

$$\begin{aligned}
\left\|\widetilde{X}_U - X_U\right\|
&\leq \int_0^1 \frac{2\left(1 + 2\sum_{i=1}^{m}\left(\|A_i\| + t\|\Delta_i\|\right)^2\right)\left(\sum_{i=1}^{m}\left(\|A_i\| + t\|\Delta_i\|\right)\|\Delta_i\| + \sum_{j=1}^{n}\left(\|B_j\| + t\|\widetilde{\Delta}_j\|\right)\|\widetilde{\Delta}_j\|\right)}{1 - \sum_{i=1}^{m}\left(\|A_i\| + t\|\Delta_i\|\right)^2 - \sum_{j=1}^{n}\left(\|B_j\| + t\|\widetilde{\Delta}_j\|\right)^2}\,dt \\
&= \frac{2\left(1 + 2\sum_{i=1}^{m}\left(\|A_i\| + \xi\|\Delta_i\|\right)^2\right)\left(\sum_{i=1}^{m}\left(\|A_i\| + \xi\|\Delta_i\|\right)\|\Delta_i\| + \sum_{j=1}^{n}\left(\|B_j\| + \xi\|\widetilde{\Delta}_j\|\right)\|\widetilde{\Delta}_j\|\right)}{1 - \sum_{i=1}^{m}\left(\|A_i\| + \xi\|\Delta_i\|\right)^2 - \sum_{j=1}^{n}\left(\|B_j\| + \xi\|\widetilde{\Delta}_j\|\right)^2}\times(1-0), \quad \xi\in[0,1] \\
&\leq \frac{2\left(1 + 2\sum_{i=1}^{m}\left(\|A_i\| + \|\Delta_i\|\right)^2\right)\left(\sum_{i=1}^{m}\left(\|A_i\| + \|\Delta_i\|\right)\|\Delta_i\| + \sum_{j=1}^{n}\left(\|B_j\| + \|\widetilde{\Delta}_j\|\right)\|\widetilde{\Delta}_j\|\right)}{1 - \sum_{i=1}^{m}\left(\|A_i\| + \|\Delta_i\|\right)^2 - \sum_{j=1}^{n}\left(\|B_j\| + \|\widetilde{\Delta}_j\|\right)^2} = S_{\mathrm{err}}.
\end{aligned} \tag{2.37}$$
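Since $S_{\mathrm{err}}$ in (2.27) depends only on spectral norms of the data and the perturbations, it is cheap to evaluate. A small Python helper (our own sketch; the function name `s_err` is an assumption, not the paper's notation):

```python
import numpy as np

def s_err(As, dAs, Bs, dBs):
    """Evaluate the perturbation bound S_err of (2.27).

    As, Bs hold the coefficient matrices A_i, B_j; dAs, dBs hold the
    perturbations Delta_i and Delta~_j.  Only spectral norms enter.
    """
    nrm = lambda M: np.linalg.norm(M, 2)          # spectral norm
    a = [(nrm(A) + nrm(dA), nrm(dA)) for A, dA in zip(As, dAs)]
    b = [(nrm(B) + nrm(dB), nrm(dB)) for B, dB in zip(Bs, dBs)]
    num = 2 * (1 + 2 * sum(s ** 2 for s, _ in a)) * (
        sum(s * d for s, d in a) + sum(s * d for s, d in b))
    den = 1 - sum(s ** 2 for s, _ in a) - sum(s ** 2 for s, _ in b)
    return num / den

# With zero perturbations the bound collapses to zero, as it should.
As, Bs = [0.1 * np.eye(3)], [0.2 * np.eye(3)]
Z = [np.zeros((3, 3))]
assert s_err(As, Z, Bs, Z) == 0.0
```

The hypotheses (2.22)–(2.24) guarantee the denominator is positive, so the bound is well defined.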

3. Numerical Experiments

In this section, we use a numerical example to confirm the correctness of Theorem 2.5 and the precision of the perturbation bound for the unique positive definite solution $X_U$ of (1.1).

Example 3.1. Consider the symmetric linear matrix equation

$$X - A_1^{*}XA_1 - A_2^{*}XA_2 + B_1^{*}XB_1 + B_2^{*}XB_2 = I, \tag{3.1}$$
and its perturbed equation

$$X - \widetilde{A}_1^{*}X\widetilde{A}_1 - \widetilde{A}_2^{*}X\widetilde{A}_2 + \widetilde{B}_1^{*}X\widetilde{B}_1 + \widetilde{B}_2^{*}X\widetilde{B}_2 = I, \tag{3.2}$$
where
$$A_1 = \begin{bmatrix} 0.02 & -0.10 & -0.02 \\ 0.08 & -0.10 & 0.02 \\ -0.06 & -0.12 & 0.14 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0.08 & -0.10 & -0.02 \\ 0.08 & -0.10 & 0.02 \\ -0.06 & -0.12 & 0.14 \end{bmatrix},$$
$$B_1 = \begin{bmatrix} 0.47 & 0.02 & 0.04 \\ -0.10 & 0.36 & -0.02 \\ -0.04 & 0.01 & 0.47 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.10 & 0.10 & 0.05 \\ 0.15 & 0.275 & 0.075 \\ 0.05 & 0.05 & 0.175 \end{bmatrix},$$
$$\widetilde{A}_1 = A_1 + \begin{bmatrix} 0.5 & 0.1 & -0.2 \\ -0.4 & 0.2 & 0.6 \\ -0.2 & 0.1 & -0.1 \end{bmatrix}\times10^{-j}, \quad \widetilde{A}_2 = A_2 + \begin{bmatrix} -0.4 & 0.1 & -0.2 \\ 0.5 & 0.7 & -1.3 \\ 1.1 & 0.9 & 0.6 \end{bmatrix}\times10^{-j}, \tag{3.3}$$
$$\widetilde{B}_1 = B_1 + \begin{bmatrix} 0.8 & 0.2 & 0.05 \\ -0.2 & 0.12 & 0.14 \\ -0.25 & -0.2 & 0.26 \end{bmatrix}\times10^{-j}, \quad \widetilde{B}_2 = B_2 + \begin{bmatrix} 0.2 & 0.2 & 0.1 \\ -0.3 & 0.15 & -0.15 \\ 0.1 & -0.1 & 0.25 \end{bmatrix}\times10^{-j}, \quad j\in\mathbb{N}.$$

It is easy to verify that the conditions (2.22)–(2.24) are satisfied; then (3.1) and its perturbed equation (3.2) have unique positive definite solutions $X_U$ and $\widetilde{X}_U$, respectively. From Berzig [7] it follows that the sequences $\{X_k\}$ and $\{Y_k\}$ generated by the iterative method

$$X_0 = 0, \qquad Y_0 = 2I,$$
$$X_{k+1} = I + A_1^{*}X_kA_1 + A_2^{*}X_kA_2 - B_1^{*}Y_kB_1 - B_2^{*}Y_kB_2, \tag{3.4}$$
$$Y_{k+1} = I + A_1^{*}Y_kA_1 + A_2^{*}Y_kA_2 - B_1^{*}X_kB_1 - B_2^{*}X_kB_2, \qquad k = 0,1,2,\ldots,$$

both converge to $X_U$. Choose $\tau = 1.0\times10^{-15}$ as the termination scalar; that is,

$$R(X) = \left\|X - A_1^{*}XA_1 - A_2^{*}XA_2 + B_1^{*}XB_1 + B_2^{*}XB_2 - I\right\| \leq \tau = 1.0\times10^{-15}. \tag{3.5}$$
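For readers reproducing the experiment outside MATLAB, a simplified single-sequence variant of the iteration (3.4), $X_{k+1} = I + \sum_i A_i^{*}X_kA_i - \sum_j B_j^{*}X_kB_j$, also converges whenever $\sum_i\|A_i\|^2 + \sum_j\|B_j\|^2 < 1$, since the map is then a contraction in the spectral norm. A Python sketch of ours (the function name `solve_eq` is an assumption):

```python
import numpy as np

def solve_eq(As, Bs, tol=1e-14, max_iter=1000):
    """Picard iteration X_{k+1} = I + sum A_i* X_k A_i - sum B_j* X_k B_j.

    Converges when sum ||A_i||^2 + sum ||B_j||^2 < 1 (the map is then a
    contraction in the spectral norm), which holds under Lemma 2.3.
    """
    n = As[0].shape[0]
    X = np.eye(n)
    for _ in range(max_iter):
        Xn = np.eye(n)
        for A in As:
            Xn = Xn + A.conj().T @ X @ A
        for B in Bs:
            Xn = Xn - B.conj().T @ X @ B
        if np.linalg.norm(Xn - X) < tol:
            return Xn
        X = Xn
    return X

rng = np.random.default_rng(0)
As = [0.1 * rng.standard_normal((3, 3)) for _ in range(2)]
Bs = [0.1 * rng.standard_normal((3, 3)) for _ in range(2)]
X = solve_eq(As, Bs)
# Residual R(X) of (1.1)/(3.5) and positive definiteness of the solution
res = X - sum(A.T @ X @ A for A in As) + sum(B.T @ X @ B for B in Bs) - np.eye(3)
assert np.linalg.norm(res) < 1e-10
assert np.all(np.linalg.eigvalsh((X + X.T) / 2) > 0)
```

The coupled scheme (3.4) has the additional advantage of bracketing $X_U$ between the two sequences, which is why the paper uses it.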

By using the iterative method (3.4) we can get the computed solution $X_k$ of (3.1). Since $R(X_k) < 1.0\times10^{-15}$, the computed solution $X_k$ has a very high precision. For simplicity,

Table 1: Numerical results for the different values of $j$.

j                                    2             3             4             5              6
||X~_U - X_U|| / ||X_U||       2.13 x 10^-4  6.16 x 10^-6  5.23 x 10^-8  4.37 x 10^-10  8.45 x 10^-12
S_err / ||X_U||                3.42 x 10^-4  7.13 x 10^-6  6.63 x 10^-8  5.12 x 10^-10  9.77 x 10^-12

we write the computed solution as the unique positive definite solution $X_U$. Similarly, we can also get the unique positive definite solution $\widetilde{X}_U$ of the perturbed equation (3.2). Some numerical results on the perturbation bounds for the unique positive definite solution $X_U$ are listed in Table 1. From Table 1, we see that Theorem 2.5 gives a precise perturbation bound for the unique positive definite solution of (3.1).

4. Conclusion

In this paper, we study the matrix equation (1.1), which arises in solving some nonlinear matrix equations and in the bilinear control system. A new method of perturbation analysis is developed for the matrix equation (1.1). By making use of matrix differentiation and its elegant properties, we derive a precise perturbation bound for the unique positive definite solution of (1.1). A numerical example is presented to illustrate the sharpness of the perturbation bound.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (11101100, 11171205, 11261014, 11226323), the Natural Science Foundation of Guangxi Province (2012GXNSFBA053006, 2011GXNSFA018138), the Key Project of Scientific Research Innovation Foundation of Shanghai Municipal Education Commission (13ZZ080), the Natural Science Foundation of Shanghai (11ZR1412500), the Ph.D. Programs Foundation of Ministry of Education of China (20093108110001), the Discipline Project at the corresponding level of Shanghai (A.13-0101-12-005), and Shanghai Leading Academic Discipline Project (J50101). The authors wish to thank Professor Y. Zhang and the anonymous referees for providing very useful suggestions for improving this paper. The authors also thank Dr. Xiaoshan Chen for discussing the properties of matrix differentiation.

References

[1] L. A. Sakhnovich, Interpolation Theory and Its Applications, Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.
[2] T. Damm, "Direct methods and ADI-preconditioned Krylov subspace methods for generalized Lyapunov equations," Numerical Linear Algebra with Applications, vol. 15, no. 9, pp. 853–871, 2008.
[3] L. Zhang and J. Lam, "On H2 model reduction of bilinear systems," Automatica, vol. 38, no. 2, pp. 205–216, 2002.
[4] M. C. B. Reurings, Symmetric matrix equations [Ph.D. thesis], Vrije Universiteit, Amsterdam, The Netherlands, 2003.
[5] A. C. M. Ran and M. C. B. Reurings, "The symmetric linear matrix equation," Electronic Journal of Linear Algebra, vol. 9, pp. 93–107, 2002.

[6] A. C. M. Ran and M. C. B. Reurings, "A fixed point theorem in partially ordered sets and some applications to matrix equations," Proceedings of the American Mathematical Society, vol. 132, no. 5, pp. 1435–1443, 2004.
[7] M. Berzig, "Solving a class of matrix equations via the Bhaskar-Lakshmikantham coupled fixed point theorem," Applied Mathematics Letters, vol. 25, no. 11, pp. 1638–1643, 2012.
[8] J. Cai and G. Chen, "Some investigation on Hermitian positive definite solutions of the matrix equation X^s + A*X^{-t}A = Q," Linear Algebra and Its Applications, vol. 430, no. 8-9, pp. 2448–2456, 2009.
[9] X. Duan and A. Liao, "On the existence of Hermitian positive definite solutions of the matrix equation X^s + A*X^{-t}A = Q," Linear Algebra and Its Applications, vol. 429, no. 4, pp. 673–687, 2008.
[10] X. Duan, A. Liao, and B. Tang, "On the nonlinear matrix equation X - sum_{i=1}^m A_i* X^{delta_i} A_i = Q," Linear Algebra and Its Applications, vol. 429, no. 1, pp. 110–121, 2008.
[11] S. M. El-Sayed and A. C. M. Ran, "On an iteration method for solving a class of nonlinear matrix equations," SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 3, pp. 632–645, 2001.
[12] Z. Gajic and M. T. J. Qureshi, Lyapunov Matrix Equation in System Stability and Control, Academic Press, London, UK, 1995.
[13] V. I. Hasanov, "Notes on two perturbation estimates of the extreme solutions to the equations X ± A*X^{-1}A = Q," Applied Mathematics and Computation, vol. 216, no. 5, pp. 1355–1362, 2010.
[14] Z.-Y. Peng, S. M. El-Sayed, and X.-L. Zhang, "Iterative methods for the extremal positive definite solution of the matrix equation X + A*X^{-alpha}A = Q," Journal of Computational and Applied Mathematics, vol. 200, no. 2, pp. 520–527, 2007.
[15] B. R. Fang, J. D. Zhou, and Y. M. Li, Matrix Theory, Tsinghua University Press, Beijing, China, 2006.

Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2012, Article ID 483624, 11 pages, doi:10.1155/2012/483624

Research Article Refinements of Kantorovich Inequality for Hermitian Matrices

Feixiang Chen

School of Mathematics and Statistics, Chongqing Three Gorges University, Wanzhou, Chongqing 404000, China

Correspondence should be addressed to Feixiang Chen, [email protected]

Received 28 August 2012; Accepted 5 November 2012

Academic Editor: K. C. Sivakumar

Copyright © 2012 Feixiang Chen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Some new Kantorovich-type inequalities for Hermitian matrices are proposed in this paper. We consider what happens to these inequalities when the positive definite matrix is relaxed to an invertible Hermitian matrix, and provide refinements of the classical results.

1. Introduction and Preliminaries

We first state the well-known Kantorovich inequality for a positive definite Hermitian matrix (see [1, 2]): let A ∈ M_n be a positive definite Hermitian matrix with real eigenvalues λ_1 ≤ λ_2 ≤ ··· ≤ λ_n. Then

\[ 1 \le x^*Ax\,x^*A^{-1}x \le \frac{(\lambda_1+\lambda_n)^2}{4\lambda_1\lambda_n}, \tag{1.1} \]

for any x ∈ Cⁿ, ‖x‖ = 1, where A* denotes the conjugate transpose of the matrix A. A matrix A ∈ M_n is Hermitian if A = A*. An equivalent form of this result is incorporated in

\[ 0 \le x^*Ax\,x^*A^{-1}x - 1 \le \frac{(\lambda_n-\lambda_1)^2}{4\lambda_1\lambda_n}, \tag{1.2} \]

for any x ∈ Cⁿ, ‖x‖ = 1. Attributed to Kantorovich, the inequality has built up a considerable literature, which typically comprises generalizations; see [3–5] for matrix versions. Operator versions are developed in [6, 7]. Multivariate versions have been useful in statistics to assess the robustness of least squares; see [8, 9] and the references therein. Due to the important applications of the original Kantorovich inequality for matrices [10] in statistics [8, 11, 12] and numerical analysis [13, 14], any new inequality of this type will have a flow of consequences in the areas of application. Motivated by the interest in both pure and applied mathematics outlined above, we establish in this paper some improvements of Kantorovich inequalities. The classical Kantorovich-type inequalities are modified to apply not only to positive definite but also to invertible Hermitian matrices. As natural tools in deriving the new results, the recent Grüss-type inequalities for vectors in inner product spaces in [6, 15–19] are utilized. To simplify the proofs, we first introduce some lemmas.
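The classical bounds (1.1) and (1.2) are easy to check numerically. The following NumPy sketch (an illustration, not part of the original paper; the spectrum and the test vector are arbitrary choices) builds a random positive definite Hermitian matrix with a known spectrum and verifies both bounds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random positive definite Hermitian matrix with a known (arbitrary) spectrum.
n = 5
eigs = np.sort(rng.uniform(0.5, 4.0, size=n))      # 0 < lam_1 <= ... <= lam_n
Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Q, _ = np.linalg.qr(Z)                             # unitary Q
A = Q @ np.diag(eigs) @ Q.conj().T                 # Hermitian with eigenvalues `eigs`

# Random unit vector x.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
x /= np.linalg.norm(x)

prod = (x.conj() @ A @ x).real * (x.conj() @ np.linalg.inv(A) @ x).real
upper = (eigs[0] + eigs[-1]) ** 2 / (4 * eigs[0] * eigs[-1])

assert 1 - 1e-10 <= prod <= upper + 1e-10                                        # (1.1)
assert prod - 1 <= (eigs[-1] - eigs[0]) ** 2 / (4 * eigs[0] * eigs[-1]) + 1e-10  # (1.2)
```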

2. Lemmas

Let B be a Hermitian matrix with real eigenvalues μ_1 ≤ μ_2 ≤ ··· ≤ μ_n. If A − B is positive semidefinite, we write

\[ A \ge B, \tag{2.1} \]

that is, λ_i ≥ μ_i, i = 1, 2, …, n. On Cⁿ, we have the standard inner product defined by ⟨x, y⟩ = ∑_{i=1}^{n} x_i ȳ_i, where x = (x_1, …, x_n)* ∈ Cⁿ and y = (y_1, …, y_n)* ∈ Cⁿ.

Lemma 2.1. Let a, b, c, and d be real numbers; then one has the following inequality:
\[ \left(a^2 - b^2\right)\left(c^2 - d^2\right) \le (ac - bd)^2. \tag{2.2} \]

Lemma 2.2. Let A and B be Hermitian matrices. If AB = BA, then

\[ AB \le \frac{(A + B)^2}{4}. \tag{2.3} \]

Lemma 2.3. Let A ≥ 0 and B ≥ 0. If AB = BA, then

\[ AB \ge 0. \tag{2.4} \]
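Lemmas 2.2 and 2.3 can be sanity-checked numerically on a commuting pair, for example two polynomials in the same symmetric matrix. The NumPy sketch below (arbitrary test matrices, not part of the original argument) does so; note that (A + B)²/4 − AB = (A − B)²/4 ≥ 0 for commuting Hermitian A, B:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two commuting Hermitian matrices: polynomials in the same symmetric M.
M = rng.normal(size=(4, 4))
M = (M + M.T) / 2
A = M @ M + 2 * np.eye(4)      # A = M^2 + 2I >= 0
B = M @ M @ M @ M              # B = M^4 >= 0, commutes with A

def is_psd(X, tol=1e-9):
    """Check positive semidefiniteness of a (numerically) Hermitian matrix."""
    return bool(np.all(np.linalg.eigvalsh((X + X.conj().T) / 2) >= -tol))

assert np.allclose(A @ B, B @ A)                  # AB = BA

# Lemma 2.2: AB <= (A + B)^2 / 4, i.e. (A + B)^2 / 4 - AB >= 0.
assert is_psd((A + B) @ (A + B) / 4 - A @ B)

# Lemma 2.3: A >= 0, B >= 0, AB = BA  =>  AB >= 0.
assert is_psd(A) and is_psd(B) and is_psd(A @ B)
```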

3. Some Results

The following lemmas can be obtained from [16–19] by replacing the Hilbert space (H; ⟨·, ·⟩) with the inner product space Cⁿ, so we omit the details.

Lemma 3.1. Let u, v, and e be vectors in Cⁿ with ‖e‖ = 1. If α, β, δ, and γ are real or complex numbers such that
\[ \operatorname{Re}\langle \beta e - u, u - \alpha e\rangle \ge 0, \qquad \operatorname{Re}\langle \delta e - v, v - \gamma e\rangle \ge 0, \tag{3.1} \]
then

\[ |\langle u, v\rangle - \langle u, e\rangle\langle e, v\rangle| \le \frac{1}{4}|\beta - \alpha||\delta - \gamma| - \left[\operatorname{Re}\langle \beta e - u, u - \alpha e\rangle \operatorname{Re}\langle \delta e - v, v - \gamma e\rangle\right]^{1/2}. \tag{3.2} \]

Lemma 3.2. With the assumptions in Lemma 3.1, one has

\[ |\langle u, v\rangle - \langle u, e\rangle\langle e, v\rangle| \le \frac{1}{4}|\beta - \alpha||\gamma - \delta| - \left|\langle u, e\rangle - \frac{\alpha + \beta}{2}\right|\left|\langle v, e\rangle - \frac{\gamma + \delta}{2}\right|. \tag{3.3} \]

Lemma 3.3. With the assumptions in Lemma 3.1, if Re(βᾱ) > 0 and Re(δγ̄) > 0, one has

\[ |\langle u, v\rangle - \langle u, e\rangle\langle e, v\rangle| \le \frac{|\beta - \alpha||\delta - \gamma|}{4\left[\operatorname{Re}(\beta\bar{\alpha})\operatorname{Re}(\delta\bar{\gamma})\right]^{1/2}} |\langle u, e\rangle\langle e, v\rangle|. \tag{3.4} \]

Lemma 3.4. With the assumptions in Lemma 3.3, one has

\[ |\langle u, v\rangle - \langle u, e\rangle\langle e, v\rangle| \le \left[|\alpha| + |\beta| - 2\left(\operatorname{Re}(\beta\bar{\alpha})\right)^{1/2}\right]^{1/2} \left[|\delta| + |\gamma| - 2\left(\operatorname{Re}(\delta\bar{\gamma})\right)^{1/2}\right]^{1/2} |\langle u, e\rangle\langle e, v\rangle|^{1/2}. \tag{3.5} \]

4. New Kantorovich Inequalities for Hermitian Matrices

For a Hermitian matrix A, as in [6], we define the following transform:

\[ C(A) = (\lambda_n I - A)(A - \lambda_1 I). \tag{4.1} \]

When A is invertible, if λ_1 λ_n > 0, then
\[ C\left(A^{-1}\right) = \left(\frac{1}{\lambda_1} I - A^{-1}\right)\left(A^{-1} - \frac{1}{\lambda_n} I\right). \tag{4.2} \]

Otherwise, if λ_1 λ_n < 0, then
\[ C\left(A^{-1}\right) = \left(\frac{1}{\lambda_{k+1}} I - A^{-1}\right)\left(A^{-1} - \frac{1}{\lambda_k} I\right), \tag{4.3} \]
where

\[ \lambda_1 \le \cdots \le \lambda_k < 0 < \lambda_{k+1} \le \cdots \le \lambda_n. \tag{4.4} \]

From Lemma 2.3, we can conclude that C(A) ≥ 0 and C(A⁻¹) ≥ 0.
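The transform C(·) is easy to verify numerically. The NumPy sketch below (an illustration; the mixed-sign spectrum is an arbitrary choice with λ_1λ_n < 0, so k = 2 here) checks that C(A) ≥ 0 and C(A⁻¹) ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invertible Hermitian matrix with eigenvalues of both signs: lam_1*lam_n < 0, k = 2.
eigs = np.array([-3.0, -1.0, 0.5, 2.0])
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
A = Q @ np.diag(eigs) @ Q.T
A_inv = np.linalg.inv(A)
I = np.eye(4)

def is_psd(X, tol=1e-10):
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) >= -tol))

# (4.1): C(A) = (lam_n I - A)(A - lam_1 I) >= 0
C_A = (eigs[-1] * I - A) @ (A - eigs[0] * I)
assert is_psd(C_A)

# (4.3): the mixed-sign case uses 1/lam_{k+1} and 1/lam_k with lam_k < 0 < lam_{k+1}.
lam_k, lam_k1 = eigs[1], eigs[2]
C_Ainv = (I / lam_k1 - A_inv) @ (A_inv - I / lam_k)
assert is_psd(C_Ainv)
```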

For two Hermitian matrices A and B, and x ∈ Cⁿ with ‖x‖ = 1, we define the following functional:

\[ G(A, B; x) = \langle Ax, Bx\rangle - \langle Ax, x\rangle\langle x, Bx\rangle. \tag{4.5} \]

When A = B, we denote

\[ G(A; x) = \|Ax\|^2 - \langle Ax, x\rangle^2, \tag{4.6} \]

for x ∈ Cⁿ, ‖x‖ = 1.

Lemma 4.1. With the notations above, and for x ∈ Cⁿ, ‖x‖ = 1, one has

\[ 0 \le \langle C(A)x, x\rangle \le \frac{(\lambda_n - \lambda_1)^2}{4}, \tag{4.7} \]

and, if λ_n λ_1 > 0,

\[ 0 \le \left\langle C\left(A^{-1}\right)x, x\right\rangle \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n^2\lambda_1^2}, \tag{4.8} \]

while, if λ_n λ_1 < 0,

\[ 0 \le \left\langle C\left(A^{-1}\right)x, x\right\rangle \le \frac{(\lambda_{k+1} - \lambda_k)^2}{4\lambda_{k+1}^2\lambda_k^2}. \tag{4.9} \]

Proof. From C(A) ≥ 0, it follows that

\[ \langle C(A)x, x\rangle \ge 0. \tag{4.10} \]

Meanwhile, from Lemma 2.2, we get

\[ C(A) = (\lambda_n I - A)(A - \lambda_1 I) \le \frac{(\lambda_n - \lambda_1)^2}{4} I. \tag{4.11} \]

Then ⟨C(A)x, x⟩ ≤ (λ_n − λ_1)²/4 is straightforward. The proof for C(A⁻¹) is similar.
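A quick numerical check of the bounds (4.7) and (4.8) can be sketched in NumPy (the positive spectrum below is an arbitrary choice, not part of the original proof):

```python
import numpy as np

rng = np.random.default_rng(3)

# Arbitrary positive spectrum, so lam_1 * lam_n > 0.
eigs = np.array([1.0, 2.0, 3.0, 5.0])
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
A = Q @ np.diag(eigs) @ Q.T
A_inv = np.linalg.inv(A)
I = np.eye(4)

x = rng.normal(size=4)
x /= np.linalg.norm(x)

# (4.7): 0 <= <C(A)x, x> <= (lam_n - lam_1)^2 / 4
C_A = (eigs[-1] * I - A) @ (A - eigs[0] * I)
val = x @ C_A @ x
assert -1e-12 <= val <= (eigs[-1] - eigs[0]) ** 2 / 4 + 1e-12

# (4.8): 0 <= <C(A^{-1})x, x> <= (lam_n - lam_1)^2 / (4 lam_n^2 lam_1^2)
C_Ainv = (I / eigs[0] - A_inv) @ (A_inv - I / eigs[-1])
val_inv = x @ C_Ainv @ x
bound = (eigs[-1] - eigs[0]) ** 2 / (4 * eigs[-1] ** 2 * eigs[0] ** 2)
assert -1e-12 <= val_inv <= bound + 1e-12
```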

Lemma 4.2. With the notations above, and for x ∈ Cⁿ, ‖x‖ = 1, one has

\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 \le G(A; x)\,G\left(A^{-1}; x\right). \tag{4.12} \]

Proof. We have

\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 = \left|\left\langle \left(x^*A^{-1}x\,I - A^{-1}\right)x, \left(x^*Ax\,I - A\right)x\right\rangle\right|^2 \le \left\|\left(x^*A^{-1}x\,I - A^{-1}\right)x\right\|^2 \left\|\left(x^*Ax\,I - A\right)x\right\|^2, \tag{4.13} \]

while

\[
\begin{aligned}
\left\|\left(x^*Ax\,I - A\right)x\right\|^2 &= x^*\left(\left(x^*Ax\right)^2 I - 2\,x^*Ax\,A + A^2\right)x \\
&= x^*A^2x - \left(x^*Ax\right)^2 \\
&= \|Ax\|^2 - \langle Ax, x\rangle^2 \\
&= G(A; x).
\end{aligned} \tag{4.14}
\]

Similarly, we can get ‖(x*A⁻¹x I − A⁻¹)x‖² = G(A⁻¹; x), which completes the proof.
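Inequality (4.12) can be verified numerically. The NumPy sketch below (arbitrary test data, not part of the original argument) implements G(·; x) from (4.6) directly:

```python
import numpy as np

rng = np.random.default_rng(5)

Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
A = Q @ np.diag([0.5, 1.0, 2.0, 3.0]) @ Q.T   # positive definite, hence invertible
A_inv = np.linalg.inv(A)

x = rng.normal(size=4)
x /= np.linalg.norm(x)

def G(M, x):
    """G(M; x) = ||Mx||^2 - <Mx, x>^2, as in (4.6)."""
    Mx = M @ x
    return Mx @ Mx - (Mx @ x) ** 2

# (4.12): (x*Ax x*A^{-1}x - 1)^2 <= G(A; x) G(A^{-1}; x)
lhs = ((x @ A @ x) * (x @ A_inv @ x) - 1) ** 2
assert lhs <= G(A, x) * G(A_inv, x) + 1e-12
```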

Theorem 4.3. Let A and B be two Hermitian matrices, with C(A) ≥ 0 and C(B) ≥ 0 defined as above. Then

\[
\begin{aligned}
|G(A, B; x)| &\le \frac{1}{4}(\lambda_n - \lambda_1)(\mu_n - \mu_1) - \left[\langle C(A)x, x\rangle \langle C(B)x, x\rangle\right]^{1/2}, \\
|G(A, B; x)| &\le \frac{1}{4}(\lambda_n - \lambda_1)(\mu_n - \mu_1) - \left|\left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle\right| \left|\left\langle \left(B - \frac{\mu_1 + \mu_n}{2} I\right)x, x\right\rangle\right|,
\end{aligned} \tag{4.15}
\]
for any x ∈ Cⁿ, ‖x‖ = 1. If λ_1 λ_n > 0 and μ_1 μ_n > 0, then

\[
\begin{aligned}
|G(A, B; x)| &\le \frac{(\lambda_n - \lambda_1)(\mu_n - \mu_1)}{4\left(\lambda_n\lambda_1\mu_n\mu_1\right)^{1/2}} |\langle Ax, x\rangle\langle Bx, x\rangle|, \\
|G(A, B; x)| &\le \left[|\lambda_1| + |\lambda_n| - 2(\lambda_1\lambda_n)^{1/2}\right]^{1/2} \left[|\mu_1| + |\mu_n| - 2(\mu_1\mu_n)^{1/2}\right]^{1/2} |\langle Ax, x\rangle\langle Bx, x\rangle|^{1/2},
\end{aligned} \tag{4.16}
\]

for any x ∈ Cⁿ, ‖x‖ = 1.

Proof. The proof follows from Lemmas 3.1, 3.2, 3.3, and 3.4 on choosing u = Ax, v = Bx, e = x, β = λ_n, α = λ_1, δ = μ_n, and γ = μ_1, for x ∈ Cⁿ, ‖x‖ = 1, respectively.

Corollary 4.4. Let A be a Hermitian matrix, with C(A) ≥ 0 defined as above. Then

\[ |G(A; x)| \le \frac{1}{4}(\lambda_n - \lambda_1)^2 - \langle C(A)x, x\rangle, \tag{4.17} \]
\[ |G(A; x)| \le \frac{1}{4}(\lambda_n - \lambda_1)^2 - \left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle^2, \tag{4.18} \]

for any x ∈ Cⁿ, ‖x‖ = 1. If λ_1 λ_n > 0, then

\[ |G(A; x)| \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} |\langle Ax, x\rangle|^2, \tag{4.19} \]
\[ |G(A; x)| \le \left(|\lambda_1| + |\lambda_n| - 2(\lambda_1\lambda_n)^{1/2}\right) |\langle Ax, x\rangle|, \tag{4.20} \]

for any x ∈ Cⁿ, ‖x‖ = 1.

Proof. The proof follows from Theorem 4.3 on choosing A = B.

Corollary 4.5. Let A be a Hermitian matrix, with C(A⁻¹) ≥ 0 defined as above. Then one has the following. If λ_1 λ_n > 0, then

\[ \left|G\left(A^{-1}; x\right)\right| \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n^2\lambda_1^2} - \left\langle C\left(A^{-1}\right)x, x\right\rangle, \tag{4.21} \]
\[ \left|G\left(A^{-1}; x\right)\right| \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n^2\lambda_1^2} - \left\langle \left(A^{-1} - \frac{\lambda_1 + \lambda_n}{2\lambda_n\lambda_1} I\right)x, x\right\rangle^2, \tag{4.22} \]

\[ \left|G\left(A^{-1}; x\right)\right| \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} \left|\left\langle A^{-1}x, x\right\rangle\right|^2, \tag{4.23} \]
\[ \left|G\left(A^{-1}; x\right)\right| \le \left(\frac{|\lambda_1 + \lambda_n|}{\lambda_1\lambda_n} - \frac{2}{(\lambda_1\lambda_n)^{1/2}}\right) \left|\left\langle A^{-1}x, x\right\rangle\right|, \tag{4.24} \]

for any x ∈ Cⁿ, ‖x‖ = 1. If λ_1 λ_n < 0, then

\[ \left|G\left(A^{-1}; x\right)\right| \le \frac{(\lambda_{k+1} - \lambda_k)^2}{4\lambda_k^2\lambda_{k+1}^2} - \left\langle C\left(A^{-1}\right)x, x\right\rangle, \tag{4.25} \]

where C(A⁻¹) = ((1/λ_{k+1})I − A⁻¹)(A⁻¹ − (1/λ_k)I), and

\[ \left|G\left(A^{-1}; x\right)\right| \le \frac{(\lambda_{k+1} - \lambda_k)^2}{4\lambda_k^2\lambda_{k+1}^2} - \left\langle \left(A^{-1} - \frac{\lambda_k + \lambda_{k+1}}{2\lambda_k\lambda_{k+1}} I\right)x, x\right\rangle^2, \tag{4.26} \]

for any x ∈ Cⁿ, ‖x‖ = 1.

Proof. The proof follows from Corollary 4.4 by replacing A with A⁻¹.

Theorem 4.6. Let A be an invertible n × n Hermitian matrix with real eigenvalues λ_1 ≤ λ_2 ≤ ··· ≤ λ_n. Then one has the following. If λ_1 λ_n > 0, then

\[ x^*Ax\,x^*A^{-1}x - 1 \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} - \left[\langle C(A)x, x\rangle \left\langle C\left(A^{-1}\right)x, x\right\rangle\right]^{1/2}, \tag{4.27} \]
\[ x^*Ax\,x^*A^{-1}x - 1 \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} - \left|\left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle \left\langle \left(A^{-1} - \frac{\lambda_1 + \lambda_n}{2\lambda_n\lambda_1} I\right)x, x\right\rangle\right|, \tag{4.28} \]

\[ x^*Ax\,x^*A^{-1}x - 1 \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} \left|\langle Ax, x\rangle \left\langle A^{-1}x, x\right\rangle\right|, \tag{4.29} \]
\[ x^*Ax\,x^*A^{-1}x - 1 \le \frac{\left(|\lambda_n|^{1/2} - |\lambda_1|^{1/2}\right)^2}{(\lambda_n\lambda_1)^{1/2}} \left|\langle Ax, x\rangle \left\langle A^{-1}x, x\right\rangle\right|^{1/2}, \tag{4.30} \]

for any x ∈ Cⁿ, ‖x‖ = 1. If λ_1 λ_n < 0, then

\[ x^*Ax\,x^*A^{-1}x - 1 \le \frac{(\lambda_n - \lambda_1)(\lambda_{k+1} - \lambda_k)}{4|\lambda_k\lambda_{k+1}|} - \left[\langle C(A)x, x\rangle \left\langle C\left(A^{-1}\right)x, x\right\rangle\right]^{1/2}, \tag{4.31} \]

where C(A⁻¹) = ((1/λ_{k+1})I − A⁻¹)(A⁻¹ − (1/λ_k)I), and

\[ x^*Ax\,x^*A^{-1}x - 1 \le \frac{(\lambda_n - \lambda_1)(\lambda_{k+1} - \lambda_k)}{4|\lambda_k\lambda_{k+1}|} - \left|\left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle \left\langle \left(A^{-1} - \frac{\lambda_k + \lambda_{k+1}}{2\lambda_k\lambda_{k+1}} I\right)x, x\right\rangle\right|. \tag{4.32} \]

Proof. We start from Lemma 4.2:

\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 \le G(A; x)\,G\left(A^{-1}; x\right). \tag{4.33} \]

When λ_1 λ_n > 0, from (4.17) and (4.21) we get
\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 \le \left[\frac{1}{4}(\lambda_n - \lambda_1)^2 - \langle C(A)x, x\rangle\right] \left[\frac{(\lambda_n - \lambda_1)^2}{4\lambda_n^2\lambda_1^2} - \left\langle C\left(A^{-1}\right)x, x\right\rangle\right]. \tag{4.34} \]

From (a² − b²)(c² − d²) ≤ (ac − bd)², we have
\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 \le \left(\frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} - \left[\langle C(A)x, x\rangle \left\langle C\left(A^{-1}\right)x, x\right\rangle\right]^{1/2}\right)^2; \tag{4.35} \]
then conclusion (4.27) holds. Similarly, from (4.18) and (4.22), we get
\[
\begin{aligned}
\left(x^*Ax\,x^*A^{-1}x - 1\right)^2 &\le \left[\frac{1}{4}(\lambda_n - \lambda_1)^2 - \left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle^2\right] \\
&\quad\times \left[\frac{(\lambda_n - \lambda_1)^2}{4\lambda_n^2\lambda_1^2} - \left\langle \left(A^{-1} - \frac{\lambda_1 + \lambda_n}{2\lambda_n\lambda_1} I\right)x, x\right\rangle^2\right] \\
&\le \left(\frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} - \left|\left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle \left\langle \left(A^{-1} - \frac{\lambda_1 + \lambda_n}{2\lambda_n\lambda_1} I\right)x, x\right\rangle\right|\right)^2; \tag{4.36}
\end{aligned}
\]
then conclusion (4.28) holds. From (4.19) and (4.23), we get
\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 \le \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} |\langle Ax, x\rangle|^2 \cdot \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} \left|\left\langle A^{-1}x, x\right\rangle\right|^2; \tag{4.37} \]
then conclusion (4.29) holds. From (4.20) and (4.24), we get
\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 \le \left(|\lambda_1| + |\lambda_n| - 2(\lambda_1\lambda_n)^{1/2}\right) |\langle Ax, x\rangle| \left(\frac{|\lambda_1 + \lambda_n|}{\lambda_1\lambda_n} - \frac{2}{(\lambda_1\lambda_n)^{1/2}}\right) \left|\left\langle A^{-1}x, x\right\rangle\right|; \tag{4.38} \]
then conclusion (4.30) holds.

When λ_1 λ_n < 0, from (4.17) and (4.25) we get
\[ \left(x^*Ax\,x^*A^{-1}x - 1\right)^2 \le \left[\frac{1}{4}(\lambda_n - \lambda_1)^2 - \langle C(A)x, x\rangle\right] \left[\frac{(\lambda_{k+1} - \lambda_k)^2}{4\lambda_k^2\lambda_{k+1}^2} - \left\langle C\left(A^{-1}\right)x, x\right\rangle\right]; \tag{4.39} \]
then conclusion (4.31) holds.

From (4.18) and (4.26), we get
\[
\begin{aligned}
\left(x^*Ax\,x^*A^{-1}x - 1\right)^2 &\le \left[\frac{1}{4}(\lambda_n - \lambda_1)^2 - \left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle^2\right] \\
&\quad\times \left[\frac{(\lambda_{k+1} - \lambda_k)^2}{4\lambda_k^2\lambda_{k+1}^2} - \left\langle \left(A^{-1} - \frac{\lambda_k + \lambda_{k+1}}{2\lambda_k\lambda_{k+1}} I\right)x, x\right\rangle^2\right] \\
&\le \left(\frac{(\lambda_n - \lambda_1)(\lambda_{k+1} - \lambda_k)}{4|\lambda_k\lambda_{k+1}|} - \left|\left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle \left\langle \left(A^{-1} - \frac{\lambda_k + \lambda_{k+1}}{2\lambda_k\lambda_{k+1}} I\right)x, x\right\rangle\right|\right)^2; \tag{4.40}
\end{aligned}
\]
then conclusion (4.32) holds.
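The refinement in (4.27) can be observed numerically: the subtracted term [⟨C(A)x, x⟩⟨C(A⁻¹)x, x⟩]^{1/2} tightens the classical Kantorovich bound (1.2). A NumPy sketch (arbitrary spectrum and vector, not part of the original paper):

```python
import numpy as np

rng = np.random.default_rng(4)

eigs = np.array([1.0, 2.0, 4.0])               # lam_1 * lam_n > 0
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = Q @ np.diag(eigs) @ Q.T
A_inv = np.linalg.inv(A)
I = np.eye(3)

x = rng.normal(size=3)
x /= np.linalg.norm(x)

lhs = (x @ A @ x) * (x @ A_inv @ x) - 1

C_A = (eigs[-1] * I - A) @ (A - eigs[0] * I)
C_Ainv = (I / eigs[0] - A_inv) @ (A_inv - I / eigs[-1])

# Classical Kantorovich bound (1.2) versus the refined bound (4.27).
classical = (eigs[-1] - eigs[0]) ** 2 / (4 * eigs[-1] * eigs[0])
refined = classical - np.sqrt((x @ C_A @ x) * (x @ C_Ainv @ x))

assert lhs <= refined + 1e-12
assert refined <= classical + 1e-12
```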

Corollary 4.7. With the notations above, for any x ∈ Cⁿ, ‖x‖ = 1, one lets

\[
\begin{aligned}
G_1(A; x) &= \frac{1}{4}(\lambda_n - \lambda_1)^2 - \langle C(A)x, x\rangle, \\
G_2(A; x) &= \frac{1}{4}(\lambda_n - \lambda_1)^2 - \left\langle \left(A - \frac{\lambda_1 + \lambda_n}{2} I\right)x, x\right\rangle^2.
\end{aligned} \tag{4.41}
\]

If λ_1 λ_n > 0, one lets

\[
\begin{aligned}
G_3(A; x) &= \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} |\langle Ax, x\rangle|^2, \\
G_4(A; x) &= \left(|\lambda_1| + |\lambda_n| - 2(\lambda_1\lambda_n)^{1/2}\right) |\langle Ax, x\rangle|.
\end{aligned} \tag{4.42}
\]

Also, if λ_1 λ_n > 0, one lets

\[
\begin{aligned}
G_1\left(A^{-1}; x\right) &= \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n^2\lambda_1^2} - \left\langle C\left(A^{-1}\right)x, x\right\rangle, \\
G_2\left(A^{-1}; x\right) &= \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n^2\lambda_1^2} - \left\langle \left(A^{-1} - \frac{\lambda_1 + \lambda_n}{2\lambda_n\lambda_1} I\right)x, x\right\rangle^2, \\
G_3\left(A^{-1}; x\right) &= \frac{(\lambda_n - \lambda_1)^2}{4\lambda_n\lambda_1} \left|\left\langle A^{-1}x, x\right\rangle\right|^2, \\
G_4\left(A^{-1}; x\right) &= \left(\frac{|\lambda_1 + \lambda_n|}{\lambda_1\lambda_n} - \frac{2}{(\lambda_1\lambda_n)^{1/2}}\right) \left|\left\langle A^{-1}x, x\right\rangle\right|.
\end{aligned} \tag{4.43}
\]

If λ_1 λ_n < 0, one lets

\[ G_5\left(A^{-1}; x\right) = \frac{(\lambda_{k+1} - \lambda_k)^2}{4\lambda_k^2\lambda_{k+1}^2} - \left\langle C\left(A^{-1}\right)x, x\right\rangle, \tag{4.44} \]

where C(A⁻¹) = ((1/λ_{k+1})I − A⁻¹)(A⁻¹ − (1/λ_k)I), and
\[ G_6\left(A^{-1}; x\right) = \frac{(\lambda_{k+1} - \lambda_k)^2}{4\lambda_k^2\lambda_{k+1}^2} - \left\langle \left(A^{-1} - \frac{\lambda_k + \lambda_{k+1}}{2\lambda_k\lambda_{k+1}} I\right)x, x\right\rangle^2. \tag{4.45} \]

Then, one has the following. If λ_1 λ_n > 0,
\[ x^*Ax\,x^*A^{-1}x - 1 \le \left[G^*(A; x)\,G^*\left(A^{-1}; x\right)\right]^{1/2}, \tag{4.46} \]
where

\[
\begin{aligned}
G^*(A; x) &= \min\{G_1(A; x), G_2(A; x), G_3(A; x), G_4(A; x)\}, \\
G^*\left(A^{-1}; x\right) &= \min\left\{G_1\left(A^{-1}; x\right), G_2\left(A^{-1}; x\right), G_3\left(A^{-1}; x\right), G_4\left(A^{-1}; x\right)\right\}.
\end{aligned} \tag{4.47}
\]

If λ_1 λ_n < 0,
\[ x^*Ax\,x^*A^{-1}x - 1 \le \left[G^\circ(A; x)\,G^\bullet\left(A^{-1}; x\right)\right]^{1/2}, \tag{4.48} \]
where

\[
\begin{aligned}
G^\circ(A; x) &= \min\{G_1(A; x), G_2(A; x)\}, \\
G^\bullet\left(A^{-1}; x\right) &= \min\left\{G_5\left(A^{-1}; x\right), G_6\left(A^{-1}; x\right)\right\}.
\end{aligned} \tag{4.49}
\]

Proof. The proof follows from the fact that the conclusions in Corollaries 4.4 and 4.5 hold independently of one another, so the tightest bound on each factor may be chosen separately.
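The combined bound (4.46) can be checked numerically by computing all four bounds G_1, …, G_4 for both A and A⁻¹ and taking the minima. A NumPy sketch (arbitrary positive spectrum; not part of the original paper):

```python
import numpy as np

rng = np.random.default_rng(6)

eigs = np.array([1.0, 3.0, 6.0])               # lam_1 * lam_n > 0
lam1, lamn = eigs[0], eigs[-1]
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = Q @ np.diag(eigs) @ Q.T
A_inv = np.linalg.inv(A)
I = np.eye(3)

x = rng.normal(size=3)
x /= np.linalg.norm(x)

def quad(M):
    return x @ M @ x

# Bounds G_1..G_4 for A, as in (4.41)-(4.42).
G1 = (lamn - lam1) ** 2 / 4 - quad((lamn * I - A) @ (A - lam1 * I))
G2 = (lamn - lam1) ** 2 / 4 - quad(A - (lam1 + lamn) / 2 * I) ** 2
G3 = (lamn - lam1) ** 2 / (4 * lamn * lam1) * quad(A) ** 2
G4 = (abs(lam1) + abs(lamn) - 2 * np.sqrt(lam1 * lamn)) * abs(quad(A))
G_star_A = min(G1, G2, G3, G4)

# The same four bounds for A^{-1}, whose eigenvalues are 1/lam_n <= ... <= 1/lam_1.
m1, mn = 1 / lamn, 1 / lam1
G1i = (mn - m1) ** 2 / 4 - quad((mn * I - A_inv) @ (A_inv - m1 * I))
G2i = (mn - m1) ** 2 / 4 - quad(A_inv - (m1 + mn) / 2 * I) ** 2
G3i = (mn - m1) ** 2 / (4 * mn * m1) * quad(A_inv) ** 2
G4i = (abs(m1) + abs(mn) - 2 * np.sqrt(m1 * mn)) * abs(quad(A_inv))
G_star_Ainv = min(G1i, G2i, G3i, G4i)

# (4.46): x*Ax x*A^{-1}x - 1 <= [G*(A;x) G*(A^{-1};x)]^{1/2}
lhs = quad(A) * quad(A_inv) - 1
assert lhs <= np.sqrt(G_star_A * G_star_Ainv) + 1e-12
```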

Remark 4.8. It is easy to see that if λ_1 > 0 and λ_n > 0, our result coincides with the operator-version inequality in [6]. So we conclude that our results give an improvement of the Kantorovich inequality of [6] that applies to all invertible Hermitian matrices.

5. Conclusion

In this paper, we introduce some new Kantorovich-type inequalities for invertible Hermitian matrices. Inequalities (4.27) and (4.31) are the same as those of [4], but our proof is simpler. In Theorem 4.6, if λ_1 > 0 and λ_n > 0, the results are similar to the well-known Kantorovich-type inequalities for operators in [6]. Moreover, for any invertible Hermitian matrix, a similar inequality holds.

Acknowledgments

The author is grateful to the associate editor and two anonymous referees, whose comments have helped to improve the final version of the present work. This work was supported by the Natural Science Foundation of Chongqing Municipal Education Commission (no. KJ091104).

References

[1] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
[2] A. S. Householder, The Theory of Matrices in Numerical Analysis, Blaisdell, New York, NY, USA, 1964.
[3] S. Liu and H. Neudecker, “Several matrix Kantorovich-type inequalities,” Journal of Mathematical Analysis and Applications, vol. 197, no. 1, pp. 23–26, 1996.
[4] A. W. Marshall and I. Olkin, “Matrix versions of the Cauchy and Kantorovich inequalities,” Aequationes Mathematicae, vol. 40, no. 1, pp. 89–93, 1990.
[5] J. K. Baksalary and S. Puntanen, “Generalized matrix versions of the Cauchy-Schwarz and Kantorovich inequalities,” Aequationes Mathematicae, vol. 41, no. 1, pp. 103–110, 1991.
[6] S. S. Dragomir, “New inequalities of the Kantorovich type for bounded linear operators in Hilbert spaces,” Linear Algebra and its Applications, vol. 428, no. 11-12, pp. 2750–2760, 2008.
[7] P. G. Spain, “Operator versions of the Kantorovich inequality,” Proceedings of the American Mathematical Society, vol. 124, no. 9, pp. 2813–2819, 1996.
[8] S. Liu and H. Neudecker, “Kantorovich inequalities and efficiency comparisons for several classes of estimators in linear models,” Statistica Neerlandica, vol. 51, no. 3, pp. 345–355, 1997.
[9] P. Bloomfield and G. S. Watson, “The inefficiency of least squares,” Biometrika, vol. 62, pp. 121–128, 1975.
[10] L. V. Kantorovič, “Functional analysis and applied mathematics,” Uspekhi Matematicheskikh Nauk, vol. 3, no. 6(28), pp. 89–185, 1948 (Russian).
[11] C. G. Khatri and C. R. Rao, “Some extensions of the Kantorovich inequality and statistical applications,” Journal of Multivariate Analysis, vol. 11, no. 4, pp. 498–505, 1981.
[12] C. T. Lin, “Extrema of quadratic forms and statistical applications,” Communications in Statistics A, vol. 13, no. 12, pp. 1517–1520, 1984.
[13] A. Galántai, “A study of Auchmuty's error estimate,” Computers & Mathematics with Applications, vol. 42, no. 8-9, pp. 1093–1102, 2001.
[14] P. D. Robinson and A. J. Wathen, “Variational bounds on the entries of the inverse of a matrix,” IMA Journal of Numerical Analysis, vol. 12, no. 4, pp. 463–486, 1992.
[15] S. S. Dragomir, “A generalization of Grüss's inequality in inner product spaces and applications,” Journal of Mathematical Analysis and Applications, vol. 237, no. 1, pp. 74–82, 1999.
[16] S. S. Dragomir, “Some Grüss type inequalities in inner product spaces,” Journal of Inequalities in Pure and Applied Mathematics, vol. 4, no. 2, article 42, 2003.
[17] S. S. Dragomir, “On Bessel and Grüss inequalities for orthonormal families in inner product spaces,” Bulletin of the Australian Mathematical Society, vol. 69, no. 2, pp. 327–340, 2004.
[18] S. S. Dragomir, “Reverses of Schwarz, triangle and Bessel inequalities in inner product spaces,” Journal of Inequalities in Pure and Applied Mathematics, vol. 5, no. 3, article 76, 2004.
[19] S. S. Dragomir, “Reverses of the Schwarz inequality generalising a Klamkin-McLenaghan result,” Bulletin of the Australian Mathematical Society, vol. 73, no. 1, pp. 69–78, 2006.
[20] Z. Liu, L. Lu, and K. Wang, “On several matrix Kantorovich-type inequalities,” Journal of Inequalities and Applications, vol. 2010, Article ID 571629, 5 pages, 2010.