<<

Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2014, Article ID 709356, 7 pages http://dx.doi.org/10.1155/2014/709356

Research Article

Least-Squares Solutions of the Equations $AXB + CYD = H$ and $AXB + CXD = H$ for Symmetric Arrowhead Matrices and Associated Approximation Problems

Yongxin Yuan

School of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China

Correspondence should be addressed to Yongxin Yuan; yuanyx [email protected]

Received 7 March 2014; Accepted 10 June 2014; Published 30 June 2014

Academic Editor: Shan Zhao

Copyright Β© 2014 Yongxin Yuan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The least-squares solutions of the matrix equations $AXB + CYD = H$ and $AXB + CXD = H$ for symmetric arrowhead matrices are discussed. By using the Kronecker product and the stretching function of matrices, explicit representations of the general solutions are given. Also, it is shown that the best approximation solution is unique, and an explicit expression of this solution is derived.

1. Introduction

An $n \times n$ matrix $A$ is called an arrowhead matrix if it has the following form:
$$
A = \begin{bmatrix}
a_1 & b_1 & b_2 & \cdots & b_{n-1} \\
c_1 & a_2 & 0 & \cdots & 0 \\
c_2 & 0 & a_3 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_{n-1} & 0 & 0 & \cdots & a_n
\end{bmatrix}. \tag{1}
$$

If $b_i = c_i$, $i = 1, \ldots, n-1$, then $A$ is said to be a symmetric arrowhead matrix. We denote the set of all real-valued symmetric $n \times n$ arrowhead matrices by $\mathrm{SAR}^{n \times n}$. Such matrices arise in the description of radiationless transitions in isolated molecules [1], oscillators vibrationally coupled with a Fermi liquid [2], quantum optics [3], and so forth. Numerically efficient algorithms for computing eigenvalues and eigenvectors of arrowhead matrices were discussed in [4–8]. The inverse problem of constructing a symmetric arrowhead matrix from spectral data has been investigated by Xu [9], Peng et al. [10], and Borges et al. [11]. In this paper, we further consider the least-squares solutions of the matrix equations for symmetric arrowhead matrices and associated approximation problems, which can be described as follows.

Problem 1. Given $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$, $C \in \mathbb{R}^{m \times q}$, $D \in \mathbb{R}^{q \times p}$, and $H \in \mathbb{R}^{m \times p}$, find nontrivial real-valued symmetric arrowhead matrices $X \in \mathbb{R}^{n \times n}$ and $Y \in \mathbb{R}^{q \times q}$ such that
$$
\|AXB + CYD - H\| = \min. \tag{2}
$$

Problem 2. Given real-valued symmetric arrowhead matrices $\tilde{X} \in \mathbb{R}^{n \times n}$ and $\tilde{Y} \in \mathbb{R}^{q \times q}$, find $(\hat{X}, \hat{Y}) \in \mathcal{S}_1$ such that
$$
\|\hat{X} - \tilde{X}\|^2 + \|\hat{Y} - \tilde{Y}\|^2
= \min_{(X, Y) \in \mathcal{S}_1} \left( \|X - \tilde{X}\|^2 + \|Y - \tilde{Y}\|^2 \right), \tag{3}
$$
where $\mathcal{S}_1$ is the solution set of Problem 1.

Problem 3. Given $A, C \in \mathbb{R}^{m \times n}$, $B, D \in \mathbb{R}^{n \times p}$, and $H \in \mathbb{R}^{m \times p}$, find a nontrivial real-valued symmetric arrowhead matrix $X$ such that
$$
\|AXB + CXD - H\| = \min. \tag{4}
$$
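The arrowhead structure in (1) is the set over which the problems above (and Problem 4 below) are posed: a symmetric arrowhead matrix is determined by its diagonal $a_1, \ldots, a_n$ together with the first-row entries $b_1, \ldots, b_{n-1}$. The following minimal NumPy sketch builds such a matrix and checks the pattern; the helper name `symmetric_arrowhead` is ours, not from the paper.

```python
import numpy as np

def symmetric_arrowhead(diag, first_row_tail):
    """Symmetric arrowhead matrix of definition (1) with b_i = c_i.

    diag           -- [a_1, ..., a_n]
    first_row_tail -- [b_1, ..., b_{n-1}]
    """
    a = np.asarray(diag, dtype=float)
    b = np.asarray(first_row_tail, dtype=float)
    X = np.diag(a)
    X[0, 1:] = b          # first row:    b_1, ..., b_{n-1}
    X[1:, 0] = b          # first column: c_i = b_i (symmetry)
    return X

# A 4x4 example: only the first row/column and the diagonal are nonzero.
X = symmetric_arrowhead([1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0])
assert np.allclose(X, X.T)
assert np.count_nonzero(X) <= 3 * 4 - 2   # only 2n - 1 free entries
```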

Problem 4. Given a real-valued symmetric arrowhead matrix $\tilde{X} \in \mathbb{R}^{n \times n}$, find $\hat{X} \in \mathcal{S}_3$ such that
$$
\|\hat{X} - \tilde{X}\| = \min_{X \in \mathcal{S}_3} \|X - \tilde{X}\|, \tag{5}
$$
where $\mathcal{S}_3$ is the solution set of Problem 3.

Recently, Li et al. [12] considered the least-squares solutions of the matrix equation $AXB + CYD = E$ for symmetric arrowhead matrices. By using Moore-Penrose inverses and the Kronecker product, the minimum-norm least-squares solution of the matrix equation for symmetric arrowhead matrices was provided. However, we can easily see that the method used in [12] involves complicated computations of Moore-Penrose generalized inverses of partitioned matrices, and the expression of the minimum-norm least-squares solution was not explicit. Compared with the approach proposed in [12], the method in this paper is more concise and easier to apply.

The paper is organized as follows. In Section 2, using the Kronecker product and the stretching function $\mathrm{vec}(\cdot)$ of matrices, we give an explicit representation of the solution set $\mathcal{S}_1$ of Problem 1. Furthermore, we show that Problem 2 has a unique solution and present the expression of the unique solution $(\hat{X}, \hat{Y})$ of Problem 2. In Section 3, we provide an explicit representation of the solution set $\mathcal{S}_3$ of Problem 3 and present the expression of the unique solution $\hat{X}$ of Problem 4. In Section 4, a numerical algorithm to acquire the optimal approximation solution of Problem 2 in the Frobenius norm sense is described and a numerical example is provided. Some concluding remarks are given in Section 5.

Throughout this paper, we denote the real $m \times n$ matrix space by $\mathbb{R}^{m \times n}$ and the transpose and the Moore-Penrose generalized inverse of a real matrix $A$ by $A^\top$ and $A^+$, respectively. $I_n$ represents the identity matrix of size $n$. For $A, B \in \mathbb{R}^{m \times n}$, an inner product in $\mathbb{R}^{m \times n}$ is defined by $(A, B) = \operatorname{trace}(B^\top A)$; then $\mathbb{R}^{m \times n}$ is a Hilbert space. The matrix norm $\|\cdot\|$ induced by this inner product is the Frobenius norm. Given two matrices $A = [a_{ij}] \in \mathbb{R}^{m \times n}$ and $B = [b_{ij}] \in \mathbb{R}^{p \times q}$, the Kronecker product of $A$ and $B$ is defined by $A \otimes B = [a_{ij} B] \in \mathbb{R}^{mp \times nq}$. Also, for an $m \times n$ matrix $A = [a_1, a_2, \ldots, a_n]$, where $a_i$, $i = 1, \ldots, n$, is the $i$th column vector of $A$, the stretching function $\mathrm{vec}(A)$ is defined by $\mathrm{vec}(A) = [a_1^\top, a_2^\top, \ldots, a_n^\top]^\top$.

2. The Solutions of Problems 1 and 2

To begin with, we introduce two lemmas.

Lemma 5 (see [13]). If $L \in \mathbb{R}^{m \times q}$ and $b \in \mathbb{R}^m$, then the general solution of $\|Ly - b\| = \min$ can be expressed as $y = L^+ b + (I_q - L^+ L) z$, where $z \in \mathbb{R}^q$ is an arbitrary vector.

Lemma 6 (see [14]). Let $D \in \mathbb{R}^{m \times n}$, $H \in \mathbb{R}^{n \times l}$, and $J \in \mathbb{R}^{l \times s}$. Then
$$
\mathrm{vec}(DHJ) = (J^\top \otimes D)\, \mathrm{vec}(H). \tag{6}
$$

Let $d_1 = 2n - 1$ and $d_2 = 2q - 1$. It is easily seen that $\dim(\mathrm{SAR}^{n \times n}) = d_1$ and $\dim(\mathrm{SAR}^{q \times q}) = d_2$. Define
$$
Z_{ij} =
\begin{cases}
\dfrac{\sqrt{2}}{2}\left(e_i^{(n)} (e_j^{(n)})^\top + e_j^{(n)} (e_i^{(n)})^\top\right), & i = 1;\ j = 2, \ldots, n, \\[1mm]
e_i^{(n)} (e_i^{(n)})^\top, & i = j = 1, \ldots, n,
\end{cases} \tag{7}
$$
$$
W_{kl} =
\begin{cases}
\dfrac{\sqrt{2}}{2}\left(e_k^{(q)} (e_l^{(q)})^\top + e_l^{(q)} (e_k^{(q)})^\top\right), & k = 1;\ l = 2, \ldots, q, \\[1mm]
e_k^{(q)} (e_k^{(q)})^\top, & k = l = 1, \ldots, q,
\end{cases} \tag{8}
$$
where $e_i^{(n)}$ is the $i$th column vector of the identity matrix $I_n$. It is easy to verify that $\{Z_{ij}\}$ and $\{W_{kl}\}$ form orthonormal bases of the subspaces $\mathrm{SAR}^{n \times n}$ and $\mathrm{SAR}^{q \times q}$, respectively. That is,
$$
(Z_{ij}, Z_{kl}) =
\begin{cases}
0, & i \neq k \text{ or } j \neq l, \\
1, & i = k,\ j = l,
\end{cases}
\qquad
(W_{ij}, W_{kl}) =
\begin{cases}
0, & i \neq k \text{ or } j \neq l, \\
1, & i = k,\ j = l.
\end{cases} \tag{9}
$$

Now, if $X \in \mathrm{SAR}^{n \times n}$ and $Y \in \mathrm{SAR}^{q \times q}$, then $X$ and $Y$ can be expressed as
$$
X = \sum_{i,j} \alpha_{ij} Z_{ij}, \qquad Y = \sum_{k,l} \beta_{kl} W_{kl}, \tag{10}
$$
where the real numbers $\alpha_{ij}$, $i = 1,\ j = 2, \ldots, n$; $i = j = 1, \ldots, n$, and $\beta_{kl}$, $k = 1,\ l = 2, \ldots, q$; $k = l = 1, \ldots, q$, are yet to be determined.

It follows from (10) that the relation of (2) can be equivalently written as
$$
\Bigl\| \sum_{i,j} \alpha_{ij} A Z_{ij} B + \sum_{k,l} \beta_{kl} C W_{kl} D - H \Bigr\| = \min. \tag{11}
$$
Now set
$$
\alpha = [\alpha_{11}, \ldots, \alpha_{n,n}, \alpha_{12}, \ldots, \alpha_{1,n}]^\top, \qquad
\beta = [\beta_{11}, \ldots, \beta_{q,q}, \beta_{12}, \ldots, \beta_{1,q}]^\top, \tag{12}
$$
$$
G = [\mathrm{vec}(Z_{11}), \ldots, \mathrm{vec}(Z_{n,n}), \mathrm{vec}(Z_{12}), \ldots, \mathrm{vec}(Z_{1,n})] \in \mathbb{R}^{n^2 \times d_1}, \tag{13}
$$
$$
L = [\mathrm{vec}(W_{11}), \ldots, \mathrm{vec}(W_{q,q}), \mathrm{vec}(W_{12}), \ldots, \mathrm{vec}(W_{1,q})] \in \mathbb{R}^{q^2 \times d_2}, \tag{14}
$$
$$
M = (B^\top \otimes A) G, \qquad N = (D^\top \otimes C) L, \qquad h = \mathrm{vec}(H). \tag{15}
$$
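The quantities in (7)-(15) translate directly into dense linear algebra. Below is a minimal NumPy sketch of the basis $\{Z_{ij}\}$, the matrix $G$ of (13), and $M = (B^\top \otimes A)G$ of (15); it also checks the identity of Lemma 6. The helper names (`vec`, `arrowhead_basis`, `build_G_M`) are ours and are reused in the later sketches.

```python
import numpy as np

def vec(A):
    # Column-stacking vec(.), matching vec(A) = [a_1^T, ..., a_n^T]^T.
    return A.reshape(-1, order='F')

def arrowhead_basis(n):
    """Orthonormal basis {Z_ij} of SAR^{n x n}, ordered as in (12)-(13):
    the n diagonal matrices Z_ii first, then the n-1 off-diagonal matrices Z_1j."""
    I = np.eye(n)
    basis = [np.outer(I[:, i], I[:, i]) for i in range(n)]                 # Z_ii
    basis += [(np.sqrt(2) / 2) * (np.outer(I[:, 0], I[:, j]) + np.outer(I[:, j], I[:, 0]))
              for j in range(1, n)]                                        # Z_1j
    return basis

def build_G_M(A, B, n):
    """G of (13) and M = (B^T kron A) G of (15)."""
    G = np.column_stack([vec(Z) for Z in arrowhead_basis(n)])  # n^2 x (2n-1)
    M = np.kron(B.T, A) @ G                                    # mp x (2n-1)
    return G, M

# Sanity check of Lemma 6: vec(A X B) = (B^T kron A) vec(X).
rng = np.random.default_rng(0)
A, X, B = rng.random((5, 4)), rng.random((4, 4)), rng.random((4, 3))
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```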

By Lemma 6, we see that the relation of (11) is equivalent to
$$
\|M\alpha + N\beta - h\| = \min. \tag{16}
$$
We note that
$$
\begin{aligned}
\|M\alpha + N\beta - h\|^2
&= \|M[\alpha + M^+(N\beta - h)] + E_M (N\beta - h)\|^2 \\
&= \|M[\alpha + M^+(N\beta - h)]\|^2 + \|E_M (N\beta - h)\|^2 \\
&= \|M[\alpha + M^+(N\beta - h)]\|^2
 + \|E_M N[\beta - (E_M N)^+ E_M h] - (I_{mp} - E_M N (E_M N)^+) E_M h\|^2 \\
&= \|M[\alpha + M^+(N\beta - h)]\|^2 + \|E_M N[\beta - (E_M N)^+ E_M h]\|^2
 + \|(I_{mp} - E_M N (E_M N)^+) E_M h\|^2,
\end{aligned} \tag{17}
$$
where $E_M = I_{mp} - M M^+$. It follows from Lemma 5 and (17) that $\|M\alpha + N\beta - h\| = \min$ if and only if
$$
\alpha = -M^+ N \beta + M^+ h + F_M v, \tag{18}
$$
$$
\beta = (E_M N)^+ E_M h + W u, \tag{19}
$$
where $F_M = I_{d_1} - M^+ M$, $W = I_{d_2} - (E_M N)^+ E_M N$, and $u \in \mathbb{R}^{d_2}$, $v \in \mathbb{R}^{d_1}$ are arbitrary vectors.

Substituting (19) into (18), we obtain
$$
\alpha = \tilde{\alpha} - M^+ N W u + F_M v, \tag{20}
$$
where $\tilde{\alpha} = M^+ h - M^+ N (E_M N)^+ E_M h$.

In summary of the above discussion, we have proved the following result.

Theorem 7. Suppose that $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times p}$, $C \in \mathbb{R}^{m \times q}$, $D \in \mathbb{R}^{q \times p}$, and $H \in \mathbb{R}^{m \times p}$. Let $\{Z_{ij}\}$, $\{W_{kl}\}$, $G$, $L$, $M$, $N$, $h$ be given as in (7), (8), (13), (14), and (15), respectively. Write $d_1 = 2n - 1$, $d_2 = 2q - 1$, $E_M = I_{mp} - M M^+$, $F_M = I_{d_1} - M^+ M$, $W = I_{d_2} - (E_M N)^+ E_M N$, and $\tilde{\alpha} = M^+ h - M^+ N (E_M N)^+ E_M h$. Then the solution set $\mathcal{S}_1$ of Problem 1 can be expressed as
$$
\mathcal{S}_1 = \{(X, Y) \in \mathrm{SAR}^{n \times n} \times \mathrm{SAR}^{q \times q} \mid X = K_1(\alpha \otimes I_n),\ Y = K_2(\beta \otimes I_q)\}, \tag{21}
$$
where
$$
K_1 = [Z_{11}, \ldots, Z_{n,n}, Z_{12}, \ldots, Z_{1,n}] \in \mathbb{R}^{n \times n d_1}, \tag{22}
$$
$$
K_2 = [W_{11}, \ldots, W_{q,q}, W_{12}, \ldots, W_{1,q}] \in \mathbb{R}^{q \times q d_2}, \tag{23}
$$
and $\alpha$, $\beta$ are, respectively, given by (20) and (19) with $u \in \mathbb{R}^{d_2}$ and $v \in \mathbb{R}^{d_1}$ being arbitrary vectors.

From (17), we can easily obtain the following corollary.

Corollary 8. Under the same assumptions as in Theorem 7, the matrix equation
$$
AXB + CYD = H \tag{24}
$$
has a solution if and only if
$$
E_M N (E_M N)^+ E_M h = E_M h. \tag{25}
$$
In this case, the solution set $\mathcal{S}_1$ of (24) is given by (21).

It follows from Theorem 7 that the solution set $\mathcal{S}_1$ is always nonempty. It is easy to verify that $\mathcal{S}_1$ is a closed convex subset of $\mathrm{SAR}^{n \times n} \times \mathrm{SAR}^{q \times q}$. From the best approximation theorem [15], we know there exists a unique solution $(\hat{X}, \hat{Y})$ in $\mathcal{S}_1$ such that (3) holds.

We now focus our attention on seeking the unique solution $(\hat{X}, \hat{Y})$ in $\mathcal{S}_1$. For the real-valued symmetric arrowhead matrices $\tilde{X}$ and $\tilde{Y}$, it is easily seen that $\tilde{X}$, $\tilde{Y}$ can be expressed as linear combinations of the orthonormal bases $\{Z_{ij}\}$ and $\{W_{kl}\}$; that is,
$$
\tilde{X} = \sum_{i,j} \gamma_{ij} Z_{ij}, \qquad \tilde{Y} = \sum_{k,l} \delta_{kl} W_{kl}, \tag{26}
$$
where $\gamma_{ij}$, $i = 1,\ j = 2, \ldots, n$; $i = j = 1, \ldots, n$, and $\delta_{kl}$, $k = 1,\ l = 2, \ldots, q$; $k = l = 1, \ldots, q$, are uniquely determined by the elements of $\tilde{X}$ and $\tilde{Y}$. Let
$$
\gamma = [\gamma_{11}, \ldots, \gamma_{n,n}, \gamma_{12}, \ldots, \gamma_{1,n}]^\top, \qquad
\delta = [\delta_{11}, \ldots, \delta_{q,q}, \delta_{12}, \ldots, \delta_{1,q}]^\top. \tag{27}
$$
Then, for any pair of matrices $(X, Y) \in \mathcal{S}_1$ in (21), by the relations of (9) and (26), we see that
$$
\begin{aligned}
f &= \|X - \tilde{X}\|^2 + \|Y - \tilde{Y}\|^2 \\
&= \Bigl\| \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) Z_{ij} \Bigr\|^2 + \Bigl\| \sum_{k,l} (\beta_{kl} - \delta_{kl}) W_{kl} \Bigr\|^2 \\
&= \Bigl( \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) Z_{ij},\ \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) Z_{ij} \Bigr)
 + \Bigl( \sum_{k,l} (\beta_{kl} - \delta_{kl}) W_{kl},\ \sum_{k,l} (\beta_{kl} - \delta_{kl}) W_{kl} \Bigr) \\
&= \sum_{i,j} (\alpha_{ij} - \gamma_{ij})^2 + \sum_{k,l} (\beta_{kl} - \delta_{kl})^2
 = \|\alpha - \gamma\|^2 + \|\beta - \delta\|^2.
\end{aligned} \tag{28}
$$
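All of the operators appearing in (17)-(20) and in Theorem 7 and Corollary 8 can be formed with pseudoinverses. A minimal NumPy sketch (function names are ours; $M$, $N$, $h$ as in (15)):

```python
import numpy as np

def theorem7_pieces(M, N, h):
    """E_M, F_M, W, alpha~ of (18)-(20) and the particular beta of (19)."""
    Mp = np.linalg.pinv(M)
    E_M = np.eye(M.shape[0]) - M @ Mp              # E_M = I_{mp} - M M^+
    EMN = E_M @ N
    EMNp = np.linalg.pinv(EMN)
    F_M = np.eye(M.shape[1]) - Mp @ M              # F_M = I_{d1} - M^+ M
    W = np.eye(N.shape[1]) - EMNp @ EMN            # W = I_{d2} - (E_M N)^+ E_M N
    beta0 = EMNp @ E_M @ h                         # particular beta in (19)
    alpha_tilde = Mp @ h - Mp @ N @ beta0          # alpha~ in (20)
    return Mp, E_M, EMN, EMNp, F_M, W, alpha_tilde, beta0

def equation_24_is_consistent(M, N, h, tol=1e-10):
    """Solvability test of Corollary 8: AXB + CYD = H has an arrowhead solution
    iff E_M N (E_M N)^+ E_M h = E_M h."""
    _, E_M, EMN, EMNp, *_ = theorem7_pieces(M, N, h)
    return np.linalg.norm(EMN @ EMNp @ E_M @ h - E_M @ h) <= tol
```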

Substituting (19) and (20) into the function $f$, we have
$$
\begin{aligned}
f &= \|\tilde{\alpha} - M^+ N W u + F_M v - \gamma\|^2 + \|(E_M N)^+ E_M h + W u - \delta\|^2 \\
&= u^\top W N^\top (M M^\top)^+ N W u + 2 (\gamma - \tilde{\alpha})^\top M^+ N W u
 - 2 (\gamma - \tilde{\alpha})^\top F_M v + v^\top F_M v + (\gamma - \tilde{\alpha})^\top (\gamma - \tilde{\alpha}) \\
&\quad + u^\top W u - 2 \left( \delta - (E_M N)^+ E_M h \right)^\top W u
 + \left( \delta - (E_M N)^+ E_M h \right)^\top \left( \delta - (E_M N)^+ E_M h \right).
\end{aligned} \tag{29}
$$
Therefore,
$$
\begin{aligned}
\frac{\partial f}{\partial u} &= 2 W N^\top (M M^\top)^+ N W u + 2 W N^\top (M^+)^\top (\gamma - \tilde{\alpha})
 + 2 W u - 2 W \left( \delta - (E_M N)^+ E_M h \right), \\
\frac{\partial f}{\partial v} &= 2 F_M v - 2 F_M (\gamma - \tilde{\alpha}).
\end{aligned} \tag{30}
$$
Clearly, $\|X - \tilde{X}\|^2 + \|Y - \tilde{Y}\|^2 = \min$ if and only if
$$
\frac{\partial f}{\partial u} = 0, \qquad \frac{\partial f}{\partial v} = 0, \tag{31}
$$
which yields
$$
\begin{aligned}
W u &= \left( I_{d_2} + W N^\top (M M^\top)^+ N W \right)^{-1}
 W \left( \delta - (E_M N)^+ E_M h - N^\top (M^+)^\top (\gamma - \tilde{\alpha}) \right), \\
F_M v &= F_M (\gamma - \tilde{\alpha}).
\end{aligned} \tag{32}
$$
Upon substituting (32) into (20) and (19), we obtain
$$
\hat{\alpha} = -M^+ N W \left( I_{d_2} + W N^\top (M M^\top)^+ N W \right)^{-1}
 W \left( \delta - (E_M N)^+ E_M h - N^\top (M^+)^\top (\gamma - \tilde{\alpha}) \right)
 + \tilde{\alpha} + F_M \gamma, \tag{33}
$$
$$
\hat{\beta} = (E_M N)^+ E_M h + \left( I_{d_2} + W N^\top (M M^\top)^+ N W \right)^{-1}
 W \left( \delta - (E_M N)^+ E_M h - N^\top (M^+)^\top (\gamma - \tilde{\alpha}) \right). \tag{34}
$$
By now, we have proved the following result.

Theorem 9. Let the real-valued symmetric arrowhead matrices $\tilde{X}$ and $\tilde{Y}$ be given. Then Problem 2 has a unique solution, and the unique solution of Problem 2 can be expressed as
$$
\hat{X} = K_1 (\hat{\alpha} \otimes I_n), \qquad \hat{Y} = K_2 (\hat{\beta} \otimes I_q), \tag{35}
$$
where $\hat{\alpha}$, $\hat{\beta}$ are given by (33) and (34), respectively.

3. The Solutions of Problems 3 and 4

It follows from (10) that the minimization problem of (4) can be equivalently written as
$$
\Bigl\| \sum_{i,j} \alpha_{ij} A Z_{ij} B + \sum_{i,j} \alpha_{ij} C Z_{ij} D - H \Bigr\| = \min. \tag{36}
$$
Using Lemma 6, we see that the relation of (36) is equivalent to
$$
\|M\alpha + Q\alpha - h\| = \min, \tag{37}
$$
where $Q = (D^\top \otimes C) G$. It follows from Lemma 5 that the general solution of $\|M\alpha + Q\alpha - h\| = \min$ with respect to $\alpha$ can be expressed as
$$
\alpha = U^+ h + (I_{d_1} - U^+ U) z, \tag{38}
$$
where $U = M + Q$ and $z \in \mathbb{R}^{d_1}$ is an arbitrary vector.

To summarize, we have obtained the following result.

Theorem 10. Suppose that $A, C \in \mathbb{R}^{m \times n}$, $B, D \in \mathbb{R}^{n \times p}$, and $H \in \mathbb{R}^{m \times p}$. Let $\{Z_{ij}\}$, $G$, $M$, $h$ be given as in (7), (13), and (15), respectively. Write $Q = (D^\top \otimes C) G$, $d_1 = 2n - 1$, and $U = M + Q$. Then the solution set $\mathcal{S}_3$ of Problem 3 can be expressed as
$$
\mathcal{S}_3 = \{X \in \mathrm{SAR}^{n \times n} \mid X = K_1 (\alpha \otimes I_n)\}, \tag{39}
$$
where $K_1$ and $\alpha$ are given by (22) and (38) with $z \in \mathbb{R}^{d_1}$ being an arbitrary vector.

Similarly, for the real-valued symmetric arrowhead matrix $\tilde{X}$, it is easily seen that $\tilde{X}$ can be expressed as a linear combination of the orthonormal basis $\{Z_{ij}\}$; that is, $\tilde{X} = \sum_{i,j} \gamma_{ij} Z_{ij}$, where $\gamma_{ij}$, $i = 1,\ j = 2, \ldots, n$; $i = j = 1, \ldots, n$, are uniquely determined by the elements of $\tilde{X}$. Then, for any matrix $X \in \mathcal{S}_3$ in (39), by the relation of (9), we have
$$
\begin{aligned}
\|X - \tilde{X}\|^2
&= \Bigl\| \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) Z_{ij} \Bigr\|^2
= \Bigl( \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) Z_{ij},\ \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) Z_{ij} \Bigr) \\
&= \sum_{i,j} (\alpha_{ij} - \gamma_{ij})^2 = \|\alpha - \gamma\|^2 = \|Jz - (\gamma - U^+ h)\|^2,
\end{aligned} \tag{40}
$$
where $J = I_{d_1} - U^+ U$ and $\gamma = [\gamma_{11}, \ldots, \gamma_{n,n}, \gamma_{12}, \ldots, \gamma_{1,n}]^\top$.
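For Problem 3, everything reduces to the single linear least-squares problem (37) in $\alpha$. A minimal NumPy sketch of (37)-(39), reusing `vec` and `arrowhead_basis` from the sketch in Section 2 (all function names are ours):

```python
import numpy as np

def problem3_solution_set(A, B, C, D, H, n):
    """Minimum-norm least-squares alpha of (38) and the projector J = I_{d1} - U^+ U
    that parameterizes the solution set S_3 in (39)."""
    G = np.column_stack([vec(Z) for Z in arrowhead_basis(n)])
    M = np.kron(B.T, A) @ G          # (15)
    Q = np.kron(D.T, C) @ G          # Q = (D^T kron C) G
    h = vec(H)
    U = M + Q
    Up = np.linalg.pinv(U)
    J = np.eye(2 * n - 1) - Up @ U
    alpha0 = Up @ h                  # particular (minimum-norm) solution of (38)
    # Every alpha = alpha0 + J z with arbitrary z solves (37); the corresponding
    # arrowhead matrix is X = K1(alpha (x) I_n), i.e. vec(X) = G alpha.
    return alpha0, J, G

def coordinates_to_matrix(G, alpha, n):
    return (G @ alpha).reshape(n, n, order='F')
```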

π‘žΓ—π‘š π‘žΓ—π‘ž π‘šΓ—π‘š Lemma 11. Suppose that π‘ƒβˆˆR ,Ξ”βˆˆR ,andΞ“βˆˆR 4. A Numerical Example 2 ⊀ 2 ⊀ where Ξ” =Ξ”=Ξ” and Ξ“ =Ξ“=Ξ“ .Then Based on Theorems 7 and 9 we can state the following β€–π‘ƒβˆ’Ξ”π·Ξ“β€– = min β€–π‘ƒβˆ’Ξ”πΈΞ“β€– 𝐸∈Rπ‘žΓ—π‘š (41) algorithm. if and only if Ξ”(𝑃 βˆ’ 𝐷)Ξ“,inwhichcase, =0 ‖𝑃 βˆ’ Δ𝐷Γ‖ = Algorithm 13 (an algorithm for solving the optimal approxi- ‖𝑃 βˆ’ Δ𝑃Γ‖. mation solution of Problem 2). Consider the following. 𝐽2 =𝐽=𝐽⊀ It follows from Lemma 11 and that (1) Input 𝐴, 𝐡, 𝐢, 𝐷,𝐻, 𝑋,Μƒ π‘ŒΜƒ. σ΅„© Μƒσ΅„© σ΅„© + σ΅„© σ΅„©π‘‹βˆ’π‘‹σ΅„© = 󡄩𝐽𝑧 βˆ’ (𝛾 βˆ’π‘ˆ β„Ž)σ΅„© = min (42) σ΅„© σ΅„© (2) Form the orthonormal bases {𝑍𝑖𝑗 } and {π‘Šπ‘˜π‘™} by (7) + and (8), respectively. if and only if 𝐽(𝛾 βˆ’π‘ˆ β„Žβˆ’π‘§)=0;thatis, 𝐺, 𝐿, 𝑀, 𝑁,β„Ž 𝐽𝑧 = 𝐽 (𝛾 βˆ’π‘ˆ+β„Ž) . (3) Compute according to (13), (14)and (43) (15), respectively.

Substituting (43)into(38), we obtain 𝐸 =𝐼 βˆ’π‘€π‘€+ 𝐹 =𝐼 βˆ’π‘€+𝑀 π‘Š= (4) Compute 𝑀 π‘šπ‘ , 𝑀 𝑑1 , 𝛼=π‘ˆΜ‚ +β„Ž+𝐽(π›Ύβˆ’π‘ˆ+β„Ž) . 𝐼 βˆ’(𝐸 𝑁)+𝐸 𝑁 𝛼=𝑀̃ +β„Žβˆ’π‘€+𝑁(𝐸 𝑁)+𝐸 β„Ž (44) 𝑑2 𝑀 𝑀 , 𝑀 𝑀 .

By now, we have proved the following result. (5) Form the vectors 𝛾, 𝛿 by (26), (27). Theorem 12. Let the real-valued symmetric arrowhead matrix (6) Compute 𝐾1,𝐾2 by (22)and(23), respectively. 𝑋̃ be given. Then Problem 4 has a unique solution and the Μ‚ unique solution of Problem 4 can be expressed as (7) Compute 𝛼,Μ‚ 𝛽 by (33)and(34), respectively. 𝑋=𝐾̂ (π›ΌβŠ—πΌΜ‚ ), 1 𝑛 (45) (8) Compute the unique optimal approximation solution (𝑋,Μ‚ π‘Œ)Μ‚ 𝐽=𝐼 βˆ’π‘ˆ+π‘ˆ, 𝛾 = [𝛾 ,...,𝛾 ,𝛾 ,...,𝛾 ]⊀ 𝛼̂ of (2)by(35). where 𝑑1 11 𝑛,𝑛 12 1,𝑛 ,and is given by (44). Example 14. Given
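A compact NumPy sketch of Algorithm 13 follows. It relies on the `vec` and `arrowhead_basis` helpers from the sketch in Section 2, forms $\hat{\alpha}$, $\hat{\beta}$ through (32) rather than through the expanded formulas (33)-(34) (the two are equivalent), and reconstructs $\hat{X}$, $\hat{Y}$ via $\mathrm{vec}(\hat{X}) = G\hat{\alpha}$ and $\mathrm{vec}(\hat{Y}) = L\hat{\beta}$ instead of forming $K_1$, $K_2$ explicitly. All function names are ours; this is a sketch, not the author's reference implementation.

```python
import numpy as np

def solve_problem2(A, B, C, D, H, X_tilde, Y_tilde):
    """Steps (1)-(8) of Algorithm 13 for the optimal approximation solution of Problem 2."""
    n, q = X_tilde.shape[0], Y_tilde.shape[0]
    m, p = H.shape
    d1, d2 = 2 * n - 1, 2 * q - 1

    Z = arrowhead_basis(n)                        # step (2)
    Wbasis = arrowhead_basis(q)
    G = np.column_stack([vec(Zk) for Zk in Z])    # step (3): (13)-(15)
    L = np.column_stack([vec(Wk) for Wk in Wbasis])
    M = np.kron(B.T, A) @ G
    N = np.kron(D.T, C) @ L
    h = vec(H)

    Mp = np.linalg.pinv(M)                        # step (4)
    E_M = np.eye(m * p) - M @ Mp
    EMN = E_M @ N
    EMNp = np.linalg.pinv(EMN)
    F_M = np.eye(d1) - Mp @ M
    W = np.eye(d2) - EMNp @ EMN
    beta0 = EMNp @ E_M @ h
    alpha_tilde = Mp @ h - Mp @ N @ beta0

    gamma = G.T @ vec(X_tilde)                    # step (5): coordinates in the bases
    delta = L.T @ vec(Y_tilde)

    # steps (6)-(7): (32)-(34), written through (32)
    r = delta - beta0 - N.T @ Mp.T @ (gamma - alpha_tilde)
    S = W @ N.T @ np.linalg.pinv(M @ M.T) @ N @ W
    Wu = np.linalg.solve(np.eye(d2) + S, W @ r)
    alpha_hat = alpha_tilde - Mp @ N @ Wu + F_M @ gamma
    beta_hat = beta0 + Wu

    # step (8): X^ = K1(alpha^ (x) I_n), Y^ = K2(beta^ (x) I_q)
    X_hat = (G @ alpha_hat).reshape(n, n, order='F')
    Y_hat = (L @ beta_hat).reshape(q, q, order='F')
    return X_hat, Y_hat
```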

Example 14. Given
$$
A = \begin{bmatrix}
9.9733 & 3.8706 & 2.0328 & 6.1573 & 3.1647 & 0.0822 & 9.9417 & 5.1706 \\
1.0809 & 0.7027 & 0.6888 & 5.9745 & 1.9242 & 5.8168 & 3.8545 & 1.5801 \\
4.5676 & 6.8105 & 8.0014 & 6.8524 & 2.3317 & 5.5635 & 7.8625 & 7.4438 \\
1.3305 & 8.5615 & 4.2306 & 9.7382 & 2.1984 & 6.8938 & 8.6961 & 4.8908 \\
7.0384 & 0.9710 & 8.9232 & 2.8033 & 0.7944 & 7.6095 & 0.8520 & 6.4964 \\
3.0627 & 3.9186 & 9.9191 & 1.7397 & 1.2387 & 8.2429 & 4.8371 & 0.4176 \\
0.6432 & 3.7595 & 4.0088 & 9.2191 & 5.3758 & 5.2769 & 8.4455 & 3.3482 \\
1.7377 & 5.1552 & 3.4069 & 2.6024 & 6.8886 & 3.4151 & 4.6584 & 4.3336 \\
1.9250 & 8.9427 & 3.1674 & 3.7506 & 7.2012 & 0.6829 & 5.9924 & 0.9859 \\
6.7035 & 6.4182 & 3.6429 & 6.4474 & 9.3880 & 2.6404 & 2.0815 & 6.8092
\end{bmatrix},
$$
$$
B = \begin{bmatrix}
0.2712 & 0.8096 & 1.0469 & 2.7236 \\
3.2085 & 3.0149 & 3.2898 & 4.2318 \\
0.5248 & 1.8002 & 2.6111 & 6.9898 \\
3.6956 & 7.1617 & 3.8840 & 8.1743 \\
3.8388 & 7.9026 & 8.9737 & 8.2547 \\
3.3588 & 3.3133 & 7.6678 & 4.0326 \\
8.9069 & 7.8162 & 2.4429 & 6.5160 \\
2.3814 & 7.7666 & 4.5465 & 1.5243
\end{bmatrix}, \qquad
C = \begin{bmatrix}
3.3991 & 4.2993 & 5.1435 & 1.0278 & 8.3341 & 7.1445 \\
0.1929 & 0.4898 & 0.4551 & 3.8017 & 4.7598 & 2.0150 \\
7.6829 & 9.0488 & 9.9136 & 7.5245 & 3.4013 & 1.0817 \\
4.5788 & 9.7823 & 3.0961 & 1.4917 & 5.4221 & 7.3803 \\
0.4356 & 6.5778 & 2.5647 & 6.6088 & 5.6150 & 7.6202 \\
0.9956 & 7.2696 & 6.1010 & 2.3694 & 9.2522 & 2.7898 \\
9.5692 & 4.8863 & 4.4698 & 7.2761 & 2.7439 & 8.4522 \\
6.0880 & 6.8578 & 4.1844 & 2.5853 & 2.9564 & 6.2439 \\
2.4675 & 3.6294 & 4.1184 & 4.6951 & 6.1255 & 6.2167 \\
9.2952 & 4.0100 & 7.1406 & 8.5751 & 8.9064 & 9.5489
\end{bmatrix},
$$

$$
D = \begin{bmatrix}
2.9541 & 4.8815 & 6.0597 & 0.6076 \\
6.3182 & 9.4265 & 8.1108 & 8.4724 \\
5.3129 & 3.4653 & 4.6176 & 4.5411 \\
8.7117 & 1.9969 & 0.9372 & 2.4818 \\
5.8446 & 0.9810 & 1.0270 & 9.8858 \\
6.0263 & 6.5212 & 8.7454 & 5.6432
\end{bmatrix}, \qquad
H = \begin{bmatrix}
2.2053 & 6.8314 & 1.5734 & 7.5244 \\
8.7751 & 8.7539 & 9.8088 & 6.1550 \\
7.2756 & 5.0769 & 2.9875 & 2.5417 \\
8.5308 & 4.6789 & 1.5460 & 3.5041 \\
1.7768 & 2.4484 & 9.3574 & 6.1301 \\
5.4030 & 5.6603 & 6.7386 & 3.3772 \\
4.7218 & 7.6072 & 4.2791 & 6.3599 \\
7.3239 & 3.4517 & 8.3281 & 5.1660 \\
9.8060 & 4.4755 & 1.7132 & 5.1954 \\
6.6682 & 5.1579 & 4.4064 & 6.2645
\end{bmatrix},
$$
$$
\tilde{X} = \begin{bmatrix}
-4 & 1 & -7 & 10 & -6 & 13 & -2 & 11 \\
1 & -3 & 0 & 0 & 0 & 0 & 0 & 0 \\
-7 & 0 & 6 & 0 & 0 & 0 & 0 & 0 \\
10 & 0 & 0 & -2 & 0 & 0 & 0 & 0 \\
-6 & 0 & 0 & 0 & -4 & 0 & 0 & 0 \\
13 & 0 & 0 & 0 & 0 & 18 & 0 & 0 \\
-2 & 0 & 0 & 0 & 0 & 0 & -7 & 0 \\
11 & 0 & 0 & 0 & 0 & 0 & 0 & -24
\end{bmatrix}, \qquad
\tilde{Y} = \begin{bmatrix}
-7 & 2 & 19 & 9 & 3 & -15 \\
2 & -13 & 0 & 0 & 0 & 0 \\
19 & 0 & 8 & 0 & 0 & 0 \\
9 & 0 & 0 & -6 & 0 & 0 \\
3 & 0 & 0 & 0 & -3 & 0 \\
-15 & 0 & 0 & 0 & 0 & 28
\end{bmatrix}. \tag{46}
$$

Applying Algorithm 13, we obtain

$$
\hat{X} = \begin{bmatrix}
1.6112 & -0.34773 & -0.19013 & -0.3608 & 0.0018101 & 0.043733 & 0.12845 & 0.24356 \\
-0.34773 & 0.087563 & 0 & 0 & 0 & 0 & 0 & 0 \\
-0.19013 & 0 & 0.020941 & 0 & 0 & 0 & 0 & 0 \\
-0.3608 & 0 & 0 & 0.070032 & 0 & 0 & 0 & 0 \\
0.0018101 & 0 & 0 & 0 & 0.10853 & 0 & 0 & 0 \\
0.043733 & 0 & 0 & 0 & 0 & 0.1462 & 0 & 0 \\
0.12845 & 0 & 0 & 0 & 0 & 0 & 0.048087 & 0 \\
0.24356 & 0 & 0 & 0 & 0 & 0 & 0 & -0.055789
\end{bmatrix},
$$
$$
\hat{Y} = \begin{bmatrix}
0.032787 & 0.030243 & -0.064815 & 0.038152 & 0.00093986 & -0.072695 \\
0.030243 & -0.030004 & 0 & 0 & 0 & 0 \\
-0.064815 & 0 & -0.0048278 & 0 & 0 & 0 \\
0.038152 & 0 & 0 & 0.025828 & 0 & 0 \\
0.00093986 & 0 & 0 & 0 & 0.03291 & 0 \\
-0.072695 & 0 & 0 & 0 & 0 & -0.0038235
\end{bmatrix}. \tag{47}
$$
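As a sanity check on results of this type (using the hypothetical `solve_problem2` sketch given after Algorithm 13, with the example data above loaded into arrays `A`, `B`, `C`, `D`, `H`, `X_tilde`, `Y_tilde`), one can confirm the arrowhead pattern of $\hat{X}$ and evaluate the attained residual in (2):

```python
X_hat, Y_hat = solve_problem2(A, B, C, D, H, X_tilde, Y_tilde)

# Off-diagonal entries outside the first row and column must vanish.
mask = np.ones_like(X_hat, dtype=bool)
mask[0, :] = False
mask[:, 0] = False
np.fill_diagonal(mask, False)
assert np.allclose(X_hat[mask], 0.0) and np.allclose(X_hat, X_hat.T)

# Attained least-squares residual of (2).
print(np.linalg.norm(A @ X_hat @ B + C @ Y_hat @ D - H))
```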

5. Concluding Remarks

The symmetric arrowhead matrix arises in many important practical applications. In this paper, the least-squares solutions of the matrix equations $AXB + CYD = H$ and $AXB + CXD = H$ for symmetric arrowhead matrices are provided by using the Kronecker product and the stretching function of matrices. The explicit representations of the general solutions are given. The best approximation solution to the given matrices is derived. A simple recipe for constructing the optimal approximation solution of Problem 2 is described, which can serve as the basis for numerical computation. The approach is demonstrated by a numerical example and reasonable results are produced.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The author would like to express his heartfelt thanks to the anonymous reviewers for their valuable comments and suggestions, which helped to improve the presentation of this paper.

References

[1] M. Bixon and J. Jortner, "Intramolecular radiationless transitions," The Journal of Chemical Physics, vol. 48, no. 2, pp. 715–726, 1968.
[2] J. W. Gadzuk, "Localized vibrational modes in Fermi liquids. General theory," Physical Review B, vol. 24, no. 4, pp. 1651–1663, 1981.
[3] D. Mogilevtsev, A. Maloshtan, S. Kilin, L. E. Oliveira, and S. B. Cavalcanti, "Spontaneous emission and qubit transfer in spin-1/2 chains," Journal of Physics B: Atomic, Molecular and Optical Physics, vol. 43, no. 9, Article ID 095506, 2010.
[4] A. M. Lietuofu, The Stability of the Nonlinear Adjustment Systems, Science Press, 1959 (Chinese).
[5] G. W. Bing, Introduction to the Nonlinear Control Systems, Science Press, 1988 (Chinese).
[6] D. P. O'Leary and G. W. Stewart, "Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices," Journal of Computational Physics, vol. 90, no. 2, pp. 497–505, 1990.
[7] O. Walter, L. S. Cederbaum, and J. Schirmer, "The eigenvalue problem for 'arrow' matrices," Journal of Mathematical Physics, vol. 25, no. 4, pp. 729–737, 1984.
[8] H. Pickmann, J. EgaΓ±a, and R. L. Soto, "Extremal inverse eigenvalue problem for bordered diagonal matrices," Linear Algebra and Its Applications, vol. 427, no. 2-3, pp. 256–271, 2007.
[9] Y. F. Xu, "An inverse eigenvalue problem for a special kind of matrices," Mathematica Applicata, vol. 6, no. 1, pp. 68–75, 1996 (Chinese).
[10] J. Peng, X. Y. Hu, and L. Zhang, "Two inverse eigenvalue problems for a special kind of matrices," Linear Algebra and Its Applications, vol. 416, no. 2-3, pp. 336–347, 2006.
[11] C. F. Borges, R. Frezza, and W. B. Gragg, "Some inverse eigenproblems for Jacobi and arrow matrices," Numerical Linear Algebra with Applications, vol. 2, no. 3, pp. 195–203, 1995.
[12] H. Li, Z. Gao, and D. Zhao, "Least squares solutions of the matrix equation $AXB + CYD = E$ with the least norm for symmetric arrowhead matrices," Applied Mathematics and Computation, vol. 226, pp. 719–724, 2014.
[13] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Wiley, New York, NY, USA, 1974.
[14] P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press, New York, NY, USA, 2nd edition, 1985.
[15] J. P. Aubin, Applied Functional Analysis, John Wiley & Sons, New York, NY, USA, 1979.
[16] W. F. Trench, "Inverse eigenproblems and associated approximation problems for matrices with generalized symmetry or skew symmetry," Linear Algebra and Its Applications, vol. 380, pp. 199–211, 2004.
