Hindawi Publishing Corporation Journal of Applied Mathematics Volume 2014, Article ID 709356, 7 pages http://dx.doi.org/10.1155/2014/709356
Research Article
Least-Squares Solutions of the Matrix Equations $AXB + CYD = H$ and $AXB + CXD = H$ for Symmetric Arrowhead Matrices and Associated Approximation Problems
Yongxin Yuan
School of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China
Correspondence should be addressed to Yongxin Yuan; yuanyx [email protected]
Received 7 March 2014; Accepted 10 June 2014; Published 30 June 2014
Academic Editor: Shan Zhao
Copyright Β© 2014 Yongxin Yuan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The least-squares solutions of the matrix equations $AXB + CYD = H$ and $AXB + CXD = H$ for symmetric arrowhead matrices are discussed. By using the Kronecker product and the stretching function of matrices, explicit representations of the general solutions are given. Also, it is shown that the best approximation solution is unique, and an explicit expression for this solution is derived.
1. Introduction

An $n \times n$ matrix $A$ is called an arrowhead matrix if it has the following form:

$$A = \begin{bmatrix} a_1 & b_1 & b_2 & \cdots & b_{n-1} \\ c_1 & a_2 & 0 & \cdots & 0 \\ c_2 & 0 & a_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ c_{n-1} & 0 & 0 & \cdots & a_n \end{bmatrix}. \quad (1)$$

If $b_i = c_i$, $i = 1, \ldots, n-1$, then $A$ is said to be a symmetric arrowhead matrix. We denote the set of all real-valued symmetric $n \times n$ arrowhead matrices by $\mathrm{SAR}^{n \times n}$. Such matrices arise in the description of radiationless transitions in isolated molecules [1], oscillators vibrationally coupled with a Fermi liquid [2], quantum optics [3], and so forth. Numerically efficient algorithms for computing eigenvalues and eigenvectors of arrowhead matrices were discussed in [4-8]. The inverse problem of constructing a symmetric arrowhead matrix from spectral data has been investigated by Xu [9], Peng et al. [10], and Borges et al. [11]. In this paper, we further consider the least-squares solutions of the matrix equations for symmetric arrowhead matrices and the associated approximation problems, which can be described as follows.

Problem 1. Given $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times s}$, $C \in \mathbb{R}^{m \times k}$, $D \in \mathbb{R}^{k \times s}$, and $H \in \mathbb{R}^{m \times s}$, find nontrivial real-valued symmetric arrowhead matrices $X \in \mathbb{R}^{n \times n}$ and $Y \in \mathbb{R}^{k \times k}$ such that

$$\|AXB + CYD - H\| = \min. \quad (2)$$

Problem 2. Given real-valued symmetric arrowhead matrices $\tilde{X} \in \mathbb{R}^{n \times n}$ and $\tilde{Y} \in \mathbb{R}^{k \times k}$, find $(\hat{X}, \hat{Y}) \in S_1$ such that

$$\|\hat{X} - \tilde{X}\|^2 + \|\hat{Y} - \tilde{Y}\|^2 = \min_{(X,Y) \in S_1} \left( \|X - \tilde{X}\|^2 + \|Y - \tilde{Y}\|^2 \right), \quad (3)$$

where $S_1$ is the solution set of Problem 1.

Problem 3. Given $A, C \in \mathbb{R}^{m \times n}$, $B, D \in \mathbb{R}^{n \times s}$, and $H \in \mathbb{R}^{m \times s}$, find a nontrivial real-valued symmetric arrowhead matrix $X$ such that

$$\|AXB + CXD - H\| = \min. \quad (4)$$

Problem 4. Given a real-valued symmetric arrowhead matrix $\tilde{X} \in \mathbb{R}^{n \times n}$, find $\hat{X} \in S_3$ such that

$$\|\hat{X} - \tilde{X}\| = \min_{X \in S_3} \|X - \tilde{X}\|, \quad (5)$$

where $S_3$ is the solution set of Problem 3.

Recently, Li et al. [12] considered the least-squares solutions of the matrix equation $AXB + CXD = E$ for symmetric arrowhead matrices. By using Moore-Penrose inverses and the Kronecker product, the minimum-norm least-squares solution of the matrix equation for symmetric arrowhead matrices was provided. However, we can easily see that the method used in [12] involves complicated computations of Moore-Penrose generalized inverses of partitioned matrices, and the expression of the minimum-norm least-squares solution was not explicit. Compared with the approach proposed in [12], the method in this paper is more concise and easier to perform.

The paper is organized as follows. In Section 2, using the Kronecker product and the stretching function $\mathrm{vec}(\cdot)$ of matrices, we give an explicit representation of the solution set $S_1$ of Problem 1. Furthermore, we show that Problem 2 has a unique solution and present the expression of the unique solution $(\hat{X}, \hat{Y})$. In Section 3, we provide an explicit representation of the solution set $S_3$ of Problem 3 and present the expression of the unique solution $\hat{X}$ of Problem 4. In Section 4, a numerical algorithm for computing the optimal approximation solution of Problem 2 in the Frobenius norm sense is described and a numerical example is provided. Some concluding remarks are given in Section 5.

Throughout this paper, we denote the real $m \times n$ matrix space by $\mathbb{R}^{m \times n}$ and the transpose and the Moore-Penrose generalized inverse of a real matrix $A$ by $A^{\top}$ and $A^{+}$, respectively. $I_n$ represents the identity matrix of size $n$. For $A, B \in \mathbb{R}^{m \times n}$, an inner product in $\mathbb{R}^{m \times n}$ is defined by $(A, B) = \mathrm{trace}(B^{\top} A)$; then $\mathbb{R}^{m \times n}$ is a Hilbert space. The matrix norm $\|\cdot\|$ induced by this inner product is the Frobenius norm. Given two matrices $A = [a_{ij}] \in \mathbb{R}^{m \times n}$ and $B = [b_{ij}] \in \mathbb{R}^{p \times q}$, the Kronecker product of $A$ and $B$ is defined by $A \otimes B = [a_{ij} B] \in \mathbb{R}^{mp \times nq}$. Also, for an $m \times n$ matrix $A = [a_1, a_2, \ldots, a_n]$, where $a_i$, $i = 1, \ldots, n$, is the $i$th column vector of $A$, the stretching function $\mathrm{vec}(A)$ is defined by $\mathrm{vec}(A) = [a_1^{\top}, a_2^{\top}, \ldots, a_n^{\top}]^{\top}$.

2. The Solutions of Problems 1 and 2

To begin with, we introduce two lemmas.

Lemma 5 (see [13]). If $L \in \mathbb{R}^{p \times q}$ and $b \in \mathbb{R}^{p}$, then the general solution of $\|Ly - b\| = \min$ can be expressed as $y = L^{+} b + (I_q - L^{+} L) z$, where $z \in \mathbb{R}^{q}$ is an arbitrary vector.

Lemma 6 (see [14]). Let $D \in \mathbb{R}^{m \times n}$, $H \in \mathbb{R}^{n \times s}$, and $J \in \mathbb{R}^{s \times t}$. Then

$$\mathrm{vec}(DHJ) = (J^{\top} \otimes D)\, \mathrm{vec}(H). \quad (6)$$

Let $n_1 = 2n - 1$ and $n_2 = 2k - 1$. It is easily seen that $\dim(\mathrm{SAR}^{n \times n}) = n_1$ and $\dim(\mathrm{SAR}^{k \times k}) = n_2$. Define

$$E_{ij} = \begin{cases} \dfrac{\sqrt{2}}{2} \left( e_i^{(n)} (e_j^{(n)})^{\top} + e_j^{(n)} (e_i^{(n)})^{\top} \right), & i = 1;\ j = 2, \ldots, n, \\ e_i^{(n)} (e_i^{(n)})^{\top}, & i = j = 1, \ldots, n, \end{cases} \quad (7)$$

$$F_{ij} = \begin{cases} \dfrac{\sqrt{2}}{2} \left( e_i^{(k)} (e_j^{(k)})^{\top} + e_j^{(k)} (e_i^{(k)})^{\top} \right), & i = 1;\ j = 2, \ldots, k, \\ e_i^{(k)} (e_i^{(k)})^{\top}, & i = j = 1, \ldots, k, \end{cases} \quad (8)$$

where $e_i^{(n)}$ is the $i$th column vector of the identity matrix $I_n$. It is easy to verify that $\{E_{ij}\}$ and $\{F_{ij}\}$ form orthonormal bases of the subspaces $\mathrm{SAR}^{n \times n}$ and $\mathrm{SAR}^{k \times k}$, respectively. That is,

$$(E_{ij}, E_{kl}) = \begin{cases} 0, & i \neq k \text{ or } j \neq l, \\ 1, & i = k,\ j = l, \end{cases} \qquad (F_{ij}, F_{kl}) = \begin{cases} 0, & i \neq k \text{ or } j \neq l, \\ 1, & i = k,\ j = l. \end{cases} \quad (9)$$

Now, if $X \in \mathrm{SAR}^{n \times n}$ and $Y \in \mathrm{SAR}^{k \times k}$, then $X$ and $Y$ can be expressed as

$$X = \sum_{i,j} \alpha_{ij} E_{ij}, \qquad Y = \sum_{i,j} \beta_{ij} F_{ij}, \quad (10)$$

where the real numbers $\alpha_{ij}$ ($i = 1$, $j = 2, \ldots, n$; $i = j = 1, \ldots, n$) and $\beta_{ij}$ ($i = 1$, $j = 2, \ldots, k$; $i = j = 1, \ldots, k$) are yet to be determined.

It follows from (10) that the relation of (2) can be equivalently written as

$$\Big\| \sum_{i,j} \alpha_{ij} A E_{ij} B + \sum_{i,j} \beta_{ij} C F_{ij} D - H \Big\| = \min. \quad (11)$$

Setting

$$\alpha = [\alpha_{11}, \ldots, \alpha_{n,n}, \alpha_{12}, \ldots, \alpha_{1,n}]^{\top}, \qquad \beta = [\beta_{11}, \ldots, \beta_{k,k}, \beta_{12}, \ldots, \beta_{1,k}]^{\top}, \quad (12)$$

$$G = [\mathrm{vec}(E_{11}), \ldots, \mathrm{vec}(E_{n,n}), \mathrm{vec}(E_{12}), \ldots, \mathrm{vec}(E_{1,n})] \in \mathbb{R}^{n^2 \times n_1}, \quad (13)$$

$$L = [\mathrm{vec}(F_{11}), \ldots, \mathrm{vec}(F_{k,k}), \mathrm{vec}(F_{12}), \ldots, \mathrm{vec}(F_{1,k})] \in \mathbb{R}^{k^2 \times n_2}, \quad (14)$$

$$P = (B^{\top} \otimes A) G, \qquad Q = (D^{\top} \otimes C) L, \qquad h = \mathrm{vec}(H). \quad (15)$$
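Definitions (7) and (13)-(15) translate directly into code. The following NumPy sketch (dimensions and data are illustrative stand-ins, not taken from the paper) builds the orthonormal basis of $\mathrm{SAR}^{n \times n}$, forms $G$, $L$, $P$, $Q$, $h$, and checks the orthonormality relation (9) and the vec identity of Lemma 6; it assumes the column-stacking convention for $\mathrm{vec}(\cdot)$:

```python
import numpy as np

def sar_basis(n):
    """Orthonormal basis {E_ij} of SAR^{n x n}, following (7): the n diagonal
    units e_i e_i^T, then the (n-1) symmetrized pairs
    (sqrt(2)/2)(e_1 e_j^T + e_j e_1^T), j = 2..n, so dim = 2n - 1."""
    I = np.eye(n)
    basis = [np.outer(I[:, i], I[:, i]) for i in range(n)]           # E_{ii}
    basis += [(np.sqrt(2) / 2) * (np.outer(I[:, 0], I[:, j]) + np.outer(I[:, j], I[:, 0]))
              for j in range(1, n)]                                  # E_{1j}, j >= 2
    return basis

def vec(M):
    """Column-stacking stretching function vec(.)."""
    return M.reshape(-1, order='F')

# Illustrative sizes: m x n, n x s, m x k, k x s, m x s.
n, k, m, s = 4, 3, 5, 2
rng = np.random.default_rng(0)
A, B = rng.standard_normal((m, n)), rng.standard_normal((n, s))
C, D = rng.standard_normal((m, k)), rng.standard_normal((k, s))
H = rng.standard_normal((m, s))

E, F = sar_basis(n), sar_basis(k)
G = np.column_stack([vec(Eij) for Eij in E])   # (13), n^2 x (2n-1)
L = np.column_stack([vec(Fij) for Fij in F])   # (14), k^2 x (2k-1)
P = np.kron(B.T, A) @ G                        # (15)
Q = np.kron(D.T, C) @ L
h = vec(H)

# Orthonormality (9) under (A, B) = trace(B^T A) is exactly G^T G = I:
assert np.allclose(G.T @ G, np.eye(2 * n - 1))
# Lemma 6: vec(A X B) = (B^T kron A) vec(X), here for a random arrowhead X:
X = sum(rng.standard_normal() * Eij for Eij in E)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```

With this setup, minimizing $\|AXB + CYD - H\|$ over arrowhead $X$, $Y$ becomes the ordinary vector least-squares problem (16) in the coordinates $\alpha$, $\beta$.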
By Lemma 6, we see that the relation of (11) is equivalent to

$$\|P\alpha + Q\beta - h\| = \min. \quad (16)$$

We note that

$$\begin{aligned}
\|P\alpha + Q\beta - h\|^2 &= \|P[\alpha + P^{+}(Q\beta - h)] + E_P (Q\beta - h)\|^2 \\
&= \|P[\alpha + P^{+}(Q\beta - h)]\|^2 + \|E_P (Q\beta - h)\|^2 \\
&= \|P[\alpha + P^{+}(Q\beta - h)]\|^2 + \|E_P Q [\beta - (E_P Q)^{+} E_P h] + (I_{ms} - E_P Q (E_P Q)^{+}) E_P h\|^2 \\
&= \|P[\alpha + P^{+}(Q\beta - h)]\|^2 + \|E_P Q [\beta - (E_P Q)^{+} E_P h]\|^2 + \|(I_{ms} - E_P Q (E_P Q)^{+}) E_P h\|^2, \quad (17)
\end{aligned}$$

where $E_P = I_{ms} - P P^{+}$. It follows from Lemma 5 and (17) that $\|P\alpha + Q\beta - h\| = \min$ if and only if

$$\alpha = -P^{+} Q \beta + P^{+} h + F_P v, \quad (18)$$

$$\beta = (E_P Q)^{+} E_P h + S u, \quad (19)$$

where $F_P = I_{n_1} - P^{+} P$, $S = I_{n_2} - (E_P Q)^{+} E_P Q$, and $u \in \mathbb{R}^{n_2}$, $v \in \mathbb{R}^{n_1}$ are arbitrary vectors.

Substituting (19) into (18), we obtain

$$\alpha = \bar{\alpha} - P^{+} Q S u + F_P v, \quad (20)$$

where $\bar{\alpha} = P^{+} h - P^{+} Q (E_P Q)^{+} E_P h$.

In summary of the above discussion, we have proved the following result.

Theorem 7. Suppose that $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times s}$, $C \in \mathbb{R}^{m \times k}$, $D \in \mathbb{R}^{k \times s}$, and $H \in \mathbb{R}^{m \times s}$. Let $\{E_{ij}\}$, $\{F_{ij}\}$, $G$, $L$, $P$, $Q$, $h$ be given as in (7), (8), (13), (14), and (15), respectively. Write $n_1 = 2n - 1$, $n_2 = 2k - 1$, $E_P = I_{ms} - P P^{+}$, $F_P = I_{n_1} - P^{+} P$, $S = I_{n_2} - (E_P Q)^{+} E_P Q$, and $\bar{\alpha} = P^{+} h - P^{+} Q (E_P Q)^{+} E_P h$. Then the solution set $S_1$ of Problem 1 can be expressed as

$$S_1 = \{ (X, Y) \in \mathrm{SAR}^{n \times n} \times \mathrm{SAR}^{k \times k} \mid X = K_1 (\alpha \otimes I_n),\ Y = K_2 (\beta \otimes I_k) \}, \quad (21)$$

where

$$K_1 = [E_{11}, \ldots, E_{n,n}, E_{12}, \ldots, E_{1,n}] \in \mathbb{R}^{n \times n n_1}, \quad (22)$$

$$K_2 = [F_{11}, \ldots, F_{k,k}, F_{12}, \ldots, F_{1,k}] \in \mathbb{R}^{k \times k n_2}, \quad (23)$$

and $\alpha$, $\beta$ are, respectively, given by (20) and (19) with $u \in \mathbb{R}^{n_2}$, $v \in \mathbb{R}^{n_1}$ being arbitrary vectors.

From (17), we can easily obtain the following corollary.

Corollary 8. Under the same assumptions as in Theorem 7, the matrix equation

$$AXB + CYD = H \quad (24)$$

has a solution if and only if

$$E_P Q (E_P Q)^{+} E_P h = E_P h. \quad (25)$$

In this case, the solution set $S_1$ of (24) is given by (21).

It follows from Theorem 7 that the solution set $S_1$ is always nonempty. It is easy to verify that $S_1$ is a closed convex subset of $\mathrm{SAR}^{n \times n} \times \mathrm{SAR}^{k \times k}$. From the best approximation theorem [15], we know that there exists a unique solution $(\hat{X}, \hat{Y})$ in $S_1$ such that (3) holds.

We now focus our attention on seeking the unique solution $(\hat{X}, \hat{Y})$ in $S_1$. For the real-valued symmetric arrowhead matrices $\tilde{X}$ and $\tilde{Y}$, it is easily seen that $\tilde{X}$, $\tilde{Y}$ can be expressed as linear combinations of the orthonormal bases $\{E_{ij}\}$ and $\{F_{ij}\}$; that is,

$$\tilde{X} = \sum_{i,j} \gamma_{ij} E_{ij}, \qquad \tilde{Y} = \sum_{i,j} \delta_{ij} F_{ij}, \quad (26)$$

where $\gamma_{ij}$ ($i = 1$, $j = 2, \ldots, n$; $i = j = 1, \ldots, n$) and $\delta_{ij}$ ($i = 1$, $j = 2, \ldots, k$; $i = j = 1, \ldots, k$) are uniquely determined by the elements of $\tilde{X}$ and $\tilde{Y}$. Let

$$\gamma = [\gamma_{11}, \ldots, \gamma_{n,n}, \gamma_{12}, \ldots, \gamma_{1,n}]^{\top}, \qquad \delta = [\delta_{11}, \ldots, \delta_{k,k}, \delta_{12}, \ldots, \delta_{1,k}]^{\top}. \quad (27)$$

Then, for any pair of matrices $(X, Y) \in S_1$ in (21), by the relations of (9) and (26), we see that

$$\begin{aligned}
f &= \|X - \tilde{X}\|^2 + \|Y - \tilde{Y}\|^2 \\
&= \Big\| \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) E_{ij} \Big\|^2 + \Big\| \sum_{i,j} (\beta_{ij} - \delta_{ij}) F_{ij} \Big\|^2 \\
&= \sum_{i,j} (\alpha_{ij} - \gamma_{ij})^2 + \sum_{i,j} (\beta_{ij} - \delta_{ij})^2 \\
&= \|\alpha - \gamma\|^2 + \|\beta - \delta\|^2. \quad (28)
\end{aligned}$$
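The characterization (18)-(19) of the minimizers can be checked numerically. The sketch below uses random stand-ins for $P$, $Q$, $h$ (overdetermined shapes chosen only for illustration) and verifies that the particular solution with $u = 0$, $v = 0$ attains the same residual as a direct least-squares solve over the stacked matrix $[P\ Q]$:

```python
import numpy as np

# Random stand-ins with illustrative shapes: m*s = 20, n1 = 7, n2 = 5.
rng = np.random.default_rng(1)
P = rng.standard_normal((20, 7))
Q = rng.standard_normal((20, 5))
h = rng.standard_normal(20)

Pp = np.linalg.pinv(P)
EP = np.eye(20) - P @ Pp                  # E_P = I - P P^+
EPQp = np.linalg.pinv(EP @ Q)
beta = EPQp @ EP @ h                      # (19) with u = 0
alpha = Pp @ h - Pp @ Q @ beta            # (18) with v = 0

# The residual matches the global least-squares minimum over [P Q]:
xy, *_ = np.linalg.lstsq(np.hstack([P, Q]), h, rcond=None)
r_thm = np.linalg.norm(P @ alpha + Q @ beta - h)
r_ls = np.linalg.norm(np.hstack([P, Q]) @ xy - h)
assert np.isclose(r_thm, r_ls)
```

This mirrors the splitting in (17): the choice of $\alpha$ annihilates the first term exactly, and $\beta$ minimizes the second, leaving only the unavoidable residual in the third term.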
Substituting (19) and (20) into the function $f$, we have

$$\begin{aligned}
f &= \|\bar{\alpha} - P^{+} Q S u + F_P v - \gamma\|^2 + \|(E_P Q)^{+} E_P h + S u - \delta\|^2 \\
&= u^{\top} S Q^{\top} (P^{+})^{\top} P^{+} Q S u + 2 (\gamma - \bar{\alpha})^{\top} P^{+} Q S u - 2 (\gamma - \bar{\alpha})^{\top} F_P v + v^{\top} F_P v + (\gamma - \bar{\alpha})^{\top} (\gamma - \bar{\alpha}) \\
&\quad + u^{\top} S u - 2 \left(\delta - (E_P Q)^{+} E_P h\right)^{\top} S u + \left(\delta - (E_P Q)^{+} E_P h\right)^{\top} \left(\delta - (E_P Q)^{+} E_P h\right). \quad (29)
\end{aligned}$$

Therefore,

$$\frac{\partial f}{\partial u} = 2 S Q^{\top} (P^{+})^{\top} P^{+} Q S u + 2 S Q^{\top} (P^{+})^{\top} (\gamma - \bar{\alpha}) + 2 S u - 2 S \left(\delta - (E_P Q)^{+} E_P h\right), \qquad \frac{\partial f}{\partial v} = 2 F_P v - 2 F_P (\gamma - \bar{\alpha}). \quad (30)$$

Clearly, $\|X - \tilde{X}\|^2 + \|Y - \tilde{Y}\|^2 = \min$ if and only if

$$\frac{\partial f}{\partial u} = 0, \qquad \frac{\partial f}{\partial v} = 0, \quad (31)$$

which yields

$$S u = \left(I_{n_2} + S Q^{\top} (P^{+})^{\top} P^{+} Q S\right)^{-1} S \left(\delta - (E_P Q)^{+} E_P h - Q^{\top} (P^{+})^{\top} (\gamma - \bar{\alpha})\right), \qquad F_P v = F_P (\gamma - \bar{\alpha}). \quad (32)$$

Upon substituting (32) into (20) and (19), we obtain

$$\hat{\alpha} = \bar{\alpha} - P^{+} Q S \left(I_{n_2} + S Q^{\top} (P^{+})^{\top} P^{+} Q S\right)^{-1} S \left(\delta - (E_P Q)^{+} E_P h - Q^{\top} (P^{+})^{\top} (\gamma - \bar{\alpha})\right) + F_P \gamma, \quad (33)$$

$$\hat{\beta} = (E_P Q)^{+} E_P h + \left(I_{n_2} + S Q^{\top} (P^{+})^{\top} P^{+} Q S\right)^{-1} S \left(\delta - (E_P Q)^{+} E_P h - Q^{\top} (P^{+})^{\top} (\gamma - \bar{\alpha})\right). \quad (34)$$

By now, we have proved the following result.

Theorem 9. Let the real-valued symmetric arrowhead matrices $\tilde{X}$ and $\tilde{Y}$ be given. Then Problem 2 has a unique solution, and the unique solution of Problem 2 can be expressed as

$$\hat{X} = K_1 (\hat{\alpha} \otimes I_n), \qquad \hat{Y} = K_2 (\hat{\beta} \otimes I_k), \quad (35)$$

where $\hat{\alpha}$, $\hat{\beta}$ are given by (33) and (34), respectively.

3. The Solutions of Problems 3 and 4

It follows from (10) that the minimization problem of (4) can be equivalently written as

$$\Big\| \sum_{i,j} \alpha_{ij} A E_{ij} B + \sum_{i,j} \alpha_{ij} C E_{ij} D - H \Big\| = \min. \quad (36)$$

Using Lemma 6, we see that the relation of (36) is equivalent to

$$\|P \alpha + Q \alpha - h\| = \min, \quad (37)$$

where $Q = (D^{\top} \otimes C) G$. It follows from Lemma 5 that the general solution of $\|P\alpha + Q\alpha - h\| = \min$ with respect to $\alpha$ can be expressed as

$$\alpha = T^{+} h + (I_{n_1} - T^{+} T) z, \quad (38)$$

where $T = P + Q$ and $z \in \mathbb{R}^{n_1}$ is an arbitrary vector.

To summarize, we have obtained the following result.

Theorem 10. Suppose that $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times s}$, $C \in \mathbb{R}^{m \times n}$, $D \in \mathbb{R}^{n \times s}$, and $H \in \mathbb{R}^{m \times s}$. Let $\{E_{ij}\}$, $G$, $P$, $h$ be given as in (7), (13), and (15), respectively. Write $Q = (D^{\top} \otimes C) G$, $n_1 = 2n - 1$, and $T = P + Q$. Then the solution set $S_3$ of Problem 3 can be expressed as

$$S_3 = \{ X \in \mathrm{SAR}^{n \times n} \mid X = K_1 (\alpha \otimes I_n) \}, \quad (39)$$

where $K_1$ and $\alpha$ are given by (22) and (38) with $z \in \mathbb{R}^{n_1}$ being an arbitrary vector.

Similarly, for the real-valued symmetric arrowhead matrix $\tilde{X}$, it is easily seen that $\tilde{X}$ can be expressed as the linear combination of the orthonormal basis $\{E_{ij}\}$; that is, $\tilde{X} = \sum_{i,j} \gamma_{ij} E_{ij}$, where $\gamma_{ij}$ ($i = 1$, $j = 2, \ldots, n$; $i = j = 1, \ldots, n$) are uniquely determined by the elements of $\tilde{X}$. Then, for any matrix $X \in S_3$ in (39), by the relation of (9), we have

$$\|X - \tilde{X}\|^2 = \Big\| \sum_{i,j} (\alpha_{ij} - \gamma_{ij}) E_{ij} \Big\|^2 = \sum_{i,j} (\alpha_{ij} - \gamma_{ij})^2 = \|\alpha - \gamma\|^2 = \|J z - (\gamma - T^{+} h)\|^2, \quad (40)$$

where $J = I_{n_1} - T^{+} T$ and $\gamma = [\gamma_{11}, \ldots, \gamma_{n,n}, \gamma_{12}, \ldots, \gamma_{1,n}]^{\top}$.

In order to solve Problem 4, we need the following lemma [16].

Lemma 11. Suppose that $M \in \mathbb{R}^{m \times n}$, $\Phi \in \mathbb{R}^{m \times m}$, and $\Psi \in \mathbb{R}^{n \times n}$, where $\Phi^2 = \Phi = \Phi^{\top}$ and $\Psi^2 = \Psi = \Psi^{\top}$. Then

$$\|M - \Phi D \Psi\| = \min_{E \in \mathbb{R}^{m \times n}} \|M - \Phi E \Psi\| \quad (41)$$

if and only if $\Phi (M - D) \Psi = 0$, in which case $\|M - \Phi D \Psi\| = \|M - \Phi M \Psi\|$.

It follows from Lemma 11 and $J^2 = J = J^{\top}$ that

$$\|X - \tilde{X}\| = \|J z - (\gamma - T^{+} h)\| = \min \quad (42)$$

if and only if $J(\gamma - T^{+} h - z) = 0$; that is,

$$J z = J (\gamma - T^{+} h). \quad (43)$$

Substituting (43) into (38), we obtain

$$\hat{\alpha} = T^{+} h + J (\gamma - T^{+} h). \quad (44)$$

By now, we have proved the following result.

Theorem 12. Let the real-valued symmetric arrowhead matrix $\tilde{X}$ be given. Then Problem 4 has a unique solution, and the unique solution of Problem 4 can be expressed as

$$\hat{X} = K_1 (\hat{\alpha} \otimes I_n), \quad (45)$$

where $J = I_{n_1} - T^{+} T$, $\gamma = [\gamma_{11}, \ldots, \gamma_{n,n}, \gamma_{12}, \ldots, \gamma_{1,n}]^{\top}$, and $\hat{\alpha}$ is given by (44).

4. A Numerical Example

Based on Theorems 7 and 9, we can state the following algorithm.

Algorithm 13 (an algorithm for solving the optimal approximation solution of Problem 2). Consider the following.

(1) Input $A$, $B$, $C$, $D$, $H$, $\tilde{X}$, $\tilde{Y}$.
(2) Form the orthonormal bases $\{E_{ij}\}$ and $\{F_{ij}\}$ by (7) and (8), respectively.
(3) Compute $G$, $L$, $P$, $Q$, $h$ according to (13), (14), and (15), respectively.
(4) Compute $E_P = I_{ms} - P P^{+}$, $F_P = I_{n_1} - P^{+} P$, $S = I_{n_2} - (E_P Q)^{+} E_P Q$, and $\bar{\alpha} = P^{+} h - P^{+} Q (E_P Q)^{+} E_P h$.
(5) Form the vectors $\gamma$, $\delta$ by (26) and (27).
(6) Compute $K_1$, $K_2$ by (22) and (23), respectively.
(7) Compute $\hat{\alpha}$, $\hat{\beta}$ by (33) and (34), respectively.
(8) Compute the unique optimal approximation solution $(\hat{X}, \hat{Y})$ of Problem 2 by (35).

Example 14. Given
A =
[ 9.9733 3.8706 2.0328 6.1573 3.1647 0.0822 9.9417 5.1706 ]
[ 1.0809 0.7027 0.6888 5.9745 1.9242 5.8168 3.8545 1.5801 ]
[ 4.5676 6.8105 8.0014 6.8524 2.3317 5.5635 7.8625 7.4438 ]
[ 1.3305 8.5615 4.2306 9.7382 2.1984 6.8938 8.6961 4.8908 ]
[ 7.0384 0.971 8.9232 2.8033 0.7944 7.6095 0.852 6.4964 ]
[ 3.0627 3.9186 9.9191 1.7397 1.2387 8.2429 4.8371 0.4176 ]
[ 0.6432 3.7595 4.0088 9.2191 5.3758 5.2769 8.4455 3.3482 ]
[ 1.7377 5.1552 3.4069 2.6024 6.8886 3.4151 4.6584 4.3336 ]
[ 1.925 8.9427 3.1674 3.7506 7.2012 0.6829 5.9924 0.9859 ]
[ 6.7035 6.4182 3.6429 6.4474 9.388 2.6404 2.0815 6.8092 ],

B =
[ 0.2712 0.8096 1.0469 2.7236 ]
[ 3.2085 3.0149 3.2898 4.2318 ]
[ 0.5248 1.8002 2.6111 6.9898 ]
[ 3.6956 7.1617 3.884 8.1743 ]
[ 3.8388 7.9026 8.9737 8.2547 ]
[ 3.3588 3.3133 7.6678 4.0326 ]
[ 8.9069 7.8162 2.4429 6.516 ]
[ 2.3814 7.7666 4.5465 1.5243 ],

C =
[ 3.3991 4.2993 5.1435 1.0278 8.3341 7.1445 ]
[ 0.1929 0.4898 0.4551 3.8017 4.7598 2.015 ]
[ 7.6829 9.0488 9.9136 7.5245 3.4013 1.0817 ]
[ 4.5788 9.7823 3.0961 1.4917 5.4221 7.3803 ]
[ 0.4356 6.5778 2.5647 6.6088 5.615 7.6202 ]
[ 0.9956 7.2696 6.101 2.3694 9.2522 2.7898 ]
[ 9.5692 4.8863 4.4698 7.2761 2.7439 8.4522 ]
[ 6.088 6.8578 4.1844 2.5853 2.9564 6.2439 ]
[ 2.4675 3.6294 4.1184 4.6951 6.1255 6.2167 ]
[ 9.2952 4.01 7.1406 8.5751 8.9064 9.5489 ],
D =
[ 2.9541 4.8815 6.0597 0.6076 ]
[ 6.3182 9.4265 8.1108 8.4724 ]
[ 5.3129 3.4653 4.6176 4.5411 ]
[ 8.7117 1.9969 0.9372 2.4818 ]
[ 5.8446 0.981 1.027 9.8858 ]
[ 6.0263 6.5212 8.7454 5.6432 ],

H =
[ 2.2053 6.8314 1.5734 7.5244 ]
[ 8.7751 8.7539 9.8088 6.155 ]
[ 7.2756 5.0769 2.9875 2.5417 ]
[ 8.5308 4.6789 1.546 3.5041 ]
[ 1.7768 2.4484 9.3574 6.1301 ]
[ 5.403 5.6603 6.7386 3.3772 ]
[ 4.7218 7.6072 4.2791 6.3599 ]
[ 7.3239 3.4517 8.3281 5.166 ]
[ 9.806 4.4755 1.7132 5.1954 ]
[ 6.6682 5.1579 4.4064 6.2645 ],

X~ =
[ -4 1 -7 10 -6 13 -2 11 ]
[ 1 -3 0 0 0 0 0 0 ]
[ -7 0 6 0 0 0 0 0 ]
[ 10 0 0 -2 0 0 0 0 ]
[ -6 0 0 0 -4 0 0 0 ]
[ 13 0 0 0 0 18 0 0 ]
[ -2 0 0 0 0 0 -7 0 ]
[ 11 0 0 0 0 0 0 -24 ],

Y~ =
[ -7 2 19 9 3 -15 ]
[ 2 -13 0 0 0 0 ]
[ 19 0 8 0 0 0 ]
[ 9 0 0 -6 0 0 ]
[ 3 0 0 0 -3 0 ]
[ -15 0 0 0 0 28 ]. (46)
According to Algorithm 13, we obtain
X^ =
[ 1.6112 -0.34773 -0.19013 -0.3608 0.0018101 0.043733 0.12845 0.24356 ]
[ -0.34773 0.087563 0 0 0 0 0 0 ]
[ -0.19013 0 0.020941 0 0 0 0 0 ]
[ -0.3608 0 0 0.070032 0 0 0 0 ]
[ 0.0018101 0 0 0 0.10853 0 0 0 ]
[ 0.043733 0 0 0 0 0.1462 0 0 ]
[ 0.12845 0 0 0 0 0 0.048087 0 ]
[ 0.24356 0 0 0 0 0 0 -0.055789 ],

Y^ =
[ 0.032787 0.030243 -0.064815 0.038152 0.00093986 -0.072695 ]
[ 0.030243 -0.030004 0 0 0 0 ]
[ -0.064815 0 -0.0048278 0 0 0 ]
[ 0.038152 0 0 0.025828 0 0 ]
[ 0.00093986 0 0 0 0.03291 0 ]
[ -0.072695 0 0 0 0 -0.0038235 ]. (47)
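The whole of Algorithm 13 fits in a short NumPy sketch. The version below runs on small random data rather than the matrices of Example 14 (the dimensions, seed, and target matrices are illustrative assumptions); it carries out steps (2)-(8) and checks both that the computed $\hat{X}$ has the arrowhead pattern and that the residual of (2) attains the least-squares minimum:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k, s = 6, 4, 3, 3   # illustrative sizes
A, B = rng.standard_normal((m, n)), rng.standard_normal((n, s))
C, D = rng.standard_normal((m, k)), rng.standard_normal((k, s))
H = rng.standard_normal((m, s))

def sar_basis(n):
    # Step (2): orthonormal basis of SAR^{n x n} per (7)/(8).
    I = np.eye(n)
    E = [np.outer(I[:, i], I[:, i]) for i in range(n)]
    E += [(np.sqrt(2) / 2) * (np.outer(I[:, 0], I[:, j]) + np.outer(I[:, j], I[:, 0]))
          for j in range(1, n)]
    return E

vec = lambda M: M.reshape(-1, order='F')
E, F = sar_basis(n), sar_basis(k)
G = np.column_stack([vec(M) for M in E])                     # (13)
L = np.column_stack([vec(M) for M in F])                     # (14)
P, Q, h = np.kron(B.T, A) @ G, np.kron(D.T, C) @ L, vec(H)   # (15), step (3)

# Step (4).
Pp = np.linalg.pinv(P)
EP = np.eye(m * s) - P @ Pp
EPQp = np.linalg.pinv(EP @ Q)
S = np.eye(2 * k - 1) - EPQp @ (EP @ Q)
FP = np.eye(2 * n - 1) - Pp @ P
abar = Pp @ h - Pp @ Q @ (EPQp @ (EP @ h))

# Step (5): random arrowhead targets and their coordinates gamma, delta (26).
Xt = sum(c * M for c, M in zip(rng.standard_normal(len(E)), E))
Yt = sum(c * M for c, M in zip(rng.standard_normal(len(F)), F))
gamma = np.array([np.trace(M.T @ Xt) for M in E])
delta = np.array([np.trace(M.T @ Yt) for M in F])

# Step (7): alpha-hat and beta-hat per (33)-(34).
W = np.linalg.inv(np.eye(2 * k - 1) + S @ Q.T @ Pp.T @ Pp @ Q @ S)
w = S @ (delta - EPQp @ (EP @ h) - Q.T @ Pp.T @ (gamma - abar))
beta_hat = EPQp @ (EP @ h) + W @ w
alpha_hat = abar - Pp @ Q @ S @ (W @ w) + FP @ gamma

# Steps (6)/(8): assemble X-hat and Y-hat per (35).
Xh = sum(a * M for a, M in zip(alpha_hat, E))
Yh = sum(b * M for b, M in zip(beta_hat, F))

# X-hat is symmetric arrowhead, and the residual attains the minimum of (2):
assert np.allclose(Xh, Xh.T)
assert np.allclose(Xh[1:, 1:], np.diag(np.diag(Xh[1:, 1:])))
r = np.linalg.norm(A @ Xh @ B + C @ Yh @ D - H)
xy, *_ = np.linalg.lstsq(np.hstack([P, Q]), h, rcond=None)
assert np.isclose(r, np.linalg.norm(np.hstack([P, Q]) @ xy - h))
```

The final assertions reflect the two claims of Theorems 7 and 9: the assembled pair lies in the arrowhead subspaces, and it belongs to the minimizing set $S_1$ of (2).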
5. Concluding Remarks

The symmetric arrowhead matrix arises in many important practical applications. In this paper, the least-squares solutions of the matrix equations $AXB + CYD = H$ and $AXB + CXD = H$ for symmetric arrowhead matrices are provided by using the Kronecker product and the stretching function of matrices. The explicit representations of the general solutions are given, and the best approximation solutions to the given matrices are derived. A simple recipe for constructing the optimal approximation solution of Problem 2 is described, which can serve as the basis for numerical computation. The approach is demonstrated by a numerical example, and reasonable results are produced.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgment The author would like to express his heartfelt thanks to the anonymous reviewers for their valuable comments and suggestions which helped to improve the presentation of this paper.
References
[1] M. Bixon and J. Jortner, "Intramolecular radiationless transitions," The Journal of Chemical Physics, vol. 48, no. 2, pp. 715-726, 1968.
[2] J. W. Gadzuk, "Localized vibrational modes in Fermi liquids. General theory," Physical Review B, vol. 24, no. 4, pp. 1651-1663, 1981.
[3] D. Mogilevtsev, A. Maloshtan, S. Kilin, L. E. Oliveira, and S. B. Cavalcanti, "Spontaneous emission and qubit transfer in spin-1/2 chains," Journal of Physics B: Atomic, Molecular and Optical Physics, vol. 43, no. 9, Article ID 095506, 2010.
[4] A. M. Lietuofu, The Stability of the Nonlinear Adjustment Systems, Science Press, 1959 (Chinese).
[5] G. W. Bing, Introduction to the Nonlinear Control Systems, Science Press, 1988 (Chinese).
[6] D. P. O'Leary and G. W. Stewart, "Computing the eigenvalues and eigenvectors of symmetric arrowhead matrices," Journal of Computational Physics, vol. 90, no. 2, pp. 497-505, 1990.
[7] O. Walter, L. S. Cederbaum, and J. Schirmer, "The eigenvalue problem for 'arrow' matrices," Journal of Mathematical Physics, vol. 25, no. 4, pp. 729-737, 1984.
[8] H. Pickmann, J. Egaña, and R. L. Soto, "Extremal inverse eigenvalue problem for bordered diagonal matrices," Linear Algebra and Its Applications, vol. 427, no. 2-3, pp. 256-271, 2007.
[9] Y. F. Xu, "An inverse eigenvalue problem for a special kind of matrices," Mathematica Applicata, vol. 6, no. 1, pp. 68-75, 1996 (Chinese).
[10] J. Peng, X. Y. Hu, and L. Zhang, "Two inverse eigenvalue problems for a special kind of matrices," Linear Algebra and Its Applications, vol. 416, no. 2-3, pp. 336-347, 2006.
[11] C. F. Borges, R. Frezza, and W. B. Gragg, "Some inverse eigenproblems for Jacobi and arrow matrices," Numerical Linear Algebra with Applications, vol. 2, no. 3, pp. 195-203, 1995.
[12] H. Li, Z. Gao, and D. Zhao, "Least squares solutions of the matrix equation $AXB + CXD = E$ with the least norm for symmetric arrowhead matrices," Applied Mathematics and Computation, vol. 226, pp. 719-724, 2014.
[13] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Wiley, New York, NY, USA, 1974.
[14] P. Lancaster and M. Tismenetsky, The Theory of Matrices, Academic Press, New York, NY, USA, 2nd edition, 1985.
[15] J. P. Aubin, Applied Functional Analysis, John Wiley & Sons, New York, NY, USA, 1979.
[16] W. F. Trench, "Inverse eigenproblems and associated approximation problems for matrices with generalized symmetry or skew symmetry," Linear Algebra and Its Applications, vol. 380, pp. 199-211, 2004.