
Filomat 33:13 (2019), 4261–4280
https://doi.org/10.2298/FIL1913261D
Published by Faculty of Sciences and Mathematics, University of Niš, Serbia
Available at: http://www.pmf.ni.ac.rs/filomat

Classification and Approximation of Solutions to Sylvester Matrix Equation

Bogdan D. Djordjević^a, Nebojša Č. Dinčić^b

^a Department of Mathematics, Mathematical Institute of the Serbian Academy of Sciences and Arts, Belgrade, Republic of Serbia
^b Department of Mathematics, Faculty of Sciences and Mathematics, University of Niš, P.O. Box 224, 18000 Niš, Republic of Serbia

Abstract. In this paper we solve the Sylvester matrix equation with infinitely many solutions and conduct their classification. If the conditions for their existence are not met, we provide a way to approximate solutions by the least-squares minimal-norm method.

1. Introduction and preliminaries

For given vector spaces $V_1$ and $V_2$, let $A \in L(V_2)$, $B \in L(V_1)$ and $C \in L(V_1, V_2)$ be linear operators. Equations of the form
$$AX - XB = C \tag{1}$$
with solution $X \in L(V_1, V_2)$ are called Sylvester equations, Sylvester–Rosenblum equations or algebraic Riccati equations. Such equations have various applications in vast fields of mathematics, physics, computer science and engineering (see e.g. [5], [19] and the references therein). The fundamental results, established by Sylvester and Rosenblum themselves, are nowadays the starting point in solving contemporary problems where these equations occur. These results are:

Theorem 1.1. [23] (Sylvester matrix equation) Let $A$, $B$ and $C$ be matrices. The equation $AX - XB = C$ has a unique solution $X$ if and only if $\sigma(A) \cap \sigma(B) = \emptyset$.

Theorem 1.2. [22] (Rosenblum operator equation) Let $A$, $B$ and $C$ be bounded linear operators. The equation $AX - XB = C$ has a unique solution $X$ if $\sigma(A) \cap \sigma(B) = \emptyset$.

Equations with unique solutions have been extensively studied so far. There are numerous results regarding this case, some of them theoretical (e.g.
Lyapunov stability criteria and spectral operators), which can be found in [2], [5] or [9], and some of them computational (the matrix sign function, factorization of matrices and operators, various iterative methods, etc.). It should be mentioned that the matrix equation (1) with a unique solution $X$ has been solved numerically (among others) in [4], [6], [13], [14], [18] and [21].

2010 Mathematics Subject Classification. Primary 47A62; 15A18; Secondary 65F15
Keywords. Sylvester equation, matrix spectrum, matrix approximations, eigenvalues and eigenvectors, Jordan normal form, Frobenius norm, least-squares solution, minimal-norm solution
Received: 13 February 2019; Accepted: 13 May 2019
Communicated by Dijana Mosić
Research supported by Ministry of Science, Republic of Serbia, Grant No. 174007
Email addresses: [email protected] (Bogdan D. Djordjević), [email protected] (Nebojša Č. Dinčić)

The case where $A$, $B$ and $C$ are unbounded operators but the solution $X$ is unique and bounded has been studied in [16] and [20]. Solvability of eq. (1) in matrices, discarding uniqueness of the solution, has been studied in [7] and partially in [18]. The main results in [7] are based on the idea that solutions $X$ can be provided as parametric matrices, where the number of parameters at hand depends on the dimensions of the corresponding eigenspaces for $A$ and $B$. The case where $A$, $B$ and $C$ are unbounded, with infinitely many unbounded solutions $X$, has been studied in [8]. That paper provides insight into new solutions (called weak solutions), which are defined only on the corresponding eigenspaces for $A$ and $B$.

This paper concerns the case when $A$ and $B$ are matrices whose spectra intersect, while $C$ is a rectangular matrix of appropriate dimensions. We obtain sufficient conditions for the existence of infinitely many solutions and provide a way for their classification.
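As a numerical aside (not part of the original paper), the unique-solution case of Theorem 1.1 can be checked with SciPy's `solve_sylvester`, which solves $AX + XB = Q$; negating $B$ matches equation (1). The matrices below are illustrative choices, not from the text:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# sigma(A) = {1, 2} and sigma(B) = {3, 4} are disjoint,
# so by Theorem 1.1 the solution X exists and is unique.
A = np.diag([1.0, 2.0])
B = np.diag([3.0, 4.0])
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# SciPy solves AX + XB = Q, so pass -B to solve AX - XB = C
X = solve_sylvester(A, -B, C)
assert np.allclose(A @ X - X @ B, C)

# For diagonal A and B the unique solution is entrywise
# X_ij = C_ij / (a_i - b_j), which the solver reproduces
expected = C / (np.diag(A)[:, None] - np.diag(B)[None, :])
assert np.allclose(X, expected)
```

The diagonal case makes the spectral condition transparent: the divisors $a_i - b_j$ are exactly the differences of eigenvalues, and they are all nonzero precisely when the spectra are disjoint.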
If the conditions for their existence are not met, we give a way of approximating particular solutions. This study relies on the eigenspace analysis conducted in [7] and [8].

We assume $V_1$ and $V_2$ to be finite-dimensional Hilbert spaces over the same scalar field $\mathbb{C}$ or $\mathbb{R}$, while $A \in \mathcal{B}(V_2)$, $B \in \mathcal{B}(V_1)$ and $C \in \mathcal{B}(V_1, V_2)$ are assumed to be the operators which correspond to the aforementioned matrices. Further, $\mathcal{N}(L)$ and $\mathcal{R}(L)$ denote the null-space and the range of a given operator $L$. Recall that every finite-dimensional subspace $W$ of a Hilbert space $V$ is closed. Consequently, there exists an orthogonal projector from $V$ onto $W$, which will be denoted by $P_W$.

2. Existence and classification of solutions

Throughout this paper, we assume that $A$ and $B$ share $s$ common eigenvalues and denote that set by $\sigma$:
$$\{\lambda_1, \ldots, \lambda_s\} =: \sigma = \sigma(A) \cap \sigma(B).$$
For more elegant notation, we introduce $E_B^k = \mathcal{N}(B - \lambda_k I)$ and $E_A^k = \mathcal{N}(A - \lambda_k I)$ whenever $\lambda_k \in \sigma$. Different eigenvalues generate mutually orthogonal eigenvectors, so the spaces $E_B^k$ form an orthogonal sum. Put $E_B := \bigoplus_{k=1}^{s} E_B^k$. It is a closed subspace of $V_1$ and there exists $E_B^\perp$ such that $V_1 = E_B \oplus E_B^\perp$. Take $B = B_E \oplus B_1$ with respect to that decomposition and denote $C_1 = C P_{E_B^\perp}$.

Proposition 2.1. Let $V$ be a Hilbert space and $L \in \mathcal{B}(V)$. If $W$ is an $L$-invariant subspace of $V$, then $W^\perp$ is an $L^*$-invariant subspace of $V$.

Theorem 2.1. (Existence of solutions) For every $k \in \{1, \ldots, s\}$, let $\lambda_k$, $E_A^k$ and $E_B^k$ be provided as in the previous paragraph. If
$$\mathcal{N}(C_1)^\perp = \mathcal{R}(B_1) \quad \text{and} \quad C_1 E_B^k \subset \mathcal{R}(A - \lambda_k I), \tag{2}$$
then there exist infinitely many solutions $X$ to equation (1).

Proof. For every $1 \le k \le s$, let $E_B^k$, $E_B$, $E_B^\perp$, $B_E$ and $B_1$ be provided as in the previous paragraph. Note that $\mathcal{N}(C_1)^\perp = \mathcal{R}(C_1^*)$, where $C_1^* \in \mathcal{B}(V_2, E_B^\perp)$.

Step 1: solutions on $E_B^\perp$.

We first conduct the analysis on $E_B^\perp$. The space $E_B$ is a $BP_{E_B}$-invariant subspace of $V_1$, and Proposition 2.1 yields $E_B^\perp$
to be a $(BP_{E_B})^*$-invariant subspace of $V_1$, so without loss of generality we can observe $B_1^*$ as $B_1^* : E_B^\perp \to E_B^\perp$. Since $\sigma(B_E) = \{\lambda_1, \ldots, \lambda_s\}$, it follows that
$$\sigma(B_1^*) \subseteq \{0\} \cup \sigma(B^*) \setminus \{\bar{\lambda}_1, \ldots, \bar{\lambda}_s\}.$$

Case 1. Assume that $\sigma(B_1^*) \cap \sigma(A^*) = \emptyset$. Then there exists a unique $X_1^* \in \mathcal{B}(V_2, E_B^\perp)$ such that
$$X_1^* A^* - B_1^* X_1^* = C_1^*,$$
that is, there exists a unique $X_1 \in \mathcal{B}(E_B^\perp, V_2)$ such that
$$A X_1 - X_1 B_1 = C_1$$
holds.

Case 2. Assume that $\sigma(A^*) \cap \sigma(B_1^*) \ne \emptyset$. It follows that $\sigma(A^*) \cap \sigma(B_1^*) = \{0\}$. But then $A^*$ cannot be nilpotent. Indeed, if $\sigma(A^*) = \{0\} = \sigma(A)$, then by assumption $\sigma(B) \cap \sigma(A) \ne \emptyset$, therefore $0 \in \sigma(B)$, that is, $0 \in \sigma$. If $u \in \mathcal{N}(B_1)$, it follows that $B_1 u = 0$ and $u \in E_B^\perp$, but then $Bu = B_1 u = 0$, so $u \in \mathcal{N}(B) \subset E_B$, therefore $u \in E_B \cap E_B^\perp = \{0\}$. Hence a contradiction, implying that $A^*$ is not nilpotent, but rather has finite ascent, $\operatorname{asc}(A^*) = m \ge 1$, where $\mathcal{N}((A^*)^m)$ is a proper subspace of $V_2$.

Now observe $B_1^* : E_B^\perp \to E_B^\perp$, which is not invertible by assumption. Take an arbitrary operator $Z_0^* \in \mathcal{B}(\mathcal{N}(A^*), \mathcal{N}(B_1^*))$. Then for every $d \in \mathcal{N}(A^*)$ there exists (by (2)) a unique $u \in \mathcal{N}(B_1^*)^\perp$ such that
$$B_1^* u = C_1^* d.$$
Define $X_1^*(Z_0^*)$ on $\mathcal{N}(A^*)$ as $X_1^*(Z_0^*) d := Z_0^* d + u$. Since $\operatorname{asc}(A^*) = m$, the following recursive construction applies.

Assume that $m = 1$. Precisely, decompose $V_2 = \mathcal{N}(A^*) \oplus \mathcal{N}(A^*)^\perp$ and $A^* = 0 \oplus A_1^*$. Then $A_1^*$ is injective from $\mathcal{N}(A^*)^\perp$ to $\mathcal{N}(A^*)^\perp$ and $X_1^*$ can be defined on $\mathcal{N}(A^*)^\perp$ as the restriction of $X_1^*$ from Case 1.

Assume that $m > 1$. Then proceed to decompose $\mathcal{N}(A^*)^\perp = \mathcal{N}(A_1^*) \oplus \mathcal{N}(A_1^*)^\perp$ and define $X_1^*$ on $\mathcal{N}(A_1^*)$ as $X_1^*(Z_1^*) d := Z_1^* d + u$, where $Z_1^* \in \mathcal{B}(\mathcal{N}(A_1^*), \mathcal{N}(B_1^*))$ is an arbitrary operator and
$$B_1^* u = C_1^* d.$$
If $A_1^*$ is injective on $\mathcal{N}(A_1^*)^\perp$, i.e. if $m = 2$, then $X_1^*$ can be defined on $\mathcal{N}(A_1^*)^\perp$ as the restriction of $X_1^*$ from Case 1. If not, then proceed to decompose $\mathcal{N}(A_1^*)^\perp = \mathcal{N}(A_2^*) \oplus \mathcal{N}(A_2^*)^\perp$, and so on. Eventually, one would get to iteration no.
$m$, in a manner that
$$V_2 = \mathcal{N}(A^*) \oplus \mathcal{N}(A_1^*) \oplus \mathcal{N}(A_2^*) \oplus \ldots \oplus \mathcal{N}(A_m^*) \oplus \mathcal{N}(A_m^*)^\perp$$
and $A_m^* : \mathcal{N}(A_m^*)^\perp \to \mathcal{N}(A_m^*)^\perp$ is injective. Then $\sigma(B_1^*) \cap \sigma(A_m^*) = \emptyset$, ergo define $X_1^*$ on $\mathcal{N}(A_m^*)^\perp$ as the restriction of $X_1^*$ from Case 1 to $\mathcal{N}(A_m^*)^\perp$. Further, for $0 \le n \le m$, let $Z_n^* \in \mathcal{B}(\mathcal{N}(A_n^*), \mathcal{N}(B_1^*))$ be arbitrary operators. Then define $X_1^*$ on $\mathcal{N}(A_n^*)$ as
$$X_1^*(Z_n^*) d := Z_n^* d + u,$$
where once again $u \in \mathcal{N}(B_1^*)^\perp$ is the unique element such that $B_1^* u = C_1^* d$. Equivalently, there exists $X_1 \in \mathcal{B}(E_B^\perp, V_2)$ such that
$$A X_1 - X_1 B_1 = C_1,$$
where $X_1 = X_1(Z_0^*, Z_1^*, \ldots, Z_m^*)$. The condition $\mathcal{R}(C_1^*) = \mathcal{N}(B_1^*)^\perp = \mathcal{R}(B_1)$ yields that $X_1$ is well defined on the entire $E_B^\perp$.
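To make the non-uniqueness phenomenon behind Theorem 2.1 concrete, here is a minimal hypothetical example (not taken from the paper) in which $\sigma(A) \cap \sigma(B) \ne \emptyset$ and equation (1) has infinitely many solutions, parametrized by a free scalar:

```python
import numpy as np

# A and B share the eigenvalue 1, so Theorem 1.1 does not apply.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
B = np.array([[1.0]])          # 1x1 block, sigma(B) = {1}
C = np.array([[0.0],
              [3.0]])          # first entry must vanish for solvability

# Here AX - XB = (A - I)X, whose first row is identically zero,
# so X = [t, 3]^T solves equation (1) for every scalar t.
for t in (0.0, 1.0, -5.0):
    X = np.array([[t], [3.0]])
    assert np.allclose(A @ X - X @ B, C)
```

The free parameter $t$ plays the role of the arbitrary operators $Z_n^*$ in the proof: it ranges over the shared eigenspace, while the remaining entries of $X$ are pinned down uniquely.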