Classification and Approximation of Solutions to Sylvester Matrix Equation


Filomat 33:13 (2019), 4261–4280, https://doi.org/10.2298/FIL1913261D
Published by Faculty of Sciences and Mathematics, University of Niš, Serbia
Available at: http://www.pmf.ni.ac.rs/filomat

Bogdan D. Djordjević^a, Nebojša Č. Dinčić^b

^a Department of Mathematics, Mathematical Institute of the Serbian Academy of Sciences and Arts, Belgrade, Republic of Serbia
^b Department of Mathematics, Faculty of Sciences and Mathematics, University of Niš, P.O. Box 224, Niš 18000, Republic of Serbia

Abstract. In this paper we solve the Sylvester matrix equation with infinitely many solutions and conduct their classification. If the conditions for their existence are not met, we provide a way to approximate solutions by the least-squares minimal-norm method.

1. Introduction and preliminaries

For given vector spaces V₁ and V₂, let A ∈ L(V₂), B ∈ L(V₁) and C ∈ L(V₁, V₂) be linear operators. Equations of the form

    AX − XB = C    (1)

with solution X ∈ L(V₁, V₂) are called Sylvester equations, Sylvester–Rosenblum equations or algebraic Riccati equations. Such equations have various applications in vast fields of mathematics, physics, computer science and engineering (see e.g. [5], [19] and references therein). Fundamental results, established by Sylvester and Rosenblum themselves, are nowadays the starting point in solving contemporary problems where these equations occur. These results are:

Theorem 1.1. [23] (Sylvester matrix equation) Let A, B and C be matrices. The equation AX − XB = C has a unique solution X if and only if σ(A) ∩ σ(B) = ∅.

Theorem 1.2. [22] (Rosenblum operator equation) Let A, B and C be bounded linear operators. The equation AX − XB = C has a unique solution X if σ(A) ∩ σ(B) = ∅.

Equations with unique solutions have been extensively studied so far. There are numerous results regarding this case, some of them theoretical (e.g. Lyapunov stability criteria and spectral operators), which can be found in [2], [5] or [9], and some of them computational (matrix sign function, factorization of matrices and operators, various iterative methods etc.). It should be mentioned that the matrix equation (1) with a unique solution X has been solved numerically (among others) in [4], [6], [13], [14], [18] and [21]. The case where A, B and C are unbounded operators but the solution X is unique and bounded has been studied in [16] and [20].

2010 Mathematics Subject Classification. Primary 47A62; 15A18; Secondary 65F15
Keywords. Sylvester equation, matrix spectrum, matrix approximations, eigenvalues and eigenvectors, Jordan normal form, Frobenius norm, least-squares solution, minimal-norm solution
Received: 13 February 2019; Accepted: 13 May 2019
Communicated by Dijana Mosić
Research supported by Ministry of Science, Republic of Serbia, Grant No. 174007
Email addresses: [email protected] (Bogdan D. Djordjević), [email protected] (Nebojša Č. Dinčić)

Solvability of eq. (1) in matrices, discarding uniqueness of the solution, has been studied in [7] and partially in [18]. The main results in [7] are based on the idea that solutions X can be provided as parametric matrices, where the number of parameters at hand depends on the dimensions of the corresponding eigenspaces for A and B. The case where A, B and C are unbounded, with infinitely many unbounded solutions X, has been studied in [8]. That paper provides insight into new solutions (called weak solutions), which are only defined on the corresponding eigenspaces for A and B.

This paper concerns the case when A and B are matrices whose spectra intersect, while C is a rectangular matrix of appropriate dimensions. We obtain sufficient conditions for the existence of infinitely many solutions and provide a way for their classification.
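Theorem 1.1 can be checked numerically by vectorizing equation (1): with column-stacking vec, vec(AX − XB) = (I ⊗ A − Bᵀ ⊗ I) vec(X), so a unique solution exists exactly when this Kronecker matrix is nonsingular, i.e. when σ(A) ∩ σ(B) = ∅. The following sketch is ours, not the authors' code, and the matrices and sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A acts on V2 (p x p), B acts on V1 (q x q); spectra chosen disjoint:
# triangular matrices, so the eigenvalues are the diagonal entries.
p, q = 4, 3
A = np.diag([1.0, 2.0, 3.0, 4.0]) + np.triu(rng.standard_normal((p, p)), 1)
B = np.diag([-1.0, -2.0, -3.0]) + np.triu(rng.standard_normal((q, q)), 1)
C = rng.standard_normal((p, q))

# AX - XB = C  <=>  (I_q kron A - B^T kron I_p) vec(X) = vec(C),
# where vec stacks columns (Fortran order).
K = np.kron(np.eye(q), A) - np.kron(B.T, np.eye(p))
X = np.linalg.solve(K, C.flatten(order="F")).reshape((p, q), order="F")

# Unique solution, since sigma(A) and sigma(B) are disjoint.
assert np.allclose(A @ X - X @ B, C)
```

The eigenvalues of the vectorized operator are exactly the differences λ − μ with λ ∈ σ(A), μ ∈ σ(B), which is why disjoint spectra make it invertible.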
If the conditions for their existence are not met, we give a way of approximating particular solutions. This study relies on the eigenspace analysis conducted in [7] and [8].

We assume V₁ and V₂ to be finite-dimensional Hilbert spaces over the same scalar field ℂ or ℝ, while A ∈ B(V₂), B ∈ B(V₁) and C ∈ B(V₁, V₂) are assumed to be the operators which correspond to the afore-mentioned matrices. Further, N(L) and R(L) denote the null-space and range of a given operator L. Recall that every finite-dimensional subspace W of a Hilbert space V is closed. Consequently, there exists an orthogonal projector from V onto W, which will be denoted by P_W.

2. Existence and classification of solutions

Throughout this paper, we assume that A and B share s common eigenvalues and denote that set by σ:

    {λ₁, …, λ_s} =: σ = σ(A) ∩ σ(B).

For more elegant notation, we introduce E_B^k = N(B − λ_k I) and E_A^k = N(A − λ_k I) whenever λ_k ∈ σ. Different eigenvalues generate mutually orthogonal eigenvectors, so the spaces E_B^k form an orthogonal sum. Put E_B := ⊕_{k=1}^{s} E_B^k. It is a closed subspace of V₁ and there exists E_B^⊥ such that V₁ = E_B ⊕ E_B^⊥. Take B = B_E ⊕ B₁ with respect to that decomposition and denote C₁ = C P_{E_B^⊥}.

Proposition 2.1. Let V be a Hilbert space and L ∈ B(V). If W is an L-invariant subspace of V, then W^⊥ is an L*-invariant subspace of V.

Theorem 2.1. (Existence of solutions) For every k ∈ {1, …, s}, let λ_k, E_A^k and E_B^k be provided as in the previous paragraph. If

    N(C₁)^⊥ = R(B₁)  and  C E_B^k ⊂ R(A − λ_k I),    (2)

then there exist infinitely many solutions X to the equation (1).

Proof. For every 1 ≤ k ≤ s, let E_B^k, E_B, E_B^⊥, B_E and B₁ be provided as in the previous paragraph. Note that N(C₁)^⊥ = R(C₁*), where C₁* ∈ B(V₂, E_B^⊥).

Step 1: solutions on E_B^⊥.

We first conduct the analysis on E_B^⊥. The space E_B is a B P_{E_B}-invariant subspace of V₁ and Proposition 2.1 yields E_B^⊥ to be a (B P_{E_B})*-invariant subspace of V₁, so without loss of generality we can observe B₁* as B₁*: E_B^⊥ → E_B^⊥. Since σ(B_E) = {λ₁, …, λ_s}, it follows that

    σ(B₁*) ⊆ ({0} ∪ σ(B*)) \ {λ̄₁, …, λ̄_s}.

Case 1. Assume that σ(B₁*) ∩ σ(A*) = ∅. Then there exists a unique X₁* ∈ B(V₂, E_B^⊥) such that

    X₁* A* − B₁* X₁* = C₁*,

that is, there exists a unique X₁ ∈ B(E_B^⊥, V₂) such that

    A X₁ − X₁ B₁ = C₁

holds.

Case 2. Assume that σ(A*) ∩ σ(B₁*) ≠ ∅. It follows that σ(A*) ∩ σ(B₁*) = {0}. But then A* cannot be nilpotent. Truly, if σ(A*) = {0} = σ(A), then by assumption σ(B) ∩ σ(A) ≠ ∅, therefore 0 ∈ σ(B), that is, 0 ∈ σ. If u ∈ N(B₁), it follows that B₁u = 0 and u ∈ E_B^⊥, but then Bu = B₁u = 0, so u ∈ N(B) ⊂ E_B, therefore u ∈ E_B ∩ E_B^⊥ = {0}. Hence a contradiction, implying that A* is not nilpotent, but rather has finite ascent, asc(A*) = m ≥ 1, where N((A*)^m) is a proper subspace of V₂.

Now observe B₁*: E_B^⊥ → E_B^⊥, which is not invertible by assumption. Take an arbitrary operator Z₀* ∈ B(N(A*), N(B₁*)). Then for every d ∈ N(A*), there exists (by (2)) a unique u ∈ N(B₁*)^⊥ such that

    B₁* u = C₁* d.

Define X₁*(Z₀*) on N(A*) as X₁*(Z₀*) d := Z₀* d + u. Since asc(A*) = m, the following recursive construction applies.

Assume that m = 1. Precisely, decompose V₂ = N(A*) ⊕ N(A*)^⊥ and A* = 0 ⊕ A₁*. Then A₁* is injective from N(A*)^⊥ to N(A*)^⊥ and X₁* can be defined on N(A*)^⊥ as the restriction of X₁* from Case 1.

Assume that m > 1. Then proceed to decompose N(A*)^⊥ = N(A₁*) ⊕ N(A₁*)^⊥ and define X₁* on N(A₁*) as X₁*(Z₁*) d := Z₁* d + u, where Z₁* ∈ B(N(A₁*), N(B₁*)) is an arbitrary operator and

    B₁* u = C₁* d.

If A₁* is injective on N(A₁*)^⊥, i.e. if m = 2, then X₁ can be defined on N(A₁*)^⊥ as the restriction of X₁ from Case 1. If not, then proceed to decompose N(A₁*)^⊥ = N(A₂*) ⊕ N(A₂*)^⊥ and so on. Eventually, one would get to iteration no. m, in a manner that

    V₂ = N(A*) ⊕ N(A₁*) ⊕ N(A₂*) ⊕ … ⊕ N(A_m*) ⊕ N(A_m*)^⊥

and A_m*: N(A_m*)^⊥ → N(A_m*)^⊥ is injective. Then σ(B₁*) ∩ σ(A_m*) = ∅, ergo define X₁* on N(A_m*)^⊥ as the restriction of X₁* from Case 1 to N(A_m*)^⊥. Further, for 0 ≤ n ≤ m, let Z_n* ∈ B(N(A_n*), N(B₁*)) be arbitrary operators. Then define X₁* on N(A_n*) as

    X₁*(Z_n*) d := Z_n* d + u,

where once again u ∈ N(B₁*)^⊥ is the unique element such that B₁* u = C₁* d. Equivalently, there exists X₁ ∈ B(E_B^⊥, V₂) such that

    A X₁ − X₁ B₁ = C₁,

where X₁ = X₁(Z₀*, Z₁*, …, Z_m*). The condition R(C₁*) = N(B₁*)^⊥ = R(B₁) yields X₁ to be well defined on the entire E_B^⊥.
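When σ(A) ∩ σ(B) ≠ ∅, the vectorized operator of equation (1) is singular, and a least-squares minimal-norm substitute can be computed with the Moore-Penrose pseudoinverse, in the spirit of the approximation method announced in the abstract. This is our illustrative sketch, not the paper's algorithm; the matrices are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# sigma(A) and sigma(B) share the eigenvalue 1, so Theorem 1.1 fails and the
# Kronecker matrix K below is singular: equation (1) has either no solution
# or infinitely many.
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([1.0, 5.0])
C = rng.standard_normal((3, 2))

K = np.kron(np.eye(2), A) - np.kron(B.T, np.eye(3))
# The pseudoinverse gives the least-squares solution of minimal Euclidean
# norm of vec(X), i.e. minimal Frobenius norm of X.
X = (np.linalg.pinv(K) @ C.flatten(order="F")).reshape((3, 2), order="F")

residual = np.linalg.norm(A @ X - X @ B - C)
print("least-squares residual:", residual)
```

Here K is diagonal with entries λ_i − μ_j, one of which is zero, so the residual equals the magnitude of the unreachable component of C.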
Recommended publications
  • Invertibility Condition of the Fisher Information Matrix of a VARMAX Process and the Tensor Sylvester Matrix
    Invertibility Condition of the Fisher Information Matrix of a VARMAX Process and the Tensor Sylvester Matrix. André Klein (University of Amsterdam), Guy Mélard (ECARES, Université libre de Bruxelles). April 2020. ECARES working paper 2020-11, ECARES ULB - CP 114/04, 50, F.D. Roosevelt Ave., B-1050 Brussels, Belgium, www.ecares.org. Abstract. In this paper the invertibility condition of the asymptotic Fisher information matrix of a controlled vector autoregressive moving average stationary process, VARMAX, is displayed in a theorem. It is shown that the Fisher information matrix of a VARMAX process becomes invertible if the VARMAX matrix polynomials have no common eigenvalue. Contrarily to what was mentioned previously in a VARMA framework, the reciprocal property is untrue. We make use of tensor Sylvester matrices since checking equality of the eigenvalues of matrix polynomials is most easily done in that way. A tensor Sylvester matrix is a block Sylvester matrix with blocks obtained by Kronecker products of the polynomial coefficients by an identity matrix, on the left for one polynomial and on the right for the other one. The results are illustrated by numerical computations. MSC Classification: 15A23, 15A24, 60G10, 62B10. Keywords: Tensor Sylvester matrix; Matrix polynomial; Common eigenvalues; Fisher information matrix; Stationary VARMAX process. Declaration of interest: none. (André Klein: University of Amsterdam, The Netherlands, Rothschild Blv. 123 Apt.7, 6527123 Tel-Aviv, Israel, [email protected]. Guy Mélard: Université libre de Bruxelles, Solvay Brussels School of Economics and Management, ECARES, avenue Franklin Roosevelt 50 CP 114/04, B-1050 Brussels, Belgium, [email protected].)
  • Beam Modes of Lasers with Misaligned Complex Optical Elements
    Portland State University, PDXScholar, Dissertations and Theses, 1995. Beam Modes of Lasers with Misaligned Complex Optical Elements. Anthony A. Tovar, Portland State University. Available at: https://pdxscholar.library.pdx.edu/open_access_etds. Part of the Electrical and Computer Engineering Commons. Recommended Citation: Tovar, Anthony A., "Beam Modes of Lasers with Misaligned Complex Optical Elements" (1995). Dissertations and Theses. Paper 1363. https://doi.org/10.15760/etd.1362. ABSTRACT. An abstract of the dissertation of Anthony Alan Tovar for the Doctor of Philosophy in Electrical and Computer Engineering presented April 21, 1995. Title: Beam Modes of Lasers with Misaligned Complex Optical Elements. A recurring theme in my research is that mathematical matrix methods may be used in a wide variety of physics and engineering applications. Transfer matrix techniques are conceptually and mathematically simple, and they encourage a systems approach.
  • Generalized Sylvester Theorems for Periodic Applications in Matrix Optics
    Portland State University, PDXScholar, Electrical and Computer Engineering Faculty Publications and Presentations, 3-1-1995. Generalized Sylvester theorems for periodic applications in matrix optics. Lee W. Casperson, Portland State University, and Anthony A. Tovar. Citation Details: Anthony A. Tovar and L. W. Casperson, "Generalized Sylvester theorems for periodic applications in matrix optics," J. Opt. Soc. Am. A 12, 578-590 (1995). Department of Electrical Engineering, Portland State University, Portland, Oregon 97207-0751. Received June 9, 1994; revised manuscript received September 19, 1994; accepted September 20, 1994. Sylvester's theorem is often applied to problems involving light propagation through periodic optical systems represented by unimodular 2 × 2 transfer matrices. We extend this theorem to apply to broader classes of optics-related matrices. These matrices may be 2 × 2 or take on an important augmented 3 × 3 form. The results, which are summarized in tabular form, are useful for the analysis and the synthesis of a variety of optical systems, such as those that contain periodic distributed-feedback lasers, lossy birefringent filters, periodic pulse compressors, and misaligned lenses and mirrors.
  • A Note on the Inversion of Sylvester Matrices in Control Systems
    Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2011, Article ID 609863, 10 pages, doi:10.1155/2011/609863. Research Article: A Note on the Inversion of Sylvester Matrices in Control Systems. Hongkui Li and Ranran Li, College of Science, Shandong University of Technology, Shandong 255049, China. Correspondence should be addressed to Hongkui Li, [email protected]. Received 21 November 2010; Revised 28 February 2011; Accepted 31 March 2011. Academic Editor: Gradimir V. Milovanović. Copyright 2011 H. Li and R. Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We give a sufficient condition for the solvability of two standard equations of the Sylvester matrix by using the displacement structure of the Sylvester matrix, and, according to the sufficient condition, we derive a new fast algorithm for the inversion of a Sylvester matrix, which can be denoted as a sum of products of two triangular Toeplitz matrices. The stability of the inversion formula for a Sylvester matrix is also considered. The Sylvester matrix is numerically forward stable if it is nonsingular and well conditioned. 1. Introduction. Let R[x] be the space of polynomials over the real numbers. Given univariate polynomials f(x), g(x) ∈ R[x], where

        f(x) = a₁xⁿ + a₂xⁿ⁻¹ + ⋯ + a_{n+1}, a₁ ≠ 0,    g(x) = b₁x^m + b₂x^{m−1} + ⋯ + b_{m+1}, b₁ ≠ 0.    (1.1)

    Let S denote the Sylvester matrix of f(x) and g(x): the (m + n) × (m + n) matrix whose first m rows carry the coefficients a₁, a₂, …, a_{n+1} of f, each row shifted one column to the right of the row above it, and whose last n rows carry the coefficients b₁, b₂, …, b_{m+1} of g, shifted likewise.
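The Sylvester matrix of two polynomials, as described in the abstract above, is straightforward to build. The sketch below is ours (not the article's algorithm); it constructs S and checks the classical resultant property that S is singular exactly when f and g share a root:

```python
import numpy as np

def sylvester(f, g):
    """Sylvester matrix of f (degree n) and g (degree m); coefficients are
    given highest degree first, f = [a1, a2, ..., a_{n+1}]."""
    n, m = len(f) - 1, len(g) - 1
    S = np.zeros((n + m, n + m))
    for i in range(m):                 # m rows of shifted f-coefficients
        S[i, i:i + n + 1] = f
    for i in range(n):                 # n rows of shifted g-coefficients
        S[m + i, i:i + m + 1] = g
    return S

# f = (x-1)(x-2) = x^2 - 3x + 2 and g = (x-1)(x+4) = x^2 + 3x - 4
# share the root x = 1, so det(S) (the resultant) vanishes.
f = [1.0, -3.0, 2.0]
g = [1.0, 3.0, -4.0]
S = sylvester(f, g)
print(np.linalg.det(S))   # ~0
```

Replacing g by a polynomial coprime with f, e.g. x² + 1, gives a nonzero determinant.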
  • A New Source of Structured Singular Value Decomposition Problems
    Electronic Transactions on Numerical Analysis (ETNA), Volume 18, pp. 188-197, 2004. Kent State University. [email protected]. ISSN 1068-9613. A NEW SOURCE OF STRUCTURED SINGULAR VALUE DECOMPOSITION PROBLEMS. Ana Marco and José-Javier Martínez. Abstract. The computation of the Singular Value Decomposition (SVD) of structured matrices has become an important line of research in numerical linear algebra. In this work the problem of inversion in the context of the computation of curve intersections is considered. Although this problem has usually been dealt with in the field of exact rational computations and in that case it can be solved by using Gaussian elimination, when one has to work in finite precision arithmetic the problem leads to the computation of the SVD of a Sylvester matrix, a different type of structured matrix widely used in computer algebra. In addition only a small part of the SVD is needed, which shows the interest of having special algorithms for this situation. Key words. curves, intersection, singular value decomposition, structured matrices. AMS subject classifications. 14Q05, 65D17, 65F15. 1. Introduction and basic results. The singular value decomposition (SVD) is one of the most valuable tools in numerical linear algebra. Nevertheless, in spite of its advantageous features it has not usually been applied in solving curve intersection problems. Our aim in this paper is to consider the problem of inversion in the context of the computation of the points of intersection of two plane curves, and to show how this problem can be solved by computing the SVD of a Sylvester matrix, a type of structured matrix widely used in computer algebra.
  • Evaluation of Spectrum of 2-Periodic Tridiagonal-Sylvester Matrix
    Evaluation of spectrum of 2-periodic tridiagonal-Sylvester matrix. Emrah Kılıç and Talha Arıkan. Abstract. The Sylvester matrix was firstly defined by J.J. Sylvester. Some authors have studied the relationships between certain orthogonal polynomials and the determinant of the Sylvester matrix. Chu studied a generalization of the Sylvester matrix. In this paper, we introduce its 2-periodic generalization. Then we compute its spectrum by left eigenvectors with a similarity trick. 1. Introduction. There has been increasing interest in tridiagonal matrices in many different theoretical fields, especially in applicative fields such as numerical analysis, orthogonal polynomials, engineering, telecommunication system analysis, system identification, signal processing (e.g., speech decoding, deconvolution), special functions, partial differential equations and naturally linear algebra (see [2, 4, 5, 6, 15]). Some authors consider a general tridiagonal matrix of finite order and then describe its LU factorizations, and determine the determinant and inverse of a tridiagonal matrix under certain conditions (see [3, 7, 10, 11]). The Sylvester-type tridiagonal matrix M_n(x) of order (n + 1) is defined as the tridiagonal matrix with constant main diagonal x, superdiagonal 1, 2, …, n and subdiagonal n, n − 1, …, 1:

        M_n(x) =
        ⎡ x    1                       ⎤
        ⎢ n    x    2                  ⎥
        ⎢      n−1  x    3             ⎥
        ⎢           ⋱    ⋱    ⋱        ⎥
        ⎢                2    x    n   ⎥
        ⎣                     1    x   ⎦

    1991 Mathematics Subject Classification. 15A36, 15A18, 15A15. Key words and phrases. Sylvester matrix, spectrum, determinant.
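The matrix M_n(x) above is the classical Sylvester-Kac matrix shifted by x, and its spectrum is known in closed form: the eigenvalues are x + n − 2k for k = 0, …, n. The following numerical check is our own sketch and is independent of the article's 2-periodic generalization:

```python
import numpy as np

def sylvester_kac(n, x):
    """(n+1) x (n+1) tridiagonal matrix with diagonal x,
    superdiagonal 1, 2, ..., n and subdiagonal n, n-1, ..., 1."""
    M = x * np.eye(n + 1)
    for i in range(n):
        M[i, i + 1] = i + 1        # superdiagonal entries 1..n
        M[i + 1, i] = n - i        # subdiagonal entries n..1
    return M

n, x = 5, 0.7
M = sylvester_kac(n, x)
eig = np.sort(np.linalg.eigvals(M).real)
expected = np.sort([x + n - 2 * k for k in range(n + 1)])

# Classical Sylvester-Kac spectrum: x - n, x - n + 2, ..., x + n.
assert np.allclose(eig, expected)
```

The eigenvalues are real because M_n(x) is similar to a symmetric tridiagonal matrix (the sub- and superdiagonal products are positive).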
  • Uniqueness of Solution of a Generalized ⋆-Sylvester Matrix Equation
    Uniqueness of solution of a generalized ⋆-Sylvester matrix equation. Fernando De Terán (Departamento de Matemáticas, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911 Leganés, Spain; [email protected]) and Bruno Iannazzo (Dipartimento di Matematica e Informatica, Università di Perugia, Via Vanvitelli 1, 06123 Perugia, Italy; [email protected]). Abstract. We present necessary and sufficient conditions for the existence of a unique solution of the generalized ⋆-Sylvester matrix equation AXB + CX⋆D = E, where A, B, C, D, E are square matrices of the same size with real or complex entries, and where ⋆ stands for either the transpose or the conjugate transpose. This generalizes several previous uniqueness results for specific equations like the ⋆-Sylvester or the ⋆-Stein equations. Keywords: Linear matrix equation, matrix pencil, Sylvester equation, ⋆-Sylvester equation, ⋆-Stein equation, T-Sylvester equation, eigenvalues. AMS classification: 15A22, 15A24, 65F15. 1. Introduction. Given A, B, C, D, E ∈ F^{n×n}, with F being ℂ or ℝ, we consider the equation

        AXB + CX⋆D = E    (1)

    where X ∈ F^{n×n} is an unknown matrix, and where, for a given matrix M ∈ F^{n×n}, M⋆ stands for either the transpose M^T or the conjugate transpose M*. (This work was partially supported by the Ministerio de Economía y Competitividad of Spain through grant MTM-2012-32542 (F. De Terán) and by the INdAM through a GNCS Project 2015 (B. Iannazzo). It was partially developed while F. De Terán was visiting the Università di Perugia, funded by INdAM. Preprint submitted to Linear Algebra and its Applications, October 20, 2015.) In the last few years, this equation has been considered by several authors in the context of linear Sylvester-like equations arising in applications (see, for instance, [3, 14, 17, 18]).
  • Matrix Algebraic Properties of the Fisher Information Matrix of Stationary Processes
    Entropy 2014, 16, 2023-2055; doi:10.3390/e16042023. OPEN ACCESS. entropy, ISSN 1099-4300, www.mdpi.com/journal/entropy. Article: Matrix Algebraic Properties of the Fisher Information Matrix of Stationary Processes. André Klein, Rothschild Blv. 123 Apt.7, 65271 Tel Aviv, Israel; E-Mail: [email protected] or [email protected]; Tel.: 972.5.25594723. Received: 12 February 2014; in revised form: 11 March 2014 / Accepted: 24 March 2014 / Published: 8 April 2014. Abstract: In this survey paper, a summary of results which are to be found in a series of papers is presented. The subject of interest is focused on matrix algebraic properties of the Fisher information matrix (FIM) of stationary processes. The FIM is an ingredient of the Cramér-Rao inequality, and belongs to the basics of asymptotic estimation theory in mathematical statistics. The FIM is interconnected with the Sylvester, Bezout and tensor Sylvester matrices. Through these interconnections it is shown that the FIM of scalar and multiple stationary processes fulfill the resultant matrix property. A statistical distance measure involving entries of the FIM is presented. In quantum information, a different statistical distance measure is set forth. It is related to the Fisher information but where the information about one parameter in a particular measurement procedure is considered. The FIM of scalar stationary processes is also interconnected to the solutions of appropriate Stein equations, and conditions for the FIM to verify certain Stein equations are formulated. The presence of Vandermonde matrices is also emphasized. Keywords: Bezout matrix; Sylvester matrix; tensor Sylvester matrix; Stein equation; Vandermonde matrix; stationary process; matrix resultant; Fisher information matrix. MSC Classification: 15A23, 15A24, 15B99, 60G10, 62B10, 62M20.
  • Polynomial Bezoutian Matrix with Respect to a General Basis
    Linear Algebra and its Applications 331 (2001) 165–179, www.elsevier.com/locate/laa. Polynomial Bezoutian matrix with respect to a general basis. Z.H. Yang, Department of Mathematics, East Campus, China Agricultural University, P.O. Box 71, 100083 Beijing, People's Republic of China. Received 7 September 2000; accepted 25 January 2001. Submitted by R.A. Brualdi. Abstract. We introduce a class of so-called polynomial Bezoutian matrices. The operator representation relative to a pair of dual bases and the generalized Barnett factorization formula for this kind of matrix are derived. An intertwining relation and generalized Bezoutian reduction via polynomial Vandermonde matrix are presented. © 2001 Elsevier Science Inc. All rights reserved. AMS classification: 15A57. Keywords: Polynomial Bezoutian; Vandermonde matrix; Barnett-type formula. Partially supported by National Science Foundation of China. 1. Introduction. Given a pair of polynomials p(z) and q(z) with degree p(z) = n and degree q(z) ≤ n, the (classical) Bezoutian of p(z) and q(z) is the bilinear form given by

        π(x, y) = (p(x)q(y) − p(y)q(x)) / (x − y) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} b_{ij} x^i y^j,    (1.1)

    and the Bezoutian matrix of p and q is defined as the n × n symmetric matrix B(p, q) = [b_{ij}]_{i,j=0}^{n−1}.
  • On the Classic Solution of Fuzzy Linear Matrix Equations
    On the classic solution of fuzzy linear matrix equations. Zhijie Jin, Jieyong Zhou, Qixiang He. May 15, 2021. Abstract. Since most time-dependent models may be represented as linear or non-linear dynamical systems, fuzzy linear matrix equations are quite general in signal processing, control and system theory. A uniform method of finding the classic solution of fuzzy linear matrix equations is presented in this paper. Conditions for solution existence are also studied here. Under this framework, a numerical method to solve fuzzy generalized Lyapunov matrix equations is developed. In order to show the validity and efficiency, some selected examples and numerical tests are presented. Keywords: the general form of fuzzy linear matrix equations; classic solution; numerical solution; fuzzy generalized Lyapunov matrix equations. 1. Introduction. Let A_i and B_i, i = 1, 2, ⋯, l, be n × s and t × m real or complex matrices, respectively. The general form of linear matrix equations is

        A₁XB₁ + A₂XB₂ + ⋯ + A_lXB_l = C,

    where C is an n × m matrix. This general form covers many kinds of linear matrix equations such as Lyapunov matrix equations, generalized Lyapunov matrix equations, Sylvester matrix equations, generalized Sylvester matrix equations, etc. Those matrix equations arise from a lot of applications including control. (Affiliations: Zhijie Jin, School of Information and Management, Shanghai University of Finance and Economics, Shanghai, 200433, P.R. China, [email protected]; Jieyong Zhou, Mathematics School, Shanghai Key Laboratory of Financial Information Technology, The Laboratory of Computational Mathematics, Shanghai University of Finance and Economics, Shanghai, 200433, P.R. China, [email protected], corresponding author; Qixiang He, Shanghai University of Finance and Economics Zhejiang College, Zhejiang 321013, P.R.)
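The crisp (non-fuzzy) backbone of the general form above is solved by vectorization: vec(A_i X B_i) = (B_iᵀ ⊗ A_i) vec(X), so summing the Kronecker terms gives one ordinary linear system. A sketch under made-up dimensions, ours rather than the paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)

# General linear matrix equation  A1 X B1 + ... + Al X Bl = C,
# with Ai (n x s), X (s x t), Bi (t x m), C (n x m).
n, s, t, m, l = 3, 3, 2, 2, 2
A = [rng.standard_normal((n, s)) for _ in range(l)]
B = [rng.standard_normal((t, m)) for _ in range(l)]
X_true = rng.standard_normal((s, t))
C = sum(Ai @ X_true @ Bi for Ai, Bi in zip(A, B))

# vec(Ai X Bi) = (Bi^T kron Ai) vec(X), with column-stacking vec.
K = sum(np.kron(Bi.T, Ai) for Ai, Bi in zip(A, B))
X = np.linalg.lstsq(K, C.flatten(order="F"), rcond=None)[0].reshape((s, t), order="F")

# The system is consistent by construction, so the residual vanishes.
assert np.allclose(sum(Ai @ X @ Bi for Ai, Bi in zip(A, B)), C)
```

This brute-force route costs O((st)³) and is only viable for small sizes; the point of specialized methods is to avoid forming K.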
  • How and Why to Solve the Operator Equation AX − XB = Y
    HOW AND WHY TO SOLVE THE OPERATOR EQUATION AX − XB = Y. Rajendra Bhatia and Peter Rosenthal. 1. Introduction. The entities A, B, X, Y in the title are operators, by which we mean either linear transformations on a finite-dimensional vector space (matrices) or bounded (= continuous) linear transformations on a Banach space. (All scalars will be complex numbers.) The definitions and statements below are valid in both the finite-dimensional and the infinite-dimensional cases, unless the contrary is stated. The simplest kind of equation involving operators is a linear equation of the form AX = Y or of the form XB = Y. The condition that is both necessary and sufficient that such an equation have a unique solution is that A (or B) should be invertible (bijective). Let σ(A) denote the spectrum of A; in the finite-dimensional case this is just the set of eigenvalues of A. In any case, invertibility of A is the statement 0 ∉ σ(A). The equation ax = y in numbers can always be solved when a ≠ 0. The analogous condition for the operator equation AX = Y is 0 ∉ σ(A). This analogy provides guidance in formulating potential theorems and guessing solutions of AX − XB = Y. When does AX − XB = Y have a unique solution X for each given Y? In the scalar case of ax − xb = y, the answer is obvious: we must have a − b ≠ 0. The answer is almost as obvious in another case: if the matrices A and B are both diagonal, with diagonal entries {λ₁, …, λ_n} and {μ₁, …, μ_n}, respectively, then the above equation has the solution x_ij = y_ij/(λ_i − μ_j) provided λ_i − μ_j ≠ 0 for all i, j.
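The diagonal case quoted above, x_ij = y_ij/(λ_i − μ_j), is a two-line computation. This sketch (the example values are ours, not from the article) verifies it against the defining equation AX − XB = Y:

```python
import numpy as np

lam = np.array([1.0, 2.0, 3.0])    # diagonal entries of A
mu = np.array([-1.0, 0.5])         # diagonal entries of B; lam_i - mu_j != 0 for all i, j
A, B = np.diag(lam), np.diag(mu)

Y = np.arange(6.0).reshape(3, 2) + 1.0
X = Y / (lam[:, None] - mu[None, :])   # x_ij = y_ij / (lam_i - mu_j), via broadcasting

# (AX - XB)_ij = (lam_i - mu_j) * x_ij = y_ij
assert np.allclose(A @ X - X @ B, Y)
```

Note the equation here has rectangular Y, which the formula handles with no change: only the differences λ_i − μ_j matter.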
  • Computational Methods for Linear Matrix Equations
    COMPUTATIONAL METHODS FOR LINEAR MATRIX EQUATIONS. V. SIMONCINI. Abstract. Given the square matrices A, B, D, E and the matrix C of conforming dimensions, we consider the linear matrix equation AXE + DXB = C in the unknown matrix X. Our aim is to provide an overview of the major algorithmic developments that have taken place in the past few decades in the numerical solution of this and of related problems, which are becoming a reliable numerical tool in the formulation and solution of advanced mathematical models in engineering and scientific computing. Key words. Sylvester equation, Lyapunov equation, Stein equation, multiple right-hand side, generalized matrix equations, Schur decomposition, large scale computation. AMS subject classifications. 65F10, 65F30, 15A06. Contents: 1. Introduction; 2. Notation and preliminary definitions; 3. Applications; 4. Continuous-time Sylvester equation (4.1 Stability of the Sylvester equation; 4.2 Small scale computation; 4.3 Computation with large A and small B; 4.4 Computation with large A and large B: 4.4.1 Projection methods, 4.4.2 ADI iteration, 4.4.3 Data sparse and other methods); 5. Continuous-time Lyapunov equation (5.1 Small scale computation; 5.2 Large scale computation: 5.2.1 Projection methods, 5.2.2 ADI methods, 5.2.3 Spectral, sparse format and other methods); 6. The discrete Sylvester and Stein equations; 7. Generalized linear equations (7.1 The generalized Sylvester and Lyapunov equations; 7.2 Bilinear, constrained, and other linear equations; 7.3 Sylvester-like and Lyapunov-like equations); 8. Software and high performance computation; 9. Concluding remarks and future outlook.