A Toeplitz algorithm for polynomial J-spectral factorization

Juan Carlos Zúñiga (a), Didier Henrion (a,b)

(a) LAAS-CNRS, 7 Avenue du Colonel Roche, 31077 Toulouse, France
(b) Department of Control Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Technická 2, 16627 Prague, Czech Republic

Footnote 1: A preliminary version of this paper was presented at the IFAC Symposium on System, Structure and Control, Oaxaca, Mexico, December 8-10, 2004.
Footnote 2: Corresponding author. Tel: (+33) 561 33 63 08. Fax: (+33) 561 33 69 69. e-mail: [email protected]

Abstract

A block Toeplitz algorithm is proposed to perform the J-spectral factorization of a para-Hermitian polynomial matrix. The input matrix can be singular or indefinite, and it can have zeros along the imaginary axis. The key assumption is that the finite zeros of the input polynomial matrix are given as input data. The algorithm is based on numerically reliable operations only, namely computation of the null-spaces of related block Toeplitz matrices, polynomial matrix factor extraction and the solution of linear polynomial matrix equations.

Key words: Polynomial matrices, spectral factorization, numerical algorithms, computer-aided control system design.

1 Introduction

The J-spectral factorization of para-Hermitian polynomial matrices has important applications in control and systems theory, as described first in [Wiener, 1949]. See e.g. [Kwakernaak and Šebek, 1994] and [Grimble and Kučera, 1996] for comprehensive descriptions of applications in multivariable Wiener filtering, LQG control and H∞ optimization.

1.1 Polynomial J-spectral factorization

In this paper we are interested in solving the following J-spectral factorization (JSF) problem for polynomial matrices.

Let A(s) = A_0 + A_1 s + ... + A_d s^d be an n-by-n polynomial matrix with real coefficients and degree d in the complex indeterminate s. Assume that A(s) is para-Hermitian, i.e. A(s) = A^T(-s), where T denotes the transpose. We want to find an n-by-n polynomial matrix P(s) and a constant matrix J such that

A(s) = P^T(-s) J P(s),   J = diag{I_{n+}, -I_{n-}, 0_{n0}}   (1)

where P(s) is non-singular and its spectrum (footnote 3) lies within the left half-plane. J = J^T is a signature matrix with +1, -1 and 0 along the diagonal, and such that n+ + n- + n0 = n.

Footnote 3: The spectrum is the set of all the zeros (eigenvalues) of a polynomial matrix [Gohberg et al., 1982b].
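As a quick illustration of definition (1), the following minimal sketch (not from the paper; the matrices, and the helpers is_para_hermitian and poly_eval, are hypothetical, and NumPy is assumed) checks the para-Hermitian coefficient condition A_k = (-1)^k A_k^T and verifies a toy factorization A(s) = P^T(-s) J P(s) numerically at a few sample points.

```python
import numpy as np

# A polynomial matrix A(s) = A0 + A1*s + ... + Ad*s^d is stored as the list
# of its real coefficient matrices [A0, A1, ..., Ad].

def is_para_hermitian(coeffs, tol=1e-10):
    # A(s) = A^T(-s) holds if and only if Ak = (-1)^k * Ak^T for every k.
    return all(np.allclose(Ak, ((-1) ** k) * Ak.T, atol=tol)
               for k, Ak in enumerate(coeffs))

def poly_eval(coeffs, s):
    # Evaluate the polynomial matrix at a complex point s.
    return sum(Ak * s ** k for k, Ak in enumerate(coeffs))

# Toy example: A(s) = diag{1 - s^2, -1} is para-Hermitian and indefinite.
# A spectral factor whose only zero is at s = -1 (left half-plane) is
# P(s) = diag{1 + s, 1} with signature matrix J = diag{1, -1}, i.e. n+ = n- = 1.
A = [np.diag([1.0, -1.0]), np.zeros((2, 2)), np.diag([-1.0, 0.0])]
P = [np.eye(2), np.diag([1.0, 0.0])]
J = np.diag([1.0, -1.0])

assert is_para_hermitian(A)

# Check A(s) = P^T(-s) J P(s) at a few random complex points.
for s in np.random.randn(5) + 1j * np.random.randn(5):
    assert np.allclose(poly_eval(A, s), poly_eval(P, -s).T @ J @ poly_eval(P, s))
```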
1.1.1 Factorization of singular matrices

In its most general form, the JSF problem applies to singular matrices. In this paragraph we show that the problem of factorizing a singular matrix can be converted into the problem of factorizing a non-singular matrix. Alternatively, we can also seek a non-square spectral factor P(s).

If A(s) has rank r < n, then n+ + n- = r in the signature matrix J. The null-space of A(s) is symmetric, namely if the columns of Z(s) form a basis of the right null-space of A(s), i.e. A(s)Z(s) = 0, then Z^T(-s)A(s) = 0. As pointed out in [Šebek, 1990], the factorization of A(s) can therefore start with the extraction of its null-space. There always exists a unimodular matrix

U^{-1}(s) = [B(s)  Z(s)]

such that

U^{-T}(-s) A(s) U^{-1}(s) = diag{Ā(s), 0}   (2)

where the r-by-r matrix Ā(s) is non-singular. Now, if Ā(s) is factorized as Ā(s) = P̄^T(-s) J̄ P̄(s), then the JSF of A(s) is given by

J = diag{J̄, 0},   P(s) = diag{P̄(s), I_{n-r}} U(s).

Equivalently, if we accept P(s) to be non-square, then we can eliminate the zero columns and rows in factorization (1) and obtain

A(s) = P^T(-s) J P(s),   J = diag{I_{n+}, -I_{n-}}   (3)

where P(s) has size r-by-n. The different properties of factorizations (1) and (3) are discussed later in section 3.4.

What is important to notice here is that any singular factorization can be reduced to a non-singular one. Therefore, the theory of non-singular factorizations extends naturally to the singular case.

1.2 Existence conditions

Suppose that the full-rank n-by-n matrix A(s) admits a factorization A(s) = P^T(-s) J P(s) where the constant matrix J = J^T has dimension m. Let σ[A(s)] be the spectrum of A(s); then

m ≥ m_0 = max_{z ∈ iR∖σ[A(s)]} V_+[A(z)] + max_{z ∈ iR∖σ[A(s)]} V_-[A(z)]

where V_+ and V_- denote, respectively, the numbers of positive and negative eigenvalues of A(z) [Ran and Rodman, 1994].

If A(s) does not have constant signature on the imaginary axis, i.e. if the difference V_+[A(z)] - V_-[A(z)] is not constant for all z ∈ iR∖σ[A(s)], then n < m_0 ≤ 2n; see the proof of Theorem 3.1 in [Ran and Rodman, 1994].

On the other hand, if A(s) has constant signature, then m = m_0 = n, and the matrix J can be chosen as the unique square matrix given in (3). In [Ran and Rodman, 1994] it is shown that any para-Hermitian matrix A(s) with constant signature admits a JSF. Constant signature of A(s) is therefore the basic existence condition for the JSF that we assume in this work.

In [Ran and Zizler, 1997] the authors give necessary and sufficient conditions for a self-adjoint polynomial matrix to have constant signature, see also [Gohberg et al., 1982a]. These results can be extended to para-Hermitian polynomial matrices, but this is out of the scope of this paper. Other works giving necessary and sufficient conditions for the existence of the JSF are [Meinsma, 1995] and [Ran, 2003]; there, the conditions are related to the existence of a stabilizing solution of an associated Riccati equation. A deeper discussion of all the related results in the literature would be very extensive and is also outside our objectives.

1.2.1 Canonical factorizations

It is easy to see that if A(s) is para-Hermitian, then the degrees δ_i, i = 1, 2, ..., n, of the n diagonal entries of A(s) are even numbers. We define the diagonal leading matrix of A(s) as

A_∞ = lim_{|s|→∞} D^{-1}(-s) A(s) D^{-1}(s)   (4)

where D(s) = diag{s^{δ_1/2}, ..., s^{δ_n/2}}. We say that A(s) is diagonally reduced if A_∞ exists and is non-singular. The JSF (3) can be defined for both diagonally reduced and non-diagonally reduced matrices. Following [Kwakernaak and Šebek, 1994], we say that the JSF of a diagonally reduced matrix A(s) is canonical if P(s) is column reduced (footnote 4) with column degrees equal to half the diagonal degrees of A(s); see also [Gohberg and Kaashoek, 1986].

Footnote 4: Let q be the degree of the determinant of the polynomial matrix P(s) and k_i the maximum degree of the entries of the i-th column of P(s). Then P(s) is column reduced if q = k_1 + ... + k_n.

In this paper we extend the concept of canonical factorization to matrices with no assumption on their diagonal reducedness (see section 3.3), and even to singular matrices (see section 3.4).
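The diagonal leading matrix (4) can be read off directly from the coefficient matrices. The sketch below (not part of the paper's algorithm; the helper diagonal_leading_matrix and the example matrix are hypothetical, NumPy assumed) computes A_∞ and can be used to test diagonal reducedness.

```python
import numpy as np

def diagonal_leading_matrix(coeffs):
    # A_inf = lim_{|s|->inf} D^{-1}(-s) A(s) D^{-1}(s), with
    # D(s) = diag{s^(d1/2), ..., s^(dn/2)} and d_i the degree of the i-th
    # diagonal entry of A(s).  Returns None when the limit does not exist.
    A = np.array(coeffs)                       # shape (d+1, n, n)
    n = A.shape[1]
    half = np.zeros(n, dtype=int)
    for i in range(n):
        nz = [k for k in range(A.shape[0]) if A[k, i, i] != 0]
        half[i] = (max(nz) // 2) if nz else 0  # d_i / 2 (d_i is even for para-Hermitian A)
    Ainf = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            target = half[i] + half[j]         # power of s cancelled by D^{-1}(-s) and D^{-1}(s)
            entry = A[:, i, j]
            if np.any(entry[target + 1:] != 0):
                return None                    # (i, j) entry grows faster than s^target: no finite limit
            coeff = entry[target] if target < len(entry) else 0.0
            Ainf[i, j] = (-1) ** half[i] * coeff
    return Ainf

# Hypothetical example: A(s) = [[2 - s^2, s], [-s, 4]] is para-Hermitian with
# diagonal degrees 2 and 0, and A_inf = [[1, -1], [-1, 4]] is non-singular,
# so A(s) is diagonally reduced.
A = [np.array([[2.0, 0.0], [0.0, 4.0]]),
     np.array([[0.0, 1.0], [-1.0, 0.0]]),
     np.array([[-1.0, 0.0], [0.0, 0.0]])]
Ainf = diagonal_leading_matrix(A)
print(Ainf, np.linalg.matrix_rank(Ainf) == 2)
```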
1.3 Current algorithms and contributions

The interest in numerical algorithms for polynomial JSF has increased in recent years, and several different algorithms are now available. As for many problems related to polynomial matrices, these algorithms can be classified into two major approaches: the state-space approach and the polynomial approach.

The state-space methods usually relate the JSF problem to the stabilizing solution of an algebraic Riccati equation. One of the first contributions was [Tuel, 1968], where an algorithm for the standard sign-definite case (J = I) is presented. The evolution of this kind of methods is summarized in [Stefanovski, 2003] and the references therein. That paper, based on the results of [Trentelman and Rapisarda, 1999], describes an algorithm that can handle indefinite para-Hermitian matrices (with constant signature) with zeros along the imaginary axis. The case of singular matrices is tackled in [Stefanovski, 2004], but only for the discrete case.

The popularity of the state-space methods is related to their good numerical properties: several numerically reliable algorithms exist to solve the Riccati equation, see for example [Bittanti et al. (Eds.), 1991]. On the negative side, the reformulation of the polynomial problem in state-space terms sometimes requires elaborate preliminary steps [Kwakernaak and Šebek, 1994], and some concepts and particularities of the polynomial problem are difficult to recover from the new representation. On the other hand, the advantage of the polynomial methods is their conceptual simplicity and straightforward application to the polynomial matrix, resulting, in general, in faster algorithms. On the negative side, the polynomial methods often rely on elementary operations over the ring of polynomials, and it is well known that these operations are numerically unstable.

2.1 Eigenstructure of polynomial matrices

The eigenstructure of a polynomial matrix A(s) consists of the finite structure, the infinite structure and the null-space structure.

2.1.1 Finite structure

A finite zero of a non-singular polynomial matrix A(s) is a complex number z such that there exists a non-zero complex vector v satisfying A(z)v = 0. The vector v is called a characteristic vector, or eigenvector, associated with z.

If z is a finite zero of A(s) with algebraic multiplicity
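As a small illustration of the finite-zero definition given above, the finite zeros of A(s) can be computed as the finite generalized eigenvalues of a companion linearization. This is one standard approach, not the block Toeplitz method of this paper; the helper finite_zeros below is hypothetical and NumPy/SciPy are assumed.

```python
import numpy as np
from scipy.linalg import eig

def finite_zeros(coeffs):
    # Finite zeros of A(s) = A0 + A1*s + ... + Ad*s^d (d >= 1), computed as the
    # finite generalized eigenvalues of the first companion pencil (F, E),
    # whose determinant equals det A(s) up to a constant.
    d = len(coeffs) - 1
    n = coeffs[0].shape[0]
    E = np.eye(n * d)
    E[-n:, -n:] = coeffs[d]                        # E = diag{I, ..., I, Ad}
    F = np.zeros((n * d, n * d))
    F[:n * (d - 1), n:] = np.eye(n * (d - 1))      # identity blocks above the last block row
    for k in range(d):
        F[-n:, k * n:(k + 1) * n] = -coeffs[k]     # last block row: [-A0 ... -A_{d-1}]
    w = eig(F, E, right=False)
    return w[np.isfinite(w)]                        # discard infinite eigenvalues

# Same toy matrix as in the earlier sketch: A(s) = diag{1 - s^2, -1}
# has finite zeros at +1 and -1.
A = [np.diag([1.0, -1.0]), np.zeros((2, 2)), np.diag([-1.0, 0.0])]
print(np.sort_complex(finite_zeros(A)))             # approximately [-1, 1]
```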