
Improved Balancing for General and Structured Eigenvalue Problems

Vasile Sima
National Institute for Research & Development in Informatics
011455 Bucharest, Romania
Email: [email protected]

Peter Benner
Max Planck Institute for Dynamics of Complex Technical Systems
39106 Magdeburg, Germany
Email: [email protected]

Abstract—Badly-scaled matrices or matrix pencils can reduce the reliability and accuracy of computed results for various numerical problems, including the computation of spectra and bases of invariant or deflating subspaces, which are used in many basic procedures for optimal and robust control, model reduction, spectral factorization, and other domains. Standard balancing techniques can often improve the results, but sometimes the solution of the scaled problem is much worse than that for the original problem. This paper presents an improved balancing technique for general or structured matrices and matrix pencils, and illustrates its good performance in solving eigenvalue problems and algebraic Riccati equations for large sets of examples from well-known benchmark collections.

Index Terms—Algebraic Riccati equation, Hamiltonian matrix, linear quadratic optimization, numerical algorithms, structure-preservation.

I. INTRODUCTION

The computation of spectra and bases of invariant or deflating subspaces of matrices or matrix pencils, respectively, is key for many numerical procedures in control engineering and other domains. Sometimes, the given matrices have large norms and elements of highly different magnitude, which can produce numerical difficulties for eigensolvers, with undesired consequences for the reliability and accuracy of the results, see, e.g., [1]. Balancing procedures can be used to improve the numerical behavior. Parlett and Reinsch [2] and Ward [3] proposed balancing algorithms for general matrices and matrix pencils, respectively, which have been included in state-of-the-art software packages such as LAPACK [4]. These techniques will be referred to as standard balancing. The matrix (or matrix pair) is preprocessed by similarity (or equivalence) transformations in two optional steps: the first step uses permutations to find isolated eigenvalues (available with no rounding errors), and the second step uses diagonal scaling operations to make the row and corresponding column 1-norms as close as possible. Balancing may reduce the 1-norm of the scaled matrices, but this is not guaranteed in general. This is probably the rationale for avoiding scaling in some LAPACK subroutines: the driver routines DGEES, DGEESX, DGGES, DGGESX, and DGGEV use permutations only, while the expert drivers DGEEVX and DGGEVX allow permuting, scaling, both permuting and scaling, or neither.
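To make the standard balancing just described concrete, the following Python sketch applies permutation and power-of-two scaling to a small, deliberately badly scaled matrix and checks that the similarity transformation leaves the spectrum unchanged. The matrix entries are made-up test data, and SciPy's matrix_balance routine is used here only as a convenient stand-in for the LAPACK balancing functionality mentioned above.

import numpy as np
from scipy.linalg import matrix_balance, eigvals

# A small matrix with entries of widely different magnitude (made-up test data).
A = np.array([[1.0,    2.0e8, 0.0   ],
              [3.0e-8, 4.0,   5.0e-7],
              [0.0,    6.0e7, 7.0   ]])

# Permute and scale, as in the two optional steps of standard balancing;
# T collects the permutation and the power-of-two diagonal scaling.
A_bal, T = matrix_balance(A, permute=True, scale=True)

# Scaling by powers of two introduces no rounding errors, and the
# similarity transformation does not change the eigenvalues.
print("1-norm before/after:", np.linalg.norm(A, 1), np.linalg.norm(A_bal, 1))
print(np.sort_complex(eigvals(A)))
print(np.sort_complex(eigvals(A_bal)))

Setting scale=False keeps only the permutation step, mirroring the permutation-only choice made by the drivers listed above.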
When an eigenproblem has a special structure, implying structural properties of its spectrum, it is important to use structure-preserving and/or structure-exploiting algorithms, since the theoretical properties cannot be preserved during computations with general algorithms [5]–[7]. Common special structures are Hamiltonian and symplectic matrices or matrix pencils, encountered in optimal or robust control (e.g., the solution of algebraic Riccati equations, or the evaluation of the H∞- or L∞-norms of standard or generalized linear dynamic systems), spectral factorization, model reduction, and other areas; see, for instance, [8]–[10]. Often, structured matrix pencils can be transformed to skew-Hamiltonian/Hamiltonian (sHH) pencils, for which dedicated algorithms have been developed; see, e.g., [11]–[16] and the references therein. Structure-preserving balancing techniques have been developed in [17] for (skew-)Hamiltonian matrices and in [18] for sHH matrix pencils.

However, numerical experiments have shown that the spectra or subspaces computed after balancing may sometimes be much less accurate than those computed without balancing. A strategy has been proposed in [18] for sHH pencils to control the scaling factors in order to avoid a (significant) increase of the 1-norms of the balanced pencil matrices. This paper presents an adaptation of that strategy to general matrices or pencils, and illustrates its performance for some control engineering applications, in comparison with standard balancing and with no balancing. Without such a strategy, balancing might sometimes be useless or even disastrous.

II. BALANCING GENERAL MATRIX PENCILS

Let A − λB be a matrix pencil, with λ ∈ ℂ and A, B ∈ ℝ^{m×m}. A better representation, from a numerical point of view, is βA − αB. The finite eigenvalues λ_i are the ratios α_i/β_i which solve the characteristic equation det(βA − αB) = 0, while the infinite eigenvalues are those with β_j = 0, corresponding to the zero eigenvalues of the pencil B − µA [19]. Balancing A − λB is done by an equivalence transformation [3], Ã = LAR, B̃ = LBR, with

  \tilde{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ 0 & A_{22} & A_{23} \\ 0 & 0 & A_{33} \end{bmatrix}, \quad
  \tilde{B} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ 0 & B_{22} & B_{23} \\ 0 & 0 & B_{33} \end{bmatrix},    (1)

where A11 and B11, of order ℓ − 1, and A33 and B33, of order m − h, are upper triangular (i.e., with only zeros below the main diagonal), and ℓ − 1 and m − h have the maximum possible values (sometimes 0). The structure in (1) allows the eigenvalues λ_i, i = 1 : ℓ − 1 ∪ h + 1 : m, to be obtained using only the diagonal entries of A11, B11, and of A33, B33. (A MATLAB-style notation for indices [20] is used.) Finding A11 and B11 is done by ℓ − 1 row permutations, P_i^l, i = 1 : ℓ − 1; finding A33 and B33 is done by m − h column permutations, P_j^r, j = m : −1 : h + 1. (A permutation matrix P of order m has exactly one nonzero entry, which is 1, in each row and column, and it satisfies the relations P^T P = P P^T = I_m, where M^T denotes the transpose of a matrix M and I_m is the identity matrix of order m.) Specifically, for l = m : −1 : 1, the first row i ∈ l : −1 : 1 with only one nonzero in any column j ∈ 1 : l, i ≠ j, of the current A and B, if any, is interchanged with row l; similarly, for k = 1 : m, the first column j ∈ k : l with only one nonzero in any row i ∈ k : l, i ≠ j, if any, is interchanged with column k. Reducing A and B to the form in (1) by permutations is the first step of balancing. The second step performs row and column scaling by diagonal nonsingular matrices, so that the 1-norms of the rows and the corresponding columns of the submatrices A22 and B22 are as close as possible. A generalized conjugate gradient iteration is used for computing the scaling factors. Therefore, the left and right transformation matrices are defined by

  L = D_l P_l^T, \quad R = P_r D_r,    (2)

where D_l and D_r are left and right diagonal scaling matrices, with diagonal entries possibly different from 1 in the locations (i, i), i = ℓ : h, and P_l and P_r are the left and right (row and column) permutation matrices, defined by the products of the elementary permutations mentioned above.
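To illustrate the scaling step for a pencil, here is a small Python sketch that applies a simple Osborne-style diagonal equivalence jointly to A and B and verifies that the generalized eigenvalues are unchanged. This is only a simplified illustration of the idea of equalizing row and column 1-norms, with assumed test data; it is not Ward's algorithm [3], and the permutation step leading to (1) as well as the generalized conjugate gradient iteration are omitted.

import numpy as np
from scipy.linalg import eigvals

rng = np.random.default_rng(0)
n = 4

# A deliberately badly scaled pencil A - lambda*B (made-up test data).
row_bad = np.diag([1e-6, 1.0, 1e3, 1e6])
col_bad = np.diag([1e4, 1.0, 1.0, 1e-4])
A = row_bad @ rng.standard_normal((n, n)) @ col_bad
B = row_bad @ rng.standard_normal((n, n)) @ col_bad

def scale_pencil(A, B, sweeps=5):
    # For each index i, bring the joint row and column 1-norms of A and B
    # closer together by a diagonal equivalence; powers of two avoid
    # introducing extra rounding errors.
    A, B = A.copy(), B.copy()
    m = A.shape[0]
    dl, dr = np.ones(m), np.ones(m)
    for _ in range(sweeps):
        for i in range(m):
            r = np.abs(A[i, :]).sum() + np.abs(B[i, :]).sum()
            c = np.abs(A[:, i]).sum() + np.abs(B[:, i]).sum()
            if r == 0.0 or c == 0.0:
                continue
            f = 2.0 ** np.round(0.5 * np.log2(c / r))
            A[i, :] *= f; B[i, :] *= f   # left (row) scaling
            A[:, i] /= f; B[:, i] /= f   # right (column) scaling
            dl[i] *= f; dr[i] /= f
    return A, B, dl, dr

As, Bs, dl, dr = scale_pencil(A, B)
print("1-norms before:", np.linalg.norm(A, 1), np.linalg.norm(B, 1))
print("1-norms after :", np.linalg.norm(As, 1), np.linalg.norm(Bs, 1))
# The equivalence leaves the generalized eigenvalues unchanged (up to roundoff).
print(np.sort_complex(eigvals(A, B)))
print(np.sort_complex(eigvals(As, Bs)))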
The standard balancing algorithm uses all nonzero entries (i, j), with i, j ∈ s := ℓ : h, to compute the scaling factors. As shown in Section IV, this can produce very ill-conditioned scaling transformations, which may increase the norms of the scaled parts of A and/or B and (significantly) decrease the accuracy of the computed spectra and, consequently, of the bases for deflating subspaces. A procedure which can improve the conditioning is described here. This procedure uses an original and important modification of the standard balancing for finding the scaling factors, which optionally limits the range of their variation via an outer loop. Specifically, a threshold value, τ, is used instead of 0. [...] balancing, to make A22 and B22 in (1) have comparable 1-norms. For τ = −2, the same measure is used, but if max(‖Ã22‖₁, ‖B̃22‖₁) > c·M₀ and t > T, where c and T are given constants (c possibly larger than 1) and t is the maximum of the condition numbers of L and R, then the scaling factors are set to 1 and a warning indicator is set; here, the matrices with a tilde accent are the solution of the norm ratio reduction problem above. This approach avoids obtaining scaled matrices whose norms are too large compared to those of the given matrices, and also limits the range of the scaling factors. For τ = −3, the measure used is the smallest product of norms, min_i( ‖Ã_i(s, s)‖₁ ‖B̃_i(s, s)‖₁ ), over the sequence of τ_i values tried, while for τ = −4, the condition numbers of the scaling transformations are additionally supervised, as for τ = −2. This tends to reduce the 1-norms of both matrices. Finally, if τ = −10k, the condition numbers of the acceptable scaling matrices are bounded by 10^k. A similar procedure, specialized for sHH pencils, has recently been proposed in [18].

The improved balancing algorithm can also be applied to standard eigenvalue problems, A − λI_m. Similarity transformations are needed in this case, which means that L = R^{−1} in (2). Therefore, P_l = P_r and D_l = D_r^{−1}. The procedure for improving the conditioning can be easily adapted (and simplified).

III. BALANCING SKEW-HAMILTONIAN/HAMILTONIAN MATRIX PENCILS

Let αS − βH be a skew-Hamiltonian/Hamiltonian pencil, with α ∈ ℂ, β ∈ ℝ, S ∈ ℝ^{2n×2n} a skew-Hamiltonian matrix, and H ∈ ℝ^{2n×2n} a Hamiltonian matrix, i.e.,

  (SJ)^T = -SJ, \quad (HJ)^T = HJ, \quad J := \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}.    (3)

These definitions imply the following structure for S and H:

  S = \begin{bmatrix} A & D \\ E & A^T \end{bmatrix}, \quad H = \begin{bmatrix} C & V \\ W & -C^T \end{bmatrix},    (4)

where D and E are skew-symmetric, i.e., D^T = −D, E^T = −E, and V and W are symmetric matrices. When S and H are complex matrices, D and E are skew-Hermitian, V and W are Hermitian, and the transpose superscript T is replaced by the conjugate transpose superscript H.

A structure-exploiting balancing procedure for complex [...]
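As a quick numerical check of the structural identities in (3) and the block structure in (4), the following Python sketch assembles S and H from random blocks with the required symmetries and verifies that (SJ)^T = -SJ and (HJ)^T = HJ. The block dimension and the random data are arbitrary choices made only for this illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 3

# Blocks as in (4): D, E skew-symmetric; V, W symmetric; A, C general.
A = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = rng.standard_normal((n, n)); D = D - D.T   # D^T = -D
E = rng.standard_normal((n, n)); E = E - E.T   # E^T = -E
V = rng.standard_normal((n, n)); V = V + V.T   # V^T = V
W = rng.standard_normal((n, n)); W = W + W.T   # W^T = W

S = np.block([[A, D], [E, A.T]])               # skew-Hamiltonian
H = np.block([[C, V], [W, -C.T]])              # Hamiltonian
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# Structural identities (3): SJ is skew-symmetric, HJ is symmetric.
print(np.allclose((S @ J).T, -(S @ J)))        # True
print(np.allclose((H @ J).T, (H @ J)))         # True

The same check can be applied after a candidate balancing transformation to confirm that the sHH structure has been preserved.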