Nonlinear Eigenvalue Problems: A Challenge for Modern Eigenvalue Methods
Volker Mehrmann (Institut für Mathematik, MA 4-5, Technische Universität Berlin, D-10623 Berlin, Germany; supported by Deutsche Forschungsgemeinschaft through the DFG Research Center MATHEON 'Mathematics for key technologies' in Berlin) and Heinrich Voss (Arbeitsbereich Mathematik, Technische Universität Hamburg-Harburg, D-21071 Hamburg, Germany)

Key words: matrix polynomial, projection method, Krylov-subspace method, Arnoldi method, rational-Krylov method, linearization, structure preservation.

MSC (2000): 65F15, 15A18, 35P30

Abstract. We discuss the state of the art in numerical solution methods for large scale polynomial or rational eigenvalue problems. We present the currently available solution methods, such as the Jacobi-Davidson, Arnoldi, or rational Krylov methods, and analyze their properties. We briefly introduce a new linearization technique and demonstrate how it can be used to improve structure preservation and, with this, the accuracy and efficiency of linearization-based methods. We present several recent applications where structured and unstructured nonlinear eigenvalue problems arise, together with some numerical results.

1 Introduction

We discuss numerical methods for the solution of large scale nonlinear eigenvalue problems

$F(\lambda)x = F(\lambda; M_0, \dots, M_k; p)x = 0, \qquad (1)$

where, for $\mathbb{K} = \mathbb{C}$ or $\mathbb{K} = \mathbb{R}$, $F : D \to \mathbb{K}^{m,n}$ is a family of matrices depending on a variable $\lambda \in D$, and $D \subset \mathbb{K}$ is an open set. As in the linear case, $\lambda \in D$ is called an eigenvalue of problem (1) if equation (1) has a nontrivial solution $x \neq 0$. Then $x$ is called an eigenvector corresponding to $\lambda$.

The function $F$ typically depends on some coefficient matrices $M_0, \dots, M_k \in \mathbb{K}^{m,n}$ and often also on a vector of parameters $p \in \mathbb{C}^r$, e.g. material parameters or excitation frequencies. In many applications the purpose of solving the eigenvalue problem is to optimize certain properties of the eigenvalues, eigenvectors, or the underlying dynamical system with respect to these parameters.

Nonlinear eigenvalue problems arise in a variety of applications. The most widely studied class in applications is the quadratic eigenvalue problem with

$F(\lambda) := \lambda^2 M + \lambda C + K \qquad (2)$

that arises in the dynamic analysis of structures, see [27, 48, 74, 90] and the references therein. Here, typically the stiffness matrix $K$ and the mass matrix $M$ are real symmetric and positive (semi-)definite, and the damping matrix $C$ is general. Such problems also arise from vibrations of spinning structures, which yield conservative gyroscopic systems [21, 47, 39], where $K = K^T$ and $M = M^T$ are real positive (semi-)definite and $C = -C^T$ is real skew-symmetric. In most applications one is interested in the eigenvalues of smallest real part.
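To make the connection to standard linear eigenvalue solvers concrete, the following minimal sketch (not taken from the paper; the matrices are random placeholders with the structure described above) solves a small quadratic eigenvalue problem of the form (2) by companion linearization to a generalized linear problem $Az = \lambda Bz$ with $z = (x, \lambda x)$, and checks the residual of a computed eigenpair.

```python
# Illustrative sketch (not from the paper): solving a small quadratic
# eigenvalue problem (lambda^2 M + lambda C + K) x = 0 of type (2) by
# companion linearization to a generalized linear problem A z = lambda B z
# with z = (x, lambda x). M, K are symmetric positive definite placeholders,
# C is a general (placeholder) damping matrix.
import numpy as np
from scipy.linalg import eig

n = 4
rng = np.random.default_rng(0)

def random_spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + m * np.eye(m)

M, K = random_spd(n), random_spd(n)
C = rng.standard_normal((n, n))           # general damping matrix

I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[Z, I], [-K, -C]])          # first companion form
B = np.block([[I, Z], [Z, M]])

eigvals, eigvecs = eig(A, B)              # the 2n eigenvalues of the QEP
lam = eigvals[0]
x = eigvecs[:n, 0]                        # leading block gives a QEP eigenvector

# Residual of the computed eigenpair (should be close to zero).
res = np.linalg.norm((lam**2 * M + lam * C + K) @ x) / np.linalg.norm(x)
print(lam, res)
```

For the large and sparse problems considered in this paper one would of course not form such a dense linearization explicitly; the sketch only serves to fix the idea behind linearization-based methods.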
A quadratic problem of slightly different structure arises in the study of corner singularities in anisotropic elastic materials [3, 4, 5, 44, 52, 63, 89], where the problem has the form

$(\lambda^2 M(p) + \lambda G(p) + K(p))x = 0, \qquad (3)$

with large and sparse coefficient matrices $M(p) = M(p)^T$, $G(p) = -G(p)^T$, $K(p) = K(p)^T$ resulting from a finite element discretization. Here $M(p)$ and $-K(p)$ are positive definite, and the coefficient matrices depend on a set of material and geometry parameters $p$ which are varied. The desired part of the spectrum consists of the eigenvalues nearest to the imaginary axis, and these are also the eigenvalues for which the error estimates of the finite element discretization are most favorable. Polynomial eigenvalue problems of the form (3) are called even, see [55], since replacing $\lambda$ by $-\lambda$ and transposing gives the same problem. The spectrum of these problems has the Hamiltonian eigensymmetry, i.e. it is symmetric with respect to the real and the imaginary axis.

Another recently studied structured eigenvalue problem arises in the optimization of the acoustic emissions of high speed trains [34, 35, 55]. The model for the vibrations of the rails leads to a rational eigenvalue problem of the form

$\left(\lambda M_1(\omega) + M_0(\omega) + \frac{1}{\lambda} M_1(\omega)^T\right)x = 0, \qquad (4)$

where the coefficients $M_0, M_1$ are large and sparse complex matrices depending on the excitation frequency $\omega$. Here $M_1(\omega)$ is highly rank deficient and $M_0(\omega)$ is complex symmetric. The eigenvalues occur in pairs $(\lambda, 1/\lambda)$, and most of the eigenvalues are at $0$ and $\infty$. What is needed in the industrial application are the finite nonzero eigenvalues and the corresponding eigenvectors. None of the classical methods worked for this problem, and only special methods that were able to deal with the specific structure delivered sufficiently accurate eigenvalue approximations.

Eigenvalue problems of the form (4) (or rather their polynomial representation, which is obtained by multiplying (4) by $\lambda$) are called palindromic in [55], since replacing $\lambda$ by $1/\lambda$ and transposing yields the same problem. The spectrum has the symplectic eigensymmetry, i.e. it is symmetric with respect to the unit circle.

There are many other applications leading to structured or unstructured quadratic eigenvalue problems. A detailed survey has recently been given in [100]. Quadratic eigenvalue problems are special cases of polynomial eigenvalue problems

$P(\lambda)x = \left(\sum_{j=0}^{k} \lambda^j M_j\right) x = 0, \qquad (5)$

with coefficients $M_j \in \mathbb{K}^{n,n}$. An important application of polynomial eigenvalue problems is the solution of the optimal control problem to minimize the cost functional

$\int_{t_0}^{t_1} \left[\sum_{i=0}^{k} (q^{(i)})^T Q_i q^{(i)} + u^T R u\right] dt$

with $Q_i = Q_i^T$ positive semidefinite and $R = R^T$ positive definite, subject to the $k$-th order control system

$\sum_{i=0}^{k} M_i q^{(i)} = B u(t),$

with control input $u(t)$ and initial conditions

$q^{(i)}(t_0) = q_{i,0}, \quad i = 0, 1, \dots, k-1. \qquad (6)$

Application of the linear version of the Pontryagin maximum principle, e.g. [61], leads to the boundary value problem of Euler-Lagrange equations

$\sum_{j=1}^{k-1} \begin{bmatrix} (-1)^{j-1} Q_j & M_{2j}^T \\ M_{2j} & 0 \end{bmatrix} \begin{bmatrix} q^{(2j)} \\ \mu^{(2j)} \end{bmatrix} + \sum_{j=1}^{k-1} \begin{bmatrix} 0 & -M_{2j+1}^T \\ M_{2j+1} & 0 \end{bmatrix} \begin{bmatrix} q^{(2j+1)} \\ \mu^{(2j+1)} \end{bmatrix} + \begin{bmatrix} -Q_0 & M_0^T \\ M_0 & -BR^{-1}B^T \end{bmatrix} \begin{bmatrix} q \\ \mu \end{bmatrix} = 0,$

with initial conditions (6) and $\mu^{(i)}(t_1) = 0$ for $i = 0, \dots, k-1$, where we have introduced the new coefficients $M_{k+1} = M_{k+2} = \dots = M_{2k} = 0$. Here, all coefficients of derivatives higher than $k$ are singular, and one obtains a boundary value problem with coefficient matrices that alternate between real symmetric and skew-symmetric matrices. The solution of this boundary value problem can then be obtained by decoupling the forward and backward integration, i.e. by computing the deflating subspace associated with the eigenvalues in the left (or right) half plane, see e.g. [61].
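As a small illustration of this alternating coefficient structure, the following sketch (not taken from the paper; it uses a hypothetical second-order control system, $k = 2$, with random placeholder matrices and the convention $M_j = 0$ for $j > k$) assembles block coefficients $A_j$ following the pattern displayed above and checks that the coefficient of the $j$-th derivative of $(q, \mu)$ satisfies $A_j^T = (-1)^j A_j$.

```python
# Illustrative sketch (not from the paper): for a hypothetical second-order
# control system (k = 2), assemble the block coefficients A_j of the
# Euler-Lagrange boundary value problem and check that they alternate
# between real symmetric and skew-symmetric matrices, i.e. A_j^T = (-1)^j A_j.
import numpy as np

rng = np.random.default_rng(1)
n = 3  # placeholder state dimension

def random_spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + m * np.eye(m)

# Placeholder data: M0, M1, M2 from the control system, Q0, Q1, Q2 and R
# from the cost functional, B an input matrix (all hypothetical).
M = [rng.standard_normal((n, n)) for _ in range(3)]
Q = [random_spd(n) for _ in range(3)]
R = random_spd(1)
B = rng.standard_normal((n, 1))
Z = np.zeros((n, n))

def M_(j):                                  # convention M_j = 0 for j > k
    return M[j] if j < len(M) else Z

# Coefficient of the j-th derivative of (q, mu), following the block
# pattern of the Euler-Lagrange system above.
A = [np.block([[-Q[0], M_(0).T], [M_(0), -B @ np.linalg.inv(R) @ B.T]])]
for j in range(1, 5):                       # j = 1, ..., 2k
    if j % 2 == 0:
        A.append(np.block([[(-1) ** (j // 2 - 1) * Q[j // 2], M_(j).T],
                           [M_(j), Z]]))
    else:
        A.append(np.block([[Z, -M_(j).T], [M_(j), Z]]))

for j, Aj in enumerate(A):
    print(f"A_{j}^T == (-1)^{j} * A_{j}:", np.allclose(Aj.T, (-1) ** j * Aj))
```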
The associated matrix polynomial is even or odd, depending on the degree and on whether the leading coefficient is symmetric or skew-symmetric, and the spectrum has the Hamiltonian eigensymmetry.

For this problem, even though it is large and sparse, the solution of the boundary value problem requires the computation of a deflating subspace associated with half of the eigenvalues. This can only be done for medium size problems, where it is possible to store the full matrix. If the system size is larger, then alternative techniques based on low rank approximations to Riccati equations have to be applied, see e.g. [14, 54, 71].

Other polynomial eigenvalue problems of degree higher than two arise when discretizing linear eigenproblems by dynamic elements [74, 103, 104] or by least squares elements [78, 79], i.e. if one uses ansatz functions in a Rayleigh-Ritz approach which depend polynomially on the eigenparameter.

Rational eigenproblems

$R(\lambda)x = -Kx + \lambda Mx + \sum_{j=1}^{k} \frac{\lambda}{\sigma_j - \lambda} C_j x = 0, \qquad (7)$

where $K = K^T$ and $M = M^T$ are positive definite and the $C_j = C_j^T$ are matrices of small rank, occur in the study of the free vibration of plates with elastically attached masses [59, 96, 106] or vibrations of fluid-solid structures [17, 73, 108]. A similar problem

$R(\lambda)x = -Kx + \lambda Mx + \lambda^2 \sum_{j=1}^{k} \frac{1}{\omega_j - \lambda} C_j x = 0 \qquad (8)$

arises when a generalized linear eigenproblem is condensed exactly [72, 102]. Both these problems have real eigenvalues, which can be characterized as min-max values of a Rayleigh functional [106], and in both cases one is interested in a small number of eigenvalues at the lower end of the spectrum or close to an excitation frequency.

Another type of rational eigenproblem is obtained for the free vibrations of a structure if one uses a viscoelastic constitutive relation to describe the behavior of a material [31, 32]. A finite element model takes the form

$R(\lambda)x := \left(\lambda^2 M + K - \sum_{j=1}^{k} \frac{1}{1 + b_j \lambda} \Delta K_j\right) x = 0, \qquad (9)$

where the stiffness and mass matrices $K$ and $M$ are positive definite, $k$ denotes the number of regions with different relaxation parameters $b_j$, and $\Delta K_j$ is an assemblage of element stiffness matrices over the region with the distinct relaxation constants. Note that the rational problems (4), (7), (8), and (9) can be turned into polynomial eigenvalue problems by multiplying with an appropriate scalar polynomial in $\lambda$.
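To illustrate this last remark, the following sketch (not taken from the paper; all matrices are hypothetical random placeholders) takes a small instance of (7) with a single pole $\sigma_1 = \sigma$, multiplies by the scalar polynomial $(\sigma - \lambda)$ to obtain a quadratic matrix polynomial, solves it by companion linearization as before, and checks the residual of the original rational problem.

```python
# Illustrative sketch (not from the paper): a small instance of the rational
# problem (7) with a single pole sigma is turned into a quadratic matrix
# polynomial by multiplying with the scalar polynomial (sigma - lambda),
# which is then solved by companion linearization. All matrices are
# hypothetical random placeholders.
import numpy as np
from scipy.linalg import eig

n, sigma = 4, 2.0
rng = np.random.default_rng(2)

def random_spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + m * np.eye(m)

K, M = random_spd(n), random_spd(n)
c = rng.standard_normal((n, 1))
C = c @ c.T                                # symmetric and of small rank, as in (7)

# (sigma - lambda)(-K + lambda M) x + lambda C x = 0 gives the quadratic
# polynomial P(lambda) = P2 lambda^2 + P1 lambda + P0 with:
P2, P1, P0 = -M, K + sigma * M + C, -sigma * K

I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[Z, I], [-P0, -P1]])         # companion linearization
B = np.block([[I, Z], [Z, P2]])
vals, vecs = eig(A, B)

for lam, z in zip(vals, vecs.T):
    if abs(lam - sigma) > 1e-3:            # skip eigenvalues at the pole
        x = z[:n]
        r = (-K + lam * M + lam / (sigma - lam) * C) @ x
        print(lam, np.linalg.norm(r) / np.linalg.norm(x))
```

Since $C$ here has rank one, $P(\sigma) = \sigma C$ is singular, so the polynomial problem acquires eigenvalues at the pole that do not correspond to eigenvalues of the rational problem; these are skipped in the residual check above.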