
The nonlinear eigenvalue problem solved by linearization. Advisor: Maria Isabel Bueno Cachadina

Prerequisites. Before the theoretical description of this project is presented, let me mention some important information about what you need to know in order to get involved in this research:

• The math background needed to work on this project is upper-division Linear Algebra. Some knowledge of is desirable.

• Being capable of writing good proofs is a must.

• Having programming skills in Matlab is desirable as well; programming in some other language would be useful too. If you don’t know how to program in Matlab, no worries: we will teach you the basics.

When you read the description of the project below, you may feel overwhelmed by lots of technical words that you have not heard before. Do not worry! We will spend some time learning the background in depth before we start working on the project. Even though part of the project involves programming, a good part of it is theoretical and requires coming up with conjectures, writing theorems and proving them. If you like Linear Algebra and numerical experiments, this is your project!

1 Description of the project.

Many applications (e.g. signal processing, experiment design) require finding solutions (γ, z) to the nonlinear eigenvalue problem (NEP) F(γ)z = 0, for some holomorphic F : Ω → C^{n×n}, where Ω is a non-empty subset of the real numbers. A real number γ ∈ Ω and a nonzero n×1 vector z satisfying this equation are said to be, respectively, an eigenvalue of F and an eigenvector of F with eigenvalue γ.
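To make these definitions concrete, here is a minimal MATLAB sketch (MATLAB being the project's language of choice) of a toy NEP of delay type, F(γ) = −γI + A + B e^{−γ}; the matrices A and B are made-up illustrative data, not part of the project description. Since the smallest singular value of F(γ) vanishes exactly at an eigenvalue, real eigenvalues can be located by scanning it over an interval.

% Toy nonlinear eigenvalue problem of delay type (made-up data):
%   F(gamma) = -gamma*I + A + B*exp(-gamma)
n = 3;
A = [2 1 0; 1 3 1; 0 1 4];      % illustrative symmetric matrix
B = 0.5*eye(n);                 % illustrative delay coefficient
F = @(g) -g*eye(n) + A + B*exp(-g);

% The smallest singular value of F(gamma) vanishes exactly at an
% eigenvalue, so a scan over an interval reveals where they lie.
gammas = linspace(0, 6, 601);
smin = arrayfun(@(g) min(svd(F(g))), gammas);
plot(gammas, smin), xlabel('\gamma'), ylabel('\sigma_{min}(F(\gamma))')

% Refine the eigenvalue near one dip; the eigenvector z is the
% right singular vector for the smallest singular value of F(gamma).
g = fminbnd(@(t) min(svd(F(t))), 2.5, 3.5);
[~, ~, V] = svd(F(g));
z = V(:, end);
fprintf('gamma = %.6f, residual ||F(gamma)z|| = %.2e\n', g, norm(F(g)*z))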

In practice, one possible way of solving the NEP is through polynomial interpolation; that is, F is approximated by a matrix polynomial

P(λ) = A_k λ^k + A_{k−1} λ^{k−1} + ··· + A_0,   A_i ∈ C^{n×n}.   (1)

Thus, the eigenvalues of the resulting polynomial eigenvalue problem (PEP) P(λ)x = 0, x ≠ 0, closely approximate some of the eigenvalues of the original NEP. In order to approximate a holomorphic function F(γ) by a matrix polynomial through interpolation, a set of k + 1 points {γ_1, . . . , γ_{k+1}} in Ω is chosen and F is evaluated at these points. P(λ) is then constructed to be the polynomial of degree k that runs through F(γ_1), . . . , F(γ_{k+1}). However, obtaining such a polynomial is, in general, numerically unstable when P(λ) is expressed in the monomial basis {1, λ, λ^2, . . .}. The usual solution to this problem is to express the interpolating polynomial P(λ) in a non-monomial basis. In practice, the most commonly used bases are the Chebyshev basis, the Newton basis and the Lagrange basis.

An important feature of any mathematical problem (function) is the so-called condition number. This number roughly measures how much small changes in the input of the function can affect the output: an ill-conditioned problem produces large errors in the output when small errors are made in the input. A natural question is: how much does the condition number of an eigenvalue change when the polynomial eigenvalue problem is expressed in a non-monomial basis compared to the monomial basis? This question is particularly interesting when we consider the three bases mentioned above.

Once F(γ) has been interpolated with a matrix polynomial, it is then necessary to solve the PEP P(λ)x = 0, x ≠ 0. There are no known algorithms that solve the PEP directly. This problem is usually solved through a process called linearization: P(λ) is replaced by a matrix polynomial L(λ) of degree 1, with the same eigenvalues as P(λ) (including multiplicities), and the corresponding generalized eigenvalue problem L(λ)x̃ = 0 is then solved, for which there are efficient algorithms.
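As a first numerical experiment, MATLAB's built-in polyeig solves a PEP given in the monomial basis precisely this way, linearizing it internally and calling a generalized eigensolver; it can also return eigenvalue condition number estimates, which gives one concrete handle on the conditioning question above. A minimal sketch with made-up coefficient matrices:

% Quadratic eigenvalue problem P(lambda)x = 0 with
%   P(lambda) = A0 + A1*lambda + A2*lambda^2   (made-up data)
n = 4;
rng(1);                                  % reproducible example
A0 = randn(n); A1 = randn(n); A2 = randn(n);

% polyeig linearizes P internally and solves the resulting
% generalized eigenvalue problem; it returns n*k = 8 eigenvalues,
% eigenvectors, and a condition number estimate s(i) for each e(i).
[X, e, s] = polyeig(A0, A1, A2);

% Residual check for one computed eigenpair.
lam = e(1); x = X(:, 1);
fprintf('residual = %.2e, condition estimate = %.2e\n', ...
        norm((A0 + lam*A1 + lam^2*A2)*x), s(1))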

The penalty paid in order to reduce the degree of the polynomial is an increase in the size of L(λ): if P(λ) is of size n × n, then L(λ) is usually chosen to be of size nk × nk. This implies that the eigenvectors of L(λ) are not the same as those of P(λ). Thus, it is important to choose a linearization L(λ) from which the eigenvectors of P(λ) can be recovered easily. Among the linearizations with this property, not all are equally desirable.

In the 2019 REU program, a group of students produced families of linearizations that are easy to construct from a matrix polynomial expressed in the Chebyshev, Newton or Lagrange basis. All of these linearizations also allow an easy recovery of eigenvectors. A second natural question is which of these linearizations (within the same family) behaves best in terms of conditioning and backward error. Some progress was made by the 2019 REU students in this direction, but there are still unanswered questions. Once “the best” linearization within a family is found, a third natural question is: how do the three linearizations (the best in each of the three families) compare? In other words, which basis is more appropriate to use if we are just thinking about the best linearization possible?

In summer 2020, we will work on the three questions presented above and probably on some new questions that arise from the research.
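As a starting point for experiments of this kind, the sketch below builds a linearization explicitly and recovers an eigenvector of P from an eigenvector of L. It uses the classical first companion form of a quadratic P(λ) = A0 + A1λ + A2λ², a textbook construction rather than one of the families produced by the 2019 REU students, with made-up data; the quantity printed at the end is one standard normwise backward-error measure for an eigenpair of P.

% First companion linearization of a quadratic matrix polynomial
%   P(lambda) = A0 + A1*lambda + A2*lambda^2   (made-up data).
n = 3;
rng(2);
A0 = randn(n); A1 = randn(n); A2 = randn(n);

% L(lambda) = lambda*BB + AA has size nk x nk = 6 x 6 and the same
% eigenvalues as P; for this form an eigenvector of L is
% [lambda*x; x], so x is recovered from the bottom n x 1 block.
BB = blkdiag(A2, eye(n));
AA = [A1, A0; -eye(n), zeros(n)];

% L(lambda)v = 0  <=>  (-AA)v = lambda*BB*v, a generalized
% eigenvalue problem handled by the QZ-based solver eig.
[V, D] = eig(-AA, BB);
lam = diag(D);

% Normwise backward error of the recovered eigenpair (lambda, x):
%   ||P(lambda)x|| / ((|lambda|^2||A2|| + |lambda|||A1|| + ||A0||)||x||)
l = lam(1);
x = V(n+1:2*n, 1);
bkerr = norm((A0 + l*A1 + l^2*A2)*x) / ...
        ((abs(l)^2*norm(A2) + abs(l)*norm(A1) + norm(A0)) * norm(x));
fprintf('lambda = %.4f%+.4fi, backward error = %.2e\n', real(l), imag(l), bkerr)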
