Preconditioned and shift-invert Arnoldi method

Melina Freitag

Department of Mathematical Sciences University of Bath

CSC Seminar, Max-Planck-Institute for Dynamics of Complex Technical Systems, Magdeburg, 3rd May 2011

joint work with Alastair Spence (Bath)

Outline

1 Introduction
2 Inexact inverse iteration
3 Inexact Shift-invert Arnoldi method
4 Inexact Shift-invert Arnoldi method with implicit restarts
5 Conclusions

Problem and iterative methods

Find a small number of eigenvalues and eigenvectors of

Ax = λx,  λ ∈ C, x ∈ C^n,

where A is large, sparse and nonsymmetric.

Iterative solves: Power method, Simultaneous iteration, Arnoldi method, Jacobi-Davidson method. The first three of these involve repeated application of the matrix A to a vector; generally they converge to the largest/outlying eigenvalues.

Shift-invert strategy

Wish to find a few eigenvalues close to a shift σ.

[Figure: eigenvalues λ1, λ2, λ3, λ4, …, λn in the complex plane, with the shift σ close to one of them.]

The problem becomes

(A − σI)^{-1} x = (1/(λ − σ)) x;

each step of the method involves the application of A := (A − σI)^{-1} to a vector. Inner iterative solve: (A − σI)y = x using a Krylov method for linear systems, leading to an inner-outer iterative method.

This talk: Inner iteration and preconditioning

Fixed shifts only

Inverse iteration and Arnoldi method

The algorithm

Inexact inverse iteration

for i = 1, 2, … do
  choose τ^(i)
  solve (A − σI)y^(i) = x^(i) inexactly, with ‖d^(i)‖ = ‖(A − σI)y^(i) − x^(i)‖ ≤ τ^(i)
  rescale x^(i+1) = y^(i)/‖y^(i)‖
  update λ^(i+1) = (x^(i+1))^H A x^(i+1)
  test the eigenvalue residual r^(i+1) = (A − λ^(i+1)I)x^(i+1)
end for
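A minimal Python sketch of the loop above (an assumed NumPy/SciPy implementation, not the code behind the talk's experiments; GMRES plays the role of the inexact inner solver with tolerance τ^(i) = C‖r^(i)‖):

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_inverse_iteration(A, sigma, x0, C=0.01, maxit=30, tol=1e-10):
    """Inexact inverse iteration with a fixed shift sigma.

    The shifted system (A - sigma*I) y = x is solved only approximately by
    GMRES, with inner tolerance tau_i = C * ||r_i||.  Hedged sketch of the
    algorithm on the slide, not the authors' code.
    """
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    x = x0 / np.linalg.norm(x0)
    lam = x.conj() @ A @ x
    for _ in range(maxit):
        r = A @ x - lam * x                      # eigenvalue residual
        if np.linalg.norm(r) < tol:
            break
        # relative inner tolerance, capped so GMRES always does some work
        tau = min(max(C * np.linalg.norm(r), 1e-14), 0.1)
        try:
            y, _ = gmres(M, x, rtol=tau)         # inexact inner solve
        except TypeError:                        # older SciPy uses `tol`
            y, _ = gmres(M, x, tol=tau)
        x = y / np.linalg.norm(y)                # rescale
        lam = x.conj() @ A @ x                   # Rayleigh quotient update
    return lam, x
```

With τ^(i) proportional to the outer residual, the outer convergence rate stays linear, as the slide states.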

Convergence rates

If τ^(i) = C‖r^(i)‖ then the convergence rate is linear (the same convergence rate as for exact solves).

The inner iteration for (A − σI)y = x

Standard GMRES theory for y_0 = 0 and A diagonalisable:

‖x − (A − σI)y_k‖ ≤ κ(W) min_{p∈P_k} max_{j=1,…,n} |p(λ_j)| ‖x‖,

where λ_j are the eigenvalues of A − σI and A − σI = WΛW^{-1}.

Number of inner iterations:

k ≥ C_1 + C_2 log(‖x‖/τ)

for ‖x − (A − σI)y_k‖ ≤ τ.

More detailed GMRES theory for y_0 = 0:

‖x − (A − σI)y_k‖ ≤ κ̃(W) (|λ_2 − λ_1|/|λ_1|) min_{p∈P_{k−1}} max_{j=2,…,n} |p(λ_j)| ‖Qx‖,

where λ_j are the eigenvalues of A − σI.

Number of inner iterations:

k ≥ C_1′ + C_2′ log(‖Qx‖/τ),

where Q projects onto the space not spanned by the eigenvector.

Good news!

k^(i) ≥ C_1′ + C_2′ log(C_3‖r^(i)‖/τ^(i)),

where τ^(i) = C‖r^(i)‖: the iteration number is approximately constant!

Bad news :-( For a standard preconditioner P,

(A − σI)P^{-1}ỹ^(i) = x^(i),  P^{-1}ỹ^(i) = y^(i),

k^(i) ≥ C_1′′ + C_2′′ log(‖Q̃x^(i)‖/τ^(i)) = C_1′′ + C_2′′ log(C/τ^(i)),

where τ^(i) = C‖r^(i)‖: the iteration number increases!

Convection-Diffusion operator

Finite difference discretisation on a 32 × 32 grid of the convection-diffusion operator

−Δu + 5u_x + 5u_y = λu  on (0, 1)²,

with homogeneous Dirichlet boundary conditions (961 × 961 matrix).

Smallest eigenvalue: λ_1 ≈ 32.18560954. Preconditioned GMRES with tolerance τ^(i) = 0.01‖r^(i)‖, standard and tuned preconditioner (incomplete LU).
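A short SciPy sketch of this test matrix (the exact discretisation is an assumption: centred differences with 31 interior points per direction, consistent with the 961 × 961 size; with this choice the smallest eigenvalue comes out near the quoted value):

```python
import numpy as np
import scipy.sparse as sp

def convection_diffusion(N=31, wind=5.0):
    """Finite-difference discretisation of -Laplace(u) + 5 u_x + 5 u_y on
    (0,1)^2 with homogeneous Dirichlet conditions.

    N interior points per direction gives an N^2 x N^2 sparse matrix
    (N = 31 gives the 961 x 961 matrix of the example).  A hedged sketch;
    the slide does not specify the difference scheme.
    """
    h = 1.0 / (N + 1)
    I = sp.identity(N)
    # 1D second-difference (diffusion) and centred first-difference (wind)
    D2 = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N)) / h**2
    D1 = sp.diags([-1.0, 1.0], [-1, 1], shape=(N, N)) / (2.0 * h)
    A = sp.kron(I, D2) + sp.kron(D2, I) + wind * (sp.kron(I, D1) + sp.kron(D1, I))
    return A.tocsr()
```

Shift-invert Arnoldi with σ = 0 on this matrix targets exactly the smallest eigenvalue quoted above.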

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; right preconditioning, 569 inner iterations in total.]

The inner iteration for (A − σI)P^{-1}ỹ = x

How to overcome this problem

Use a different preconditioner, namely one that satisfies

P_i x^(i) = Ax^(i),  with  P_i := P + (A − P)x^(i) x^(i)^H,

a minor modification at minor extra computational cost, giving [A P_i^{-1}] Ax^(i) = Ax^(i).

Why does that work?

Assume we have found the eigenvector x_1:

Ax_1 = P x_1 = λ_1 x_1  ⇒  (A − σI)P^{-1} x_1 = ((λ_1 − σ)/λ_1) x_1,

and the Krylov method applied to (A − σI)P^{-1}ỹ = x_1 converges in one iteration. For a general x^(i),

k^(i) ≥ C_1′′ + C_2′′ log(C_3‖r^(i)‖/τ^(i)),  where τ^(i) = C‖r^(i)‖.
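A hedged sketch of the rank-one tuning: since P_i = P + (A − P)x x^H is a rank-one update of P, P_i^{-1} can be applied via the Sherman-Morrison formula at the cost of one extra solve with P (here a dense `np.linalg.solve` stands in for the incomplete-LU application used in practice):

```python
import numpy as np

def tuned_solve(P, A, x, b):
    """Solve P_i y = b for the tuned preconditioner
        P_i = P + (A - P) x x^H   (rank-one update of P),
    via Sherman-Morrison:
        P_i^{-1} b = P^{-1}b - P^{-1}w (x^H P^{-1}b)/(1 + x^H P^{-1}w),
    with w = (A - P)x.  A sketch, not the authors' implementation.
    """
    x = x / np.linalg.norm(x)
    w = A @ x - P @ x                       # w = (A - P) x
    Pinv_b = np.linalg.solve(P, b)
    Pinv_w = np.linalg.solve(P, w)
    alpha = (x.conj() @ Pinv_b) / (1.0 + x.conj() @ Pinv_w)
    return Pinv_b - alpha * Pinv_w
```

The defining property P_i x = Ax means the tuned preconditioner acts like A on the current eigenvector approximation, which is what restores the nearly constant inner iteration count.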


[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; right preconditioning (569 iterations) vs tuned right preconditioning (115 iterations).]

The algorithm

Arnoldi’s method

The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace

K_k(A, q^(1)) = span{q^(1), Aq^(1), A²q^(1), …, A^{k−1}q^(1)},

with

AQ_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H = Q_{k+1} [H_k; h_{k+1,k} e_k^H],  Q_k^H Q_k = I.

The eigenvalues of H_k are approximations to (outlying) eigenvalues of A. For an eigenpair (θ, u) of H_k and x = Q_k u,

‖r_k‖ = ‖Ax − θx‖ = ‖(AQ_k − Q_k H_k)u‖ = |h_{k+1,k}||e_k^H u|;

at each step, one application of A to q_k: Aq_k = q̃_{k+1}.
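The Arnoldi relation above can be sketched as follows (a standard modified Gram-Schmidt implementation in Python, not the Matlab code used for the example):

```python
import numpy as np

def arnoldi(A, q1, k):
    """Build an orthonormal basis Q_{k+1} of K_{k+1}(A, q1) and the
    (k+1) x k upper Hessenberg matrix H satisfying A Q_k = Q_{k+1} H.

    Modified Gram-Schmidt sketch of the relation on the slide; one
    application of A per step.
    """
    n = len(q1)
    Q = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    Q[:, 0] = q1 / np.linalg.norm(q1)
    for j in range(k):
        w = A @ Q[:, j]                     # one application of A
        for i in range(j + 1):              # orthogonalise against Q_{j+1}
            H[i, j] = Q[:, i].conj() @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)     # h_{j+2,j+1} in slide notation
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H
```

The eigenvalues of the square part H[:k, :k] are the Ritz values shown in the plots below.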

Random complex matrix of dimension n = 144, generated in Matlab:

G=numgrid('N',14); B=delsq(G); A=sprandn(B)+i*sprandn(B)

[Figures: eigenvalues of A in the complex plane, together with the Ritz values after 5, 10, 15, 20, 25 and 30 Arnoldi steps; the outlying eigenvalues are approximated first.]

The algorithm: take σ = 0

Shift-Invert Arnoldi’s method: A := A^{-1}

The Arnoldi method constructs an orthogonal basis of the k-dimensional Krylov subspace

K_k(A^{-1}, q^(1)) = span{q^(1), A^{-1}q^(1), (A^{-1})²q^(1), …, (A^{-1})^{k−1}q^(1)},

with

A^{-1}Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H = Q_{k+1} [H_k; h_{k+1,k} e_k^H],  Q_k^H Q_k = I.

The eigenvalues of H_k are approximations to (outlying) eigenvalues of A^{-1}, with

‖r_k‖ = ‖A^{-1}x − θx‖ = ‖(A^{-1}Q_k − Q_k H_k)u‖ = |h_{k+1,k}||e_k^H u|;

at each step, one application of A^{-1} to q_k: A^{-1}q_k = q̃_{k+1}.

Inexact solves

Inexact solves (Simoncini (2005), Bouras and Frayssé (2000))

Wish to solve A q̃_{k+1} = q_k inexactly, with ‖q_k − A q̃_{k+1}‖ = ‖d̃_k‖ ≤ τ_k. This leads to the inexact Arnoldi relation

A^{-1}Q_k = Q_{k+1} [H_k; h_{k+1,k} e_k^H] + D_k = Q_{k+1} [H_k; h_{k+1,k} e_k^H] + [d_1 | … | d_k].

For u an eigenvector of H_k,

r_k = (A^{-1}Q_k − Q_k H_k)u,  ‖r_k‖ ≤ |h_{k+1,k}||e_k^H u| + ‖D_k u‖.

Linear combination of the columns of D_k:

D_k u = Σ_{l=1}^{k} d_l u_l;  if u_l is small, then ‖d_l‖ is allowed to be large!

Now ‖d_l‖|u_l| ≤ ε/k implies ‖D_k u‖ < ε, and |u_l| ≤ C(l, k)‖r_{l−1}‖ leads to ‖q_k − A q̃_{k+1}‖ = ‖d̃_k‖ with ‖d̃_k‖ = C⋄/‖r_{k−1}‖: the solve tolerance can be relaxed as the outer iteration proceeds.
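The relaxation idea can be sketched as a tolerance schedule: once the eigenvalue residual ‖r_{k−1}‖ is small, the inner solve at step k may be loose. The constants and the cap below are illustrative assumptions, not the C⋄ of the analysis:

```python
import numpy as np

def relaxed_tolerances(residual_norms, eps=1e-10, cap=1e-1):
    """Relaxed inner-solve tolerances in the spirit of Simoncini (2005)
    and Bouras and Frayssé (2000): the solve at outer step k may be as
    loose as ~ eps / ||r_{k-1}||, so the tolerance *grows* as the outer
    eigenvalue residual shrinks.  Illustrative sketch only.
    """
    m = len(residual_norms)
    taus = []
    for r in residual_norms:
        tau = eps / (m * max(r, eps))       # looser solves for small r
        taus.append(min(tau, cap))          # never looser than the cap
    return taus
```

Feeding a decreasing sequence of outer residuals produces an increasing sequence of inner tolerances, which is exactly the behaviour exploited in the experiments below.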

Preconditioning

Introduce a preconditioner P and solve

AP^{-1} q̃_{k+1} = q_k,  P^{-1} q̃_{k+1} = q_{k+1},

using GMRES. After l inner iterations the GMRES convergence bound

‖q_k − AP^{-1} q̃_{k+1}‖ ≤ κ min_{p∈Π_l} max_{i=1,…,n} |p(λ_i)| ‖q_k‖

depends on the eigenvalue clustering of AP^{-1} and on the right-hand side (initial guess).

Tuned preconditioner

Use a tuned preconditioner for Arnoldi’s method,

P_k Q_k = AQ_k,  given by  P_k = P + (A − P)Q_k Q_k^H.

The inner iteration for A q̃ = q

Theorem (Properties of the tuned preconditioner)
Let P with P = A + E be a preconditioner for A and assume k steps of Arnoldi’s method have been carried out. Then k eigenvalues of AP_k^{-1} are equal to one,

[AP_k^{-1}] AQ_k = AQ_k,

and the remaining n − k eigenvalues are close to the corresponding eigenvalues of AP^{-1}.

Implementation

Sherman-Morrison-Woodbury; only minor extra cost (one back substitution per outer iteration).
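A sketch of the Sherman-Morrison-Woodbury implementation for the rank-k tuning P_k = P + (A − P)Q_k Q_k^H (dense solves stand in for the back substitutions with the incomplete-LU factors; `tuned_solve_block` is a hypothetical helper name):

```python
import numpy as np

def tuned_solve_block(P, A, Qk, b):
    """Solve P_k y = b for the tuned Arnoldi preconditioner
        P_k = P + (A - P) Q_k Q_k^H
    via Sherman-Morrison-Woodbury:
        P_k^{-1} = P^{-1} - P^{-1} W (I + Q_k^H P^{-1} W)^{-1} Q_k^H P^{-1},
    with W = (A - P) Q_k.  Hedged sketch, not the authors' code.
    """
    W = A @ Qk - P @ Qk                        # n x k block (A - P) Q_k
    Pinv_b = np.linalg.solve(P, b)
    Pinv_W = np.linalg.solve(P, W)
    k = Qk.shape[1]
    S = np.eye(k) + Qk.conj().T @ Pinv_W       # small k x k capacitance matrix
    z = np.linalg.solve(S, Qk.conj().T @ Pinv_b)
    return Pinv_b - Pinv_W @ z
```

Only the small k × k system changes from one outer step to the next, which is why the extra cost per outer iteration stays minor.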

sherman5.mtx, a nonsymmetric matrix from the Matrix Market library (3312 × 3312). Smallest eigenvalue: λ_1 ≈ 4.69 × 10^{-2}. Preconditioned GMRES as inner solver (both fixed tolerance and relaxation strategy), standard and tuned preconditioner (incomplete LU).

No tuning and standard preconditioner

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−14/m.]

Tuning the preconditioner

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−14/m vs Arnoldi tolerance 10e−14 tuned.]

Relaxation

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−14/m, Arnoldi relaxed tolerance, and Arnoldi tolerance 10e−14 tuned.]

Tuning and relaxation strategy

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−14/m, Arnoldi relaxed tolerance, Arnoldi tolerance 10e−14/m tuned, and Arnoldi relaxed tolerance tuned.]

Ritz values of exact and inexact Arnoldi

Exact eigenvalues    Ritz values (exact Arnoldi)    Ritz values (inexact Arnoldi, tuning)
+4.69249563e-02      +4.69249563e-02                +4.69249563e-02
+1.25445378e-01      +1.25445378e-01                +1.25445378e-01
+4.02658363e-01      +4.02658347e-01                +4.02658244e-01
+5.79574381e-01      +5.79625498e-01                +5.79817301e-01
+6.18836405e-01      +6.18798666e-01                +6.18650849e-01

Table: Ritz values of exact Arnoldi’s method and of inexact Arnoldi’s method with the tuning strategy, compared to the exact eigenvalues closest to zero, after 14 shift-invert Arnoldi steps.

Implicitly restarted Arnoldi (Sorensen (1992))

Exact shifts

Take a k + p step Arnoldi factorisation

AQ_{k+p} = Q_{k+p} H_{k+p} + q_{k+p+1} h_{k+p+1,k+p} e_{k+p}^H.

Compute Λ(H_{k+p}) and select p shifts for an implicit QR iteration; restart implicitly with the new starting vector q̂^(1) = p(A)q^(1)/‖p(A)q^(1)‖.

Aim of IRA

AQ_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H  with  h_{k+1,k} → 0.

Relaxation strategy for IRA

Theorem
For any given ε ∈ R with ε > 0, assume that

‖d_l‖ ≤ (C⋄/‖R_k‖) ε  if l > k,  and  ‖d_l‖ ≤ ε  otherwise.

Then ‖AQ_m U − Q_m UΘ − R_m‖ ≤ ε.

The analysis is very technical, but the relaxation strategy also works for IRA!

Tuning

Tuning for implicitly restarted Arnoldi’s method

Introduce a preconditioner P and solve

AP_k^{-1} q̃_{k+1} = q_k,  P_k^{-1} q̃_{k+1} = q_{k+1},

using GMRES and a tuned preconditioner,

P_k Q_k = AQ_k,  given by  P_k = P + (A − P)Q_k Q_k^H.

Why does tuning help?

Assume we have found an approximate invariant subspace, that is,

A^{-1}Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H  with  h_{k+1,k} ≈ 0,

and let A^{-1} have the upper Hessenberg form

[Q_k Q_k^⊥]^H A^{-1} [Q_k Q_k^⊥] = [ H_k, T_12 ; h_{k+1,k} e_1 e_k^H, T_22 ],

where [Q_k Q_k^⊥] is unitary and H_k ∈ C^{k,k} and T_22 ∈ C^{n−k,n−k} are upper Hessenberg.

If h_{k+1,k} = 0, then

[Q_k Q_k^⊥]^H AP_k^{-1} [Q_k Q_k^⊥] = [ I, Q_k^H AP_k^{-1} Q_k^⊥ ; 0, T_22^{-1}(Q_k^⊥H P Q_k^⊥)^{-1} ],

while for h_{k+1,k} ≈ 0 each block is perturbed by a small term ⋆.

Another advantage of tuning

The system to be solved at each step of Arnoldi’s method is

AP_k^{-1} q̃_{k+1} = q_k,  P_k^{-1} q̃_{k+1} = q_{k+1}.

Assuming an invariant subspace has been found (A^{-1}Q_k = Q_k H_k), then

AP_k^{-1} q_k = q_k,

that is, the right-hand side is an eigenvector of the system matrix, and Krylov methods converge in one iteration. In practice,

A^{-1}Q_k = Q_k H_k + q_{k+1} h_{k+1,k} e_k^H  and  ‖AP_k^{-1} q_k − q_k‖ = O(|h_{k+1,k}|),

so the number of inner iterations decreases as the outer iteration proceeds. The rigorous analysis is quite technical.

Numerical Example

sherman5.mtx, a nonsymmetric matrix from the Matrix Market library (3312 × 3312). k = 8 eigenvalues closest to zero; IRA with exact shifts, p = 4. Preconditioned GMRES as inner solver (fixed tolerance and relaxation strategy), standard and tuned preconditioner (incomplete LU).

No tuning and standard preconditioner

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−12.]

Tuning

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−12 vs Arnoldi 10e−12 tuned.]

Relaxation

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−12, Arnoldi relaxed tolerance, and Arnoldi 10e−12 tuned.]

Tuning and relaxation strategy

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−12, Arnoldi relaxed tolerance, Arnoldi 10e−12 tuned, and Arnoldi relaxed tolerance tuned.]

Numerical Example

qc2534.mtx, a matrix from the Matrix Market library. k = 6 eigenvalues closest to zero; IRA with exact shifts, p = 4. Preconditioned GMRES as inner solver (fixed tolerance and relaxation strategy), standard and tuned preconditioner (incomplete LU).

Tuning and relaxation strategy

[Figures: inner iterations vs outer iterations, and eigenvalue residual norms ‖r^(i)‖ vs total number of inner iterations; Arnoldi tolerance 10e−12, Arnoldi relaxed tolerance, Arnoldi 10e−12 tuned, and Arnoldi relaxed tolerance tuned.]

Conclusions

For eigenvalue computations it is advantageous to consider small rank changes to a standard preconditioner.

Works for any preconditioner.

Works for shift-invert versions of the power method, simultaneous iteration and the Arnoldi method.

Inexact inverse iteration with a special tuned preconditioner is equivalent to the Jacobi-Davidson method (without subspace expansion).

For the Arnoldi method the best results are obtained when relaxation and tuning are combined.

References

M. A. Freitag and A. Spence, Rayleigh quotient iteration and simplified Jacobi-Davidson method with preconditioned iterative solves.

M. A. Freitag and A. Spence, Convergence rates for inexact inverse iteration with application to preconditioned iterative solves, BIT, 47 (2007), pp. 27–44.

M. A. Freitag and A. Spence, A tuned preconditioner for inexact inverse iteration applied to Hermitian eigenvalue problems, IMA Journal of Numerical Analysis, 28 (2008), p. 522.

M. A. Freitag and A. Spence, Shift-invert Arnoldi’s method with preconditioned iterative solves, SIAM Journal on Matrix Analysis and Applications, 31 (2009), pp. 942–969.

F. Xue and H. C. Elman, Convergence analysis of iterative solvers in inexact Rayleigh quotient iteration, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 877–899.

F. Xue and H. C. Elman, Fast inexact subspace iteration for generalized eigenvalue problems with spectral transformation, Linear Algebra and its Applications, (2010).