Numerical Methods of Linear Algebra

1 Introduction

1.1 Tasks of Linear Algebra

1. Solving systems of linear equations $A\vec{x} = \vec{b}$
   • Solution of a system with a regular (= nonsingular, invertible) square matrix A of order n×n for one or multiple right-hand sides
   • Calculation of the matrix inverse $A^{-1}$
   • Calculation of the determinant
   • Solving systems of n equations with m unknowns and singular n×n systems in some pre-defined sense (one particular solution + the basis of the nullspace, the solution with the smallest norm, or the solution in the least-squares sense)

2. Calculation of eigenvalues and eigenvectors
   • Complete eigenvalue problem
   • Partial eigenvalue problem

Standard numerical libraries - LINPACK (special library for linear systems), EISPACK (special library for eigenvalues and eigenvectors), NAG (general), IMSL (general)

1.2 Basic Terms and Notation

Here, by vector we always mean a column vector, the superscript T stands for vector or matrix transpose, and matrices are denoted using boldface:
$$\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = (x_1, x_2, \ldots, x_n)^T, \qquad
\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1m} \\ a_{21} & a_{22} & \ldots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nm} \end{pmatrix} = (a_{ij}).$$

If not stated otherwise, we suppose real matrices and vectors. The size of a vector or matrix is measured by its norm. A vector norm must satisfy the following three conditions:

a) $\|\vec{x}\| \ge 0$ and $\|\vec{x}\| = 0 \Leftrightarrow \vec{x} = \vec{0}$,

b) $\|\lambda\vec{x}\| = |\lambda|\,\|\vec{x}\|$,

c) $\|\vec{x} + \vec{y}\| \le \|\vec{x}\| + \|\vec{y}\|$.

Examples of vector norms:

• Maximum norm ($L^\infty$ norm): $\|\vec{x}\|_I = \max_{i=1,\ldots,n} |x_i|$

• $L^1$ norm (taxicab norm, Manhattan norm): $\|\vec{x}\|_{II} = \sum_{i=1}^{n} |x_i|$

• Euclidean norm ($L^2$ norm): $\|\vec{x}\|_{III} = \sqrt{\sum_{i=1}^{n} x_i^2}$

Matrix norm. A norm defined on matrices is called a matrix norm if for all matrices A, B the following additional condition d) holds:

d) $\|A \cdot B\| \le \|A\| \cdot \|B\|$.

A matrix norm is compatible with a vector norm if for all A, $\vec{x}$ it holds that

$$\|A\vec{x}\| \le \|A\| \cdot \|\vec{x}\|.$$

Examples of matrix norms:

• Max row sum: $\|A\|_I = \max_{i=1,\ldots,n} \sum_{j=1}^{n} |a_{ij}|$

• Max column sum: $\|A\|_{II} = \max_{j=1,\ldots,n} \sum_{i=1}^{n} |a_{ij}|$

• Frobenius norm (Euclidean norm): $\|A\|_{III} = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2}$

Each of the matrix norms above is compatible with the vector norm of the same name.
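As a quick check of the definitions above, the norms can be evaluated directly with NumPy; this is a minimal sketch (the example matrix and vector are illustrative, not from the notes):

```python
import numpy as np

A = np.array([[4.0, -2.0], [1.0, 3.0]])
x = np.array([1.0, -5.0])

# vector norms
max_norm = np.max(np.abs(x))                  # L-infinity (maximum) norm
l1_norm = np.sum(np.abs(x))                   # L1 (taxicab) norm
l2_norm = np.sqrt(np.sum(x**2))               # L2 (Euclidean) norm

# matrix norms of the same names
row_sum = np.max(np.sum(np.abs(A), axis=1))   # max row sum
col_sum = np.max(np.sum(np.abs(A), axis=0))   # max column sum
frobenius = np.sqrt(np.sum(A**2))             # Frobenius norm

# compatibility check ||A x|| <= ||A|| * ||x|| for the Euclidean/Frobenius pair
assert np.sqrt(np.sum((A @ x)**2)) <= frobenius * l2_norm
```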

1.3 Methods for Solution of Linear Systems

1. Direct methods
2. Iterative methods
3. Gradient methods

2 Direct Methods for Solving Linear Systems

Direct methods consist in transformation of the matrix to triangular (or diagonal) shape (forward run), followed by solution of a system with an upper (U) or lower (L) triangular matrix (backsubstitution). Backsubstitution is much faster than the forward run.

2.1 Solving Systems with Triangular Matrix

A system with an upper triangular matrix U is solved by sequential application of the formula below in the direction of decreasing index k:

$$x_k = \frac{1}{u_{kk}} \left( b_k - \sum_{j=k+1}^{n} u_{kj} x_j \right).$$

To calculate any $x_k$ we need no more than n inner cycles (1 multiplication + 1 addition each), thus the number of operations grows as $\sim n^2$ (more precisely, $\simeq 0.5\, n^2$ inner cycles).
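A minimal sketch of this backsubstitution formula (the function name and test system are mine, chosen for illustration):

```python
import numpy as np

def back_substitution(U, b):
    """Solve U x = b for an upper triangular, nonsingular U."""
    n = U.shape[0]
    x = np.zeros(n)
    # proceed in the direction of decreasing index k
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - U[k, k+1:] @ x[k+1:]) / U[k, k]
    return x

U = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
b = np.array([3.0, 7.0, 8.0])
x = back_substitution(U, b)
assert np.allclose(U @ x, b)
```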

2.2 Gauss and Gauss-Jordan Elimination

We solve the system of equations $A\vec{x} = \vec{b}$. Suppose that in the first step $a_{11} \neq 0$ (which can always be achieved by swapping the equations). The element $a_{11}$, used for the modification of equations $2, \ldots, n$, will be called the pivot element or simply the pivot. To the $i$th equation we add the 1st equation times the multiplier $m_i^{(1)} = -a_{i1}/a_{11}$. In the modified system, all elements in the 1st column below the diagonal are now zero. This transformation, performed together with the right-hand side, corresponds to multiplication of the equation system by the matrix
$$D_1 = \begin{pmatrix} 1 & 0 & \ldots & 0 \\ -a_{21}/a_{11} & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ -a_{n1}/a_{11} & 0 & \ldots & 1 \end{pmatrix}.$$

After the first transformation the system reads $D_1 A \vec{x} = D_1 \vec{b}$. We denote $A^{(1)} \equiv D_1 A$ and $\vec{b}^{(1)} \equiv D_1 \vec{b}$.

After $k-1$ transformations the matrix $A^{(k-1)}$ reads
$$A^{(k-1)} = \begin{pmatrix}
a_{11} & a_{12} & \ldots & a_{1,k-1} & a_{1k} & \ldots & a_{1n} \\
0 & a_{22}^{(1)} & \ldots & a_{2,k-1}^{(1)} & a_{2k}^{(1)} & \ldots & a_{2n}^{(1)} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
0 & 0 & \ldots & a_{k-1,k-1}^{(k-2)} & a_{k-1,k}^{(k-2)} & \ldots & a_{k-1,n}^{(k-2)} \\
0 & 0 & \ldots & 0 & a_{kk}^{(k-1)} & \ldots & a_{k,n}^{(k-1)} \\
0 & 0 & \ldots & 0 & a_{k+1,k}^{(k-1)} & \ldots & a_{k+1,n}^{(k-1)} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
0 & 0 & \ldots & 0 & a_{nk}^{(k-1)} & \ldots & a_{n,n}^{(k-1)}
\end{pmatrix},$$
where the superscript denotes the number of modifications of the given element. If $a_{kk}^{(k-1)} \neq 0$, we can select it as the pivot, calculate the multipliers $m_i^{(k)} = -a_{ik}^{(k-1)}/a_{kk}^{(k-1)}$ for $i = k+1, \ldots, n$ and modify the corresponding equations.

The pivot in the $k$th step of the transformation is an element that has been modified (by subtraction!) $k-1$ times before → loss of accuracy ⇒ pivoting is needed.

Direct methods without pivoting are useless for general matrices!

Number of operations: we need to zero $\frac{1}{2}n(n-1)$ elements, each of which costs $\le n$ inner cycles. The total number of inner cycles is $\sim n^3$ (more precisely $\simeq \frac{1}{3} n^3$), so the complexity of this algorithm is $n^3$.

Gauss–Jordan elimination: we modify all off-diagonal elements. The matrix is transformed to the identity I, and the matrix inverse $A^{-1}$ is directly obtained as a result. More operations are needed, in particular $\simeq n^3$ inner cycles.

2.3 Pivoting (Selection of the Pivot Element)

In each step of the forward run we select the pivot.

• Full pivoting: we search the entire not-yet-processed region of the matrix for $\max |a_{ij}|$. This is slow.

• Partial pivoting: we only search the given column (column pivoting) or row (row pivoting).

• Implicit pivoting: a faster, improved strategy for column pivoting. We compare the sizes of the elements in the given column normalized to the maximum absolute value in the given row of the original matrix.

With pivoting, direct methods can be used for a majority of matrices. For general big matrices (N > 50), double precision is needed. Even then, difficulties are often encountered for large, ill-conditioned matrices!

Difficulties:
1. Singular matrices
2. A singular matrix produced by loss of accuracy during the transformations
3. Loss of accuracy
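A sketch combining the forward run of Section 2.2 with the column (partial) pivoting described above; the function name and example system are illustrative, not part of the notes:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gauss elimination with column (partial) pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # column pivoting: bring the largest |a_ik|, i >= k, onto the diagonal
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # forward run: eliminate the elements below the pivot
        for i in range(k + 1, n):
            m = -A[i, k] / A[k, k]
            A[i, k:] += m * A[k, k:]
            b[i] += m * b[k]
    # backsubstitution
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - A[k, k+1:] @ x[k+1:]) / A[k, k]
    return x

# a_11 = 0, so this system cannot be solved without pivoting
A = np.array([[0.0, 2.0, 1.0], [1.0, 1.0, 1.0], [2.0, 1.0, 3.0]])
b = np.array([3.0, 6.0, 9.0])
assert np.allclose(gauss_solve(A, b), np.linalg.solve(A, b))
```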

2.4 LU Method

Any regular matrix A can be decomposed as $A = L \cdot U$, where L is a lower left and U an upper right triangular matrix. The system is then solved by solving two systems with triangular matrices:

$$A\vec{x} = \vec{b} \;\Rightarrow\; (LU)\vec{x} = \vec{b} \;\Rightarrow\; L\underbrace{(U\vec{x})}_{\vec{y}} = \vec{b} \;\Rightarrow\; L\vec{y} = \vec{b}, \quad U\vec{x} = \vec{y}.$$

LU decomposition:
$$\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nn}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & \ldots & 0 \\
l_{21} & 1 & 0 & \ldots & 0 \\
l_{31} & l_{32} & 1 & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
l_{n1} & l_{n2} & l_{n3} & \ldots & 1
\end{pmatrix}
\cdot
\begin{pmatrix}
u_{11} & u_{12} & u_{13} & \ldots & u_{1n} \\
0 & u_{22} & u_{23} & \ldots & u_{2n} \\
0 & 0 & u_{33} & \ldots & u_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \ldots & u_{nn}
\end{pmatrix}$$


Writing out the product element by element gives
$$\text{for } i \le j: \quad a_{ij} = u_{ij} + \sum_{k=1}^{i-1} l_{ik} u_{kj},$$
$$\text{for } i > j: \quad a_{ij} = l_{ij} u_{jj} + \sum_{k=1}^{j-1} l_{ik} u_{kj}.$$

Crout's algorithm performs the calculation sequentially, e.g. going from the left by columns and down each column. First
$$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj}, \qquad i = 1, \ldots, j,$$
uses l from the preceding columns and u from the preceding rows, and then

$$l_{ij} = \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj} \right) \Big/ u_{jj}, \qquad i = j+1, \ldots, n,$$
uses l from the preceding columns and u from the part of the current column above the diagonal. Column pivoting is used (full pivoting is impossible).

Each element $a_{ij}$ is used only once, so the resulting elements of the matrices L and U can be stored in the same array.

Properties of the LU method:

• Direct method, the same number of steps as the forward run of Gauss elimination.

• Main advantage: the decomposition does not touch (depend on) the right-hand side, allowing fast calculation for multiple right-hand sides (e.g. if these are obtained dynamically during the calculation).

• The solution can be improved by iteration.
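In practice the decomposition is usually taken from a library; a sketch using SciPy's lu_factor/lu_solve (which likewise store L and U in a single array) illustrates the reuse of one factorization for several right-hand sides. The example matrix is mine:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])

lu, piv = lu_factor(A)           # one forward run (with column pivoting)

# the factorization is reused for every new right-hand side
for b in (np.array([24.0, 30.0, -24.0]), np.array([1.0, 0.0, 0.0])):
    x = lu_solve((lu, piv), b)   # two triangular backsubstitutions
    assert np.allclose(A @ x, b)
```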

2.5 Iterative Improvement of the Solution

We are searching for the solution $\vec{x}$ of the linear equation $A\vec{x} = \vec{b}$. First we obtain an inaccurate solution $\tilde{\vec{x}}$:

$$\tilde{\vec{x}} = \vec{x} + \delta\vec{x} \;\Rightarrow\; A(\vec{x} + \delta\vec{x}) = \vec{b} + \delta\vec{b} \;\Rightarrow\; A\,\delta\vec{x} = \delta\vec{b} = A\tilde{\vec{x}} - \vec{b}, \qquad \vec{x} = \tilde{\vec{x}} - \delta\vec{x}.$$

We denote by $\vec{x}_0$ the inaccurate solution from the first step, $A\vec{x}_0 \simeq \vec{b}$, and perform the iteration
$$\vec{x}_{i+1} = \vec{x}_i + (\delta\vec{x})_i, \qquad A(\delta\vec{x})_i = \vec{b} - A\vec{x}_i.$$
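A sketch of this improvement loop, reusing an LU factorization for the repeated solves (function name and test matrix are mine; in a serious implementation the residual would be accumulated in higher precision):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine(A, b, n_iter=3):
    """Iterative improvement: x_{i+1} = x_i + dx_i, where A dx_i = b - A x_i."""
    lu, piv = lu_factor(A)
    x = lu_solve((lu, piv), b)        # inaccurate first solution x_0
    for _ in range(n_iter):
        r = b - A @ x                 # residual of the current iterate
        dx = lu_solve((lu, piv), r)   # correction from the same factorization
        x = x + dx
    return x

A = np.array([[10.0, 7.0, 8.0], [7.0, 5.0, 6.0], [8.0, 6.0, 10.0]])
b = np.array([25.0, 18.0, 24.0])
assert np.allclose(A @ refine(A, b), b)
```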

2.6 Conditioning of Linear System Solution

Due to input and roundoff errors, instead of $A\vec{x} = \vec{b}$ we in fact solve

$$(A + \Delta A)(\vec{x} + \Delta\vec{x}) = \vec{b} + \Delta\vec{b}.$$

• First, for the case $\Delta A = 0$:

$$\Delta\vec{x} = A^{-1}\Delta\vec{b} \;\Rightarrow\; \|\Delta\vec{x}\| \le \|A^{-1}\| \cdot \|\Delta\vec{b}\|, \qquad A\vec{x} = \vec{b} \;\Rightarrow\; \|\vec{x}\| \ge \frac{\|\vec{b}\|}{\|A\|}.$$

Therefore, for the relative error of the solution it holds that

$$\frac{\|\Delta\vec{x}\|}{\|\vec{x}\|} \le \|A\| \cdot \|A^{-1}\| \cdot \frac{\|\Delta\vec{b}\|}{\|\vec{b}\|}.$$

The value $C_p = \|A\| \cdot \|A^{-1}\|$ is called the condition number of the matrix.

• If moreover $\Delta A \neq 0$, then

$$\frac{\|\Delta\vec{x}\|}{\|\vec{x}\|} \le C_p\, \frac{\dfrac{\|\Delta A\|}{\|A\|} + \dfrac{\|\Delta\vec{b}\|}{\|\vec{b}\|}}{1 - C_p \dfrac{\|\Delta A\|}{\|A\|}}.$$

For $C_p \gg 1$ the system is ill-conditioned, meaning that small input errors or small roundoff errors during the calculation lead to a large error of the solution.
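The condition number $C_p = \|A\|\cdot\|A^{-1}\|$ can be obtained with numpy.linalg.cond; a short sketch using the classically ill-conditioned Hilbert matrix as an example (the choice of example is mine):

```python
import numpy as np

n = 8
# Hilbert matrix H_ij = 1/(i + j + 1), a standard ill-conditioned example
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

cond = np.linalg.cond(H)      # condition number in the spectral (2-)norm
print(f"C_p ~ {cond:.2e}")    # ~1e10: roughly ten significant digits can be lost
```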

2.7 Calculation of the Matrix Inverse and Determinant

Gauss–Jordan elimination calculates the matrix inverse directly. With Gauss elimination and LU decomposition we obtain the matrix inverse by solving for n right-hand sides, given by the vectors of the standard basis. All three methods are equivalent in accuracy as well as in computational cost $\sim n^3$.

The determinant should not be computed by classical formulas (uncontrolled growth of roundoff error), but rather using the fact that the determinant of a matrix product equals the product of the determinants. For the LU decomposition
$$\det(A) = \det(L) \cdot \det(U) = \prod_{j=1}^{n} u_{jj}.$$
Gauss elimination corresponds to multiplication by transformation matrices, and if the row itself is not multiplied (i.e. only multiples of other rows are added to it), the determinant of each such transformation is $D_i = 1$, and thus the determinant of A equals the product of the diagonal elements of the triangular matrix. If rows are swapped by pivoting, the determinant is multiplied by −1 and we have to remember the number of such swaps (changes of sign).
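A sketch of this recipe on top of scipy.linalg.lu_factor, which records the row interchanges in piv so the sign changes can be counted (function name and example are mine):

```python
import numpy as np
from scipy.linalg import lu_factor

def det_via_lu(A):
    lu, piv = lu_factor(A)
    # every entry piv[i] != i corresponds to one row swap, i.e. one sign change
    sign = (-1) ** np.sum(piv != np.arange(len(piv)))
    return sign * np.prod(np.diag(lu))   # product of the diagonal of U

A = np.array([[2.0, 1.0], [3.0, 4.0]])
assert np.isclose(det_via_lu(A), np.linalg.det(A))   # both give 5
```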

2.8 Special Types of Matrices

• A sparse matrix A has most of its elements equal to 0. To solve systems with sparse matrices, gradient methods are frequently used, consisting in minimization of a suitable quadratic function such as $\|A\vec{x} - \vec{b}\|_{III}^2$, because for a sparse matrix the number of operations for the calculation of $A\vec{x}$ is $\sim n$ instead of $\sim n^2$, as is the case for a dense matrix.

• A is a band matrix if $a_{ij} = 0$ for $|i - j| > p$. For $p = 1$ we have a tridiagonal matrix, for $p = 2$ a pentadiagonal matrix.

• Systems with a tridiagonal matrix:
$$\begin{pmatrix}
a_1 & b_1 & 0 & \ldots & 0 & 0 & 0 \\
c_2 & a_2 & b_2 & \ldots & 0 & 0 & 0 \\
0 & c_3 & a_3 & \ldots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \ldots & c_{n-1} & a_{n-1} & b_{n-1} \\
0 & 0 & 0 & \ldots & 0 & c_n & a_n
\end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
=
\begin{pmatrix} f_1 \\ f_2 \\ f_3 \\ \vdots \\ f_{n-1} \\ f_n \end{pmatrix}$$
The tridiagonal matrix is stored in 3 vectors $\vec{a}$, $\vec{b}$, $\vec{c}$. In practice, in most cases we encounter a tridiagonal matrix where pivoting is not needed (strongly regular matrices).

Solution: we expect the backward run (backsubstitution) in the form $x_k = \mu_k x_{k+1} + \rho_k$. After inserting $x_{i-1} = \mu_{i-1} x_i + \rho_{i-1}$ into the $i$th equation we have

$$c_i (\mu_{i-1} x_i + \rho_{i-1}) + a_i x_i + b_i x_{i+1} = f_i,$$

and further rearrangement yields

$$x_i = \frac{-b_i}{c_i \mu_{i-1} + a_i}\, x_{i+1} + \frac{f_i - c_i \rho_{i-1}}{c_i \mu_{i-1} + a_i},$$

so that

$$\mu_i = \frac{-b_i}{c_i \mu_{i-1} + a_i}, \qquad \rho_i = \frac{f_i - c_i \rho_{i-1}}{c_i \mu_{i-1} + a_i}.$$

Initialization: $c_1 = 0$, $b_n = 0$, $\{\mu_0, \rho_0, x_{n+1}\}$ arbitrary. (A code sketch of this algorithm follows after this list.)

• Block tridiagonal matrix: $A_i$, $B_i$, $C_i$ are small matrices ⇒ $\mu_i$, $\rho_i$ are small matrices.
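A sketch of the tridiagonal recurrences above (the Thomas algorithm), with the matrix stored in the three vectors a, b, c; it assumes a strongly regular matrix, i.e. no pivoting, and the names are mine:

```python
import numpy as np

def thomas(a, b, c, f):
    """Solve a tridiagonal system: a = diagonal, b = upper, c = lower, f = rhs."""
    n = len(a)
    mu = np.zeros(n)
    rho = np.zeros(n)
    # forward sweep; with c[0] = 0 the first step gives mu_1 = -b_1/a_1, rho_1 = f_1/a_1
    mu_prev, rho_prev = 0.0, 0.0
    for i in range(n):
        denom = c[i] * mu_prev + a[i]
        mu[i] = -b[i] / denom
        rho[i] = (f[i] - c[i] * rho_prev) / denom
        mu_prev, rho_prev = mu[i], rho[i]
    # backward run x_k = mu_k x_{k+1} + rho_k; b[n-1] = 0 makes mu[n-1] = 0
    x = np.zeros(n)
    x_next = 0.0
    for k in range(n - 1, -1, -1):
        x[k] = mu[k] * x_next + rho[k]
        x_next = x[k]
    return x

# example: diagonal 2, off-diagonals -1
a = np.array([2.0, 2.0, 2.0, 2.0])
b = np.array([-1.0, -1.0, -1.0, 0.0])   # b_n = 0
c = np.array([0.0, -1.0, -1.0, -1.0])   # c_1 = 0
f = np.array([1.0, 0.0, 0.0, 1.0])
A = np.diag(a) + np.diag(b[:-1], 1) + np.diag(c[1:], -1)
assert np.allclose(A @ thomas(a, b, c, f), f)
```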

2.9 Systems with No Solution or ∞ Solutions

m equations with n unknowns, or linearly dependent n×n systems.

SVD method (singular value decomposition): if $A\vec{x} = \vec{b}$ has ∞ solutions, SVD will find the solution with the smallest Euclidean norm and the basis of the nullspace. If no solution exists, SVD finds the solution in the least-squares sense: the vector $\vec{x}$ minimizing $\|A\vec{x} - \vec{b}\|_{III}$.

SVD: A, U are m × n matrices; W, V, I are n × n matrices; W is diagonal, I is the identity, and U, V are orthogonal ($U^T \cdot U = V^T \cdot V = I$).

$$A = U \cdot W \cdot V^T \;\Rightarrow\; \vec{x} = V \cdot [\mathrm{diag}(1/w_j)] \cdot U^T \cdot \vec{b}$$

If $w_j = 0$ (or $w_j \simeq 0$), we substitute $\frac{1}{w_j} \to 0$ (this detects a singular matrix).
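A sketch of this SVD solution with the reciprocals of (near-)zero singular values set to zero; the function name, tolerance and example system are mine:

```python
import numpy as np

def svd_solve(A, b, tol=1e-12):
    """Least-squares / minimum-norm solution of A x = b via SVD."""
    U, w, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(w) V^T
    w_inv = np.zeros_like(w)
    big = w > tol * w.max()
    w_inv[big] = 1.0 / w[big]                          # 1/w_j -> 0 for w_j ~ 0
    return Vt.T @ (w_inv * (U.T @ b))

# singular 2x2 system with infinitely many solutions: x1 + x2 = 2
A = np.array([[1.0, 1.0], [2.0, 2.0]])
b = np.array([2.0, 4.0])
x = svd_solve(A, b)            # smallest-norm solution
assert np.allclose(x, [1.0, 1.0])
```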

2.10 Computational Cost of Solving an n×n Linear System by Direct Methods

• Gauss–Jordan elimination: the calculation performs $\sim n^3$ inner cycles, each cycle containing one multiplication and one addition.

• Gauss elimination and LU decomposition: the forward run is much more expensive than the backsubstitution. Both methods need $\sim \frac{1}{3} n^3$ inner cycles in the forward run. In backsubstitution, Gauss elimination needs $\sim \frac{1}{2} n(n-1)$ cycles. LU decomposition needs two backsubstitutions, but its forward run is slightly shorter, since the right-hand side is not being modified. The total numbers of operations for Gauss elimination and LU decomposition are equal. For the calculation of the matrix inverse, the costs of Gauss–Jordan elimination, Gauss elimination and LU decomposition are equal.

• A method of order < 3: Strassen proved the existence of a method where the number of operations is $\sim n^{\log_2 7}$ ($\log_2 7 \simeq 2.807$), and thus with increasing dimension n of the matrix it grows slower than in the classical methods (which need $\sim n^3$ operations). This algorithm however requires complicated bookkeeping of intermediate results, so for small matrices it is much slower than the classical methods, and its advantage takes effect only for matrices of order $n \gg 1000$.

3 Gradient Methods

The linear equation $A\vec{x} = \vec{b}$ is solved, for example, by minimization of the function
$$f(\vec{x}) = \frac{1}{2} |A\vec{x} - \vec{b}|^2.$$
In each step, we search for $\lambda$ such that $f(\vec{x} + \lambda\vec{u})$ is minimized. Thus
$$\lambda = \frac{-\vec{u} \cdot \nabla f}{|A\vec{u}|^2}, \qquad \text{where } \nabla f(\vec{x}) = A^T (A\vec{x} - \vec{b}).$$
For sparse matrices, the complexity (computational cost) of multiplying a vector $\vec{x}$ by the matrix A reduces from $\sim n^2$ to $\sim n$.

Note: The most popular gradient method is the conjugate gradient method.
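A sketch of the steepest-descent version of this idea, written so that the matrix enters only through products $A\vec{v}$ (which is where sparsity pays off); a conjugate gradient method would only change the choice of the direction $\vec{u}$. The function name, iteration count and test system are mine:

```python
import numpy as np

def gradient_solve(A, b, n_iter=500):
    """Minimize f(x) = 0.5*|A x - b|^2 by steepest descent with exact line search."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)      # gradient of f
        u = -grad                     # descent direction
        Au = A @ u
        denom = Au @ Au
        if denom == 0.0:              # already at the minimum
            break
        lam = -(u @ grad) / denom     # optimal step length from the formula above
        x = x + lam * u
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
assert np.allclose(gradient_solve(A, b), np.linalg.solve(A, b), atol=1e-6)
```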

4 Iterative Methods for Solving Linear Systems

4.1 Some Types of Matrices

• An n×n matrix A is diagonally dominant if

$$|a_{ii}| > \sum_{j=1,\, j \neq i}^{n} |a_{ij}| \qquad \forall i = 1, 2, \ldots, n.$$

• A symmetric n×n matrix A is positive definite if for all $\vec{x} \neq \vec{0}$ the scalar product $(\vec{x}, A\vec{x}) > 0$.

4.2 The Iteration Process

The solution $\vec{x}$ of the linear equation $A\vec{x} = \vec{b}$ is first estimated by a vector $\vec{x}^{(0)}$ (initial guess). The next approximation to the exact solution is given by

$$\vec{x}^{(k+1)} = B_k \vec{x}^{(k)} + \vec{c}_k.$$
The solution $\vec{x}$ has to satisfy

$$\vec{x} = B_k \vec{x} + \vec{c}_k.$$

This implies that $\vec{x}^{(k+1)} - \vec{x} = B_k(\vec{x}^{(k)} - \vec{x}) = B_k B_{k-1}(\vec{x}^{(k-1)} - \vec{x}) = \ldots$ Thus the necessary and sufficient condition for convergence of the iteration process is

$$\lim_{k \to \infty} B_k B_{k-1} \cdots B_0 = 0.$$

Iterative methods are either stationary, i.e. their matrix $B_k$ is constant ($B_k = B$), or nonstationary.

4.3 Example of a Nonstationary Iterative Method

An example of a nonstationary iterative method is the "hand-picked" relaxation. For simplicity we consider a matrix A with ones on the diagonal ($a_{ii} = 1$). Suppose that after the $k$th iteration the biggest component of the residual $|A\vec{x} - \vec{b}|$ is the $i$th component. We therefore want to zero this $i$th component in the $(k+1)$th iteration:

$$x_i^{(k+1)} = b_i - a_{i1} x_1^{(k)} - \cdots - a_{i,i-1} x_{i-1}^{(k)} - a_{i,i+1} x_{i+1}^{(k)} - \cdots - a_{in} x_n^{(k)}.$$

The matrix $B_k$ and the vector $\vec{c}_k$ are now
$$B_k = \begin{pmatrix}
1 & & & & & & \\
& \ddots & & & & & \\
& & 1 & & & & \\
-a_{i1} & \cdots & -a_{i,i-1} & 0 & -a_{i,i+1} & \cdots & -a_{in} \\
& & & & 1 & & \\
& & & & & \ddots & \\
& & & & & & 1
\end{pmatrix}, \qquad
\vec{c}_k = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ b_i \\ 0 \\ \vdots \\ 0 \end{pmatrix}.$$
This method is however not suitable for computers, because in each step it requires a time-consuming search for the equation with the biggest deviation from the solution.

4.4 Stationary Iterative Methods

All eigenvalues λ of the matrix B (i.e. numbers for which there exists $\vec{v} \neq \vec{0}$ such that $B\vec{v} = \lambda\vec{v}$) have to satisfy $|\lambda| < 1$.

Theorem: The necessary and sufficient condition for convergence of the method is that the spectral radius $\varrho(B)$ satisfies

$$\varrho(B) = \max_{i=1,\ldots,n} |\lambda_i| < 1.$$

Theorem: If in some matrix norm

$$\|B\| < 1,$$
then the iteration converges.

Estimation of the error: we want to reach precision ε and thus we iterate until $\|\vec{x}^{(k)} - \vec{x}\| \le \varepsilon$. The expression $\|\vec{x}^{(k)} - \vec{x}\|$ can be estimated by

$$\|\vec{x}^{(k)} - \vec{x}\| = \|\vec{x}^{(k)} - A^{-1}\vec{b}\| = \|A^{-1}(A\vec{x}^{(k)} - \vec{b})\| \le \|A^{-1}\|\, \|A\vec{x}^{(k)} - \vec{b}\|.$$

4.5 Simple Iteration

(Also referred to as the original Richardson iteration or fixed-point iteration for linear systems.) The linear equation $A\vec{x} = \vec{b}$ is rewritten as

$$\vec{x} = (I - A)\vec{x} + \vec{b},$$
where I is the identity matrix. The simple iteration is then given by the iterative formula

$$\vec{x}^{(k+1)} = (I - A)\vec{x}^{(k)} + \vec{b}.$$

Denoting $B = I - A$, we can estimate the accuracy of the $k$th iteration as
$$\|\vec{x}^{(k)} - \vec{x}\| \le \|B\|^k \left[ \|\vec{x}^{(0)}\| + \frac{\|\vec{b}\|}{1 - \|B\|} \right].$$

In practice, while simple iteration is sometimes used for nonlinear systems, its use for systems of linear equations is very rare.

4.6 Jacobi Iteration

This method assumes that the matrix A has nonzero diagonal elements, $a_{ii} \neq 0$. The components of the $(k+1)$th Jacobi iteration of the solution are

$$x_i^{(k+1)} = -\frac{a_{i1}}{a_{ii}} x_1^{(k)} - \cdots - \frac{a_{i,i-1}}{a_{ii}} x_{i-1}^{(k)} - \frac{a_{i,i+1}}{a_{ii}} x_{i+1}^{(k)} - \cdots - \frac{a_{in}}{a_{ii}} x_n^{(k)} + \frac{b_i}{a_{ii}}.$$
The matrix A can be written in the form $A = D + L + R$, where D is a diagonal, L a lower triangular and R an upper triangular matrix (L and R have zeros on their diagonals).

Jacobi iteration can be written as

$$\vec{x}^{(k+1)} = -D^{-1}(L + R)\vec{x}^{(k)} + D^{-1}\vec{b}.$$

Theorem: If A is diagonally dominant, then the Jacobi iteration method does converge.

Proof: For a diagonally dominant matrix it holds that
$$\sum_{j=1,\, j \neq i}^{n} \frac{|a_{ij}|}{|a_{ii}|} < 1,$$
and thus the max row sum norm satisfies $\|D^{-1}(L + R)\|_I < 1$.
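A sketch of the Jacobi iteration in the matrix form $\vec{x}^{(k+1)} = -D^{-1}(L+R)\vec{x}^{(k)} + D^{-1}\vec{b}$; the function name, tolerance and diagonally dominant test matrix are mine:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Jacobi iteration for A x = b (A should be diagonally dominant)."""
    D = np.diag(A)                     # diagonal of A
    LR = A - np.diag(D)                # off-diagonal part L + R
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - LR @ x) / D       # x^{(k+1)} = D^{-1}(b - (L + R) x^{(k)})
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]])  # diagonally dominant
b = np.array([6.0, 8.0, 4.0])
assert np.allclose(jacobi(A, b), np.linalg.solve(A, b), atol=1e-8)
```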

4.7 Gauss–Seidel Iteration

The Gauss–Seidel method is similar to the Jacobi iteration, but unlike it, to compute the components $x_i^{(k+1)}$ it employs the already known components from the $(k+1)$th iteration. The iteration is given by the relation

$$x_i^{(k+1)} = -\frac{a_{i1}}{a_{ii}} x_1^{(k+1)} - \cdots - \frac{a_{i,i-1}}{a_{ii}} x_{i-1}^{(k+1)} - \frac{a_{i,i+1}}{a_{ii}} x_{i+1}^{(k)} - \cdots - \frac{a_{in}}{a_{ii}} x_n^{(k)} + \frac{b_i}{a_{ii}},$$
which can be written in the vector form
$$\vec{x}^{(k+1)} = -(D + L)^{-1} R\, \vec{x}^{(k)} + (D + L)^{-1}\vec{b}.$$

Theorem: A sufficient condition for the convergence of the Gauss–Seidel iteration is that at least one of the following statements is true:
1. The matrix A is diagonally dominant.
2. The matrix A is positive definite (symmetric with positive eigenvalues).

4.8 Successive Over-relaxation (SOR)

The Gauss–Seidel method converges for a wide variety of matrices, however its convergence can be very slow in some cases. Let $\Delta x_i^{(k)} = x_i^{(k+1)} - x_i^{(k)}$ be the difference between two consecutive Gauss–Seidel iterations. Then the successive over-relaxation method is given by
$$x_i^{(k+1)} = x_i^{(k)} + \omega\, \Delta x_i^{(k)},$$
where the relaxation factor ω is between 0 and 2, typically $\omega \in \langle 1, 2)$. The relaxation factor is intended to accelerate the method, and its optimal value can be calculated as
$$\omega_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \varrho^2(B)}}, \qquad B = -(D + L)^{-1} R,$$
where B is the iteration matrix of the Gauss–Seidel method. The Gauss–Seidel method is thus a special case of the successive over-relaxation method with ω = 1.
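A sketch of SOR built on the component-wise Gauss–Seidel update; setting omega = 1 reproduces the Gauss–Seidel method. The function name, symmetric positive definite test matrix and the value omega = 1.1 are illustrative choices of mine:

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=1000):
    """Successive over-relaxation; omega = 1 gives the Gauss-Seidel method."""
    n = len(b)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel value using the already updated components x[:i]
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x_old[i+1:]) / A[i, i]
            x[i] = x_old[i] + omega * (gs - x_old[i])   # over-relaxed update
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 3.0]])  # symmetric positive definite
b = np.array([6.0, 8.0, 4.0])
assert np.allclose(sor(A, b, omega=1.1), np.linalg.solve(A, b), atol=1e-8)
```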

5 Calculation of Eigenvalues and Eigenvectors

5.1 Introduction

Let for a number λ there exist a vector $\vec{x} \neq \vec{0}$ such that $A\vec{x} = \lambda\vec{x}$. Then λ is an eigenvalue and $\vec{x}$ is an eigenvector of the matrix A.

Two types of tasks:
1. Complete eigenvalue problem – to find all eigenvalues and, if needed, also all eigenvectors
2. Partial eigenvalue problem – to find one or a few eigenvalues (typically the biggest ones)

The characteristic polynomial of matrix A is the determinant det(A − λI).

Note: If A is an n×n matrix, then its characteristic polynomial is of degree n, and thus has n roots (possibly multiple ones). For each eigenvalue there exists at least one eigenvector. The number l of linearly independent eigenvectors satisfies $l \le k$, where k is the multiplicity of the given eigenvalue.

Note: A matrix can have fewer than n linearly independent eigenvectors. For example:
$$A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad \det(A - \lambda I) = (1 - \lambda)^2 = 0, \quad \text{and thus } \lambda_{1,2} = 1.$$
The vector $\vec{x} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ is the only eigenvector of A.

Note: A real matrix can have complex conjugate eigenvalues and eigenvectors. For example:
$$A = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \qquad \det(A - \lambda I) = (1 - \lambda)^2 + 1 = 0, \quad \text{thus } \lambda_{1,2} = 1 \pm i.$$
The eigenvectors are $\vec{x}_1 = \begin{pmatrix} 1 \\ -i \end{pmatrix}$ and $\vec{x}_2 = \begin{pmatrix} 1 \\ i \end{pmatrix}$.

A normal matrix A satisfies $A^T A = A A^T$. A normal matrix of order n has n linearly independent eigenvectors.

Note: All eigenvalues of a symmetric matrix ($A = A^T$) are real.

Note: All eigenvalues of a triangular matrix are on its diagonal.

Theorem: Similar matrices A and $P^{-1}AP$ have equal eigenvalues (equal spectra). Proof:

$$\det(P^{-1}AP - \lambda I) = \det[P^{-1}(A - \lambda I)P] = \det(P^{-1}) \det(A - \lambda I) \det(P) = \det(A - \lambda I).$$

If the vector $\vec{x}$ is an eigenvector of the matrix A, then the vector $P^{-1}\vec{x}$ is an eigenvector of the matrix $P^{-1}AP$.

Theorem: For each matrix there exists a similar matrix in Jordan normal form (Jordan canonical form)
$$J = \begin{pmatrix} J_1 & 0 & \ldots & 0 \\ 0 & J_2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \ldots & J_s \end{pmatrix}, \qquad \text{where } J_i = \begin{pmatrix} \lambda & 0 & 0 & \ldots & 0 \\ 1 & \lambda & 0 & \ldots & 0 \\ 0 & 1 & \lambda & \ldots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & \lambda \end{pmatrix}.$$

Note: For each normal matrix there exists a similar diagonal matrix.

Note: There is no finite process to transform a matrix to Jordan normal form.

Numerical Methods for Solving the Complete Eigenvalue Problem

1. Using a sequence of elementary transformations, we convert the matrix to an approximately diagonal (resp. Jordan normal) form or to approximately a special type (e.g. tridiagonal or Hessenberg form).

2. We decompose the matrix A into a product of two matrices, $A = F_L \cdot F_R$. The matrix $\tilde{A} = F_R F_L$ is similar to the matrix A.
   Derivation: $F_R F_L = F_L^{-1} F_L F_R F_L = F_L^{-1} A F_L$.

5.2 Jacobi Transformation (Method)

The Jacobi method finds all eigenvalues and eigenvectors of a symmetric matrix. The task is to successively reduce the size of off-diagonal values.

In each step we find the off-diagonal element with the biggest absolute value, $|a_{pq}| = \max_{i \neq j} |a_{ij}|$, and rotate the axes p, q so that the $2 \times 2$ submatrix $\begin{pmatrix} a_{pp} & a_{pq} \\ a_{qp} & a_{qq} \end{pmatrix}$ becomes diagonal.

The $k$th iteration of the Jacobi method is
$$A^{(k)} = T_{p,q}^T A^{(k-1)} T_{p,q}, \qquad \text{where } T_{p,q} = \begin{pmatrix}
1 & & & & \\
& \cos\varphi & \cdots & -\sin\varphi & \\
& \vdots & 1 & \vdots & \\
& \sin\varphi & \cdots & \cos\varphi & \\
& & & & 1
\end{pmatrix}.$$
After the $k$th iteration, the matrix element $a_{pq}$ becomes

$$a_{pq}^{(k)} = (\cos^2\varphi - \sin^2\varphi)\, a_{pq}^{(k-1)} - \cos\varphi \sin\varphi\, (a_{pp}^{(k-1)} - a_{qq}^{(k-1)}).$$

In order to have $a_{pq}^{(k)} = 0$, the angle φ has to satisfy
$$\tan 2\varphi = \frac{2 a_{pq}}{a_{pp} - a_{qq}}.$$

Proof of convergence: In the course of the Jacobi transformations, off-diagonal zeros are not permanent. The proof of convergence is based on the convergence of the sum of the squared off-diagonal elements to zero.

Let us denote $t(A) = \sum_{i,j=1;\, i \neq j}^{n} a_{ij}^2$. Then
$$t(A^{(k)}) = t(A^{(k-1)}) - 2 a_{pq}^2 \qquad \text{and} \qquad a_{pq}^2 \ge \frac{t(A^{(k-1)})}{n(n-1)}.$$
Thus
$$t(A^{(k)}) \le \left(1 - \frac{2}{n(n-1)}\right) t(A^{(k-1)}) \le \left(1 - \frac{2}{n(n-1)}\right)^k t(A).$$

So the sequence is estimated from above by a geometric sequence with common ratio < 1.
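A compact sketch of the Jacobi rotation method for a symmetric matrix, applying the rotation T directly rather than building the full transformation product; in production code one would simply call numpy.linalg.eigh. The function name, tolerances and test matrix are mine:

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12, max_rot=100):
    """Eigenvalues of a symmetric matrix by Jacobi rotations."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(max_rot):
        # largest off-diagonal element a_pq
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        # rotation angle from tan(2*phi) = 2 a_pq / (a_pp - a_qq)
        phi = 0.5 * np.arctan2(2.0 * A[p, q], A[p, p] - A[q, q])
        c, s = np.cos(phi), np.sin(phi)
        R = np.eye(n)
        R[p, p] = c
        R[q, q] = c
        R[p, q] = -s
        R[q, p] = s
        A = R.T @ A @ R                 # similarity transform, zeroes a_pq
    return np.sort(np.diag(A))

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
assert np.allclose(jacobi_eigenvalues(A), np.sort(np.linalg.eigvalsh(A)))
```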

5.3 LU Decomposition for Complete Eigenvalue Problem

This method converges very slowly and the calculation needs a lot of operations.

The matrix A is the zeroth step of the iteration, i.e. $A_0 \equiv A$. In the $k$th step we decompose the matrix $A_k = L_k U_k$ and create the matrix $A_{k+1} = U_k L_k$. This matrix is similar to $A_k$.

If the sequence $B_k = L_0 L_1 \cdots L_k$ converges to a regular matrix, then the matrix $A_k$ converges to a triangular matrix, which has the eigenvalues on its diagonal.

There exist special decompositions of matrices suited for the calculation of eigenvalues and eigenvectors. A similar process using these decompositions converges rapidly.

5.4 Partial Eigenvalue Problem

Let us search for the eigenvalue with the biggest absolute value. We start by choosing an arbitrary vector $\vec{x}^{(0)}$. Then we iterate

$$\vec{x}^{(k+1)} = \frac{1}{\varrho_k} A\vec{x}^{(k)}, \qquad \text{where } \varrho_k = \vec{e}_1^{\,T} A\vec{x}^{(k)} \quad (\text{resp. } \varrho_k = \|A\vec{x}^{(k)}\|).$$
Then it holds that

$$\lim_{k \to \infty} \varrho_k = \lambda_1 \qquad \text{and} \qquad \lim_{k \to \infty} \vec{x}^{(k)} = \vec{x}_1.$$
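A sketch of this power iteration, normalizing by the Euclidean norm $\varrho_k = \|A\vec{x}^{(k)}\|$; the function name, starting vector and test matrix are mine:

```python
import numpy as np

def power_iteration(A, n_iter=200):
    """Largest (in absolute value) eigenvalue and its eigenvector."""
    x = np.ones(A.shape[0])            # arbitrary starting vector
    rho = 0.0
    for _ in range(n_iter):
        y = A @ x
        rho = np.linalg.norm(y)        # rho_k = ||A x^(k)||
        x = y / rho
    return rho, x

A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
lam, v = power_iteration(A)
assert np.isclose(lam, 4.0)            # dominant eigenvalue of this matrix
assert np.allclose(A @ v, lam * v, atol=1e-8)
```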

To get the next eigenvalue, we reduce the matrix to order $(n-1)$. If the eigenvector is $\vec{x}_1 = (u_1, \ldots, u_n)^T$, it holds that
$$P = \begin{pmatrix}
u_1 & 0 & \ldots & 0 \\
u_2 & 1 & & 0 \\
\vdots & & \ddots & \vdots \\
u_n & 0 & \ldots & 1
\end{pmatrix}, \qquad
P^{-1} A P = \begin{pmatrix} \lambda_1 & \vec{q}^{\,T} \\ \vec{0} & B \end{pmatrix},$$

and thus we now search for the maximal eigenvalue of the matrix B.

Note: The drawback is a gradual loss of accuracy.

Note: The smallest eigenvalue can be found as the biggest eigenvalue of $A^{-1}$. To find an eigenvalue in a certain region we can shift the matrix, since $(A + \mu I)\vec{x} = (\lambda + \mu)\vec{x}$.
