
Numerical Methods of Linear Algebra

1 Introduction

1.1 Tasks of Linear Algebra

1. Solving systems of linear equations $A\vec{x} = \vec{b}$
   • Solution of a system with a regular (= nonsingular, invertible) square matrix $A$ of order $n \times n$ for one or multiple right-hand sides
   • Calculation of the matrix inverse $A^{-1}$
   • Calculation of the determinant
   • Solving systems of $n$ equations with $m$ unknowns, and singular $n \times n$ systems, in some pre-defined sense (one particular solution + a basis of the nullspace, the solution with the smallest norm, or a least-squares solution)

2. Calculation of eigenvalues and eigenvectors
   • Complete eigenvalue problem
   • Partial eigenvalue problem

Standard numerical libraries: LINPACK (special library for linear systems), EISPACK (special library for eigenvalues and eigenvectors), NAG (general), IMSL (general).

1.2 Basic Terms and Notation

Here, by a vector we always mean a column vector, the superscript $T$ stands for vector or matrix transpose, and matrices are denoted using boldface:
$$
\vec{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = (x_1, x_2, \ldots, x_n)^T, \qquad
\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1m} \\ a_{21} & a_{22} & \ldots & a_{2m} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nm} \end{pmatrix} = (a_{ij}).
$$

If not stated otherwise, we suppose real matrices and vectors. The size of a vector or matrix is measured by its norm.

A vector norm must satisfy the following three conditions:
a) $\|\vec{x}\| \ge 0$, and $\|\vec{x}\| = 0 \Leftrightarrow \vec{x} = \vec{0}$,
b) $\|\lambda\vec{x}\| = |\lambda| \, \|\vec{x}\|$,
c) $\|\vec{x} + \vec{y}\| \le \|\vec{x}\| + \|\vec{y}\|$.
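The three conditions can be checked numerically for a concrete norm. Below is a minimal Python sketch (the function name and test vectors are my own choices, not part of the notes), using the Euclidean norm as the example:

```python
import math

def euclidean_norm(x):
    # ||x|| = sqrt(sum of squares of the components)
    return math.sqrt(sum(xi * xi for xi in x))

x = [3.0, -4.0]
y = [1.0, 2.0]

# a) non-negativity, and zero only for the zero vector
assert euclidean_norm(x) >= 0
assert euclidean_norm([0.0, 0.0]) == 0.0

# b) absolute homogeneity: ||lambda * x|| = |lambda| * ||x||
lam = -2.5
scaled = [lam * xi for xi in x]
assert math.isclose(euclidean_norm(scaled), abs(lam) * euclidean_norm(x))

# c) triangle inequality: ||x + y|| <= ||x|| + ||y||
s = [xi + yi for xi, yi in zip(x, y)]
assert euclidean_norm(s) <= euclidean_norm(x) + euclidean_norm(y)

print(euclidean_norm(x))  # -> 5.0
```

The same checks apply verbatim to the maximum and taxicab norms introduced next.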
Examples of vector norms:

• Maximum norm ($L^\infty$ norm): $\|\vec{x}\|_I = \max_{i=1,\ldots,n} |x_i|$

• $L^1$ norm (taxicab norm, Manhattan norm): $\|\vec{x}\|_{II} = \sum_{i=1}^{n} |x_i|$

• Euclidean norm ($L^2$ norm): $\|\vec{x}\|_{III} = \sqrt{\sum_{i=1}^{n} x_i^2}$

Matrix norm

A vector norm applied to a matrix is called a matrix norm if for all matrices $A, B$ the following additional condition d) holds:
$$\text{d)} \quad \|A \cdot B\| \le \|A\| \cdot \|B\|.$$

A matrix norm is compatible with a vector norm if for all $A, \vec{x}$
$$\|A\vec{x}\| \le \|A\| \cdot \|\vec{x}\|.$$

Examples of matrix norms:

• Max row sum: $\|A\|_I = \max_{i=1,\ldots,n} \sum_{j=1}^{n} |a_{ij}|$

• Max column sum: $\|A\|_{II} = \max_{j=1,\ldots,n} \sum_{i=1}^{n} |a_{ij}|$

• Frobenius norm (Euclidean norm): $\|A\|_{III} = \sqrt{\sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij}^2}$

Each of the matrix norms above is compatible with the vector norm of the same name.

1.3 Methods for Solution of Linear Systems

1. Direct methods
2. Iterative methods
3. Gradient methods

2 Direct Methods for Solving Linear Systems

Direct methods consist in transformation of the matrix to triangular (or diagonal) shape (forward run), followed by solution of a system with an upper (U) or lower (L) triangular matrix (backsubstitution). Backsubstitution is much faster than the forward run.

2.1 Solving Systems with a Triangular Matrix

A system with an upper triangular matrix U is solved by sequential application of the following formula in the direction of decreasing index $k$:
$$x_k = \frac{1}{u_{kk}} \left( b_k - \sum_{j=k+1}^{n} u_{kj} x_j \right).$$

To calculate any $x_k$ we need no more than $n$ inner cycles (1 multiplication + 1 addition), thus the number of operations grows as $\sim n^2$ (more precisely, $\simeq 0.5\, n^2$ inner cycles).

2.2 Gauss and Gauss-Jordan Elimination

We solve the system of equations $A\vec{x} = \vec{b}$. Suppose that in the first step $a_{11} \ne 0$ (which can always be achieved by swapping the equations). The element $a_{11}$, used for modification of equations $2, \ldots, n$, will be called the pivot element, or simply the pivot.

From the $i$th equation we subtract the 1st equation times the multiplier $m_i^{(1)} = -a_{i1}/a_{11}$. In the modified system, all elements in the 1st column below the diagonal are now zero.
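The backsubstitution formula of Section 2.1 can be sketched in a few lines of Python (a plain-list implementation of my own, for illustration only):

```python
def back_substitute(U, b):
    # Solve U x = b for an upper triangular U, in order of decreasing k:
    #   x_k = (b_k - sum_{j>k} u_kj x_j) / u_kk
    n = len(b)
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(U[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / U[k][k]
    return x

U = [[2.0, 1.0, -1.0],
     [0.0, 3.0,  2.0],
     [0.0, 0.0,  4.0]]
b = [3.0, 8.0, 4.0]
print(back_substitute(U, b))  # -> [1.0, 2.0, 1.0]
```

The double loop makes the $\simeq 0.5\, n^2$ inner-cycle count visible: the inner sum over $j$ has at most $n - k$ terms for each $k$.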
This transformation, performed together with the right-hand side, corresponds to multiplication of the equation system by the matrix
$$
D_1 = \begin{pmatrix}
1 & 0 & \ldots & 0 \\
-a_{21}/a_{11} & 1 & \ldots & 0 \\
\vdots & & \ddots & \\
-a_{n1}/a_{11} & 0 & \ldots & 1
\end{pmatrix}.
$$

After the first transformation the equation reads $D_1 A \vec{x} = D_1 \vec{b}$. We denote $A^{(1)} \equiv D_1 A$ and $\vec{b}^{(1)} \equiv D_1 \vec{b}$.

After $k-1$ transformations the matrix $A^{(k-1)}$ reads
$$
A^{(k-1)} = \begin{pmatrix}
a_{11} & a_{12} & \ldots & a_{1,k-1} & a_{1k} & \ldots & a_{1n} \\
0 & a_{22}^{(1)} & \ldots & a_{2,k-1}^{(1)} & a_{2k}^{(1)} & \ldots & a_{2n}^{(1)} \\
\vdots & & \ddots & & & & \vdots \\
0 & 0 & \ldots & a_{k-1,k-1}^{(k-2)} & a_{k-1,k}^{(k-2)} & \ldots & a_{k-1,n}^{(k-2)} \\
0 & 0 & \ldots & 0 & a_{kk}^{(k-1)} & \ldots & a_{k,n}^{(k-1)} \\
0 & 0 & \ldots & 0 & a_{k+1,k}^{(k-1)} & \ldots & a_{k+1,n}^{(k-1)} \\
\vdots & & & \vdots & \vdots & & \vdots \\
0 & 0 & \ldots & 0 & a_{nk}^{(k-1)} & \ldots & a_{n,n}^{(k-1)}
\end{pmatrix},
$$
where the superscript stands for the given element's number of modifications.

If $a_{kk}^{(k-1)} \ne 0$, we can select it as a pivot, calculate the multipliers $m_i^{(k)} = -a_{ik}^{(k-1)}/a_{kk}^{(k-1)}$ for $i = k+1, \ldots, n$, and modify the corresponding equations.

The pivot in the $k$th step of the transformation is an element that has been modified (by subtraction!) $k-1$ times before → loss of precision ⇒ pivoting is needed. Direct methods without pivoting are useless for general matrices!

Number of operations

We need to zero $\frac{1}{2} n(n-1)$ elements, each of which costs $\le n$ inner cycles. The total number of inner cycles is $\sim n^3$ (more precisely $\simeq \frac{1}{3} n^3$), so the complexity of this algorithm is $n^3$.

Gauss-Jordan elimination

We modify all off-diagonal elements. The matrix is transformed to the identity I, and the matrix inverse $A^{-1}$ is directly obtained as a result. More operations are needed, in particular $\simeq n^3$ inner cycles.

2.3 Pivoting (Selection of the Pivot Element)

In each step of the forward run we select the pivot.

• Full pivoting: we search the entire yet unprocessed region of the matrix for $\max |a_{ij}|$. This is slow.

• Partial pivoting: we only search the given column (column pivoting) or row (row pivoting).

• Implicit pivoting: Faster, improved strategy for column pivoting.
We compare the sizes of the elements in the given column normalized to the maximum absolute value in the given row of the original matrix.

With pivoting, direct methods can be used for a majority of matrices. For general big matrices ($n > 50$), double precision is needed. Even then, difficulties are often encountered for large, ill-conditioned matrices!

Difficulties:
1. Singular matrices
2. A singular matrix produced by loss of accuracy during the transformations
3. Loss of accuracy

2.4 LU Method

Any regular matrix $A$ can be decomposed as $A = L \cdot U$, where $L$ resp. $U$ is a lower left resp. upper right triangular matrix. The system is then solved by solving two systems with triangular matrices:
$$A\vec{x} = \vec{b} \;\Rightarrow\; (LU)\vec{x} = \vec{b} \;\Rightarrow\; L\underbrace{(U\vec{x})}_{\vec{y}} = \vec{b} \;\Rightarrow\; L\vec{y} = \vec{b},\quad U\vec{x} = \vec{y}.$$

LU decomposition
$$
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \ldots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \ldots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \ldots & a_{3n} \\
\vdots & & & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \ldots & a_{nn}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & \ldots & 0 \\
l_{21} & 1 & 0 & \ldots & 0 \\
l_{31} & l_{32} & 1 & \ldots & 0 \\
\vdots & & & \ddots & \\
l_{n1} & l_{n2} & l_{n3} & \ldots & 1
\end{pmatrix}
\begin{pmatrix}
u_{11} & u_{12} & u_{13} & \ldots & u_{1n} \\
0 & u_{22} & u_{23} & \ldots & u_{2n} \\
0 & 0 & u_{33} & \ldots & u_{3n} \\
\vdots & & & \ddots & \\
0 & 0 & 0 & \ldots & u_{nn}
\end{pmatrix}
$$

Matrix multiplication gives
$$\text{for } i \le j: \quad a_{ij} = u_{ij} + \sum_{k=1}^{i-1} l_{ik} u_{kj},$$
$$\text{for } i > j: \quad a_{ij} = l_{ij} u_{jj} + \sum_{k=1}^{j-1} l_{ik} u_{kj}.$$

Crout's algorithm performs the calculation sequentially, e.g. going left to right by columns and down each column. First
$$u_{ij} = a_{ij} - \sum_{k=1}^{i-1} l_{ik} u_{kj}, \qquad i = 1, \ldots, j,$$
uses $l$ from the preceding columns and $u$ from the preceding rows, and then
$$l_{ij} = \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} u_{kj} \right) \Big/ u_{jj}, \qquad i = j+1, \ldots, n,$$
uses $l$ from the preceding columns and $u$ from the part of the current column above the diagonal. Column pivoting is used (full pivoting is impossible). Each element $a_{ij}$ is used only once, so the resulting elements of the matrices L and U can be stored in the same array.

Properties of the LU method:
• Direct method, the same number of steps as the forward run of Gauss elimination.
• Main advantage: the decomposition does not touch (depend on) the right-hand side, thus fast calculation for multiple right-hand sides (e.g.
if these are dynamically obtained during the calculation).
• The solution can be improved by iteration.

2.5 Iterative Improvement of the Solution

We are searching for the solution $\vec{x}$ of the linear equation $A\vec{x} = \vec{b}$. First we obtain an inaccurate solution $\tilde{x}$:
$$\tilde{x} = \vec{x} + \vec{\delta x} \;\Rightarrow\; A(\vec{x} + \vec{\delta x}) = \vec{b} + \vec{\delta b} \;\Rightarrow\; A\,\vec{\delta x} = \vec{\delta b} = A\tilde{x} - \vec{b} \;\Rightarrow\; \vec{x} = \tilde{x} - \vec{\delta x}.$$

We denote by $\vec{x}_0$ the inaccurate solution from the first step, $A\vec{x}_0 \simeq \vec{b}$, and perform the iteration
$$\vec{x}_{i+1} = \vec{x}_i + (\vec{\delta x})_i, \qquad A(\vec{\delta x})_i = \vec{b} - A\vec{x}_i.$$

2.6 Conditioning of Linear System Solution

Due to input and roundoff errors, instead of $A\vec{x} = \vec{b}$ we in fact solve
$$(A + \Delta A)(\vec{x} + \Delta\vec{x}) = \vec{b} + \Delta\vec{b}.$$

• First consider the case $\Delta A = 0$:
$$\Delta\vec{x} = A^{-1}\Delta\vec{b} \;\Rightarrow\; \|\Delta\vec{x}\| \le \|A^{-1}\| \cdot \|\Delta\vec{b}\|,$$
$$A\vec{x} = \vec{b} \;\Rightarrow\; \|\vec{x}\| \ge \frac{\|\vec{b}\|}{\|A\|}.$$
Therefore, for the relative error of the solution it holds that
$$\frac{\|\Delta\vec{x}\|}{\|\vec{x}\|} \le \|A\| \cdot \|A^{-1}\| \cdot \frac{\|\Delta\vec{b}\|}{\|\vec{b}\|}.$$
The value $C_p = \|A\| \cdot \|A^{-1}\|$ is called the condition number of the matrix.

• If moreover $\Delta A \ne 0$, then
$$\frac{\|\Delta\vec{x}\|}{\|\vec{x}\|} \le C_p \, \frac{\dfrac{\|\Delta A\|}{\|A\|} + \dfrac{\|\Delta\vec{b}\|}{\|\vec{b}\|}}{1 - C_p \dfrac{\|\Delta A\|}{\|A\|}}.$$

For $C_p \gg 1$, the system is ill-conditioned, meaning that small input errors or small roundoff errors during the calculation lead to a large error of the solution.
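The condition number $C_p = \|A\| \cdot \|A^{-1}\|$ can be computed directly for a small example. A sketch in Python using the max-row-sum norm $\|A\|_I$ and the explicit $2 \times 2$ inverse formula (function names and the test matrix are my own, for illustration only):

```python
def row_sum_norm(A):
    # max row sum norm ||A||_I = max_i sum_j |a_ij|
    return max(sum(abs(a) for a in row) for row in A)

def inverse_2x2(A):
    # explicit inverse of a 2x2 matrix via the determinant (illustration only)
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# A nearly singular matrix: its rows are almost linearly dependent
A = [[1.0, 1.0],
     [1.0, 1.0001]]

Cp = row_sum_norm(A) * row_sum_norm(inverse_2x2(A))
print(Cp)  # on the order of 4e4: relative input errors may be amplified ~40000x
```

For a well-conditioned matrix such as the identity, the same computation gives $C_p = 1$, the smallest possible value.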