Part B. Linear Algebra, Vector Calculus
Chap. 6: Linear Algebra; Matrices, Vectors, Determinants, Linear Systems of Equations
- Theory and application of linear systems of equations, linear transformations, and eigenvalue problems.
- Vectors, matrices, determinants, ...

6.1 Basic Concepts
- Basic concepts and rules of matrix and vector algebra.
- Matrix: a rectangular array of numbers (or functions), called its elements or entries. The horizontal lines of entries are rows, the vertical lines are columns. For example,
\[
\begin{bmatrix} 2 & 0.4 & 8 \\ -5 & -32 & 0 \end{bmatrix}
\]
is a 2 x 3 matrix.
Reference: "Matrix Computations" by G. H. Golub and C. F. Van Loan (1996).

Ex. 1) The linear system
\[
\begin{aligned}
5x - 2y + z &= 0 \\
3x \phantom{{}-2y} + 4z &= 0
\end{aligned}
\]
has the coefficient matrix
\[
A = \begin{bmatrix} 5 & -2 & 1 \\ 3 & 0 & 4 \end{bmatrix}.
\]

A general system of n equations in n unknowns,
\[
f_i(x_1, x_2, \ldots, x_n) = 0:\qquad a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = b_i, \qquad i = 1, \ldots, n,
\]
can be written in matrix form Ax = b:
\[
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}.
\]

General Notations and Concepts
- Matrices are written A, B; vectors a, b.
- An m x n matrix A = [a_{jk}] has m rows (one per equation) and n columns (one per variable):
\[
A = [a_{jk}] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}.
\]
- m = n: square matrix; the entries a_{ii} form the principal (main) diagonal of the matrix (important for simultaneous linear equations).
- A matrix that is not square is a rectangular matrix.

Vectors
- Row vector: b = [b_1, ..., b_m]. Column vector: c = [c_1, ..., c_m]^T, written as a column. Transposition converts one into the other.

Transposition
- The transpose of an m x n matrix A = [a_{jk}] is the n x m matrix
\[
A^T = [a_{kj}] = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{bmatrix}.
\]
- Symmetric matrices (square): A^T = A, i.e. a_{ij} = a_{ji}.
- Skew-symmetric matrices (square): A^T = -A, i.e. a_{ij} = -a_{ji}.
- Properties of the transpose:
\[
(A^T)^T = A,\quad (A + B)^T = A^T + B^T,\quad (AB)^T = B^T A^T,\quad (kA)^T = kA^T,\quad (ABC)^T = C^T B^T A^T.
\]

Equality, Addition, Scalar Multiplication
- Equality of matrices: A = B means a_{ij} = b_{ij} (same size).
- Matrix addition: C = A + B means c_{ij} = a_{ij} + b_{ij} (same size).
- Scalar multiplication: (-k)A = -(kA).
- Rules:
\[
A + B = B + A,\quad (U + V) + W = U + (V + W),\quad A + 0 = A,\quad A + (-A) = 0,
\]
\[
c(A + B) = cA + cB,\quad (c + k)A = cA + kA,\quad c(kA) = (ck)A.
\]

6.2 Matrix Multiplication
- Multiplication of matrices by matrices: for A (n x m) and B (m x l), the product C = AB is n x l with
\[
c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}
\]
(the number of columns of the first factor A must equal the number of rows of the second factor B); see the code sketch below.

Difference from Multiplication of Numbers
- Matrix multiplication is not commutative in general: AB ≠ BA.
- AB = 0 does not necessarily imply A = 0 or B = 0 or BA = 0.
- AC = AD does not necessarily imply C = D (even when A ≠ 0).
- Rules that do hold: (kA)B = k(AB) = A(kB), A(BC) = (AB)C, (A + B)C = AC + BC.

Special Matrices
- Upper triangular matrix: all entries below the main diagonal are zero.
- Lower triangular matrix: all entries above the main diagonal are zero.
- Banded matrix (especially tridiagonal): arises from finite difference solutions of PDEs or ODEs.
- A matrix can be decomposed into lower and upper triangular factors: LU decomposition.
- Diagonal matrix: a_{ii} ≠ 0, a_{ij} = 0 for i ≠ j.
- Symmetric matrix: a_{ij} = a_{ji}.
- Identity (unit) matrix I: I_{ii} = 1, I_{ij} = 0 for i ≠ j.

Inner Product of Vectors
\[
\mathbf{a}\cdot\mathbf{b} = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}
= \sum_{k=1}^{n} a_k b_k.
\]

Product in Terms of Row and Column Vectors
- For A (m x n) and B (n x p), the product C = AB (m x p) has entries c_{jk} = a_j · b_k = (jth row vector of A) · (kth column vector of B):
\[
AB = \begin{bmatrix}
a_1\cdot b_1 & a_1\cdot b_2 & \cdots & a_1\cdot b_p \\
a_2\cdot b_1 & a_2\cdot b_2 & \cdots & a_2\cdot b_p \\
\vdots & & & \vdots \\
a_m\cdot b_1 & a_m\cdot b_2 & \cdots & a_m\cdot b_p
\end{bmatrix}.
\]

Linear Transformations
- If y = Ax and x = Bw, then y = ABw = Cw.
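The entrywise formula c_{ij} = Σ_k a_{ik} b_{kj} translates directly into a triple loop. Below is a minimal Python sketch; the function name `matmul`, the plain list-of-lists representation, and the example matrices are illustrative additions, not from the notes.

```python
def matmul(A, B):
    """Multiply an n x m matrix A by an m x l matrix B using c_ij = sum_k a_ik * b_kj."""
    n, m = len(A), len(A[0])
    m2, l = len(B), len(B[0])
    if m != m2:
        raise ValueError("columns of A must equal rows of B")
    C = [[0.0] * l for _ in range(n)]
    for i in range(n):
        for j in range(l):
            for k in range(m):   # inner product of row i of A with column j of B
                C[i][j] += A[i][k] * B[k][j]
    return C

# Small check that AB != BA in general:
A = [[1.0, 2.0], [0.0, 1.0]]
B = [[0.0, 1.0], [1.0, 0.0]]
print(matmul(A, B))   # [[2.0, 1.0], [1.0, 0.0]]
print(matmul(B, A))   # [[0.0, 1.0], [1.0, 2.0]]
```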
6.3 Linear Systems of Equations: Gauss Elimination
- The most important practical use of matrices: the solution of linear systems of equations.

Linear System, Coefficient Matrix, Augmented Matrix
A system of m equations in n unknowns,
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n &= b_m,
\end{aligned}
\]
is written in matrix form as Ax = b:
\[
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}.
\]
- b = 0: homogeneous system (this has at least one solution, namely x = 0).
- b ≠ 0: nonhomogeneous system.
- Augmented matrix:
\[
[A \mid b] = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{bmatrix}.
\]

Ex. 1) Geometric interpretation: existence of solutions (figure: two lines in the x1-x2 plane)
- Lines intersecting at one point: precisely one solution.
- Parallel lines (same slope, different intercept): no solution (singular system).
- Coincident lines: infinitely many solutions (singular system).
- Nearly parallel lines (very close to singular): ill-conditioned system; the solution is difficult to identify and extremely sensitive to round-off error.

Gauss Elimination
- The standard method for solving linear systems; the elimination of unknowns can be extended to large sets of equations.
- Two stages: forward elimination of unknowns, then solution through back substitution.
- During forward elimination the first row is the pivot row (a_{11} is the pivot element); once the first column has been eliminated, the second row becomes the pivot row (a'_{22} is the pivot element), and so on, until the system is reduced to upper triangular form, e.g. for four unknowns:
\[
\begin{bmatrix} 1 & a'_{12} & a'_{13} & a'_{14} \\ 0 & 1 & a'_{23} & a'_{24} \\ 0 & 0 & 1 & a'_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{bmatrix}.
\]
- Back substitution then proceeds upward from the last equation (see the code sketch below):
\[
x_4 = b'_4, \qquad x_3 + a'_{34}x_4 = b'_3 \;\Rightarrow\; x_3 = b'_3 - a'_{34}x_4, \qquad \text{in general}\quad x_i = b'_i - \sum_{j=i+1}^{N} a'_{ij}x_j.
\]

Elementary Row Operations: Row-Equivalent Systems
- Interchange of two rows.
- Addition of a constant multiple of one row to another row.
- Multiplication of a row by a nonzero constant c.

- Overdetermined: more equations than unknowns. Determined: equations = unknowns. Underdetermined: fewer equations than unknowns.
- Consistent: the system has at least one solution. Inconsistent: the system has no solution at all.
- Homogeneous solution x_h: A x_h = 0. Particular (nonhomogeneous) solution x_p: A x_p = b. Since A(x_h + x_p) = A x_h + A x_p = 0 + b = b, the sum x_p + x_h is also a solution of the nonhomogeneous system.
- Homogeneous systems are always consistent (why? the trivial solution x = 0 always exists).
- Theorem: a homogeneous system possesses nontrivial solutions if the number m of equations is less than the number n of unknowns (m < n).

Echelon Form: Information Resulting from It
Example: coefficient matrix and augmented matrix after elimination,
\[
\begin{bmatrix} 3 & 2 & 1 \\ 0 & -\tfrac{1}{3} & \tfrac{1}{3} \\ 0 & 0 & 0 \end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix} 3 & 2 & 1 & 3 \\ 0 & -\tfrac{1}{3} & \tfrac{1}{3} & -2 \\ 0 & 0 & 0 & 12 \end{bmatrix};
\]
the last row of the augmented matrix reads 0 = 12, so this system has no solution.

In general, elimination reduces the system to the echelon form
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
c_{22}x_2 + \cdots + c_{2n}x_n &= \tilde b_2 \\
&\;\;\vdots \\
c_{rr}x_r + \cdots + c_{rn}x_n &= \tilde b_r \\
0 &= \tilde b_{r+1} \\
&\;\;\vdots \\
0 &= \tilde b_m.
\end{aligned}
\]
(a) No solution: if r < m and one of the numbers \tilde b_{r+1}, ..., \tilde b_m is not zero.
(b) Precisely one solution: if r = n and \tilde b_{r+1}, ..., \tilde b_m, if present, are zero.
(c) Infinitely many solutions: if r < n and \tilde b_{r+1}, ..., \tilde b_m, if present, are zero.

Existence and uniqueness of the solutions: next issue.
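Returning to the elimination algorithm described above, here is a minimal Python sketch of forward elimination followed by back substitution. The function name `gauss_solve` is illustrative, and the partial-pivoting step (choosing the largest available pivot by row interchange, one of the elementary row operations listed above) is my addition rather than part of the notes.

```python
def gauss_solve(A, b):
    """Solve Ax = b by forward elimination and back substitution.

    A is an n x n list of lists, b a list of length n; both are modified in place.
    Rows are interchanged (partial pivoting) to avoid dividing by a zero pivot.
    """
    n = len(A)
    # Forward elimination: reduce A to upper triangular form.
    for k in range(n - 1):
        # Pivot row: largest |a_ik| on or below the diagonal in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0.0:
            raise ValueError("matrix is singular")
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution: x_i = (b_i - sum_{j>i} a_ij x_j) / a_ii, moving upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Example: 2x1 + x2 = 5, x1 + 3x2 = 10  ->  x1 = 1, x2 = 3
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```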
Gauss-Jordan Elimination
- Particularly well suited to digital computation.
- Starting from the upper triangular form produced by Gauss elimination, the unknowns are also eliminated above the diagonal. For example, multiplying the second row by a'_{12} and subtracting the result from the first row clears the (1,2) entry; continuing in this way reduces the system to the identity:
\[
\begin{bmatrix} 1 & a'_{12} & a'_{13} & a'_{14} \\ 0 & 1 & a'_{23} & a'_{24} \\ 0 & 0 & 1 & a'_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b'_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{bmatrix}
\;\longrightarrow\;
\begin{bmatrix} 1 & 0 & a''_{13} & a''_{14} \\ 0 & 1 & a'_{23} & a'_{24} \\ 0 & 0 & 1 & a'_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b''_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{bmatrix}
\;\longrightarrow\;\cdots\;\longrightarrow\;
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b''_1 \\ b''_2 \\ b''_3 \\ b''_4 \end{bmatrix},
\]
so that x_1 = b''_1, x_2 = b''_2, x_3 = b''_3, x_4 = b''_4 can be read off directly.
- The most efficient approach is to eliminate all elements both above and below the pivot element, clearing to zero the entire column containing the pivot, except of course for the 1 in the pivot position on the main diagonal.

LU Decomposition
Gauss elimination:
- Forward elimination + back substitution (high computational effort).
- Inefficient when solving equations with the same coefficient matrix A but different right-hand-side constants B.
LU decomposition:
- Well suited to situations where many right-hand sides B must be evaluated for a single A: the elimination step can be formulated so that it involves only operations on the matrix of coefficients A.
- Provides an efficient means to compute the matrix inverse.

(1) LU Decomposition of Ax = B
- LU decomposition separates the time-consuming elimination of A from the manipulations of the right-hand side B; once A has been "decomposed", multiple right-hand-side vectors can be evaluated efficiently.

Overview of LU decomposition (3 x 3 case), starting from Ax − B = 0:
- Upper triangular form, Ux − D = 0:
\[
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
-
\begin{bmatrix} d_1 \\ d_2 \\ d_3 \end{bmatrix}
= 0.
\]
- Assume a lower triangular matrix with 1's on the diagonal:
\[
L = \begin{bmatrix} 1 & 0 & 0 \\ l_{21} & 1 & 0 \\ l_{31} & l_{32} & 1 \end{bmatrix}.
\]
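To make the reuse of a single factorization concrete, here is a minimal Python sketch of an LU decomposition with a unit diagonal on L (as assumed above; this is the Doolittle convention), followed by forward and back substitution for each right-hand side. The function names are illustrative and pivoting is omitted for brevity, so a zero pivot simply raises an error.

```python
def lu_decompose(A):
    """Factor A into L (unit lower triangular) and U (upper triangular), A = LU."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]          # U starts as a copy of A and is reduced in place
    for k in range(n - 1):
        if U[k][k] == 0.0:
            raise ValueError("zero pivot; pivoting would be required")
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]
            L[i][k] = factor           # store the elimination multiplier in L
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution Ld = b, then back substitution Ux = d."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):                 # forward substitution (L has unit diagonal)
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):     # back substitution
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x

# Decompose once, then reuse the factors for several right-hand sides.
A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)
for b in ([10.0, 12.0], [1.0, 0.0]):
    print(lu_solve(L, U, b))           # [1.0, 2.0] and [-0.5, 1.0]
```

The point, as in the notes, is that the costly elimination (`lu_decompose`) is done once on A alone, while each new right-hand side only requires the cheap substitution step.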