Matrix Computation

Chapter 3  Matrix Computation

The main objective of this chapter is to solve the linear system $Ax = f$, where $A$ is a square $n \times n$ matrix, and $x$ and $f$ are vectors of order $n$.

3.1 Basics

Matrix.
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} = [a_{ij}], \qquad i, j = 1, 2, \ldots, n.$$

Column Vectors.
$$x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \qquad f = \begin{pmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{pmatrix}.$$

Row Vectors.
$$x^T = (x_1, x_2, \ldots, x_n), \qquad f^T = (f_1, f_2, \ldots, f_n).$$

Inner Products.
$$a^T b = (a_1, a_2, \ldots, a_n) \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = \sum_{i=1}^{n} a_i b_i \qquad \leftarrow \text{scalar}.$$

Operation count: $n$ multiplications, $n - 1$ additions $\approx 2n$ operations.

Outer Products.
$$ab^T = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix} (b_1, b_2, \ldots, b_n) = \begin{pmatrix} a_1 b_1 & a_1 b_2 & \cdots & a_1 b_n \\ a_2 b_1 & a_2 b_2 & \cdots & a_2 b_n \\ \vdots & \vdots & & \vdots \\ a_n b_1 & a_n b_2 & \cdots & a_n b_n \end{pmatrix} \qquad \leftarrow \text{matrix}.$$

Operation count: $n^2$ multiplications $\approx n^2$ operations.

3.1.1 Setting up matrix problems in MATLAB

Hilbert matrix. Suppose we want to construct a matrix $A$ such that
$$A_{ij} = \frac{1}{i + j - 1}.$$

MATLAB

    A = zeros(n,n);
    for i = 1:n,
      for j = 1:n,
        A(i,j) = 1/(i+j-1);
      end;
    end;

This is known as a Hilbert matrix, and can also be created in MATLAB by A = hilb(n). For n = 5, this matrix is of the form
$$A = \begin{pmatrix} 1 & 1/2 & 1/3 & 1/4 & 1/5 \\ 1/2 & 1/3 & 1/4 & 1/5 & 1/6 \\ 1/3 & 1/4 & 1/5 & 1/6 & 1/7 \\ 1/4 & 1/5 & 1/6 & 1/7 & 1/8 \\ 1/5 & 1/6 & 1/7 & 1/8 & 1/9 \end{pmatrix}.$$

Observing that the Hilbert matrix is symmetric, the above MATLAB code fragment can be modified to take advantage of this symmetry:

MATLAB

    A = zeros(n,n);
    for i = 1:n,
      for j = i:n,
        A(i,j) = 1/(i+j-1);
        A(j,i) = A(i,j);
      end;
    end;

Another matrix. As another example of matrix set-up, consider the following MATLAB code:

MATLAB

    P = zeros(n,n);
    P(:,1) = ones(n,1);        % P(:,1) = 1st column of P
    for i = 2:n,
      for j = 2:i,
        P(i,j) = P(i-1,j-1) + P(i-1,j);
      end;
    end;

For n = 5, the above code generates the following matrix:
$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 \\ 1 & 3 & 3 & 1 & 0 \\ 1 & 4 & 6 & 4 & 1 \end{pmatrix}.$$

Next, we present two special matrices that can be created from a vector of n components, i.e., the elements of the matrix depend only on n parameters.

Vandermonde matrix. This is a special type of matrix that can be created from a vector of n components. Given a vector
$$x = (x_1, x_2, \ldots, x_n)^T,$$
we can create a matrix V as follows:

MATLAB

    n = length(x);
    V(:,1) = ones(n,1);
    for j = 2:n,
      V(:,j) = x .* V(:,j-1);
    end;

Here, the jth column of V is obtained by element-wise multiplication of the vector x with the (j-1)th column. For n = 4, V is of the form
$$V = \begin{pmatrix} 1 & x_1 & x_1^2 & x_1^3 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 1 & x_3 & x_3^2 & x_3^3 \\ 1 & x_4 & x_4^2 & x_4^3 \end{pmatrix}.$$

Circulant matrix. Another matrix that can be created from a vector of size n is the circulant matrix. Given a vector
$$a = (a_1, a_2, \ldots, a_n)^T,$$
we can create a circulant matrix C as follows:

MATLAB

    function C = circulant(a)
    n = length(a);
    C(1,:) = a';
    for i = 2:n,
      C(i,:) = [C(i-1,n) C(i-1,1:n-1)];
    end;

Here, the ith row of C is obtained by permuting the (i-1)th row: the last element of the (i-1)th row is moved to the front of the ith row. For n = 4, C is of the form
$$C = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 \\ a_4 & a_1 & a_2 & a_3 \\ a_3 & a_4 & a_1 & a_2 \\ a_2 & a_3 & a_4 & a_1 \end{pmatrix}.$$

3.1.2 Structure of Matrices

The nonzero elements of a matrix determine its structure. The following patterns occur commonly (x denotes a possibly nonzero entry):

    Diagonal          Tridiagonal
    x 0 0 0 0         x x 0 0 0
    0 x 0 0 0         x x x 0 0
    0 0 x 0 0         0 x x x 0
    0 0 0 x 0         0 0 x x x
    0 0 0 0 x         0 0 0 x x

    Lower triangular    Upper triangular    Upper Hessenberg
    x 0 0 0 0           x x x x x           x x x x x
    x x 0 0 0           0 x x x x           x x x x x
    x x x 0 0           0 0 x x x           0 x x x x
    x x x x 0           0 0 0 x x           0 0 x x x
    x x x x x           0 0 0 0 x           0 0 0 x x
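These structured forms can also be obtained from a full matrix using MATLAB's built-in diag, tril, and triu functions. The following is only a minimal sketch: the test matrix B and its size are arbitrary and used purely for illustration.

MATLAB

    % Sketch: extract common structures from a full 5-by-5 test matrix.
    % The random matrix B is illustrative only.
    n = 5;
    B = rand(n,n);
    D = diag(diag(B));          % diagonal part of B
    T = triu(tril(B,1),-1);     % tridiagonal part (one band below and above)
    L = tril(B);                % lower triangular part
    U = triu(B);                % upper triangular part
    H = triu(B,-1);             % upper Hessenberg part (keep one subdiagonal)

The second argument of tril and triu selects the bounding diagonal, which is how the banded and Hessenberg patterns above are produced.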
A diagonal matrix can be constructed from a vector d, where d = [d1 d2 d3 ... dn], by the MATLAB statement D = diag(d). This constructs the diagonal matrix D s.t.
$$D = \begin{pmatrix} d_1 & & & \\ & d_2 & & \\ & & \ddots & \\ & & & d_n \end{pmatrix}.$$

MATLAB also provides a way to specify diagonal entries other than the main diagonal. For example, consider the following code:

MATLAB

    m = 3;  k = 2;
    v = [10 20 30];
    A = diag(v,k);

This constructs a matrix A with the vector v forming the kth diagonal of A:
$$A = \begin{pmatrix} 0 & 0 & 10 & 0 & 0 \\ 0 & 0 & 0 & 20 & 0 \\ 0 & 0 & 0 & 0 & 30 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

Also, the statement v = diag(A,k) extracts the kth diagonal in the form of a vector. A negative k refers to diagonals below the main diagonal, while k = 0 refers to the main diagonal.

Block structures. Matrices with block structure can also be constructed in MATLAB. A block-structured matrix consists of elements that are in turn smaller matrices. For example, a block-structured matrix of the form
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \qquad A_{11}: 3 \times 3, \quad A_{12}: 3 \times 2, \quad A_{21}: 2 \times 3, \quad A_{22}: 2 \times 2,$$
can be constructed by the following code:

MATLAB

    A11 = rand(3,3);  A12 = rand(3,2);
    A21 = [1 2 3 ; 4 5 6];  A22 = [7 8 ; 9 10];
    A = [A11 A12 ; A21 A22];

3.2 Matrix Operations

Matrix-Vector Multiplication. Given an m × n matrix A and a vector x of size n, we wish to compute the vector y of size m s.t. y = Ax, where
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.$$
The vector y has elements
$$y_i = \sum_{k=1}^{n} a_{ik} x_k, \qquad i = 1, 2, \ldots, m.$$

Operation count: $mn$ multiplications, $m(n-1)$ additions $\approx 2mn$ operations.

MATLAB

    y = zeros(m,1);
    for i = 1:m,
      for j = 1:n,
        y(i) = y(i) + A(i,j) * x(j);
      end;
    end;

This product can be computed in a number of ways.

Alternative 1 (row-oriented):
$$A = \begin{pmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad \Rightarrow \quad y = \begin{pmatrix} a_1^T x \\ a_2^T x \\ \vdots \\ a_m^T x \end{pmatrix}.$$
The corresponding MATLAB program uses m inner products:

MATLAB

    for i = 1:m,
      y(i) = A(i,:) * x;
    end;

Alternative 2 (column-oriented):
$$A = (b_1, b_2, \ldots, b_n), \quad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \quad \Rightarrow \quad y = \sum_{j=1}^{n} x_j b_j.$$
The corresponding MATLAB program uses a summation of n vectors, i.e., "scalar * vector + vector":

MATLAB

    for j = 1:n,
      y = y + x(j) * A(:,j);
    end;

Matrix-Matrix Multiplication. Given an m × r matrix A and an r × n matrix B, we wish to compute the product C of size m × n s.t.
$$C = A B, \qquad (m \times n) = (m \times r)(r \times n).$$
The matrix C can be represented as
$$C = [c_{ij}], \qquad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n,$$
where
$$c_{ij} = \sum_{k=1}^{r} a_{ik} b_{kj}.$$

Operation count: $mnr$ multiplications, $mn(r-1)$ additions $\approx 2mnr$ operations.

MATLAB

    C = zeros(m,n);
    for j = 1:n,
      for i = 1:m,
        for k = 1:r,
          C(i,j) = C(i,j) + A(i,k)*B(k,j);
        end;
      end;
    end;

Alternative 1 (inner products):
$$A = \begin{pmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{pmatrix}, \quad B = (b_1, b_2, \ldots, b_n) \quad \Rightarrow \quad c_{ij} = a_i^T b_j.$$

MATLAB

    for j = 1:n,
      for i = 1:m,
        C(i,j) = A(i,:)*B(:,j);
      end;
    end;

Alternative 2 (outer products):
$$A = (g_1, g_2, \ldots, g_r), \quad B = \begin{pmatrix} h_1^T \\ h_2^T \\ \vdots \\ h_r^T \end{pmatrix} \quad \Rightarrow \quad C = \sum_{k=1}^{r} g_k h_k^T \qquad \leftarrow m \times n \text{ matrix}.$$

MATLAB

    for k = 1:r,
      C = C + A(:,k)*B(k,:);
    end;

Alternative 3 (matrix-vector products):
$$B = (b_1, b_2, \ldots, b_n) \quad \Rightarrow \quad C = (Ab_1, Ab_2, \ldots, Ab_n).$$

MATLAB

    for j = 1:n,
      C(:,j) = A*B(:,j);
    end;

3.2.1 Matrix Norm, Inverse, and Sensitivity of Linear Systems

Inverse of a Matrix. If A is a square matrix of order n, then A is nonsingular if there exists a unique n × n matrix X such that
$$AX = XA = I = \begin{pmatrix} 1 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{pmatrix}.$$
The inverse, X, is also denoted by $A^{-1}$. The following property holds for the inverse of a product of two matrices:
$$(AB)^{-1} = B^{-1} A^{-1}.$$
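As a quick numerical sanity check of this identity, one might run the following sketch. The matrix size and the random data are arbitrary, and inv is used only for the purpose of the check (explicit inverses are rarely formed when solving linear systems):

MATLAB

    % Sketch: check that inv(A*B) equals inv(B)*inv(A) on random test data.
    % The size n and the random matrices are illustrative only.
    n = 4;
    A = rand(n,n);
    B = rand(n,n);
    E = inv(A*B) - inv(B)*inv(A);
    disp(norm(E))               % should be of the order of roundoff error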
Note also that $(AB)^T = B^T A^T$.

The identity matrix I is often expressed as
$$I = (e_1, e_2, \ldots, e_n), \qquad e_i^T = (0, 0, \ldots, 0, 1, 0, \ldots, 0),$$
i.e., $e_i$ has a one in the ith position and zeros everywhere else.

Vector and matrix norms. Suppose that x' and x'' are two approximations of x. Let
$$x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad x' = \begin{pmatrix} 1.01 \\ 1.01 \end{pmatrix}, \qquad x'' = \begin{pmatrix} 1 \\ 1.02 \end{pmatrix}.$$
To decide which of the two is the better approximation, we need a single number that measures the size of the error vectors x − x' and x − x''; this is precisely what a vector norm provides.
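A minimal sketch of how these approximations might be compared numerically is shown below; the particular norms used (1, 2, and infinity) are chosen only for illustration:

MATLAB

    % Sketch: measure the error of each approximation in several norms.
    % The choice of norms is illustrative.
    x   = [1; 1];
    xp  = [1.01; 1.01];         % x'
    xpp = [1; 1.02];            % x''
    disp([norm(x - xp, 1)   norm(x - xp, 2)   norm(x - xp, Inf)])
    disp([norm(x - xpp, 1)  norm(x - xpp, 2)  norm(x - xpp, Inf)])

Each norm condenses an error vector into a single number, making it possible to say which approximation is closer to x.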