MA 580; Gaussian Elimination


Part IVa: Gaussian Elimination

MA 580; Gaussian Elimination
C. T. Kelley, NC State University, tim [email protected]
Version of September 7, 2016
NCSU, Fall 2016

Master Chapters 1-7 of the Matlab book. NOW!!! Read Chapter 5 of the notes. Start on Chapter 9 of the Matlab book.

Simple Gaussian Elimination Revisited

For the second time we solve

(I)   2x_1 + x_2 = 1
(II)   x_1 + x_2 = 2

This time we will record what we do.

Pivot element and multiplier

We plan to multiply (I) by 0.5 and subtract from (II). Words:

- $a_{11}$ is the pivot element.
- $l_{21} = a_{21}/a_{11} = 0.5$ is the multiplier.

I will tell you what L is soon.

Solving the system

As before, the new (II) is

(II)'   $x_2/2 = 1.5$

Now I can backsolve the problem starting with $x_2 = 3$. Record the coefficient of $x_2$ in (II)' as $u_{22} = 0.5$. Now plug into (I) to get $x_1 = -1$. In matrix form, there's something profound happening.

Encoding the history of Gaussian Elimination

Suppose I put the multipliers in a unit lower triangular matrix

$$ L = \begin{pmatrix} 1 & 0 \\ l_{21} & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0.5 & 1 \end{pmatrix} $$

and the coefficients of the backsolve in

$$ U = \begin{pmatrix} 2 & 1 \\ 0 & 0.5 \end{pmatrix}. $$

I've encoded all the work in the solve in L and U. In fact

$$ LU = \begin{pmatrix} 1 & 0 \\ 0.5 & 1 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 0.5 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} = A. $$

The Factor-Solve Paradigm

In real life, one separates the factorization A = LU from the solve.

- First factor A = LU.
- Then solve Lz = b (forward solve).
- Then solve Ux = z (back solve).

Put this together and Ax = LUx = Lz = b. So you can do several right-hand side vectors with a single factorization.

Triangular Solves

The system for the forward solve looks like

$$ Lz = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ l_{21} & 1 & 0 & \cdots & 0 \\ l_{31} & l_{32} & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ l_{N1} & l_{N2} & l_{N3} & \cdots & 1 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ z_3 \\ \vdots \\ z_N \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_N \end{pmatrix} $$

So $z_1 = b_1$, $z_2 = b_2 - l_{21} z_1$, ...

Algorithm for Lower Triangular Solve (forward)

Forward means solve for $z_i$ in order. Solve Lz = b with unit lower triangular L:

for i = 1 : N do
    z_i = b_i
    for j = 1 : i-1 do
        z_i = z_i - l_ij * z_j
    end for
end for

Bottom line: $z_i = b_i - \sum_{j=1}^{i-1} l_{ij} z_j$, $1 \le i \le N$. What if $l_{ii} \ne 1$?

Algorithm for Upper Triangular Solve (backward)

Backward means solve for $x_i$ in reverse order ($x_N$ first). Solve Ux = z:

for i = N down to 1 do
    x_i = z_i
    for j = i+1 : N do
        x_i = x_i - u_ij * x_j
    end for
    x_i = x_i / u_ii
end for

Bottom line: $x_i = \left( z_i - \sum_{j=i+1}^{N} u_{ij} x_j \right) / u_{ii}$, $N \ge i \ge 1$.
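To make the two triangular-solve algorithms concrete, here is a minimal Python/NumPy sketch (not from the slides; the function names forward_solve and back_solve are my own), checked against the 2x2 example above.

```python
import numpy as np

def forward_solve(L, b):
    """Solve L z = b for unit lower triangular L (diagonal assumed to be 1)."""
    N = len(b)
    z = np.zeros(N)
    for i in range(N):
        # z_i = b_i - sum_{j<i} l_ij z_j
        z[i] = b[i] - L[i, :i] @ z[:i]
    return z

def back_solve(U, z):
    """Solve U x = z for upper triangular U with nonzero diagonal."""
    N = len(z)
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        # x_i = (z_i - sum_{j>i} u_ij x_j) / u_ii
        x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# The 2x2 example from the slides: A = LU, b = (1, 2)^T
L = np.array([[1.0, 0.0], [0.5, 1.0]])
U = np.array([[2.0, 1.0], [0.0, 0.5]])
b = np.array([1.0, 2.0])

z = forward_solve(L, b)   # z = (1, 1.5)
x = back_solve(U, z)      # x = (-1, 3), matching the hand computation
print(x)
```

The same pair of routines is reused for every right-hand side once the factorization is in hand, which is the point of the factor-solve paradigm.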
Evaluating Cost

You can use several equivalent metrics for cost:

- floating point operations
- floating point multiplications
- "flops" (an add + a multiply + some address computation)

In MA 580, with one exception, adds and multiplies are paired and the three metrics give equal relative results.

Scalar products and matrix multiplies

$$ x^T y = \sum_{i=1}^{N} x_i y_i $$

N multiplies and N-1 adds. Cost: $N + O(1)$; the leading order is the same for the adds and the multiplies.

Matrix-vector product:

$$ (Ax)_i = \sum_{j=1}^{N} a_{ij} x_j \quad \text{for } i = 1, \dots, N $$

$N^2$ multiplies and $N^2 - N$ adds (N scalar products). Cost: $N^2 + O(N)$.

A trick

- Write the loops as nested sums and add 1 for each multiply.
- Replace the sums by integrals.
- Evaluate the integrals.

You'll be right to leading order.

Triangular Solve

We'll do lower; upper is the same. Algorithm:

for i = 1 : N do
    z_i = b_i - sum_{j=1}^{i-1} l_ij * z_j
end for

Sum: $\sum_{i=1}^{N} \sum_{j=1}^{i-1} 1$. Integral (check it out):

$$ \int_1^N \int_1^{i-1} dj \, di = N^2/2 + O(N). $$

Simple Gaussian Elimination

Here are the steps:

- Make a copy of A.
- Do Gaussian elimination as if one were solving an equation.
- Recover L and U from our (mangled) copy of A.

In the real world, we would overwrite A with the data for the factors, saving storage. Have I told you about the computer scientist who was low on memory? The guy tells me he doesn't have enough bytes.

Not enough bytes? So I bit him. (H. Youngman)

Overwriting A with L and U

lu_simple: overwrite A with the factors

for j = 1 : N do
    for i = j+1 : N do
        a_ij = a_ij / a_jj        {Compute the multipliers. Store them in the strict lower triangle of A.}
        for k = j+1 : N do
            a_ik = a_ik - a_ij * a_jk        {Do the elimination.}
        end for
    end for
end for

Cost of LU factorization

$$ \sum_{j=1}^{N} \sum_{i=j+1}^{N} \sum_{k=j+1}^{N} 1 \approx \int_1^N \int_{j+1}^N \int_{j+1}^N dk \, di \, dj = N^3/3 + O(N^2). $$

So the cost of LU is $N^3/3 + O(N^2)$.

Recovering L and U

(L, U) = recover(A)

for j = 1 : N do
    l_jj = 1
    for i = 1 : j do
        u_ij = a_ij
    end for
    for i = j+1 : N do
        l_ij = a_ij
    end for
end for

Factor-Solve Paradigm: II

- The factorization costs $O(N^3)$ work.
- The upper and lower triangular solves cost $O(N^2)$ work.

So ... the standard procedure is to separate them. This makes it much more efficient to handle multiple right hand sides, as we will do in the nonlinear equations part.
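Here is a short Python/NumPy sketch of the lu_simple and recover pseudocode above. It is my own translation (the inner elimination loop over k is vectorized), checked on the 2x2 example from the slides by verifying that LU reproduces A.

```python
import numpy as np

def lu_simple(A):
    """LU factorization without pivoting; the factors overwrite a copy of A."""
    A = A.astype(float).copy()
    N = A.shape[0]
    for j in range(N):
        for i in range(j + 1, N):
            A[i, j] = A[i, j] / A[j, j]            # multiplier l_ij, stored in the strict lower triangle
            A[i, j+1:] -= A[i, j] * A[j, j+1:]     # elimination: a_ik = a_ik - a_ij * a_jk
    return A

def recover(A):
    """Split the overwritten array into unit lower triangular L and upper triangular U."""
    L = np.tril(A, -1) + np.eye(A.shape[0])   # strict lower triangle plus unit diagonal
    U = np.triu(A)                             # upper triangle, including the diagonal
    return L, U

A = np.array([[2.0, 1.0], [1.0, 1.0]])         # the example from the slides
L, U = recover(lu_simple(A))
print(L)            # [[1.  0. ], [0.5 1. ]]
print(U)            # [[2.  1. ], [0.  0.5]]
print(L @ U - A)    # zero matrix: LU = A
```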
Mission Accomplished?

Not exactly. Does

$$ A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} $$

have an LU factorization? No, but solving

$$ Ax = \begin{pmatrix} x_2 \\ x_1 \end{pmatrix} = b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} $$

is pretty easy. We've clearly missed something.

Partial pivoting, aka column pivoting

The easy fix is to interchange the equations (rows) in the problem. Then you get $A'x = b'$ with

$$ A' = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} $$

and $b' = (b_2, b_1)^T$. Same answer, different problem.

Proposed algorithm: When you get a zero pivot, find something below it that is not zero, and swap those rows. Then make sure you don't forget what you did.

Are we there yet?

How about

$$ A = \begin{pmatrix} 0.1 & 10 \\ 1 & 0 \end{pmatrix} $$

and $b = (20, -1)^T$? Exact solution: $x = (-1, 2.01)^T$.

Recall the Toy Floating Point Number System

Here's a radix-10 system:

- two-digit mantissa
- exponent range: -2 : 2
- largest float = $0.99 \times 10^2 = 99$
- eps(0) = $0.10 \times 10^{-2}$
- eps(1) = $1.1 - 1 = 0.1$
- round to nearest
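The 2x2 example above shows why a nonzero but small pivot is almost as bad as a zero one once we round. As an illustration (my own sketch, not from the slides), here is a Python/NumPy simulation that rounds every arithmetic result to two significant digits, mimicking the toy mantissa (the exponent range is ignored for simplicity), and runs the elimination with and without the row interchange.

```python
import numpy as np

def fl(x):
    """Round x to two significant decimal digits (toy arithmetic; exponent range ignored)."""
    if x == 0.0:
        return 0.0
    e = np.floor(np.log10(abs(x)))
    return np.round(x / 10**(e - 1)) * 10**(e - 1)

def solve2x2(A, b):
    """Gaussian elimination on a 2x2 system, rounding every operation with fl."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    m = fl(A[1, 0] / A[0, 0])                   # multiplier
    A[1, 1] = fl(A[1, 1] - fl(m * A[0, 1]))     # eliminate
    b[1] = fl(b[1] - fl(m * b[0]))              # forward solve
    x1 = fl(b[1] / A[1, 1])                     # back solve
    x0 = fl(fl(b[0] - fl(A[0, 1] * x1)) / A[0, 0])
    return np.array([x0, x1])

A = np.array([[0.1, 10.0], [1.0, 0.0]])
b = np.array([20.0, -1.0])

print(solve2x2(A, b))               # small pivot 0.1: about ( 0., 2.), x_1 is completely wrong
print(solve2x2(A[::-1], b[::-1]))   # rows interchanged:     (-1., 2.), correct to two digits
```

Without the interchange the multiplier is 10, the small pivot swamps the first equation, and the computed first component is 0 instead of -1; with the rows swapped both components are correct to the two digits the arithmetic carries.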
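For completeness, here is a sketch of LU factorization with partial pivoting in the same style: at each stage it pivots on the entry of largest magnitude in the current column (the standard rule, slightly stronger than the zero-pivot swap proposed above) and records the interchanges in a permutation vector. The function name and interface are mine; in practice one calls Matlab's lu or LAPACK.

```python
import numpy as np

def lu_partial_pivot(A):
    """LU factorization with partial pivoting.
    Returns the overwritten factor array and a row-index vector p so that A[p] = L @ U."""
    A = A.astype(float).copy()
    N = A.shape[0]
    p = np.arange(N)                       # records the row interchanges
    for j in range(N):
        # pick the largest-magnitude entry in column j, on or below the diagonal
        k = j + np.argmax(np.abs(A[j:, j]))
        if A[k, j] == 0.0:
            raise ValueError("matrix is singular")
        if k != j:                         # swap rows j and k, and remember it
            A[[j, k]] = A[[k, j]]
            p[[j, k]] = p[[k, j]]
        for i in range(j + 1, N):
            A[i, j] /= A[j, j]                      # multiplier
            A[i, j+1:] -= A[i, j] * A[j, j+1:]      # elimination
    return A, p

# the two troublesome matrices from the slides
for A in (np.array([[0.0, 1.0], [1.0, 0.0]]),
          np.array([[0.1, 10.0], [1.0, 0.0]])):
    F, p = lu_partial_pivot(A)
    L = np.tril(F, -1) + np.eye(2)
    U = np.triu(F)
    print(np.allclose(A[p], L @ U))    # True: the row-permuted A factors as LU
```

Both matrices now factor without incident; the permutation vector p is the "don't forget what you did" bookkeeping from the proposed algorithm.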