Processing 20010302 182 Proceedings

Total Pages: 16

File Type: pdf; Size: 1020 KB

Part III (June 23)

DISTRIBUTION STATEMENT A: Approved for Public Release; Distribution Unlimited.

Faculdade de Engenharia da Universidade do Porto, 21-23 June 2000

REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503.

1. AGENCY USE ONLY: (Leave blank)
2. REPORT DATE: 12 January 2001
3. REPORT TYPE AND DATES COVERED: Conference Proceedings
4. TITLE AND SUBTITLE: VECPAR - 4th International Meeting on Vector and Parallel Processing
5. FUNDING NUMBERS: F61775-00-WF071
6. AUTHOR(S): Conference Committee
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): FEUP - Faculdade de Engenharia da Universidade do Porto, Rua dos Bragas, Porto 4050-123, Portugal
8. PERFORMING ORGANIZATION REPORT NUMBER: N/A
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): EOARD, PSC 802 Box 14, FPO 09499-0200
10. SPONSORING/MONITORING AGENCY REPORT NUMBER: CSP 00-5071
11. SUPPLEMENTARY NOTES:
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited.
12b. DISTRIBUTION CODE: A
13. ABSTRACT (Maximum 200 words): The Final Proceedings for VECPAR - 4th International Meeting on Vector and Parallel Processing, 21 June 2000 - 23 June 2000, an interdisciplinary conference covering topics in all areas of vector, parallel and distributed computing applied to a broad range of research disciplines, with a focus on engineering. The principal topics include: Cellular Automata; Computational Fluid Dynamics; Crash and Structural Analysis; Data Warehousing and Data Mining; Distributed Computing and Operating Systems; Fault-Tolerant Systems; Imaging and Graphics; Interconnection Networks; Languages and Tools; Numerical Methods; Parallel and Distributed Algorithms; Real-Time and Embedded Systems; Reconfigurable Systems; Linear Algebra Algorithms and Software for Large Scientific Problems; Computer Organization; Image Analysis and Synthesis; and Nonlinear Problems.
14. SUBJECT TERMS: EOARD; Modelling & Simulation; Parallel Computing; Distributed Computing
15. NUMBER OF PAGES: Three volumes, 1016 pages total (plus TOC and front matter)
16. PRICE CODE: N/A
17. SECURITY CLASSIFICATION OF REPORT: UNCLASSIFIED
18. SECURITY CLASSIFICATION OF THIS PAGE: UNCLASSIFIED
19. SECURITY CLASSIFICATION OF ABSTRACT: UNCLASSIFIED
20. LIMITATION OF ABSTRACT: UL

NSN 7540-01-280-5500. Standard Form 298 (Rev. 2-89), prescribed by ANSI Std. 239-18; 298-102.
VECPAR'2000
4th International Meeting on Vector and Parallel Processing
21-23 June 2000

Conference Proceedings, Part III (Friday, June 23)

FEUP - Faculdade de Engenharia da Universidade do Porto

Preface

This book is part of a 3-volume set containing the written versions of all invited talks, papers and posters presented at VECPAR'2000 - 4th International Meeting on Vector and Parallel Processing. The Preface and the Table of Contents are identical in all 3 volumes (one for each day of the conference), which are numbered in sequence. Papers are grouped according to the session where they were presented. The conference programme added up to a total of 6 plenary and 20 parallel sessions, comprising 6 invited talks, 66 papers and 11 posters.

It is our great pleasure to express our gratitude to all the people who helped us during the preparation of this event. The expertise provided by the Scientific Committee was crucial in the selection among the more than 100 abstracts submitted for possible presentation. Even at the risk of forgetting some people, we would like to thank the following, whose collaboration went well beyond the call of duty: Fernando Jorge and Vitor Carvalho, for the creation and maintenance of the conference web page; Alice Silva, for the secretarial work; Dr. Jaime Villate, for his assistance in organisational matters; and Nuno Sousa and Alberto Mota, for authoring the procedure for abstract submission via the web.

Porto, June 2000
The Organising and Scientific Committee Chairs

VECPAR'2000 was held at Fundação Dr. António Cupertino de Miranda, in Porto (Portugal), from 21 to 23 June 2000. VECPAR is a series of conferences on vector and parallel computing, organised by the Faculty of Engineering of the University of Porto (FEUP) since 1993.

Committees

Organising Committee
A. Augusto de Sousa (Chair)
José Couto Marques (Co-chair)
José Magalhães Cruz

Local Advisory Committee
Carlos Costa
Raimundo Delgado
José Marques dos Santos
Fernando Nunes Ferreira
Lígia Ribeiro
José Silva Matos
Paulo Tavares de Castro
Raul Vidal

Scientific Committee
J. Palma (Chair), Univ. do Porto, Portugal
J. Dongarra (Co-chair), Univ. of Tennessee and Oak Ridge National Lab., USA
V. Hernandez (Co-chair), Univ. Politécnica de Valencia, Spain
P. Amestoy, ENSEEIHT-IRIT, Toulouse, France
T. Barth, NASA Ames Research Center, USA
A. Campilho, Univ. do Porto, Portugal
G. Candler, Univ. of Minnesota, USA
A. Chalmers, Univ. of Bristol, England
B. Chapman, Univ. of Southampton, England
A. Coutinho, Univ. Federal do Rio de Janeiro, Brazil
J. C. Cunha, Univ. Nova de Lisboa, Portugal
F. d'Almeida, Univ. do Porto, Portugal
M. Daydé, ENSEEIHT-IRIT, Toulouse, France
J. Dekeyser, Univ. des Sciences et Technologies, Lille, France
P. Devloo, Univ. Estadual de Campinas (UNICAMP), Brazil
J. Duarte, Univ. do Porto, Portugal
I. Duff, Rutherford Appleton Lab., England, and CERFACS, France
D. Falcão, Univ. Federal do Rio de Janeiro, Brazil
J. Fortes, Purdue Univ., USA
S. Gama, Univ. do Porto, Portugal
M. Giles, Univ. of Oxford, England
L. Giraud, CERFACS, France
G. Golub, Stanford Univ., USA
D. Heermann, Univ. Heidelberg, Germany
W. Janke, Univ. of Leipzig, Germany
M. Kamel, Univ. of Waterloo, Canada
M.-T. Kechadi, Univ. College Dublin, Ireland
D. Knight, Rutgers, The State Univ. of New Jersey, USA
V. Kumar, Univ. of Minnesota, USA
R. Löhner, George Mason Univ., USA
E. Luque, Univ. Autònoma de Barcelona, Spain
J. Macedo, Univ. do Porto, Portugal
P. Marquet, Univ. des Sciences et Technologies, Lille, France
P. de Miguel, Univ. Politécnica de Madrid, Spain
F. Moura, Univ. do Minho, Portugal
E. Oñate, Univ. Politècnica de Catalunya, Spain
A. Padilha, Univ. do Porto, Portugal
R. Pandey, Univ. of Southern Mississippi, USA
M. Peric, Technische Univ. Hamburg-Harburg, Germany
T. Priol, IRISA/INRIA, France
R. Ralha, Univ. do Minho, Portugal
M. Ruano, Univ. do Algarve, Portugal
D. Ruiz, ENSEEIHT-IRIT, Toulouse, France
H. Ruskin, Dublin City Univ., Ireland
J. G. Silva, Univ. de Coimbra, Portugal
F. Tirado, Univ. Complutense, Spain
B. Tourancheau, Univ. Claude Bernard de Lyon, France
M. Valero, Univ. Politècnica de Catalunya, Spain
A. van der Steen, Utrecht Univ., The Netherlands
J. Vuillemin, École Normale Supérieure, Paris, France
J.-S. Wang, National Univ. of Singapore, Singapore
P. Watson, Univ. of Newcastle, England
P. Welch, Univ. of Kent at Canterbury, England
E. Zapata, Univ. de Málaga, Spain

Sponsoring Organisations

The Organising Committee is very grateful to all sponsoring organisations for their support:

FEUP - Faculdade de Engenharia da Universidade do Porto
UP - Universidade do Porto
CMP - Câmara Municipal do Porto
EOARD - European Office of Aerospace Research and Development
FACM - Fundação Dr. António Cupertino de Miranda
FCCN - Fundação para a Computação Científica Nacional
FCG - Fundação Calouste Gulbenkian
FCT - Fundação para a Ciência e a Tecnologia
FLAD - Fundação Luso-Americana para o Desenvolvimento
ICCTI/BC - Inst. de Cooperação Científica e Tecnológica Internacional/British Council
INESC Porto - Instituto de Engenharia de Sistemas e de Computadores do Porto
OE - Ordem dos Engenheiros
Porto Convention Bureau
ALCATEL
CISCO Systems
COMPAQ
MICROSOFT
NEC European Supercomputer Systems
NORTEL Networks
SIEMENS

PART I (June 21, Wednesday)

Invited Talk (June 21, Wednesday, Auditorium, 10:50-11:50)
High Performance Computing on the Internet, Ian Foster, Argonne National Laboratory and the University of Chicago (USA), p. 1

Session 1: Distributed Computing and Operating Systems (June 21, Wednesday, Auditorium, 11:50-12:50)
Implementing and Analysing an Effective Explicit Coscheduling Algorithm on a NOW, Francesc Solana, Francesc Gine, Fermin Molina, Porfidio Hernandez and Emilio Luque (Spain), p. 31
An Approximation Algorithm for the Static Task Scheduling on Multiprocessors, Janez Brest, Jaka Jejcic, Aleksander Vreze and Viljem Zumer (Slovenia), p. 45
A New Buffer Management Scheme for Sequential and Looping Reference Pattern Applications, Jun-Young Cho, Gyeong-Hun Kim, Hong-Kyu Kang and Myong-Soon Park (Korea), p. 57

Session 2: Languages and Tools (June 21, Wednesday, Room A, 11:50-12:50)
Parallel Architecture for Natural Language Processing, Ricardo Annes (Brazil), p. 69
A Platform Independent Parallelising Tool Based on Graph Theoretic Models, Oliver Sinnen and Leonel Sousa (Portugal), p. 81
A Tool for Distributed Software Design in the CORBA Environment, Jan Kwiatkowski, Maciej Przewozny and José C. Cunha (Poland), p. 93

Session 3: Data-warehouse, Education and Genetic Algorithms