Series and Useful Formulas

Source: F. Gebali, Analysis of Computer Networks (Springer International Publishing, 2015), DOI 10.1007/978-3-319-15657-6, Appendices A-C.

Appendix A: Series and Useful Formulas

A.1 Arithmetic Series

$$\sum_{i=0}^{n-1} (a + id) = \frac{n}{2}(a + l), \qquad \text{where } l = a + (n-1)d$$

A.2 Geometric Series

$$\sum_{i=0}^{n-1} a r^i = \frac{a(1 - r^n)}{1 - r}, \qquad r \neq 1$$

$$\sum_{i=0}^{\infty} a r^i = \frac{a}{1 - r}, \qquad |r| < 1$$

A.3 Arithmetic-Geometric Series

$$\sum_{i=0}^{n-1} (a + id)\, r^i = \frac{a(1 - r^n)}{1 - r} + \frac{rd \left[ 1 - n r^{n-1} + (n-1) r^n \right]}{(1 - r)^2}, \qquad r \neq 1$$

If $-1 < r < 1$, the series converges as $n \to \infty$ and we get

$$\sum_{i=0}^{\infty} (a + id)\, r^i = \frac{a}{1 - r} + \frac{rd}{(1 - r)^2}$$

$$\sum_{i=0}^{\infty} i^2 r^i = \frac{r + r^2}{(1 - r)^3}$$

A.4 Sums of Powers of Positive Integers

$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \qquad \text{(A.1)}$$

$$\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \qquad \text{(A.2)}$$

$$\sum_{i=1}^{n} i^3 = \frac{n^2 (n+1)^2}{4} \qquad \text{(A.3)}$$

A.5 Binomial Series

$$\sum_{i=0}^{n} \binom{n}{i} a^{n-i} b^i = (a + b)^n$$

Special cases:

$$\sum_{i=0}^{n} \binom{n}{i} = 2^n$$

$$\sum_{i=0}^{n} (-1)^i \binom{n}{i} = 0$$

$$\sum_{i=0}^{n} \binom{n}{i}^2 = \binom{2n}{n}$$

$$(1 + x)^q = 1 + qx + \frac{q(q-1)}{2!} x^2 + \frac{q(q-1)(q-2)}{3!} x^3 + \cdots$$

$$(1 + x)^{-1} = 1 - x + x^2 - x^3 + x^4 - \cdots$$

where $|x| < 1$ in the above two expansions.

A.5.1 Properties of Binomial Coefficients

$$\binom{n}{i} = \binom{n}{n-i}$$

$$\binom{n}{i} = \frac{n}{i} \binom{n-1}{i-1}$$

$$\binom{n}{i} + \binom{n}{i-1} = \binom{n+1}{i}$$

$$\binom{n}{n} = \binom{n}{0} = 1$$

$$n! \approx \sqrt{2\pi}\; n^{\,n + \frac{1}{2}}\, e^{-n} \qquad \text{(Stirling's formula)}$$

A.6 Other Useful Series and Formulas

$$\sum_{i=0}^{n} \binom{n}{i}\, i\, a^i b^{n-i} = na(a + b)^{n-1}$$

$$\sum_{i=0}^{n} \binom{n}{i}\, i^2\, a^i b^{n-i} = n(n-1) a^2 (a + b)^{n-2} + na(a + b)^{n-1}$$

$$\sum_{i=1}^{n} \binom{n-1}{i-1} \frac{1}{i}\, a^i (1 - a)^{n-i} = \frac{1}{n} \left[ 1 - (1 - a)^n \right]$$

$$\sum_{i=0}^{\infty} \frac{a^i}{i!} = e^a$$

$$\sum_{i=0}^{\infty} (i + 1)\, \frac{a^i}{i!} = e^a (a + 1)$$

$$\lim_{n \to \infty} \left( 1 \pm \frac{x}{n} \right)^n = e^{\pm x}$$

$$\lim_{n \to \infty} \binom{n}{i} x^n = 0, \qquad |x| < 1$$

$$\binom{n}{i} + \binom{n}{i-1} = \binom{n+1}{i}$$

$$\lim_{n \to \infty} \left( 1 - \frac{a}{n} \right)^n = e^{-a}$$

$$\lim_{n \to \infty} \binom{n}{i} \left( \frac{a}{n} \right)^i \left( 1 - \frac{a}{n} \right)^{n-i} = \frac{a^i e^{-a}}{i!}$$

The last equation is used to derive the Poisson distribution from the binomial distribution.

A.7 Integrals of Exponential Functions

$$\int e^{cx}\, dx = \frac{1}{c}\, e^{cx}$$

$$\int x\, e^{cx^2}\, dx = \frac{1}{2c}\, e^{cx^2}$$

$$\int_0^{\infty} e^{-ax}\, dx = \frac{1}{a}$$

$$\int_0^{\infty} e^{-ax^2}\, dx = \frac{1}{2} \sqrt{\frac{\pi}{a}}, \qquad a > 0$$

$$\int_0^{\infty} x^n e^{-ax^b}\, dx = \frac{1}{b}\, \Gamma\!\left( \frac{n+1}{b} \right) a^{-(n+1)/b}$$
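Several of these formulas are easy to sanity-check numerically. The following is a minimal sketch, assuming NumPy/SciPy are available; the constants are arbitrary illustrative choices, not values from the book.

```python
# Spot-checks of a few Appendix A formulas (illustrative values only).
import math
from scipy.integrate import quad

a, r, d, n = 2.0, 0.6, 0.5, 20

# A.2: sum_{i=0}^{n-1} a r^i = a (1 - r^n) / (1 - r)
geo = sum(a * r**i for i in range(n))
assert math.isclose(geo, a * (1 - r**n) / (1 - r))

# A.3 (infinite form, |r| < 1): sum (a + i d) r^i = a/(1-r) + r d/(1-r)^2
ag = sum((a + i * d) * r**i for i in range(2000))  # truncated tail is negligible
assert math.isclose(ag, a / (1 - r) + r * d / (1 - r) ** 2)

# A.5.1 Stirling: n! ~ sqrt(2 pi) n^(n + 1/2) e^(-n); relative error ~ 1/(12 n)
m = 15
stirling = math.sqrt(2 * math.pi) * m ** (m + 0.5) * math.exp(-m)
assert abs(stirling / math.factorial(m) - 1) < 1e-2

# A.7: int_0^inf x^n e^(-a x^b) dx = (1/b) Gamma((n+1)/b) a^(-(n+1)/b)
nn, aa, bb = 2, 1.5, 3.0
val, _ = quad(lambda x: x**nn * math.exp(-aa * x**bb), 0, math.inf)
assert math.isclose(val, math.gamma((nn + 1) / bb) * aa ** (-(nn + 1) / bb) / bb,
                    rel_tol=1e-6)
print("all Appendix A spot-checks passed")
```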
Appendix B: Solving Difference Equations

B.1 Introduction

Difference equations describe discrete-time systems just as differential equations describe continuous-time systems. They arise in many fields: telecommunications, digital signal processing, electromagnetics, civil engineering, and so on. In Markov chains, and in many queuing models, the state transition matrix often has a special structure that produces a recurrence relation between the system states. This appendix is based on the results provided in [1] and [2]. We start by exploring simple approaches for simple situations, then deal with the more general situation.

B.2 First-Order Form

Assume we have the simple recurrence

$$s_i = a\, s_{i-1} + b \qquad \text{(B.1)}$$

where $a$ and $b$ are given. Our task is to find values of the unknown $s_i$ for all $i = 0, 1, \ldots$ that satisfy the equation. This is a first-order form since each sample depends only on the immediately preceding value. Since this is a linear relationship, we assume the solution for $s_i$ is composed of two components: a constant component $c$ and a variable component $v_i$ that depends on $i$. Thus we write the trial solution as

$$s_i = v_i + c \qquad \text{(B.2)}$$

Substituting this into our recursion gives

$$v_i + c = a (v_{i-1} + c) + b \qquad \text{(B.3)}$$

Grouping the constant parts and the variable parts separately gives

$$c = ac + b \qquad \text{(B.4)}$$

$$v_i = a\, v_{i-1} \qquad \text{(B.5)}$$

so the constant component of $s$ is

$$c = \frac{b}{1 - a} \qquad \text{(B.6)}$$

Assume a solution for $v_i$ in (B.5) of the form

$$v_i = \lambda^i \qquad \text{(B.7)}$$

Substituting this solution into the recursion formula (B.5) for $v_i$ gives

$$\lambda^i = a\, \lambda^{i-1} \qquad \text{(B.8)}$$

which gives

$$\lambda = a \qquad \text{(B.9)}$$

and $v_i$ is given by

$$v_i = \lambda^i = a^i \qquad \text{(B.10)}$$

Thus $s_i$ is given from (B.2) as

$$s_i = a^i + \frac{b}{1 - a} \qquad \text{(B.11)}$$

This is the desired solution of the difference equation; in general the $a^i$ term carries a constant amplitude that is fixed by the initial condition $s_0$.

B.3 Second-Order Form

Assume we have the simple recurrence

$$s_{i+1} + a\, s_i + b\, s_{i-1} = 0 \qquad \text{(B.12)}$$

This is a second-order form since each sample depends on the two most recent past samples. Assume a solution for $s_i$ of the form

$$s_i = \lambda^i \qquad \text{(B.13)}$$

The recursion formula then gives the characteristic equation

$$\lambda^2 + a \lambda + b = 0 \qquad \text{(B.14)}$$

There are two possible solutions (roots) for $\lambda$, which we denote $\alpha$ and $\beta$, and there are three possible situations.

B.3.1 Real and Different Roots ($\alpha \neq \beta$)

When the two roots are real and different, $s_i$ becomes a linear combination of the two solutions:

$$s_i = A \alpha^i + B \beta^i \qquad \text{(B.15)}$$

where $A$ and $B$ are constants, determined from any restrictions on the solutions for $s_i$ such as given initial conditions. For example, if the $s_i$ represent the components of the distribution vector in a Markov chain, then the sum of all the components must equal unity.

B.3.2 Real and Equal Roots ($\alpha = \beta$)

When the two roots are real and equal, $s_i$ is given by

$$s_i = (A + iB)\, \alpha^i \qquad \text{(B.16)}$$

B.3.3 Complex Conjugate Roots

In this case the roots are

$$\alpha = \sigma + j\omega \qquad \text{(B.17)}$$

$$\beta = \sigma - j\omega \qquad \text{(B.18)}$$

and $s_i$ is given by

$$s_i = \rho^i \left[ A \cos(i\theta) + B \sin(i\theta) \right] \qquad \text{(B.19)}$$

where $\rho = \sqrt{\sigma^2 + \omega^2}$ and $\theta = \tan^{-1}(\omega/\sigma)$ are the magnitude and angle of $\alpha$.

B.4 General Approach

Consider the $N$th-order difference equation

$$s_i = \sum_{k=0}^{N} a_k\, s_{i-k}, \qquad i > 0 \qquad \text{(B.20)}$$

where we assume $s_i = 0$ when $i < 0$. We define the one-sided z-transform [3] of $s_i$ as

$$S(z) = \sum_{i=0}^{\infty} s_i\, z^{-i} \qquad \text{(B.21)}$$

Taking the z-transform of both sides of (B.20) gives

$$S(z) - s_0 = \sum_{k=0}^{N} a_k\, z^{-k} \left[ \sum_{i=0}^{\infty} s_{i-k}\, z^{-(i-k)} \right] \qquad \text{(B.22)}$$

We assume that $a_k = 0$ when $k > N$, and again that $s_i = 0$ when $i < 0$. Based on these two assumptions, we can extend the upper limit of the summation over $k$ to infinity and change the variable of summation in the bracketed term:

$$S(z) - s_0 = \sum_{k=0}^{\infty} a_k\, z^{-k} \left[ \sum_{m=0}^{\infty} s_m\, z^{-m} \right] \qquad \text{(B.23)}$$

where we introduced the new variable $m = i - k$. Define the z-transform of the coefficients $a_k$ as

$$A(z) = \sum_{k=0}^{\infty} a_k\, z^{-k} \qquad \text{(B.24)}$$

Thus we get

$$S(z) - s_0 = A(z) \left[ \sum_{m=0}^{\infty} s_m\, z^{-m} \right] = A(z)\, S(z) \qquad \text{(B.25)}$$

Notice that the two summations on the RHS of (B.23) are independent, which is what allows this factoring. We finally get

$$S(z) = \frac{s_0}{1 - A(z)} \qquad \text{(B.26)}$$

We can write the above equation in the form

$$S(z) = \frac{s_0}{D(z)} \qquad \text{(B.27)}$$

where the denominator polynomial is

$$D(z) = 1 - A(z) \qquad \text{(B.28)}$$

MATLAB can find the inverse z-transform of $S(z)$ using the command RESIDUE(a, b), where a and b hold the coefficients of the numerator and denominator polynomials of $S(z)$, respectively, in descending powers of $z^{-1}$. The function RESIDUE returns the column vectors r, p, and c, which give the residues, poles, and direct terms, respectively. The solution for $s_i$ is then given by

$$s_i = c_i + \sum_{j=1}^{m} r_j\, (p_j)^{i-1}, \qquad i > 0 \qquad \text{(B.29)}$$

where $m$ is the number of elements in the r (or p) vector.
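As a concrete check of this machinery, here is a minimal Python sketch, assuming SciPy is available. It uses scipy.signal.residuez, SciPy's z-domain partial-fraction routine, in place of MATLAB's RESIDUE, and solves a hypothetical second-order instance of (B.20) three ways: direct iteration, the characteristic-root method of Sect. B.3, and the partial-fraction expansion of (B.26). Note that residuez expands $S(z)$ in terms of $1/(1 - p z^{-1})$, whose inverse transform is $p^i$, so the exponent in the closed form is $i$ rather than the $i - 1$ of (B.29), which is written for MATLAB's conventions.

```python
# Solving s_i = a1 s_{i-1} + a2 s_{i-2}, i > 0, with s_i = 0 for i < 0,
# three ways (illustrative coefficients, not from the book).
import numpy as np
from scipy.signal import residuez

a1, a2, s0 = 0.5, 0.3, 1.0
n = 12

# 1) Direct iteration; note s_1 = a1 s_0 + a2 s_{-1} = a1 s_0.
s = [s0, a1 * s0]
for i in range(2, n):
    s.append(a1 * s[i - 1] + a2 * s[i - 2])

# 2) Characteristic roots (Sect. B.3): lambda^2 - a1 lambda - a2 = 0,
#    s_i = A alpha^i + B beta^i, with A and B fitted from s_0 and s_1.
alpha, beta = np.roots([1.0, -a1, -a2])
A, B = np.linalg.solve([[1.0, 1.0], [alpha, beta]], [s[0], s[1]])
s_roots = [A * alpha**i + B * beta**i for i in range(n)]

# 3) z-transform (Sect. B.4): S(z) = s0 / (1 - a1 z^-1 - a2 z^-2), expanded
#    into partial fractions sum_j r_j / (1 - p_j z^-1), so s_i = sum_j r_j p_j^i.
r, p, _ = residuez([s0], [1.0, -a1, -a2])
s_zt = [float(np.real(np.sum(r * p**i))) for i in range(n)]

assert np.allclose(s, s_roots) and np.allclose(s, s_zt)
print("iteration, root method, and z-transform agree")
```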
References

1. J.R. Norris, Markov Chains (Cambridge University Press, New York, 1997)
2. V.K. Ingle, J.G. Proakis, Digital Signal Processing Using MATLAB (Brooks/Cole, Pacific Grove, 2000)
3. A. Antoniou, Digital Filters: Analysis, Design, and Applications (McGraw-Hill, New York, 1993)

Appendix C: Finding s(n) Using the Z-Transform

When the transition matrix P of a Markov chain is not diagonalizable, we could use the z-transform technique to find the value of the distribution vector s(n) at any time instant.
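The appendix is truncated here, but the point can still be illustrated numerically. When P is not diagonalizable, a repeated eigenvalue produces a repeated pole in every component of S(z), so s(n) contains a term of the form $n\,\alpha^n$, exactly as in Sect. B.3.2. The sketch below is mine, not the book's: the 3x3 column-stochastic matrix and initial vector are arbitrary illustrative choices with a defective eigenvalue 1/2, and the closed form implied by the poles is checked against direct matrix powers s(n) = P^n s(0).

```python
# A column-stochastic P whose eigenvalue 1/2 is repeated but has only one
# eigenvector, so P is not diagonalizable (illustrative example).
import numpy as np

P = np.array([[0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
s0 = np.array([1.0, 0.0, 0.0])   # initial distribution vector

# The poles of every component of S(z) are {1, 1/2 (double)}, so
#   s_j(n) = c1 + (c2 + c3 n) (1/2)^n.
# Fit c1..c3 from n = 0, 1, 2 and test against P^n s(0) for larger n.
s = [np.linalg.matrix_power(P, i) @ s0 for i in range(21)]
M = np.array([[1.0, 0.5**i, i * 0.5**i] for i in range(3)])
for j in range(3):
    c1, c2, c3 = np.linalg.solve(M, [s[i][j] for i in range(3)])
    for i in range(3, 21):
        assert np.isclose(s[i][j], c1 + (c2 + c3 * i) * 0.5**i)
print("closed form with the n (1/2)^n term matches P^n s(0)")
```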