Systems of First Order Linear Differential Equations

© 2008, 2012 Zachary S Tseng

We will now turn our attention to solving systems of simultaneous homogeneous first order linear differential equations. The solutions of such systems require much linear algebra (Math 220). But since linear algebra is not a prerequisite for this course, we have to limit ourselves to the simplest instances: systems of two equations in two unknowns only. But first, we shall have a brief overview and learn some notation and terminology.

A system of n linear first order differential equations in n unknowns (an n × n system of linear equations) has the general form:

x1′ = a11 x1 + a12 x2 + … + a1n xn + g1
x2′ = a21 x1 + a22 x2 + … + a2n xn + g2
x3′ = a31 x1 + a32 x2 + … + a3n xn + g3        (*)
 :       :       :       :
xn′ = an1 x1 + an2 x2 + … + ann xn + gn

where the coefficients aij and the terms gi are arbitrary functions of t. If every term gi is the constant zero, then the system is said to be homogeneous. Otherwise, it is a nonhomogeneous system if even one of the g's is nonzero.

The system (*) is most often given in a shorthand format as a matrix-vector equation, in the form x′ = Ax + g:

$$\begin{bmatrix} x_1' \\ x_2' \\ x_3' \\ \vdots \\ x_n' \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ a_{31} & a_{32} & \cdots & a_{3n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} + \begin{bmatrix} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{bmatrix}$$

where the matrix of coefficients, A, is called the coefficient matrix of the system. The vectors x′, x, and g are

$$\mathbf{x}' = \begin{bmatrix} x_1' \\ x_2' \\ x_3' \\ \vdots \\ x_n' \end{bmatrix}, \qquad \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix}, \qquad \mathbf{g} = \begin{bmatrix} g_1 \\ g_2 \\ g_3 \\ \vdots \\ g_n \end{bmatrix}.$$

For a homogeneous system, g is the zero vector. Hence it has the form x′ = Ax.

Fact: Every n-th order linear equation is equivalent to a system of n first order linear equations. (This relation is not one-to-one. There are multiple systems thus associated with each linear equation, for n > 1.)

Examples:

(i) The mechanical vibration equation m u″ + γ u′ + k u = F(t) is equivalent to

x1′ = x2
x2′ = −(k/m) x1 − (γ/m) x2 + F(t)/m

Note that the system is homogeneous (respectively, nonhomogeneous) exactly when the original equation is homogeneous (respectively, nonhomogeneous).

(ii) y″′ − 2y″ + 3y′ − 4y = 0 is equivalent to

x1′ = x2
x2′ = x3
x3′ = 4x1 − 3x2 + 2x3

This process can be easily generalized. Given an n-th order linear equation

an y^(n) + an−1 y^(n−1) + an−2 y^(n−2) + … + a2 y″ + a1 y′ + a0 y = g(t),

make the substitutions x1 = y, x2 = y′, x3 = y″, … , xn = y^(n−1), so that xn′ = y^(n). The first n − 1 equations follow immediately from these substitutions. Lastly, substitute the x's into the original equation to rewrite it into the n-th equation, obtaining the system

x1′ = x2
x2′ = x3
x3′ = x4
 :
xn−1′ = xn

$$x_n' = -\frac{a_0}{a_n}x_1 - \frac{a_1}{a_n}x_2 - \frac{a_2}{a_n}x_3 - \cdots - \frac{a_{n-1}}{a_n}x_n + \frac{g(t)}{a_n}$$

Note: The reverse is also true (mostly)*. Given an n × n system of linear equations, it can be rewritten into a single n-th order linear equation.

* The exceptions being the systems whose coefficient matrices are diagonal matrices. However, our Eigenvector method will nevertheless be able to solve them without any modification.
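It can help to see example (ii) pushed all the way through numerically. Below is a minimal sketch, not part of the original notes, written in Python; NumPy, SciPy, and the chosen initial conditions are assumptions of this illustration. It builds the coefficient matrix of the equivalent system x′ = Ax and integrates it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example (ii): y''' - 2y'' + 3y' - 4y = 0, rewritten as x' = Ax with
# the substitutions x1 = y, x2 = y', x3 = y''.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [4.0, -3.0, 2.0]])   # last row encodes x3' = 4x1 - 3x2 + 2x3

def rhs(t, x):
    # Right-hand side of the homogeneous system x' = Ax.
    return A @ x

# Hypothetical initial data, chosen only for illustration:
# y(0) = 1, y'(0) = 0, y''(0) = 0.
x0 = np.array([1.0, 0.0, 0.0])
sol = solve_ivp(rhs, t_span=(0.0, 2.0), y0=x0, dense_output=True)

# sol.y[0] approximates y(t); sol.y[1] and sol.y[2] approximate y' and y''.
print(sol.y[0, -1])   # approximate value of y at t = 2
```

The same recipe works for any n-th order equation: every row but the last simply shifts (xi′ = xi+1), and the coefficients −ai/an together with g(t)/an appear only in the last row.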
Exercises D-1.1:

1 – 3 Convert each linear equation into a system of first order equations.

1. y″ − 4y′ + 5y = 0
2. y″′ − 5y″ + 9y = t cos 2t
3. y^(4) + 3y″′ − πy″ + 2πy′ − 6y = 11

4. Rewrite the system you found in (a) Exercise 1, and (b) Exercise 2, into a matrix-vector equation.

5. Convert the third order linear equation below into a system of 3 first order equations using (a) the usual substitutions, and (b) substitutions in the reverse order: x1 = y″, x2 = y′, x3 = y. Deduce the fact that there are multiple ways to rewrite each n-th order linear equation into a linear system of n equations.

   y″′ + 6y″ + y′ − 2y = 0

Answers D-1.1:

1. x1′ = x2
   x2′ = −5x1 + 4x2

2. x1′ = x2
   x2′ = x3
   x3′ = −9x1 + 5x3 + t cos 2t

3. x1′ = x2
   x2′ = x3
   x3′ = x4
   x4′ = 6x1 − 2πx2 + πx3 − 3x4 + 11

4. (a) $\mathbf{x}' = \begin{bmatrix} 0 & 1 \\ -5 & 4 \end{bmatrix}\mathbf{x}$   (b) $\mathbf{x}' = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -9 & 0 & 5 \end{bmatrix}\mathbf{x} + \begin{bmatrix} 0 \\ 0 \\ t\cos 2t \end{bmatrix}$

5. (a) x1′ = x2
       x2′ = x3
       x3′ = 2x1 − x2 − 6x3
   (b) x1′ = −6x1 − x2 + 2x3
       x2′ = x1
       x3′ = x2

A Crash Course in (2 × 2) Matrices

Several weeks' worth of matrix algebra in an hour… (Relax, we will only study the simplest case, that of 2 × 2 matrices.)

Review topics:

1. What is a matrix (pl. matrices)?

A matrix is a rectangular array of objects (called entries). Those entries are usually numbers, but they can also include functions, vectors, or even other matrices. Each entry's position is addressed by the row and column (in that order) where it is located. For example, a52 represents the entry positioned at the 5th row and the 2nd column of the matrix A.

2. The size of a matrix

The size of a matrix is specified by 2 numbers: [number of rows] × [number of columns]. Therefore, an m × n matrix is a matrix that contains m rows and n columns. A matrix that has an equal number of rows and columns is called a square matrix. A square matrix of size n × n is usually referred to simply as a square matrix of size (or order) n. Notice that if the number of rows or columns is 1, the result (respectively, a 1 × n, or an m × 1 matrix) is just a vector. A 1 × n matrix is called a row vector, and an m × 1 matrix is called a column vector. Therefore, vectors are really just special types of matrices. Hence, you will probably notice the similarities between many of the matrix operations defined below and vector operations that you might be familiar with.

3. Two special types of matrices

Identity matrices (square matrices only): the n × n identity matrix is often denoted by In, e.g.

$$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \text{etc.}$$

Properties (assume A and I are of the same size):
AI = IA = A
In x = x, for any n × 1 vector x

Zero matrices – matrices that contain all-zero entries.
Properties:
A + 0 = 0 + A = A
A0 = 0 = 0A

4. Arithmetic operations of matrices

(i) Addition / subtraction

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} \pm \begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} a \pm e & b \pm f \\ c \pm g & d \pm h \end{bmatrix}$$

(ii) Scalar multiplication

$$k\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} ka & kb \\ kc & kd \end{bmatrix}, \quad \text{for any scalar } k.$$

(iii) Matrix multiplication

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{bmatrix}$$

The matrix multiplication AB = C is defined only if there are as many rows in B as there are columns in A, for example, when A is m × k and B is k × n. The product matrix C is then of size m × n, and its ij-th entry, cij, equals the dot product of the i-th row of A with the j-th column of B. Since vectors are matrices, we can also multiply together a matrix and a vector, assuming the above restriction on their sizes is met. The product of a 2 × 2 matrix and a 2-entry column vector is

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} ax + by \\ cx + dy \end{bmatrix}.$$

Note 1: Two square matrices of the same size can always be multiplied together, because, having the same number of rows and columns, they obviously satisfy the size requirement outlined above.

Note 2: In general, AB ≠ BA. Indeed, depending on the sizes of A and B, one product might not even be defined while the other one is.
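The 2 × 2 arithmetic rules above are easy to sanity-check by machine. The following short sketch (Python with NumPy, an assumption of this illustration rather than part of the notes) verifies the multiplication formula entry by entry and demonstrates Note 2 on a concrete pair of matrices.

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0
e, f, g, h = 5.0, 6.0, 7.0, 8.0
A = np.array([[a, b], [c, d]])
B = np.array([[e, f], [g, h]])

# The 2 x 2 multiplication formula, written out entry by entry.
AB_formula = np.array([[a*e + b*g, a*f + b*h],
                       [c*e + d*g, c*f + d*h]])
assert np.allclose(A @ B, AB_formula)

# Identity and zero matrix properties: AI = IA = A, A0 = 0, A + 0 = A.
I2 = np.eye(2)
Z = np.zeros((2, 2))
assert np.allclose(A @ I2, A) and np.allclose(I2 @ A, A)
assert np.allclose(A @ Z, Z) and np.allclose(A + Z, A)

# Note 2: matrix multiplication is generally not commutative.
print(A @ B)   # [[19. 22.], [43. 50.]]
print(B @ A)   # [[23. 34.], [31. 46.]]
```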
5. Determinant (square matrices only)

For a 2 × 2 matrix, its determinant is given by the formula

$$\det\begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$$

Note: The determinant is a function whose domain is the set of all square matrices of a certain size, and whose range is the set of all real (or complex) numbers.

6. Inverse matrix (of a square matrix)

Given an n × n square matrix A, if there exists a matrix B (necessarily of the same size) such that AB = BA = In, then the matrix B is called the inverse matrix of A, denoted A⁻¹. The inverse matrix, if it exists, is unique for each A. A matrix is called invertible if it has an inverse matrix.

Theorem: For any 2 × 2 matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, its inverse, if it exists, is given by

$$A^{-1} = \frac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$

Theorem: A square matrix is invertible if and only if its determinant is nonzero.

Examples: Let $A = \begin{bmatrix} 1 & -2 \\ 5 & 2 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & -3 \\ -1 & 4 \end{bmatrix}$.

(i) $2A - B = \begin{bmatrix} 2 - 2 & -4 - (-3) \\ 10 - (-1) & 4 - 4 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 11 & 0 \end{bmatrix}$

(ii) $AB = \begin{bmatrix} 2 + 2 & -3 - 8 \\ 10 - 2 & -15 + 8 \end{bmatrix} = \begin{bmatrix} 4 & -11 \\ 8 & -7 \end{bmatrix}$

On the other hand:

$BA = \begin{bmatrix} 2 - 15 & -4 - 6 \\ -1 + 20 & 2 + 8 \end{bmatrix} = \begin{bmatrix} -13 & -10 \\ 19 & 10 \end{bmatrix}$

(iii) det(A) = 2 − (−10) = 12, det(B) = 8 − 3 = 5.
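As a final check, here is a small sketch (again Python/NumPy, chosen for illustration; the helper inv2 is a hypothetical name, not a library routine) that reproduces examples (i)–(iii) and the 2 × 2 inverse theorem.

```python
import numpy as np

A = np.array([[1.0, -2.0], [5.0, 2.0]])
B = np.array([[2.0, -3.0], [-1.0, 4.0]])

print(2*A - B)    # [[ 0. -1.], [11.  0.]]   matches example (i)
print(A @ B)      # [[  4. -11.], [  8.  -7.]]   matches example (ii)
print(B @ A)      # [[-13. -10.], [ 19.  10.]]   note AB != BA
print(np.linalg.det(A), np.linalg.det(B))   # 12.0, 5.0 (up to rounding)

def inv2(M):
    # 2 x 2 inverse via the adjugate formula from the theorem above.
    (a, b), (c, d) = M
    det = a*d - b*c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

# The formula agrees with NumPy's general-purpose inverse.
assert np.allclose(inv2(A), np.linalg.inv(A))
assert np.allclose(A @ inv2(A), np.eye(2))
```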