PRACTICING PROOFS

This file contains two sets of problems to practice your ability with proofs. Solutions to the first set of problems are provided. The solutions to the second set of problems are intentionally left to the reader (as an incentive to practice!). For your convenience, we begin by recalling some preliminary definitions and theorems that can be used to solve the problems below.

PRELIMINARY DEFINITIONS / THEOREMS

Definition 1. An $n \times n$ matrix $B$ is called idempotent if $B^2 = B$.

Example. The identity matrix is idempotent, because $I^2 = I \cdot I = I$.

Definition 2. An $n \times n$ matrix $B$ is called nilpotent if there exists a power of the matrix $B$ which is equal to the zero matrix. This means that there is an index $k$ such that $B^k = O$.

Example. The zero matrix is obviously nilpotent. The matrix
$$B = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$
is also nilpotent, because $B^3 = B \cdot B \cdot B = O$. (Note that any higher power of $B$ is also zero.)

Definition 3. An $n \times n$ matrix $B$ is called symmetric if it is equal to its transpose: $B = B^T$. An $n \times n$ matrix $B$ is called skew-symmetric if $B = -B^T$.

Example. The matrix
$$B = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 1 \\ 2 & 1 & 3 \end{pmatrix}$$
is symmetric. The matrix
$$B = \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{pmatrix}$$
is skew-symmetric.

Definition 4. An $n \times n$ matrix $B$ is called non-singular (or "invertible") if it has a multiplicative inverse, and is called singular (or "not invertible") otherwise.

Theorem 1. For every pair of $n \times n$ matrices $A$ and $B$, $\det(AB) = \det(A)\det(B)$.

Theorem 2. The following conditions are equivalent:
• $A$ is non-singular.
• $\det(A) \neq 0$.
• The system $Ax = 0$ has a unique solution ($x = 0$).

Theorem 3. For every $n \times n$ matrix $A$, the determinant of $A$ equals the product of its eigenvalues.

PRACTICE PROBLEMS (solutions provided below)

(1) Let $A$ be an $n \times n$ matrix. Prove that if $A$ is idempotent, then $\det(A)$ is equal to either 0 or 1.

(2) Let $A$ be an $n \times n$ matrix. Prove that if $A$ is idempotent, then the matrix $I - A$ is also idempotent. (Here $I$ is the identity matrix.)

(3) Let $A$ be an $n \times n$ matrix. Prove that if $A$ is nilpotent, then $\det(A) = 0$.

(4) Let $B$ be the matrix
$$B = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix},$$
and let $A$ be any $3 \times 3$ matrix. Prove that the matrix $A$ is invertible if and only if the matrix $AB$ is invertible.

(5) Let $v$ be any vector of length 3. Let $A = (v, 2v, 3v)$ be the $3 \times 3$ matrix with columns $v$, $2v$, $3v$. Prove that $A$ is singular.

(6) Let $A = (a_1, a_2, a_3, a_4)$ be a $4 \times 4$ matrix with columns $a_1, a_2, a_3, a_4$. Suppose that $a_1 - 3a_4 = 0$ (the zero vector). Prove that $A$ is singular.

ADDITIONAL PROBLEMS (without solutions)

(7) Let $A$ be an $n \times n$ invertible matrix. Suppose that $A^{-1} = A$. Prove that $\det(A)$ is equal to either $+1$ or $-1$.

(8) Suppose that $A$ is an $n \times n$ matrix and that 0 is an eigenvalue of $A$. Prove that $A$ is not invertible.

(9) Suppose that $A$ is an $n \times n$ matrix and that $A^2 + 3A = I$. Prove that $A$ is invertible.

(10) Let $A$, $B$, $C$ be $n \times n$ invertible matrices. Prove that the product $ABC$ is also invertible.

(11) Let $A$ and $B$ be $n \times n$ matrices. Suppose that $B$ is invertible and that $A = B^{-1}AB$. Prove that $A$ and $B$ commute.

(12) Let $A$ be any $n \times n$ matrix. Prove that the matrix $A + A^T$ is symmetric.

(13) Let $A$ be any $n \times n$ matrix. Prove that the matrix $A - A^T$ is skew-symmetric.

(14) Prove that every $n \times n$ matrix can be written as the sum of a symmetric matrix and a skew-symmetric matrix.
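Before working through the solutions, it can help to verify the worked examples above numerically. The sketch below is a minimal check, assuming NumPy is available; it uses the matrices from Definitions 1-3, plus one extra projection matrix P as an illustrative idempotent example that is not part of the handout.

```python
import numpy as np

# Nilpotent example from Definition 2: B^3 equals the zero matrix.
B = np.array([[0, 1, 2],
              [0, 0, 1],
              [0, 0, 0]])
print(np.array_equal(np.linalg.matrix_power(B, 3), np.zeros((3, 3))))  # True
print(np.linalg.det(B))  # 0.0, consistent with Problem 3

# Idempotent example from Definition 1: the identity matrix satisfies I^2 = I.
I = np.eye(3)
print(np.array_equal(I @ I, I))  # True, and det(I) = 1

# An extra idempotent example (a projection, chosen for illustration):
# its determinant is 0, matching Problem 1's claim that det(A) is 0 or 1.
P = np.diag([1.0, 1.0, 0.0])
print(np.array_equal(P @ P, P), round(np.linalg.det(P)))  # True 0

# Symmetric and skew-symmetric examples from Definition 3.
S = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 3]])
K = np.array([[0, 1, 2],
              [-1, 0, 1],
              [-2, -1, 0]])
print(np.array_equal(S, S.T))   # True: S = S^T
print(np.array_equal(K, -K.T))  # True: K = -K^T
```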
Solutions to the first set of practice problems.

Problem 1. Suppose that $A$ is idempotent, that is, $A^2 = A$. Taking the determinant of both sides of this equation, we find:
$$\det(A^2) = \det(A). \qquad (1)$$
Recall that the determinant of a product of two matrices equals the product of the two determinants (see Theorem 1). Then
$$\det(A^2) = \det(A \cdot A) = \det(A) \cdot \det(A) = (\det(A))^2. \qquad (2)$$
Combining equations (1) and (2), we find that $(\det(A))^2 = \det(A)$. Hence $\det(A) \cdot [\det(A) - 1] = 0$, and $\det(A) = 0$ or $1$.

Problem 2. Suppose that $A$ is idempotent, that is, $A^2 = A$. To prove that the matrix $B = I - A$ is also idempotent, we must show that $B^2 = B$. Hence, we compute $B^2$ and verify that it is equal to $B$:
$$B^2 = (I - A)^2 = (I - A)(I - A) = I^2 - IA - AI + A^2 = I - A - A + A = I - A = B,$$
using $I^2 = I$, $IA = AI = A$, and $A^2 = A$. Note that the only things we used are the definition of an idempotent matrix and the fact that multiplication by the identity matrix leaves every matrix unchanged.

Problem 3. Suppose that $A$ is nilpotent, that is, there exists an index $m \geq 1$ such that $A^m = O$. (Here $O$ is the zero matrix.) Taking the determinant of both sides of the equation, we find $\det(A^m) = \det(O)$. Now, $\det(O) = 0$ and $\det(A^m) = (\det(A))^m$. (The latter can be proven by iterating the argument in Problem 1 or, more elegantly, by induction.) Then $(\det(A))^m = 0$, so $\det(A) = 0$.

Problem 4. The matrix $B$ is upper triangular, hence the determinant of $B$ equals the product of the elements on the diagonal:
$$\det(B) = \det\begin{pmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix} = 1 \cdot 2 \cdot 3 = 6.$$
This gives $\det(AB) = \det(A)\det(B) = 6\det(A)$. Then it is clear that $\det(AB) = 0 \Leftrightarrow \det(A) = 0$ and, of course, $\det(AB) \neq 0 \Leftrightarrow \det(A) \neq 0$. This is equivalent to saying that $AB$ is invertible if and only if $A$ is invertible.

Problem 5. It is sufficient to prove that the matrix $A = (v, 2v, 3v)$ has determinant zero. Consider the matrix $A'$ obtained from $A$ by subtracting twice the first column from the second column:
$$A' = (v, 2v - 2v, 3v) = (v, 0, 3v).$$
This kind of operation does not affect the determinant, so
$$\det(A') = \det(A). \qquad (3)$$
Because $A'$ has a zero column, $\det(A') = 0$. Equation (3) implies that $\det(A) = 0$ as well.

Problem 6. The matrix $A$ is $4 \times 4$, with columns $a_1, a_2, a_3, a_4$. Write $A = (a_1, a_2, a_3, a_4)$. For every vector $b = (b_1, b_2, b_3, b_4)^T$, we can write:
$$Ab = (a_1, a_2, a_3, a_4)\begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix} = b_1 a_1 + b_2 a_2 + b_3 a_3 + b_4 a_4.$$
In particular, if $b = (1, 0, 0, -3)^T$, we find:
$$Ab = 1\,a_1 + 0\,a_2 + 0\,a_3 + (-3)\,a_4 = a_1 - 3a_4.$$
By assumption, $a_1 - 3a_4 = 0$. Then $Ab = 0$. Because the vector $b = (1, 0, 0, -3)^T$ is non-zero, this shows that the system $Ax = 0$ has a non-trivial solution. Hence, $A$ is singular. [See Theorem 2.]
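The written solutions can also be checked numerically, which is a useful habit whenever an algebraic step feels uncertain. The sketch below assumes NumPy; the matrix A and the vectors used for Problems 5 and 6 are arbitrary illustrative choices, not prescribed by the problems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem 4: B is upper triangular, so det(B) is the product of its diagonal entries.
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
print(round(np.linalg.det(B)))  # 6

# Theorem 1: det(AB) = det(A)det(B) = 6 det(A) for any 3x3 matrix A.
A = rng.standard_normal((3, 3))
print(np.isclose(np.linalg.det(A @ B), 6 * np.linalg.det(A)))  # True

# Problem 5: the columns v, 2v, 3v are proportional, so A = (v, 2v, 3v) is singular.
v = np.array([1.0, -2.0, 4.0])           # any nonzero vector of length 3
A5 = np.column_stack([v, 2 * v, 3 * v])
print(abs(np.linalg.det(A5)) < 1e-12)    # True: the determinant is (numerically) zero

# Problem 6: columns satisfying a1 - 3*a4 = 0 give A b = 0 for b = (1, 0, 0, -3)^T.
a4 = rng.standard_normal(4)
a1 = 3 * a4                               # enforces a1 - 3*a4 = 0
A6 = np.column_stack([a1, rng.standard_normal(4), rng.standard_normal(4), a4])
b = np.array([1.0, 0.0, 0.0, -3.0])
print(np.allclose(A6 @ b, 0))             # True: Ax = 0 has a non-trivial solution
print(abs(np.linalg.det(A6)) < 1e-10)     # True: A is singular (see Theorem 2)
```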