Determinants & Permutations


18.06.23: Determinants & Permutations
Lecturer: Barwick
Friday, 8 April 2016

So here's an $n \times n$ matrix $A = (\vec v_1 \ \cdots \ \vec v_n)$, and we have this number
$$\det(A) = \det(\vec v_1, \dots, \vec v_n) \in \mathbf{R}$$
that measures the signed $n$-dimensional volume of the parallelepiped spanned by $\vec v_1, \dots, \vec v_n$. Let's list the things we know about $\det(A)$.

(1) Normalization. The identity matrix has determinant 1:
$$\det(I) = \det(\hat e_1, \dots, \hat e_n) = 1.$$

(2) Multilinearity. For any real numbers $r, s \in \mathbf{R}$,
$$\det(\vec v_1, \dots, r\vec x_i + s\vec y_i, \dots, \vec v_n) = r\det(\vec v_1, \dots, \vec x_i, \dots, \vec v_n) + s\det(\vec v_1, \dots, \vec y_i, \dots, \vec v_n).$$

(3) Alternation. The determinant $\det(\vec v_1, \dots, \vec v_n) = 0$ if any two of the $\vec v_i$s are equal.

These are the core, defining properties of $\det$.

Here are more properties, which we deduced from the three core properties above:

(4) Multiplying a row or column by a number $r \in \mathbf{R}$ multiplies the determinant by that $r$.

(5) Swapping two rows or columns of $A$ multiplies the determinant by $-1$.

(6) Adding a multiple of a row or column onto another row or column doesn't change the determinant.

And here are some general computational facts we can extract from these properties:

(7) $\det(A) \neq 0$ if and only if $A$ is invertible.

(8) $\det(MN) = \det(M)\det(N)$.

(9) $\det(A^\top) = \det(A)$. (Why?)

(10) $\det \operatorname{diag}(\lambda_1, \dots, \lambda_n) = \prod_{i=1}^n \lambda_i$.

(11) More generally, if $A$ is triangular, then $\det(A)$ is the product of the entries along the diagonal. (Why?)

Let's compute the determinants of the following matrices:
$$\begin{pmatrix} 1 & 2 & 4 \\ 1 & 3 & 9 \\ 1 & 4 & 16 \end{pmatrix}, \qquad
\begin{pmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 2 & 2 \\ 1 & 2 & 2 & 3 \\ 2 & 2 & 3 & 4 \end{pmatrix},$$
$$\begin{pmatrix} 1 & 1 & 2 & 5 \\ 1 & 2 & 5 & 14 \\ 2 & 5 & 14 & 42 \\ 5 & 14 & 42 & 132 \end{pmatrix}, \qquad
\begin{pmatrix} 1 & 2 & 5 & 14 \\ 2 & 5 & 14 & 42 \\ 5 & 14 & 42 & 132 \\ 14 & 42 & 132 & 429 \end{pmatrix}.$$

I was half-joking about the last two; this is actually a neat little piece of mathematics: there's only one sequence of integers $c_0, c_1, c_2, \dots$ such that for every $n \geq 1$,
$$\det(c_{i+j-2}) = \det(c_{i+j-1}) = 1.$$
These are called the Catalan numbers, and your mathematical life isn't complete until you've read about them! (I was going to put a problem about these on the homework, but I thought that might be too much.)

Suppose $\sigma\colon \{1, \dots, n\} \to \{1, \dots, n\}$ is a permutation of $\{1, \dots, n\}$, that is, a bijection from that set to itself. One can express a permutation very compactly by writing down the matrix
$$P_\sigma = (\hat e_{\sigma(1)} \ \cdots \ \hat e_{\sigma(n)}),$$
called the permutation matrix corresponding to $\sigma$. This is great, because matrix multiplication corresponds to composition of permutations:
$$P_{\sigma \circ \tau} = P_\sigma P_\tau \quad \text{and} \quad P_{\mathrm{id}} = I.$$

Also, these matrices are orthogonal; in fact,
$$P_\sigma^\top = P_\sigma^{-1} = P_{\sigma^{-1}}.$$

Here's a permutation matrix for $n = 5$:
$$\begin{pmatrix} \hat e_2 \\ \hat e_4 \\ \hat e_1 \\ \hat e_3 \\ \hat e_5 \end{pmatrix}
= \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$
What's its determinant?

This is a general pattern: the determinant of a permutation matrix $P_\sigma$ is called the sign of the permutation $\sigma$:
$$\operatorname{sgn}(\sigma) := \det(P_\sigma).$$
In effect, it's $(-1)^{\text{number of swaps in } \sigma}$. What's weird about this is that you can imagine performing more or fewer swaps to get $\sigma$. The magic of determinants is telling you that the parity of the number of swaps stays the same! Note that $\operatorname{sgn}(\sigma) = \operatorname{sgn}(\sigma^{-1})$. (Why?)
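If you want to play with this bookkeeping on a computer, here's a quick numpy sketch (the helper names `perm_matrix` and `compose` are just for illustration, and permutations are 0-indexed as Python prefers). It builds $P_\sigma$ column by column, checks that composing permutations matches multiplying their matrices, checks orthogonality, and computes $\operatorname{sgn}(\sigma) = \det(P_\sigma)$ for the $n = 5$ example above.

```python
import numpy as np

def perm_matrix(sigma):
    """Build P_sigma whose j-th column is the standard basis vector e_{sigma(j)}.

    `sigma` is a 0-indexed permutation; [2, 0, 3, 1, 4] is the n = 5
    example from the slide (columns e_3, e_1, e_4, e_2, e_5 in 1-indexed terms).
    """
    n = len(sigma)
    P = np.zeros((n, n))
    for j, i in enumerate(sigma):
        P[i, j] = 1.0
    return P

def compose(sigma, tau):
    """(sigma o tau)(k) = sigma(tau(k)), 0-indexed."""
    return [sigma[tau[k]] for k in range(len(tau))]

sigma = [2, 0, 3, 1, 4]   # the n = 5 example above
tau   = [1, 2, 0, 4, 3]   # an arbitrary second permutation
P_sigma, P_tau = perm_matrix(sigma), perm_matrix(tau)

# Composition of permutations is matrix multiplication: P_{sigma o tau} = P_sigma P_tau.
assert np.array_equal(perm_matrix(compose(sigma, tau)), P_sigma @ P_tau)

# Permutation matrices are orthogonal: P^T P = I.
assert np.allclose(P_sigma.T @ P_sigma, np.eye(5))

# sgn(sigma) = det(P_sigma), and sgn(sigma^{-1}) = det(P_sigma^T) agrees with it.
print(round(np.linalg.det(P_sigma)))    # prints -1
print(round(np.linalg.det(P_sigma.T)))  # prints -1 as well
```

The last two lines answer the "(Why?)" numerically: since $P_{\sigma^{-1}} = P_\sigma^\top$ and $\det(A^\top) = \det(A)$, the signs of $\sigma$ and $\sigma^{-1}$ must coincide.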
It turns out that permutations give you a formula for the determinant of any matrix $A = (\vec v_1 \ \cdots \ \vec v_n)$. Let's see why. First, the $j$-th column $\vec v_j$ can be written as
$$\vec v_j = \sum_{k=1}^n a_{k,j}\, \hat e_k.$$

The multilinearity of $\det$ can then be deployed:
$$\det(A) = \det\Big(\sum_{k(1)=1}^n a_{k(1),1}\, \hat e_{k(1)}, \ \dots, \ \sum_{k(n)=1}^n a_{k(n),n}\, \hat e_{k(n)}\Big)
= \sum_{k(1)=1}^n \cdots \sum_{k(n)=1}^n \Big(\prod_{i=1}^n a_{k(i),i}\Big) \det(\hat e_{k(1)}, \dots, \hat e_{k(n)}).$$

Now all those sums can be combined into one sum over the set $E_n$ of all maps $k\colon \{1, \dots, n\} \to \{1, \dots, n\}$:
$$\det(A) = \sum_{k \in E_n} \Big(\prod_{i=1}^n a_{k(i),i}\Big) \det(\hat e_{k(1)}, \dots, \hat e_{k(n)}).$$

Now we use the alternation property: if any two columns are equal, then the determinant is zero. So any summand in which $k\colon \{1, \dots, n\} \to \{1, \dots, n\}$ is not injective vanishes; and since a map from a finite set to itself is injective exactly when it is bijective, what survives is a sum over permutations:
$$\det(A) = \sum_{\sigma \in \Sigma_n} \Big(\prod_{i=1}^n a_{\sigma(i),i}\Big) \det(\hat e_{\sigma(1)}, \dots, \hat e_{\sigma(n)}),$$
where $\Sigma_n$ is the set of permutations of $\{1, \dots, n\}$.

Now we have:
$$\det(A) = \sum_{\sigma \in \Sigma_n} \Big(\prod_{i=1}^n a_{\sigma(i),i}\Big) \det(\hat e_{\sigma(1)}, \dots, \hat e_{\sigma(n)})
= \sum_{\sigma \in \Sigma_n} \Big(\prod_{i=1}^n a_{\sigma(i),i}\Big) \det(P_\sigma)
= \sum_{\sigma \in \Sigma_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i),i}.$$

There it is, the Leibniz formula for the determinant:
$$\det(A) = \sum_{\sigma \in \Sigma_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i),i}.$$

Do you care? Well, if you're trying to program a computer to compute determinants, no. Evaluating this formula involves $\Omega(n! \, n)$ operations, whereas Gaussian elimination uses $O(n^3)$ operations. We have a winner. On the other hand, the fact that there is a formula is vaguely reassuring. But there's another advantage…

Think of the determinant as a function $\mathbf{R}^{n^2} \to \mathbf{R}$; this formula expresses that function as a polynomial in the $n^2$ entries. That means it's continuous and infinitely differentiable. So this leads us to the following result:

Proposition. Suppose $A = (a_{i,j})$ is an invertible $n \times n$ matrix. Then there exists an $\varepsilon > 0$ such that if $A' = (a'_{i,j})$ is an $n \times n$ matrix with $|a'_{i,j} - a_{i,j}| < \varepsilon$ for all $i$ and $j$, then $A'$ is invertible too. That is, invertible matrices are stable under small perturbations.

Here's another wacky-sounding consequence. Suppose $L \subseteq \mathbf{R}^{n^2}$ is a line. If there is one point on $L$ that corresponds to an invertible matrix, then all but finitely many points of $L$ correspond to invertible matrices. (Restricted to $L$, the determinant is a nonzero polynomial in one variable, so it has only finitely many roots.)

Question. Suppose $A$ is an $n \times n$ matrix. For how many real numbers $t \in \mathbf{R}$ is $A + tI$ invertible (none, finitely many, infinitely many, all)?
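For what it's worth, here's a direct (and deliberately naive) Python sketch of the Leibniz formula, using `itertools.permutations`; the function names `sgn` and `leibniz_det` are just for illustration. It agrees with numpy's elimination-based determinant on a small random matrix, and it confirms that the two Catalan Hankel matrices from earlier both have determinant 1. Don't run it much past $n = 8$ or so: the $n!$ terms catch up with you fast.

```python
import itertools
import math
import numpy as np

def sgn(sigma):
    """Sign of a permutation as (-1)^{number of inversions}."""
    n = len(sigma)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    """det(A) = sum over all permutations sigma of sgn(sigma) * prod_i a_{sigma(i), i}."""
    n = A.shape[0]
    return sum(sgn(sigma) * math.prod(A[sigma[i], i] for i in range(n))
               for sigma in itertools.permutations(range(n)))

# Sanity check against numpy's O(n^3) elimination-based determinant.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
assert abs(leibniz_det(A) - np.linalg.det(A)) < 1e-9

# The Catalan Hankel matrices det(c_{i+j-2}) and det(c_{i+j-1}) from the lecture.
catalan = [1, 1, 2, 5, 14, 42, 132, 429]
H0 = np.array([[catalan[i + j]     for j in range(4)] for i in range(4)], float)
H1 = np.array([[catalan[i + j + 1] for j in range(4)] for i in range(4)], float)
print(leibniz_det(H0), leibniz_det(H1))  # prints 1.0 1.0
```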