Chapter Four Determinants


In the first chapter of this book we considered linear systems and we picked out the special case of systems with the same number of equations as unknowns, those of the form $T\vec{x} = \vec{b}$ where $T$ is a square matrix. We noted a distinction between two classes of $T$'s. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular $T$ is associated with a unique solution in any system, such as the homogeneous system $\vec{b} = \vec{0}$, then $T$ is associated with a unique solution for every $\vec{b}$. We call such a matrix of coefficients 'nonsingular'. The other kind of $T$, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call 'singular'.

Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an $n \times n$ matrix $T$ is equivalent to each of these:

• a system $T\vec{x} = \vec{b}$ has a solution, and that solution is unique;
• Gauss-Jordan reduction of $T$ yields an identity matrix;
• the rows of $T$ form a linearly independent set;
• the columns of $T$ form a basis for $\mathbb{R}^n$;
• any map that $T$ represents is an isomorphism;
• an inverse matrix $T^{-1}$ exists.

So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say 'matrix' in place of 'square matrix'.)

More precisely, we will develop infinitely many formulas, one for $1 \times 1$ matrices, one for $2 \times 2$ matrices, etc. Of course, these formulas are related — that is, we will develop a family of formulas, a scheme that describes the formula for each size.

I Definition

For $1 \times 1$ matrices, determining nonsingularity is trivial.

$\begin{pmatrix} a \end{pmatrix}$ is nonsingular iff $a \neq 0$

The $2 \times 2$ formula came out in the course of developing the inverse.

$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is nonsingular iff $ad - bc \neq 0$

The $3 \times 3$ formula can be produced similarly (see Exercise 9).

$\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$ is nonsingular iff $aei + bfg + cdh - hfa - idb - gec \neq 0$

With these cases in mind, we posit a family of formulas: $a$, $ad - bc$, etc. For each $n$ the formula gives rise to a determinant function $\det_{n \times n}\colon \mathcal{M}_{n \times n} \to \mathbb{R}$ such that an $n \times n$ matrix $T$ is nonsingular if and only if $\det_{n \times n}(T) \neq 0$. (We usually omit the subscript because if $T$ is $n \times n$ then '$\det(T)$' could only mean '$\det_{n \times n}(T)$'.)

I.1 Exploration

This subsection is optional. It briefly describes how an investigator might come to a good general definition, which is given in the next subsection.

The three cases above don't show an evident pattern to use for the general $n \times n$ formula. We may spot that the $1 \times 1$ term $a$ has one letter, that the $2 \times 2$ terms $ad$ and $bc$ have two letters, and that the $3 \times 3$ terms $aei$, etc., have three letters. We may also observe that in those terms there is a letter from each row and column of the matrix, e.g., the letters in the $cdh$ term

$\begin{pmatrix} & & c \\ d & & \\ & h & \end{pmatrix}$

come one from each row and one from each column. But these observations perhaps seem more puzzling than enlightening. For instance, we might wonder why some of the terms are added while others are subtracted.

A good problem solving strategy is to see what properties a solution must have and then search for something with those properties. So we shall start by asking what properties we require of the formulas.
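Before turning to those properties, it can help to see the three small cases run. Below is a minimal sketch, not from the book; the helper names det1, det2, det3, and nonsingular are made up for illustration. It implements the three posited formulas and uses zeroness of the result as the nonsingularity test.

def det1(m):
    # 1x1 case: det of [[a]] is a
    return m[0][0]

def det2(m):
    # 2x2 case: ad - bc
    (a, b), (c, d) = m
    return a*d - b*c

def det3(m):
    # 3x3 case: aei + bfg + cdh - hfa - idb - gec
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

def nonsingular(m):
    # a matrix is nonsingular if and only if its determinant is nonzero
    det = {1: det1, 2: det2, 3: det3}[len(m)]
    return det(m) != 0

print(nonsingular([[1, 2], [3, 4]]))   # True: 1*4 - 2*3 = -2 is nonzero
print(nonsingular([[1, 2], [2, 4]]))   # False: the second row is twice the first

As expected, the matrix with dependent rows comes out singular.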
At this point, our primary way to decide whether a matrix is singular is to do Gaussian reduction and then check whether the diagonal of the resulting echelon form matrix has any zeroes (that is, to check whether the product down the diagonal is zero). So we may expect that the proof that a formula determines singularity will involve applying Gauss' method to the matrix, to show that in the end the product down the diagonal is zero if and only if the determinant formula gives zero.

This suggests our initial plan: we will look for a family of functions with the property of being unaffected by row operations and with the property that a determinant of an echelon form matrix is the product of its diagonal entries. Under this plan, a proof that the functions determine singularity would go, "Where $T \to \cdots \to \hat{T}$ is the Gaussian reduction, the determinant of $T$ equals the determinant of $\hat{T}$ (because the determinant is unchanged by row operations), which is the product down the diagonal, which is zero if and only if the matrix is singular". In the rest of this subsection we will test this plan on the $2 \times 2$ and $3 \times 3$ determinants that we know. We will end up modifying the "unaffected by row operations" part, but not by much.

The first step in checking the plan is to test whether the $2 \times 2$ and $3 \times 3$ formulas are unaffected by the row operation of pivoting: if

$T \xrightarrow{\,k\rho_i + \rho_j\,} \hat{T}$

then is $\det(\hat{T}) = \det(T)$? This check of the $2 \times 2$ determinant after the $k\rho_1 + \rho_2$ operation

$\det\begin{pmatrix} a & b \\ ka + c & kb + d \end{pmatrix} = a(kb + d) - (ka + c)b = ad - bc$

shows that it is indeed unchanged, and the other $2 \times 2$ pivot $k\rho_2 + \rho_1$ gives the same result. The $3 \times 3$ pivot $k\rho_3 + \rho_2$ leaves the determinant unchanged

$\det\begin{pmatrix} a & b & c \\ kg + d & kh + e & ki + f \\ g & h & i \end{pmatrix} = a(kh + e)i + b(ki + f)g + c(kg + d)h - h(ki + f)a - i(kg + d)b - g(kh + e)c = aei + bfg + cdh - hfa - idb - gec$

as do the other $3 \times 3$ pivot operations. So there seems to be promise in the plan. Of course, perhaps the $4 \times 4$ determinant formula is affected by pivoting. We are exploring a possibility here and we do not yet have all the facts. Nonetheless, so far, so good.

The next step is to compare $\det(\hat{T})$ with $\det(T)$ for the operation

$T \xrightarrow{\,\rho_i \leftrightarrow \rho_j\,} \hat{T}$

of swapping two rows. The $2 \times 2$ row swap $\rho_1 \leftrightarrow \rho_2$

$\det\begin{pmatrix} c & d \\ a & b \end{pmatrix} = cb - ad$

does not yield $ad - bc$. This $\rho_1 \leftrightarrow \rho_3$ swap inside of a $3 \times 3$ matrix

$\det\begin{pmatrix} g & h & i \\ d & e & f \\ a & b & c \end{pmatrix} = gec + hfa + idb - bfg - cdh - aei$

also does not give the same determinant as before the swap — again there is a sign change. Trying a different $3 \times 3$ swap $\rho_1 \leftrightarrow \rho_2$

$\det\begin{pmatrix} d & e & f \\ a & b & c \\ g & h & i \end{pmatrix} = dbi + ecg + fah - hcd - iae - gbf$

also gives a change of sign.

Thus, row swaps appear to change the sign of a determinant. This modifies our plan, but does not wreck it. We intend to decide nonsingularity by considering only whether the determinant is zero, not by considering its sign. Therefore, instead of expecting determinants to be entirely unaffected by row operations, we will look for them to change sign on a swap.

To finish, we compare $\det(\hat{T})$ to $\det(T)$ for the operation

$T \xrightarrow{\,k\rho_i\,} \hat{T}$

of multiplying a row by a scalar $k \neq 0$. One of the $2 \times 2$ cases is

$\det\begin{pmatrix} a & b \\ kc & kd \end{pmatrix} = a(kd) - (kc)b = k \cdot (ad - bc)$

and the other case has the same result. Here is one $3 \times 3$ case

$\det\begin{pmatrix} a & b & c \\ d & e & f \\ kg & kh & ki \end{pmatrix} = ae(ki) + bf(kg) + cd(kh) - (kh)fa - (ki)db - (kg)ec = k \cdot (aei + bfg + cdh - hfa - idb - gec)$

and the other two are similar. These lead us to suspect that multiplying a row by $k$ multiplies the determinant by $k$.
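All three observations are easy to spot-check numerically. The sketch below is again illustrative rather than part of the text; it reuses the hypothetical det3 helper from the earlier sketch, and verifies on a random integer matrix that the $3 \times 3$ formula is unchanged by a pivot, negated by a row swap, and multiplied by $k$ when one row is multiplied by $k$.

import random

def det3(m):
    # 3x3 formula: aei + bfg + cdh - hfa - idb - gec
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

T = [[random.randint(-5, 5) for _ in range(3)] for _ in range(3)]
k = 4

# k*rho3 + rho2: replace row 2 with k times row 3 plus row 2
pivot = [T[0], [k*x + y for x, y in zip(T[2], T[1])], T[2]]
# rho1 <-> rho3: swap the first and third rows
swap = [T[2], T[1], T[0]]
# k*rho3: multiply the third row by k
scale = [T[0], T[1], [k*x for x in T[2]]]

assert det3(pivot) == det3(T)        # pivoting leaves the determinant unchanged
assert det3(swap) == -det3(T)        # a row swap flips the sign
assert det3(scale) == k * det3(T)    # rescaling a row rescales the determinant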
This fits with our modified plan because we are asking only that the zeroness of the determinant be unchanged, and we are not focusing on the determinant's sign or magnitude.

In summary, to develop the scheme for the formulas to compute determinants, we look for determinant functions that remain unchanged under the pivoting operation, that change sign on a row swap, and that rescale on the rescaling of a row. In the next two subsections we will find that for each $n$ such a function exists and is unique.

For the next subsection, note that, as above, scalars come out of each row without affecting other rows. For instance, in this equality

$\det\begin{pmatrix} 3 & 3 & 9 \\ 2 & 1 & 1 \\ 5 & 10 & -5 \end{pmatrix} = 3 \cdot \det\begin{pmatrix} 1 & 1 & 3 \\ 2 & 1 & 1 \\ 5 & 10 & -5 \end{pmatrix}$

the 3 isn't factored out of all three rows, only out of the top row. The determinant acts on each row independently of the other rows (a numeric check of this equality appears after the exercises below). When we want to use this property of determinants, we shall write the determinant as a function of the rows: '$\det(\vec{\rho}_1, \vec{\rho}_2, \ldots, \vec{\rho}_n)$', instead of as '$\det(T)$' or '$\det(t_{1,1}, \ldots, t_{n,n})$'. The definition of the determinant that starts the next subsection is written in this way.

Exercises

✓ 1.1 Evaluate the determinant of each.

(a) $\begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}$   (b) $\begin{pmatrix} 2 & 0 & 1 \\ 3 & 1 & 1 \\ -1 & 0 & 1 \end{pmatrix}$   (c) $\begin{pmatrix} 4 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 3 & -1 \end{pmatrix}$

1.2 Evaluate the determinant of each.
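As promised above, here is a quick check of the row-factoring equality. It is a sketch under the same assumptions as the earlier ones, with det3 again the hypothetical $3 \times 3$ helper rather than anything from the book.

def det3(m):
    # 3x3 formula: aei + bfg + cdh - hfa - idb - gec
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - h*f*a - i*d*b - g*e*c

lhs = det3([[3, 3, 9], [2, 1, 1], [5, 10, -5]])
rhs = 3 * det3([[1, 1, 3], [2, 1, 1], [5, 10, -5]])
assert lhs == rhs == 135   # the 3 factors out of the top row only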