38 Permutation Formula

Existence of the Determinant

Learning Goals: students learn that the determinant really exists, and find some formulas for it.

So far our formula for the determinant is ±(product of pivots). This isn't such a good formula, because for all we know changing the order of the rows might change the pivots, or at least the sign. And it doesn't tell us how the determinant depends on any particular entry of the matrix. So we need some other ways of finding the determinant. "Other," not "better." Why? Because in practice we find the determinant by reducing and multiplying the pivots (if we bother finding it at all—it's more theoretically useful than practically useful). In theory, though, other formulas give us more insight.

Let's use the properties we have found to discover a new formula. Start with a matrix

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}.$$

We have the linearity property, which says the determinant is linear in each row. So we can split $\det A$ up as a linear combination over the first row:

$$\det A = a_{11}\begin{vmatrix} 1 & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} + a_{12}\begin{vmatrix} 0 & 1 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix} + \cdots + a_{1n}\begin{vmatrix} 0 & 0 & \cdots & 1 \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}.$$

So there are n terms. Each of these can be split up into a linear combination over the second row, and so on, until we get $n^n$ terms, which are all possible products of one element from each row, times the determinant of a matrix with a single 1 in each row. But many—indeed most—of these terms are zero, for the matrices that have 1s in the same column have two identical rows and thus have determinant zero. The only terms remaining are the ones where each 1 is in a different column.

Permutations and permutation matrices

We recognize these matrices as permutation matrices, the P's we used in row reduction to swap rows. We will formalize this a little more now:

Definition: a permutation σ is a function whose domain and image are both the set {1, 2, …, n} (hence a bijection of that set onto itself). The permutation matrix $P_\sigma$ is the matrix with a single 1 in each row, the 1 in row k lying in column σ(k), and all other entries 0.

The determinant of a permutation matrix is either 1 or −1, because after swapping rows around (each swap changing the sign of the determinant) a permutation matrix becomes I, whose determinant is 1.

Definition: the sign of a permutation, sgn(σ), is the determinant of the corresponding permutation matrix.

Of course, this may not be well defined. How do we know that one way of turning P into I doesn't require an odd number of row swaps while a different way of doing it needs an even number? We still have to prove that sgn(σ) is uniquely defined. But once that is done we will have proved

Theorem: the determinant of a matrix is

$$\det A = \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)},$$

where the summation is taken over all possible permutations σ of {1, 2, …, n}.

The text refers to this as the "big formula." It is quite large—there are n! terms in the sum. This is one reason why determinants are not computed this way: n! operations, compared to about $n^3$ for row reduction, is a computational nightmare. This does prove something, though. If the sign of a permutation is well defined, then there is a unique candidate function that satisfies our defining properties for determinants. In other words, we now know that there is at most one determinant. If we show that signs are well defined and that this formula does actually satisfy the properties, we will have shown existence, and we're done.
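To see the big formula in action, here is a minimal computational sketch (ours, not the text's—in Python, with hypothetical helper names) that computes a determinant by summing over all n! permutations, obtaining sgn(σ) exactly as defined above: by counting the swaps needed to turn the permutation into the identity.

```python
from itertools import permutations
from math import prod

def sign(sigma):
    # sgn(sigma) per the definition above: sort the permutation into the
    # identity by adjacent swaps (a bubble sort), flipping the sign once
    # per swap, just as each row swap flips the determinant of P.
    s, p = 1, list(sigma)
    for _ in range(len(p)):
        for j in range(len(p) - 1):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]
                s = -s
    return s

def big_formula_det(A):
    # The "big formula" (0-indexed): sum over all n! permutations sigma
    # of sgn(sigma) * a_{1 sigma(1)} * ... * a_{n sigma(n)}.
    n = len(A)
    return sum(sign(sigma) * prod(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

# For example, big_formula_det([[1, 2], [3, 4]]) returns 1*4 - 2*3 = -2.
```

Trying this on matrices much bigger than about 10×10 makes the n!-versus-n³ complaint above very concrete.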
Theorem: sgn(σ) is well-defined.

Proof: we will find a function that is obviously well defined and show that it is equivalent to sgn(σ). To this end, given a permutation σ, define its "disorderliness" d(σ) (not a standard term—just a convenience for this proof) to be the number of pairs (i, j) with i < j but σ(i) > σ(j). For example, in the permutation (3, 7, 1, 4, 2, 6, 5) (that is, σ(1) = 3, σ(2) = 7, and so forth) the disordered pairs of values are (1, 3), (1, 7), (2, 3), (2, 4), (2, 7), (4, 7), (5, 6), (5, 7), and (6, 7), so the disorderliness is 9. The disorderliness of a permutation is clearly well defined. We will show that sgn(σ) = 1 if d(σ) is even and −1 if it is odd.

First, if any two adjacent values in σ are switched, the disorderliness changes by exactly one. For example, d(3, 7, 4, 1, 2, 6, 5) = 10, while d(3, 7, 1, 4, 2, 6, 5) = 9. The simple reason is that switching two adjacent numbers changes the relative order of only the one pair involving those two numbers.

Next, swapping any two terms in σ changes the disorderliness by an odd number. Why? Because any swap can be achieved by adjacent swaps: k of them (if there are k things between the items being swapped) to bring the two items next to each other, one to swap them, and then k more to put the rest of the terms back in order—2k + 1 adjacent swaps in all. For example, there are two things between the a and the d below, and we can swap them and leave everything else alone with 2·2 + 1 = 5 adjacent swaps:

(a, b, c, d, e) → (b, a, c, d, e) → (b, c, a, d, e) → (b, c, d, a, e) → (b, d, c, a, e) → (d, b, c, a, e).

Thus any swap changes the disorderliness by 1 an odd number of times, so the disorderliness changes by an odd number.

Now, the disorderliness of the identity permutation (corresponding to the identity matrix, of course) is zero. So if σ is any permutation of even disorderliness, it always takes an even number of swaps to turn it into the identity—an odd number can't do it, because changing an even number by an odd amount an odd number of times can't leave you with zero! Similarly, every sequence of swaps that turns a permutation of odd disorderliness into the identity must contain an odd number of swaps. Since a swap of terms in a permutation corresponds to a swap of rows in the permutation matrix, the number of swaps needed to recover the identity is always even, or always odd, for any particular permutation. So permutations of even disorderliness have sign +1 and permutations of odd disorderliness have sign −1. ∎
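The disorderliness argument is easy to check by machine. Here is a short sketch (again ours; `disorderliness` is a hypothetical helper) verifying the two examples from the proof, plus a random spot-check that an arbitrary swap changes d(σ) by an odd number:

```python
import random

def disorderliness(sigma):
    # d(sigma): the number of pairs i < j with sigma(i) > sigma(j).
    return sum(1 for i in range(len(sigma))
                 for j in range(i + 1, len(sigma))
                 if sigma[i] > sigma[j])

assert disorderliness((3, 7, 1, 4, 2, 6, 5)) == 9    # example from the proof
assert disorderliness((3, 7, 4, 1, 2, 6, 5)) == 10   # one adjacent swap: d changes by 1

# Any swap, adjacent or not, changes d by an odd number:
sigma = random.sample(range(1, 8), 7)   # a random permutation of 1..7
i, j = random.sample(range(7), 2)       # two distinct positions
tau = list(sigma)
tau[i], tau[j] = tau[j], tau[i]
assert (disorderliness(sigma) - disorderliness(tau)) % 2 == 1
```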
Now, we finish by showing that our "big formula" does indeed have the defining properties of a determinant, so that, as promised, the determinant exists and is unique.

Theorem: the formula satisfies the defining properties of a determinant.

Proof: three of the properties are easy. det(I) = 1 is trivial: for the identity matrix the identity permutation gives the only nonzero term in the sum, and its sign is 1. The pulling-out-multiples-from-a-row property is similarly easy, since that multiple appears exactly once in every term of the sum—in the factor taken from the row it multiplied. The swapping-rows property is also simple: the same products of $a_{ij}$'s occur in the determinant of the swapped matrix, only paired with permutations that differ by a swap, so all of the signs are changed. That leaves the add-a-multiple-of-a-row property. So let's say r times row i has been added to row j. Then (note carefully the subscripts of the second term in the parentheses!)

$$\sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdots \bigl(a_{j\sigma(j)} + r\,a_{i\sigma(j)}\bigr) \cdots a_{n\sigma(n)} = \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdots a_{j\sigma(j)} \cdots a_{n\sigma(n)} + r \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdots a_{i\sigma(j)} \cdots a_{n\sigma(n)}.$$

The first of these two sums is det(A). The second, when we sum over all σ, cancels itself out. For each pair of values k and l, the permutation σ with σ(i) = k and σ(j) = l and the permutation σ′ with σ′(i) = l and σ′(j) = k (all other values the same) produce exactly the same product of a's—row i supplies the factors $a_{ik}$ and $a_{il}$ either way—but the two permutations differ by a swap and so have opposite signs. The terms therefore cancel in pairs, and the second sum is zero. ∎
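As a final sanity check, a quick numerical sketch (reusing `big_formula_det` from the earlier snippet) confirms the row-addition property just proved:

```python
# Adding r times row 0 to row 2 should leave the determinant unchanged.
A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
r = 5
B = [row[:] for row in A]
B[2] = [B[2][c] + r * A[0][c] for c in range(3)]
assert big_formula_det(A) == big_formula_det(B) == 18
```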