The Waldspurger Transform of Permutations and Alternating Sign Matrices
Total pages: 16
File type: PDF; size: 1020 KB
Recommended publications
Math 511 Advanced Linear Algebra Spring 2006
MATH 511 ADVANCED LINEAR ALGEBRA, SPRING 2006. Sherod Eubanks. HOMEWORK 2: §2.1: 2, 5, 9, 12; §2.3: 3, 6; §2.4: 2, 4, 5, 9, 11.
Section 2.1: Unitary Matrices. Problem 2. If λ ∈ σ(U) and U ∈ M_n is unitary, show that |λ| = 1. Solution. If λ ∈ σ(U), U ∈ M_n is unitary, and Ux = λx for x ≠ 0, then by Theorem 2.1.4(g) we have ‖x‖ = ‖Ux‖ = ‖λx‖ = |λ|·‖x‖, hence |λ| = 1, as desired.
Problem 5. Show that the permutation matrices in M_n are orthogonal and that the permutation matrices form a subgroup of the group of real orthogonal matrices. How many different permutation matrices are there in M_n? Solution. By definition, a matrix P ∈ M_n is called a permutation matrix if exactly one entry in each row and column is equal to 1, and all other entries are 0. That is, letting e_i ∈ C^n denote the standard basis element of C^n that has a 1 in the i-th row and zeros elsewhere, and S_n be the set of all permutations on n elements, then P = [e_{σ(1)} | ··· | e_{σ(n)}] = P_σ for some permutation σ ∈ S_n, where σ(k) denotes the k-th member of σ. Observe that for any σ ∈ S_n, e_{σ(i)}^T e_{σ(j)} = 1 if i = j and 0 otherwise, for 1 ≤ i, j ≤ n, by the definition of e_i, so the (i, j) entry of P_σ^T P_σ is e_{σ(i)}^T e_{σ(j)} …
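A quick numerical sanity check of Problem 5's claims (a sketch in NumPy; perm_matrix is a hypothetical helper, not part of the homework): every permutation matrix satisfies P^T P = I, and there are exactly n! of them in M_n.

```python
import numpy as np
from itertools import permutations

def perm_matrix(sigma):
    """Permutation matrix whose k-th column is e_{sigma(k)} (0-indexed)."""
    n = len(sigma)
    P = np.zeros((n, n))
    for k, s in enumerate(sigma):
        P[s, k] = 1.0
    return P

n = 4
mats = [perm_matrix(s) for s in permutations(range(n))]
# Each permutation matrix is orthogonal: P^T P = I.
assert all(np.array_equal(P.T @ P, np.eye(n)) for P in mats)
# There are exactly n! permutation matrices in M_n.
assert len(mats) == 24  # 4! = 24
```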
Lie Algebras Associated with Generalized Cartan Matrices
LIE ALGEBRAS ASSOCIATED WITH GENERALIZED CARTAN MATRICES. By R. V. Moody. Communicated by Charles W. Curtis, October 3, 1966.
1. Introduction. If L is a finite-dimensional split semisimple Lie algebra of rank n over a field Φ of characteristic zero, then there is associated with L a unique n×n integral matrix (A_ij), its Cartan matrix, which has the properties
M1. A_ii = 2, i = 1, …, n,
M2. A_ij ≤ 0 if i ≠ j,
M3. A_ij = 0 implies A_ji = 0.
These properties do not, however, characterize Cartan matrices. If (A_ij) is a Cartan matrix, it is known (see, for example, [4, pp. VI-19-26]) that the corresponding Lie algebra L may be reconstructed as follows: Let e_i, f_i, h_i, i = 1, …, n, be any 3n symbols. Then L is isomorphic to the Lie algebra L((A_ij)) over Φ defined by the relations
[h_i, h_j] = 0, [e_i, f_j] = δ_ij h_j for all i and j,
[e_i, h_j] = A_ji e_i, [f_i, h_j] = −A_ji f_i,
e_i (ad e_j)^{−A_ji + 1} = 0, f_i (ad f_j)^{−A_ji + 1} = 0 if i ≠ j.
In this note, we describe some results about the Lie algebras L((A_ij)) when (A_ij) is an integral square matrix satisfying M1, M2, and M3 but is not necessarily a Cartan matrix. In particular, when the further condition of §3 is imposed on the matrix, we obtain a reasonable (but by no means complete) structure theory for L((A_ij)).
2. Preliminaries. In this note, Φ will always denote a field of characteristic zero. An integral square matrix satisfying M1, M2, and M3 will be called a generalized Cartan matrix, or g.c.m.
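A small sketch checking M1-M3 for an integral square matrix (is_gcm is my helper name, not Moody's notation):

```python
import numpy as np

def is_gcm(A):
    """Check Moody's conditions M1-M3 for a generalized Cartan matrix."""
    A = np.asarray(A)
    n = A.shape[0]
    if A.shape != (n, n) or not np.issubdtype(A.dtype, np.integer):
        return False
    if not all(A[i, i] == 2 for i in range(n)):          # M1: A_ii = 2
        return False
    off = ~np.eye(n, dtype=bool)
    if (A[off] > 0).any():                               # M2: A_ij <= 0 for i != j
        return False
    return bool(np.array_equal(A == 0, (A == 0).T))      # M3: A_ij = 0 iff A_ji = 0

# A true Cartan matrix (type A_2), and a g.c.m. that is not of finite type:
assert is_gcm(np.array([[2, -1], [-1, 2]]))
assert is_gcm(np.array([[2, -2], [-2, 2]]))              # affine A_1^(1)
```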
Chapter Four Determinants
Chapter Four: Determinants. In the first chapter of this book we considered linear systems, and we picked out the special case of systems with the same number of equations as unknowns, those of the form Tx = b where T is a square matrix. We noted a distinction between two classes of T's. While such systems may have a unique solution or no solutions or infinitely many solutions, if a particular T is associated with a unique solution in any system, such as the homogeneous system b = 0, then T is associated with a unique solution for every b. We call such a matrix of coefficients 'nonsingular'. The other kind of T, where every linear system for which it is the matrix of coefficients has either no solution or infinitely many solutions, we call 'singular'. Through the second and third chapters the value of this distinction has been a theme. For instance, we now know that nonsingularity of an n×n matrix T is equivalent to each of these:
- a system Tx = b has a solution, and that solution is unique;
- Gauss-Jordan reduction of T yields an identity matrix;
- the rows of T form a linearly independent set;
- the columns of T form a basis for R^n;
- any map that T represents is an isomorphism;
- an inverse matrix T⁻¹ exists.
So when we look at a particular square matrix, the question of whether it is nonsingular is one of the first things that we ask. This chapter develops a formula to determine this. (Since we will restrict the discussion to square matrices, in this chapter we will usually simply say 'matrix' in place of 'square matrix'.) More precisely, we will develop infinitely many formulas, one for 1×1 matrices, one for 2×2 matrices, etc. …
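The chapter's punchline can be previewed numerically: the determinant detects nonsingularity, and a nonsingular T gives a unique solution for every b. A sketch in NumPy (the tolerance is my choice; floating-point determinants are rarely exactly zero):

```python
import numpy as np

def is_nonsingular(T, tol=1e-12):
    """A square matrix is nonsingular exactly when its determinant is nonzero."""
    T = np.asarray(T, dtype=float)
    assert T.shape[0] == T.shape[1], "determinants are defined for square matrices"
    return abs(np.linalg.det(T)) > tol

T = np.array([[2.0, 1.0], [1.0, 1.0]])   # det = 1, nonsingular
S = np.array([[1.0, 2.0], [2.0, 4.0]])   # det = 0, singular (dependent rows)
assert is_nonsingular(T) and not is_nonsingular(S)
# For nonsingular T, Tx = b has a unique solution for every b.
x = np.linalg.solve(T, np.array([3.0, 2.0]))
assert np.allclose(T @ x, [3.0, 2.0])
```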
Hyperbolic Weyl Groups and the Four Normed Division Algebras
Hyperbolic Weyl Groups and the Four Normed Division Algebras. Alex Feingold (1), Axel Kleinschmidt (2), Hermann Nicolai (3). (1) Department of Mathematical Sciences, State University of New York, Binghamton, New York 13902-6000, U.S.A. (2) Physique Théorique et Mathématique, Université Libre de Bruxelles & International Solvay Institutes, Boulevard du Triomphe, ULB - CP 231, B-1050 Bruxelles, Belgium. (3) Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, Am Mühlenberg 1, D-14476 Potsdam, Germany.
Introduction. Presented at Illinois State University, July 7-11, 2008: Vertex Operator Algebras and Related Areas, a Conference in Honor of Geoffrey Mason's 60th Birthday. "The mathematical universe is inhabited not only by important species but also by interesting individuals" - C. L. Siegel.
Introduction (I). In 1983 Feingold and Frenkel studied the rank 3 hyperbolic Kac-Moody algebra F = A_1^{++}, whose Dynkin diagram has three nodes labeled −1, 0, 1. Its Weyl group is
W(F) = ⟨ w_1, w_2, w_3 : |w_I| = 2, |w_I w_J| = m_IJ ⟩,
with Cartan matrix
[A_IJ] = [ 2(α_I, α_J)/(α_J, α_J) ] =
[  2  −1   0 ]
[ −1   2  −2 ]
[  0  −2   2 ]
and Coxeter exponents
[m_IJ] =
[ 2  3  2 ]
[ 3  2  ∞ ]
[ 2  ∞  2 ]
Introduction (II). The simple roots α_{−1}, α_0, α_1 span the root lattice
Λ(F) = { α = Σ_{I=−1}^{1} n_I α_I : n_{−1}, n_0, n_1 ∈ Z }.
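The Coxeter exponents can be read off the Cartan matrix by the standard Kac-Moody rule relating m_IJ to the product A_IJ·A_JI (0 ↦ 2, 1 ↦ 3, 2 ↦ 4, 3 ↦ 6, and ≥ 4 ↦ ∞). A small sketch reproducing the table above; the function name and the diagonal convention m_II = 2 (matching |w_I| = 2 in the slides) are my choices:

```python
import numpy as np

# Standard dictionary from A_IJ * A_JI to the Coxeter exponent m_IJ;
# products >= 4 give an element of infinite order.
M_FROM_PRODUCT = {0: 2, 1: 3, 2: 4, 3: 6}

def coxeter_exponents(A):
    A = np.asarray(A)
    n = A.shape[0]
    m = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                m[i][j] = 2      # each generator w_I is an involution
            else:
                m[i][j] = M_FROM_PRODUCT.get(A[i, j] * A[j, i], float("inf"))
    return m

A = np.array([[2, -1, 0], [-1, 2, -2], [0, -2, 2]])   # Cartan matrix of F = A_1^++
print(coxeter_exponents(A))   # [[2, 3, 2], [3, 2, inf], [2, inf, 2]]
```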
Semisimple Q-Algebras in Algebraic Combinatorics
Semisimple Q-algebras in algebraic combinatorics. Allen Herman (Regina). Group Ring Day, ICTS Bangalore, October 21, 2019.
I.1 Coherent Algebras and Association Schemes. Definition 1 (D. Higman 1987). A coherent algebra (CA) is a subalgebra C_B of M_n(C) defined by a special basis B, which is a collection of non-overlapping 01-matrices that (1) is a closed set under the transpose, (2) sums to J, the n × n all-1's matrix, and (3) contains a subset Δ of diagonal matrices summing to I, the identity matrix. The standard basis B of a CA in M_n(C) is precisely the set of adjacency matrices of a coherent configuration (CC) of order n and rank r = |B|. B^T = B implies C_B is semisimple. When Δ = {I}, B is the set of adjacency matrices of an association scheme (AS).
I.2 Example: basic matrix of an AS. An AS or CC can be illustrated by its basic matrix. Here we give the basic matrix Σ_{i=0}^{d} i·b_i of an AS with standard basis B = {b_0, b_1, …, b_7}:
0 1 2 3 4 4 5 5 6 6 7 7
1 0 3 2 4 4 5 5 6 6 7 7
2 3 0 1 6 6 7 7 4 4 5 5
3 2 1 0 6 6 7 7 4 4 5 5
4 4 7 7 0 1 6 6 5 5 2 3
4 4 7 7 1 0 6 6 5 5 3 2
5 5 6 6 7 7 0 1 2 3 4 4
5 5 6 6 7 7 1 0 3 2 4 4
7 7 4 4 5 5 2 3 0 1 6 6
7 7 4 4 5 5 3 2 1 0 6 6
6 6 5 5 2 3 4 4 7 7 0 1
6 6 5 5 3 2 4 4 7 7 1 0
I.3 Example: basic matrix of a CC of order 10 with |Δ| = 2:
0 1 2 3 2 3 2 3 2 3
1 0 3 2 3 2 3 2 3 2
4 5 6 7 8 7 8 7 8 7
5 4 7 6 7 8 7 8 7 8
4 5 8 7 6 7 8 7 8 7
5 4 7 8 7 6 7 8 7 8
4 5 8 7 8 7 6 7 8 7
5 4 7 8 7 8 7 6 7 8
4 5 8 7 8 7 8 7 6 7
5 4 7 8 7 8 7 8 7 6
Here Δ = {b_0, b_6}, δ(b_0) = δ(b_6) = 1. Diagonal block valencies: δ(b_1) = 1, δ(b_7) = 4, δ(b_8) = 3. Off-diagonal block valencies: row valencies δ_r(b_2) = δ_r(b_3) = 4, δ_r(b_4) = δ_r(b_5) = 1; column valencies δ_c(b_2) = δ_c(b_3) = 1, δ_c(b_4) = δ_c(b_5) = 4.
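The basic matrix determines the standard basis: b_i is the 01-matrix with a 1 exactly where the basic matrix has entry i. A sketch recovering the b_i from the second example above and checking Higman's axioms (helper name mine):

```python
import numpy as np

def basis_from_basic_matrix(M):
    """Split a basic matrix sum_i i*b_i into its 01 adjacency matrices b_i."""
    M = np.asarray(M)
    return [(M == i).astype(int) for i in range(M.max() + 1)]

# The 10x10 coherent-configuration example above, rows as digit strings.
rows = ["0123232323", "1032323232", "4567878787", "5476787878",
        "4587678787", "5478767878", "4587876787", "5478787678",
        "4587878767", "5478787876"]
M = np.array([[int(c) for c in r] for r in rows])
B = basis_from_basic_matrix(M)

n = M.shape[0]
assert np.array_equal(sum(B), np.ones((n, n), dtype=int))       # basis sums to J
assert all(any(np.array_equal(b.T, c) for c in B) for b in B)   # closed under transpose
assert np.array_equal(B[0] + B[6], np.eye(n, dtype=int))        # Delta = {b_0, b_6} sums to I
```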
Completely Integrable Gradient Flows
Commun. Math. Phys. 147, 57-74 (1992). Communications in Mathematical Physics, Springer-Verlag 1992. Completely Integrable Gradient Flows. Anthony M. Bloch (1), Roger W. Brockett (2) and Tudor S. Ratiu (3). (1) Department of Mathematics, The Ohio State University, Columbus, OH 43210, USA. (2) Division of Applied Sciences, Harvard University, Cambridge, MA 02138, USA. (3) Department of Mathematics, University of California, Santa Cruz, CA 95064, USA. Received December 6, 1990; in revised form July 11, 1991.
Abstract. In this paper we exhibit the Toda lattice equations in a double bracket form which shows they are gradient flow equations (on their isospectral set) on an adjoint orbit of a compact Lie group. Representations for the flows are given and a convexity result associated with a momentum map is proved. Some general properties of the double bracket equations are demonstrated, including a discussion of their invariant subspaces, and their function as a Lie algebraic sorter.
0. Introduction. In this paper we present details of the proofs and applications of the results on the generalized Toda lattice equations announced in [7]. One of our key results is exhibiting the Toda equations in a double bracket form which shows directly that they are gradient flow equations (on their isospectral set) on an adjoint orbit of a compact Lie group. This system, which is gradient with respect to the normal metric on the orbit, is quite different from the representation of the Toda flow as a gradient system on R^n in Moser's fundamental paper [27]. In our representation the same set of equations is thus Hamiltonian and a gradient flow on the isospectral set.
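The "Lie algebraic sorter" remark can be illustrated numerically: the double bracket flow dL/dt = [L, [L, N]] with N a fixed diagonal matrix drives a symmetric L toward a diagonal matrix whose eigenvalues are arranged to match the ordering of N. A minimal sketch with forward Euler; the step size, step count, and helper name are my choices, not the paper's:

```python
import numpy as np

def double_bracket_flow(L, N, dt=1e-3, steps=100_000):
    """Integrate dL/dt = [L, [L, N]] by forward Euler, where [A, B] = AB - BA."""
    for _ in range(steps):
        LN = L @ N - N @ L            # [L, N]
        L = L + dt * (L @ LN - LN @ L)
    return L

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
L0 = (A + A.T) / 2                    # random symmetric starting point
N = np.diag(np.arange(1.0, 5.0))      # increasing diagonal fixes the target order
L_inf = double_bracket_flow(L0, N)

# The flow is isospectral and converges to a diagonal matrix whose entries
# are the eigenvalues of L0, sorted into increasing order (matching N).
print(np.round(L_inf, 4))
print(np.sort(np.linalg.eigvalsh(L0)))
```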
LIE GROUPS and ALGEBRAS NOTES Contents 1. Definitions 2
LIE GROUPS AND ALGEBRAS NOTES. Stanislav Atanasov. Contents:
1. Definitions (2): 1.1. Root systems, Weyl groups and Weyl chambers (3); 1.2. Cartan matrices and Dynkin diagrams (4); 1.3. Weights (5); 1.4. Lie group and Lie algebra correspondence (5).
2. Basic results about Lie algebras (7): 2.1. General (7); 2.2. Root system (7); 2.3. Classification of semisimple Lie algebras (8).
3. Highest weight modules (9): 3.1. Universal enveloping algebra (9); 3.2. Weights and maximal vectors (9).
4. Compact Lie groups (10): 4.1. Peter-Weyl theorem (10); 4.2. Maximal tori (11); 4.3. Symmetric spaces (11); 4.4. Compact Lie algebras (12); 4.5. Weyl's theorem (12).
5. Semisimple Lie groups (13): 5.1. Semisimple Lie algebras (13); 5.2. Parabolic subalgebras (14); 5.3. Semisimple Lie groups (14).
6. Reductive Lie groups (16): 6.1. Reductive Lie algebras (16); 6.2. Definition of reductive Lie group (16); 6.3. Decompositions (18); 6.4. The structure of M = Z_K(a_0) (18); 6.5. Parabolic subgroups (19).
7. Functional analysis on Lie groups (21): 7.1. Decomposition of the Haar measure (21); 7.2. Reductive groups and parabolic subgroups (21); 7.3. Weyl integration formula (22).
8. Linear algebraic groups and their representation theory (23): 8.1. Linear algebraic groups (23); 8.2. Reductive and semisimple groups (24); 8.3. Parabolic and Borel subgroups (25); 8.4. Decompositions (27).
Date: October 2018. These notes compile results from multiple sources, mostly [1, 2]. All mistakes are mine.
1. Definitions. Let g be a Lie algebra over an algebraically closed field F of characteristic 0.
Rotation Matrix - Wikipedia, the Free Encyclopedia Page 1 of 22
Rotation matrix - Wikipedia, the free encyclopedia Page 1 of 22 Rotation matrix From Wikipedia, the free encyclopedia In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example the matrix rotates points in the xy -Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv (see below for details). In two and three dimensions, rotation matrices are among the simplest algebraic descriptions of rotations, and are used extensively for computations in geometry, physics, and computer graphics. Though most applications involve rotations in two or three dimensions, rotation matrices can be defined for n-dimensional space. Rotation matrices are always square, with real entries. Algebraically, a rotation matrix in n-dimensions is a n × n special orthogonal matrix, i.e. an orthogonal matrix whose determinant is 1: . The set of all rotation matrices forms a group, known as the rotation group or the special orthogonal group. It is a subset of the orthogonal group, which includes reflections and consists of all orthogonal matrices with determinant 1 or -1, and of the special linear group, which includes all volume-preserving transformations and consists of matrices with determinant 1. Contents 1 Rotations in two dimensions 1.1 Non-standard orientation -
Symmetries, Fields and Particles. Examples 1
Michaelmas Term 2016, Mathematical Tripos Part III. Prof. N. Dorey. Symmetries, Fields and Particles. Examples 1.
1. O(n) consists of n × n real matrices M satisfying M^T M = I. Check that O(n) is a group. U(n) consists of n × n complex matrices U satisfying U†U = I. Check similarly that U(n) is a group. Verify that O(n) and SO(n) are the subgroups of real matrices in, respectively, U(n) and SU(n). By considering how U(n) matrices act on vectors in C^n, and identifying C^n with R^{2n}, show that U(n) is a subgroup of SO(2n).
2. Show that for matrices M ∈ O(n), the first column of M is an arbitrary unit vector, the second is a unit vector orthogonal to the first, …, the k-th column is a unit vector orthogonal to the span of the previous ones, etc. Deduce the dimension of O(n). By similar reasoning, determine the dimension of U(n). Show that any column of a unitary matrix U is not in the (complex) linear span of the remaining columns.
3. Consider the real 3 × 3 matrix
R(n, θ)_ij = cos θ δ_ij + (1 − cos θ) n_i n_j − sin θ ε_ijk n_k,
where n = (n_1, n_2, n_3) is a unit vector in R^3. Verify that n is an eigenvector of R(n, θ) with eigenvalue one. Now choose an orthonormal basis for R^3 with basis vectors {n, m, m̃} satisfying m · m = 1, m · n = 0, m̃ = n × m. By considering the action of R(n, θ) on these basis vectors, show that this matrix corresponds to a rotation through an angle θ about an axis parallel to n, and check that it is an element of SO(3).
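A numerical check of the formula in question 3 (a sketch; axis_angle_rotation is my name, and the ε_ijk n_k term is written out as an explicit 3 × 3 antisymmetric matrix):

```python
import numpy as np

def axis_angle_rotation(n, theta):
    """R_ij = cos(t) d_ij + (1 - cos(t)) n_i n_j - sin(t) eps_ijk n_k."""
    n = np.asarray(n, dtype=float)
    eps_n = np.array([[0.0,   n[2], -n[1]],   # the matrix (eps_ijk n_k)_ij
                      [-n[2], 0.0,   n[0]],
                      [n[1], -n[0],  0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n, n)
            - np.sin(theta) * eps_n)

n = np.array([0.0, 0.0, 1.0])
R = axis_angle_rotation(n, np.pi / 3)
assert np.allclose(R @ n, n)                  # n is fixed: eigenvalue one
assert np.allclose(R.T @ R, np.eye(3))        # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)      # determinant 1, so R is in SO(3)
```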
Alternating Sign Matrices, Extensions and Related Cones
Alternating sign matrices, extensions and related cones. Richard A. Brualdi and Geir Dahl. Advances in Applied Mathematics, May 2017. DOI: 10.1016/j.aam.2016.12.001.
Abstract. An alternating sign matrix, or ASM, is a (0, ±1)-matrix where the nonzero entries in each row and column alternate in sign, and where each row and column sum is 1. We study the convex cone generated by ASMs of order n, called the ASM cone, as well as several related cones and polytopes. Some decomposition results are shown, and we find a minimal Hilbert basis of the ASM cone. The notion of (±1)-doubly stochastic matrices and a generalization of ASMs are introduced and various properties are shown. For instance, we give a new short proof of the linear characterization of the ASM polytope, in fact for a more general polytope.
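A small sketch of the opening definition (is_asm is my helper, not from the paper): verify that the entries lie in {0, ±1}, that every row and column sums to 1, and that the nonzero entries alternate in sign along each row and column.

```python
import numpy as np

def is_asm(A):
    """Check the ASM conditions on an integer matrix."""
    A = np.asarray(A)
    if not np.isin(A, (-1, 0, 1)).all():
        return False
    if not (A.sum(axis=0) == 1).all() or not (A.sum(axis=1) == 1).all():
        return False
    def alternates(line):
        nz = line[line != 0]
        # first nonzero is +1 and consecutive nonzeros have opposite signs
        return nz[0] == 1 and (nz[:-1] * nz[1:] == -1).all()
    return all(alternates(r) for r in A) and all(alternates(c) for c in A.T)

# The smallest ASM that is not a permutation matrix:
A = np.array([[0,  1, 0],
              [1, -1, 1],
              [0,  1, 0]])
assert is_asm(A) and not is_asm(np.array([[1, 1], [0, 0]]))
```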
Contents 1 Root Systems
Marginalia about roots. Stefan Dawydiak, February 19, 2021. These notes are an attempt to maintain an overview collection of facts about and relationships between some situations in which root systems and root data appear. They also serve to track some common identifications and choices. The references include some helpful lecture notes with more examples. The author of these notes learned this material from courses taught by Zinovy Reichstein, Joel Kamnitzer, James Arthur, and Florian Herzig, as well as many student talks, and lecture notes by Ivan Loseu. These notes are simply collected marginalia for those references. Any errors introduced, especially of viewpoint, are the author's own. The author would be grateful for their communication to [email protected].
Contents:
1 Root systems (1): 1.1 Root space decomposition (2); 1.2 Roots, coroots, and reflections (3): 1.2.1 Abstract root systems (7); 1.2.2 Coroots, fundamental weights and Cartan matrices (7); 1.2.3 Roots vs weights (9); 1.2.4 Roots at the group level (9); 1.3 The Weyl group (10): 1.3.1 Weyl chambers (11); 1.3.2 The Weyl group as a subquotient for compact Lie groups (13); 1.3.3 The Weyl group as a subquotient for noncompact Lie groups (13).
2 Root data (16): 2.1 Root data (16); 2.2 The Langlands dual group (17); 2.3 The flag variety (18): 2.3.1 Bruhat decomposition revisited (18); 2.3.2 Schubert cells (19).
3 Adelic groups (20): 3.1 Weyl sets (20).
References (21).
1 Root systems. The following examples are taken mostly from [8], where they are stated without most of the calculations.
More Gaussian Elimination and Matrix Inversion
Week 7: More Gaussian Elimination and Matrix Inversion.
7.1 Opening Remarks. 7.1.1 Introduction (View at edX). 7.1.2 Outline:
7.1. Opening Remarks (289): 7.1.1. Introduction (289); 7.1.2. Outline (290); 7.1.3. What You Will Learn (291).
7.2. When Gaussian Elimination Breaks Down (292): 7.2.1. When Gaussian Elimination Works (292); 7.2.2. The Problem (297); 7.2.3. Permutations (299); 7.2.4. Gaussian Elimination with Row Swapping (LU Factorization with Partial Pivoting) (304); 7.2.5. When Gaussian Elimination Fails Altogether (311).
7.3. The Inverse Matrix (312): 7.3.1. Inverse Functions in 1D (312); 7.3.2. Back to Linear Transformations (312); 7.3.3. Simple Examples (314); 7.3.4. More Advanced (but Still Simple) Examples (320); 7.3.5. Properties (324).
7.4. Enrichment (325): 7.4.1. Library Routines for LU with Partial Pivoting (325).
7.5. Wrap Up (327): 7.5.1. Homework (327); 7.5.2. Summary (327). …
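The topic of section 7.2.4 in brief: a compact sketch of LU factorization with partial pivoting (my helper, not the notes' routine; section 7.4.1's point is that in practice one calls a library routine such as scipy.linalg.lu):

```python
import numpy as np

def lu_partial_pivoting(A):
    """Factor P A = L U, swapping rows to put the largest pivot on the diagonal."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U, L, P = A.copy(), np.eye(n), np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))   # row of the largest pivot candidate
        for M in (U, P):
            M[[k, p]] = M[[p, k]]             # swap rows k and p
        L[[k, p], :k] = L[[p, k], :k]         # swap previously computed multipliers
        if U[k, k] == 0.0:
            continue                          # column already eliminated
        m = U[k + 1:, k] / U[k, k]            # multipliers below the pivot
        L[k + 1:, k] = m
        U[k + 1:, :] -= np.outer(m, U[k, :])
    return P, L, U

A = np.array([[0.0, 1.0], [1.0, 2.0]])        # plain elimination fails: A[0,0] = 0
P, L, U = lu_partial_pivoting(A)
assert np.allclose(P @ A, L @ U)
```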