The Eigenvalue Distribution of a Random Unipotent Matrix in Its Representation on Lines


Journal of Algebra 228, 497–511 (2000), doi:10.1006/jabr.1999.8278, available online at http://www.idealibrary.com

Jason Fulman

Department of Mathematics, Stanford University, Building 380, MC 2125, Stanford, California 94305
E-mail: [email protected]

Communicated by Walter Feit

Received July 13, 1999

The eigenvalue distribution of a uniformly chosen random finite unipotent matrix in its permutation action on lines is studied. We obtain bounds for the mean number of eigenvalues lying in a fixed arc of the unit circle and offer an approach to other asymptotics. For the case of all unipotent matrices, the proof gives a probabilistic interpretation to identities of Macdonald from symmetric function theory. For the case of upper triangular matrices over a finite field, connections between symmetric function theory and a probabilistic growth algorithm of Borodin and Kirillov emerge. © 2000 Academic Press

Key Words: random matrix; symmetric functions; Hall–Littlewood polynomial.

1. INTRODUCTION

The subject of eigenvalues of random matrices is very rich. The eigenvalue spacings of a complex unitary matrix chosen from Haar measure relate to the spacings between the zeros of the Riemann zeta function [O, RS1, RS2]. For further recent work on random complex unitary matrices, see [DiS, R, So, W]. The references [Dy] and [Me] contain much of interest concerning the eigenvalues of a random matrix chosen from Dyson's orthogonal, unitary, and symplectic circular ensembles, for instance, connections with the statistics of nuclear energy levels. The papers [AD, BaiDeJ, Ok] and the references contained in them give exciting recent results relating eigenvalue distributions of matrices to statistics of random permutations such as the longest increasing subsequence.

Little work seems to have been done on the eigenvalue statistics of elements chosen from finite groups. One recent step is Chapter 5 of Wieand's thesis [W]. She studies the permutation eigenvalues of a random element of the symmetric group in its representation on the set $\{1, \ldots, n\}$. This note gives two natural $q$-analogs of Wieand's work. For the first $q$-analog, let $\alpha \in GL(n, q)$ be a random unipotent matrix. Letting $V$ be the vector space on which $\alpha$ acts, we consider the eigenvalues of $\alpha$ in the permutation representation of $GL(n, q)$ on the lines of $V$. Let $X^\theta(\alpha)$ be the number of eigenvalues of $\alpha$ lying in a fixed arc $(1, e^{2\pi i \theta})$, $0 < \theta < 1$, of the unit circle. Bounds are obtained for the mean of $X^\theta$ (we suspect that as $n \to \infty$ with $q$ fixed, a normal limit theorem holds). A second $q$-analog which we analyze is the case when $\alpha$ is a randomly chosen unipotent upper triangular matrix over a finite field. A third interesting $q$-analog would be taking $\alpha$ uniformly chosen in $GL(n, q)$; however, this seems intractable. It would also be of interest to extend Wieand's work to more general representations of the symmetric group, using formulas of Stembridge [Ste].

The main method of this paper is to interpret identities of symmetric function theory in a probabilistic setting. Section 2 gives background and results in this direction. This interaction appears fruitful, and it is shown for instance that a probabilistic algorithm of Borodin and Kirillov describing the Jordan form of a random unipotent upper triangular matrix [Bo, Ki1] follows from the combinatorics of symmetric functions. This ties in with work on analogous algorithms for the unipotent conjugacy classes of finite classical groups [F1]. The applications to the eigenvalue problems described above appear in Section 3. We remark that quite different computations in symmetric function theory play the central role in the work of Diaconis and Shahshahani [DiS] on the eigenvalues of random complex classical matrices.
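To make the object of study concrete: $\alpha$ permutes the finite set of lines of $V$, so its eigenvalues in the permutation representation are determined by the cycle type, a $c$-cycle contributing all $c$-th roots of unity. The following Python sketch is our illustration, not part of the paper; the function names are ours, and the half-open arc convention $0 < j/c \leq \theta$ is an assumption. It builds the permutation induced on the lines of $\mathbb{F}_p^n$, extracts its cycle type, and counts eigenvalues in the arc.

```python
import itertools

def lines(n, p):
    """All lines (1-dimensional subspaces) of F_p^n, each represented by
    the unique spanning vector whose first nonzero coordinate is 1."""
    out = []
    for v in itertools.product(range(p), repeat=n):
        nz = next((i for i, x in enumerate(v) if x), None)
        if nz is not None and v[nz] == 1:
            out.append(v)
    return out

def normalize(v, p):
    """Scale v so that its first nonzero coordinate is 1 (mod p)."""
    nz = next(i for i, x in enumerate(v) if x)
    inv = pow(v[nz], -1, p)  # modular inverse (Python 3.8+)
    return tuple((x * inv) % p for x in v)

def act(mat, v, p):
    """Matrix-vector product over F_p."""
    return tuple(sum(row[j] * v[j] for j in range(len(v))) % p for row in mat)

def cycle_type(mat, p):
    """Cycle lengths of the permutation that mat induces on lines."""
    seen, cycles = set(), []
    for start in lines(len(mat), p):
        if start in seen:
            continue
        length, v = 0, start
        while v not in seen:
            seen.add(v)
            v = normalize(act(mat, v, p), p)
            length += 1
        cycles.append(length)
    return sorted(cycles, reverse=True)

def X_theta(mat, p, theta):
    """Eigenvalues of the permutation representation lying in the arc
    (1, e^{2 pi i theta}]: a c-cycle contributes e^{2 pi i j / c} for
    j = 0, ..., c - 1, and we count those with 0 < j/c <= theta."""
    return sum(sum(1 for j in range(1, c) if j / c <= theta)
               for c in cycle_type(mat, p))

# A regular unipotent matrix (one Jordan block) acting on the 7 lines of F_2^3:
J = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
print(cycle_type(J, 2))    # [4, 2, 1]: a 4-cycle, a 2-cycle, and a fixed line
print(X_theta(J, 2, 0.5))  # 3: the eigenvalues i, -1, -1 land in the arc
```

Already for a single Jordan block the orbits on lines mix several cycle lengths, which is what makes the distribution of $X^\theta$ nontrivial.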
2. SYMMETRIC FUNCTIONS

To begin we describe some notation, as on pages 2–5 of [Ma]. Let $\lambda$ be a partition of a non-negative integer $n = \sum_i \lambda_i$ into non-negative integral parts $\lambda_1 \geq \lambda_2 \geq \cdots \geq 0$. The notation $|\lambda| = n$ will mean that $\lambda$ is a partition of $n$. Let $m_i(\lambda)$ be the number of parts of $\lambda$ of size $i$, and let $\lambda'$ be the partition dual to $\lambda$ in the sense that $\lambda'_i = m_i(\lambda) + m_{i+1}(\lambda) + \cdots$. Let $n(\lambda)$ be the quantity $\sum_{i \geq 1} (i - 1)\lambda_i$. It is also useful to define the diagram associated to $\lambda$ as the set of points $(i, j) \in \mathbb{Z}^2$ such that $1 \leq j \leq \lambda_i$. We use the convention that the row index $i$ increases as one goes downward and the column index $j$ increases as one goes across. So the diagram of the partition $(5, 4, 4, 1)$ is

. . . . .
. . . .
. . . .
.

Let $G_\lambda$ be an abelian $p$-group isomorphic to $\bigoplus_i \mathrm{Cyc}(p^{\lambda_i})$. We write $G = \lambda$ if $G$ is an abelian $p$-group isomorphic to $G_\lambda$. Finally, let

$$\left(\frac{1}{p}\right)_r = \left(1 - \frac{1}{p}\right)\left(1 - \frac{1}{p^2}\right) \cdots \left(1 - \frac{1}{p^r}\right).$$

The rest of the paper will treat the case $GL(n, p)$ with $p$ prime as opposed to $GL(n, q)$. This reduction is made only to make the paper more accessible in places, allowing us to use the language of abelian $p$-groups rather than modules over power series rings. From Chapter 2 of Macdonald [Ma] it is clear that everything works for prime powers.

2.1. Unipotent Elements of $GL(n, p)$

It is well known that the unipotent conjugacy classes of $GL(n, p)$ are parametrized by partitions $\lambda$ of $n$. A representative of the class $\lambda$ is given by

$$\begin{pmatrix} M_{\lambda_1} & 0 & 0 & 0 \\ 0 & M_{\lambda_2} & 0 & 0 \\ 0 & 0 & M_{\lambda_3} & \cdots \\ 0 & 0 & 0 & \cdots \end{pmatrix},$$

where $M_i$ is the $i \times i$ matrix of the form

$$\begin{pmatrix} 1 & 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 1 & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 & \cdots & \cdots & 0 & 1 & 1 \\ 0 & 0 & 0 & \cdots & 0 & 1 \end{pmatrix}.$$

Lemmas 1–3 recall elementary facts about unipotent elements in $GL(n, p)$.

Lemma 1 [Ma, p. 181; SS]. The number of unipotent elements in $GL(n, p)$ with conjugacy class type $\lambda$ is

$$\frac{|GL(n, p)|}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}}.$$

Chapter 3 of [Ma] defines Hall–Littlewood symmetric functions $P_\lambda(x_1, x_2, \ldots; t)$, which will be used extensively. There is an explicit formula for the Hall–Littlewood polynomials. Let the permutation $w$ act on the $x$-variables by sending $x_i$ to $x_{w(i)}$. There is also a coordinate-wise action of $w$ on $\lambda = (\lambda_1, \ldots, \lambda_n)$, and $S_n^\lambda$ is defined as the subgroup of $S_n$ stabilizing $\lambda$ in this action. For a partition $\lambda = (\lambda_1, \ldots, \lambda_n)$ of length $\leq n$, two formulas for the Hall–Littlewood polynomial restricted to $n$ variables are

$$P_\lambda(x_1, \ldots, x_n; t) = \left[ \prod_{i \geq 0} \prod_{r=1}^{m_i(\lambda)} \frac{1 - t^r}{1 - t} \right]^{-1} \sum_{w \in S_n} w\!\left( x_1^{\lambda_1} \cdots x_n^{\lambda_n} \prod_{i < j} \frac{x_i - t x_j}{x_i - x_j} \right)$$

$$= \sum_{w \in S_n / S_n^\lambda} w\!\left( x_1^{\lambda_1} \cdots x_n^{\lambda_n} \prod_{\lambda_i > \lambda_j} \frac{x_i - t x_j}{x_i - x_j} \right).$$

Lemma 2. The probability that a unipotent element of $GL(n, p)$ has conjugacy class of type $\lambda$ is equal to either of

1. $\displaystyle \frac{p^n \left(\frac{1}{p}\right)_n}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}}$

2. $\displaystyle \frac{p^n \left(\frac{1}{p}\right)_n}{p^{n(\lambda)}} \, P_\lambda\!\left(\frac{1}{p}, \frac{1}{p^2}, \frac{1}{p^3}, \ldots; \frac{1}{p}\right)$.

Proof. The first statement follows from Lemma 1 and Steinberg's theorem that $GL(n, p)$ has $p^{n(n-1)}$ unipotent elements. The second statement follows from the first and from elementary manipulations applied to Macdonald's principal specialization formula (page 337 of [Ma]). Full details appear in [F2].
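Formula 1 of Lemma 2 is easy to evaluate exactly. The sketch below (our illustration, with hypothetical function names) computes the measure in rational arithmetic and checks, per Lemma 3 below, that it sums to 1 over the partitions of $n$.

```python
from collections import Counter
from fractions import Fraction

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def conjugate(lam):
    """Dual partition: lam'_j = m_j + m_{j+1} + ... = #{i : lam_i >= j}."""
    width = lam[0] if lam else 0
    return tuple(sum(1 for part in lam if part >= j) for j in range(1, width + 1))

def poch(p, r):
    """(1/p)_r = (1 - 1/p)(1 - 1/p^2) ... (1 - 1/p^r), computed exactly."""
    out = Fraction(1)
    for i in range(1, r + 1):
        out *= 1 - Fraction(1, p**i)
    return out

def class_probability(lam, n, p):
    """Formula 1 of Lemma 2:
    p^n (1/p)_n / ( p^{sum_i (lam'_i)^2} * prod_i (1/p)_{m_i(lam)} )."""
    denom = Fraction(p) ** sum(d * d for d in conjugate(lam))
    for mult in Counter(lam).values():  # the multiplicities m_i(lam)
        denom *= poch(p, mult)
    return Fraction(p) ** n * poch(p, n) / denom

# Lemma 3 is exactly the statement that this is a probability measure:
n, p = 6, 2
assert sum(class_probability(lam, n, p) for lam in partitions(n)) == 1
print(class_probability((n,), n, p))  # probability of the regular class (n)
```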
One consequence of Lemma 2 is that in the $p \to \infty$ limit, all mass is placed on the partition $\lambda = (n)$. Thus the asymptotics in this paper will focus on the more interesting case of fixed $p$ and the $n \to \infty$ limit.

Lemma 3.

$$\sum_{\lambda \vdash n} \frac{1}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}} = \frac{1}{p^n \left(\frac{1}{p}\right)_n}.$$

Proof. Immediate from Lemma 2.

Lemmas 4 and 5 relate to the theory of Hall polynomials and Hall–Littlewood symmetric functions [Ma]. Lemma 4, for instance, is the duality property of Hall polynomials.

Lemma 4 [Ma, p. 181]. For all partitions $\lambda, \mu, \nu$,

$$\left| \{ G_1 \subseteq G_\lambda : G_\lambda / G_1 = \mu,\ G_1 = \nu \} \right| = \left| \{ G_1 \subseteq G_\lambda : G_\lambda / G_1 = \nu,\ G_1 = \mu \} \right|.$$

Lemma 5. Let $G_\lambda$ denote an abelian $p$-group of type $\lambda$ and $G_1$ a subgroup. Then for all types $\mu$,

$$\sum_{\lambda \vdash n} \frac{\left| \{ G_1 \subseteq G_\lambda : G_1 = \mu \} \right|}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}} = \frac{1}{p^{\sum_i (\mu'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\mu)}} \cdot \frac{1}{p^{n - |\mu|} \left(\frac{1}{p}\right)_{n - |\mu|}}.$$

Proof. Macdonald (page 220 of [Ma]), using Hall–Littlewood symmetric functions, establishes for any partitions $\mu, \nu$ the equation

$$\sum_{\lambda : |\lambda| = |\mu| + |\nu|} \frac{\left| \{ G_1 \subseteq G_\lambda : G_\lambda / G_1 = \mu,\ G_1 = \nu \} \right|}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}} = \frac{1}{p^{\sum_i (\mu'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\mu)}} \cdot \frac{1}{p^{\sum_i (\nu'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\nu)}}.$$

Fixing $\mu$, summing the left-hand side over all $\nu$ of size $n - |\mu|$, and applying Lemma 4 yields

$$\sum_\lambda \sum_\nu \frac{\left| \{ G_1 \subseteq G_\lambda : G_\lambda / G_1 = \mu,\ G_1 = \nu \} \right|}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}} = \sum_\lambda \sum_\nu \frac{\left| \{ G_1 \subseteq G_\lambda : G_\lambda / G_1 = \nu,\ G_1 = \mu \} \right|}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}} = \sum_\lambda \frac{\left| \{ G_1 \subseteq G_\lambda : G_1 = \mu \} \right|}{p^{\sum_i (\lambda'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\lambda)}}.$$

Fixing $\mu$, summing the right-hand side over all $\nu$ of size $n - |\mu|$, and applying Lemma 3 gives

$$\frac{1}{p^{\sum_i (\mu'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\mu)}} \sum_{\nu \vdash n - |\mu|} \frac{1}{p^{\sum_i (\nu'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\nu)}} = \frac{1}{p^{\sum_i (\mu'_i)^2} \prod_i \left(\frac{1}{p}\right)_{m_i(\mu)}} \cdot \frac{1}{p^{n - |\mu|} \left(\frac{1}{p}\right)_{n - |\mu|}},$$

proving the lemma.
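As an end-to-end sanity check (our illustration, not part of the paper), one can sample unipotent matrices uniformly by rejection, using the fact that $\alpha$ is unipotent exactly when $(\alpha - I)^n = 0$, read off the Jordan type from ranks of powers of $\alpha - I$, and compare empirical frequencies with Lemma 2. For $n = 3$, $p = 2$, formula 1 of Lemma 2 evaluates to $21/32$, $21/64$, and $1/64$ for the classes $(3)$, $(2,1)$, and $(1,1,1)$.

```python
import random
from collections import Counter

def mat_mult(A, B, p):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def mat_pow(A, e, p):
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(e):
        R = mat_mult(R, A, p)
    return R

def rank_mod_p(M, p):
    """Rank over F_p by Gaussian elimination."""
    M = [row[:] for row in M]
    n, rank = len(M), 0
    for col in range(n):
        piv = next((r for r in range(rank, n) if M[r][col]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], -1, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(n):
            if r != rank and M[r][col]:
                c = M[r][col]
                M[r] = [(a - c * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def jordan_type(A, p):
    """Jordan type of unipotent A from ranks of powers of N = A - I:
    the dual partition satisfies lam'_k = rank(N^{k-1}) - rank(N^k)."""
    n = len(A)
    N = [[(A[i][j] - (i == j)) % p for j in range(n)] for i in range(n)]
    ranks = [rank_mod_p(mat_pow(N, k, p), p) for k in range(n + 1)]
    dual = [ranks[k - 1] - ranks[k] for k in range(1, n + 1)]
    return tuple(x for x in
                 (sum(1 for d in dual if d >= j) for j in range(1, n + 1)) if x)

def random_unipotent(n, p):
    """Rejection sampling: a uniform matrix is unipotent iff (A - I)^n = 0."""
    while True:
        A = [[random.randrange(p) for _ in range(n)] for _ in range(n)]
        N = [[(A[i][j] - (i == j)) % p for j in range(n)] for i in range(n)]
        if all(x == 0 for row in mat_pow(N, n, p) for x in row):
            return A

n, p, trials = 3, 2, 20000
freq = Counter(jordan_type(random_unipotent(n, p), p) for _ in range(trials))
for lam in sorted(freq, reverse=True):
    print(lam, freq[lam] / trials)
# Lemma 2 predicts 21/32, 21/64, 1/64 for (3,), (2, 1), (1, 1, 1).
```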