Topics in Random Matrix Theory, by Terence Tao


Topics in random matrix theory

Terence Tao
Department of Mathematics, UCLA, Los Angeles, CA 90095
E-mail address: [email protected]

To Garth Gaudry, who set me on the road;
To my family, for their constant support;
And to the readers of my blog, for their feedback and contributions.

Contents

Preface
  Acknowledgments
Chapter 1. Preparatory material
  §1.1. A review of probability theory
  §1.2. Stirling's formula
  §1.3. Eigenvalues and sums of Hermitian matrices
Chapter 2. Random matrices
  §2.1. Concentration of measure
  §2.2. The central limit theorem
  §2.3. The operator norm of random matrices
  §2.4. The semicircular law
  §2.5. Free probability
  §2.6. Gaussian ensembles
  §2.7. The least singular value
  §2.8. The circular law
Chapter 3. Related articles
  §3.1. Brownian motion and Dyson Brownian motion
  §3.2. The Golden-Thompson inequality
  §3.3. The Dyson and Airy kernels of GUE via semiclassical analysis
  §3.4. The mesoscopic structure of GUE eigenvalues
Bibliography
Index

Preface

In the spring of 2010, I taught a topics graduate course on random matrix theory, the lecture notes of which then formed the basis for this text. This course was inspired by recent developments in the subject, particularly with regard to the rigorous demonstration of universal laws for eigenvalue spacing distributions of Wigner matrices (see the recent survey [Gu2009b]). This course does not directly discuss these laws, but instead focuses on more foundational topics in random matrix theory upon which the most recent work has been based.
For instance, the first part of the course is devoted to basic probabilistic tools such as concentration of measure and the central limit theorem, which are then used to establish basic results in random matrix theory, such as the Wigner semicircle law on the bulk distribution of eigenvalues of a Wigner random matrix, or the circular law on the distribution of eigenvalues of an iid matrix. Other fundamental methods, such as free probability, the theory of determinantal processes, and the method of resolvents, are also covered in the course.

This text begins in Chapter 1 with a review of the aspects of probability theory and linear algebra needed for the topics of discussion, but assumes some existing familiarity with both topics, as well as a first-year graduate-level understanding of measure theory (as covered for instance in my books [Ta2011, Ta2010]). If this text is used to give a graduate course, then Chapter 1 can largely be assigned as reading material (or reviewed as necessary), with the lectures then beginning with Section 2.1.

The core of the book is Chapter 2. While the focus of this chapter is ostensibly on random matrices, the first two sections of this chapter focus more on random scalar variables, in particular discussing extensively the concentration of measure phenomenon and the central limit theorem in this setting. These facts will be used repeatedly when we then turn our attention to random matrices, and also many of the proof techniques used in the scalar setting (such as the moment method) can be adapted to the matrix context. Several of the key results in this chapter are developed through the exercises, and the book is designed for a student who is willing to work through these exercises as an integral part of understanding the topics covered here.
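As a quick numerical illustration of the semicircle law mentioned above (this sketch is not part of the original text, and it assumes NumPy is available): the eigenvalues of a large symmetric matrix with independent Gaussian entries, once rescaled by 1/√n, spread out over the interval [-2, 2] with the semicircle density.

```python
import numpy as np

def wigner_eigenvalues(n, seed=0):
    """Eigenvalues of a rescaled n x n symmetric Gaussian (GOE-type) matrix.

    A has i.i.d. standard Gaussian entries; M = (A + A^T)/sqrt(2) is
    symmetric with unit-variance off-diagonal entries.  The semicircle
    law says the spectrum of M/sqrt(n) fills out the density
    (1/2pi) sqrt(4 - x^2) on [-2, 2] as n grows.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    m = (a + a.T) / np.sqrt(2)
    return np.linalg.eigvalsh(m / np.sqrt(n))

ev = wigner_eigenvalues(1000)
# The semicircle distribution has mean 0, second moment 1, and support
# [-2, 2]; the empirical spectrum should already be close at n = 1000.
print(float(np.mean(ev)), float(np.mean(ev**2)), float(np.max(np.abs(ev))))
```

Plotting a histogram of `ev` against the semicircle density makes the convergence visually striking even at moderate n.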
The material in Chapter 3 is related to the main topics of this text, but is optional reading (although the material on Dyson Brownian motion from Section 3.1 is referenced several times in the main text).

Acknowledgments

I am greatly indebted to my students of the course on which this text was based, as well as many further commenters on my blog, including Ahmet Arivan, Joshua Batson, Alex Bloemendal, Andres Caicedo, Emmanuel Candès, Jérôme Chauvet, Brian Davies, Ben Golub, Stephen Heilman, John Jiang, Li Jing, Sungjin Kim, Allen Knutson, Greg Kuperberg, Choongbum Lee, George Lowther, Rafe Mazzeo, Mark Meckes, William Meyerson, Samuel Monnier, Andreas Naive, Giovanni Peccati, Leonid Petrov, Anand Rajagopalan, James Smith, Mads Sørensen, David Speyer, Luca Trevisan, Qiaochu Yuan, and several anonymous contributors, for comments and corrections. These comments, as well as the original lecture notes for this course, can be viewed online at

terrytao.wordpress.com/category/teaching/254a-random-matrices

The author is supported by a grant from the MacArthur Foundation, by NSF grant DMS-0649473, and by the NSF Waterman award.

Chapter 1. Preparatory material

1.1. A review of probability theory

Random matrix theory is the study of matrices whose entries are random variables (or equivalently, the study of random variables which take values in spaces of matrices). As such, probability theory is an obvious prerequisite for this subject; accordingly, we will begin by quickly reviewing some basic aspects of probability theory that we will need in the sequel. We will certainly not attempt to cover all aspects of probability theory in this review. Aside from the utter foundations, we will be focusing primarily on those probabilistic concepts and operations that are useful for bounding the distribution of random variables, and on ensuring convergence of such variables as one sends a parameter n off to infinity.
We will assume familiarity with the foundations of measure theory, which can be found in any textbook (including my own text [Ta2011]). This is also not intended to be a first introduction to probability theory, but is instead a revisiting of these topics from a graduate-level perspective (and in particular, after one has understood the foundations of measure theory). Indeed, it will be almost impossible to follow this text without already having a firm grasp of undergraduate probability theory.

1.1.1. Foundations. At a purely formal level, one could call probability theory the study of measure spaces with total measure one, but that would be like calling number theory the study of strings of digits which terminate. At a practical level, the opposite is true: just as number theorists study concepts (e.g. primality) that have the same meaning in every numeral system that models the natural numbers, we shall see that probability theorists study concepts (e.g. independence) that have the same meaning in every measure space that models a family of events or random variables. And indeed, just as the natural numbers can be defined abstractly without reference to any numeral system (e.g. by the Peano axioms), core concepts of probability theory, such as random variables, can also be defined abstractly, without explicit mention of a measure space; we will return to this point when we discuss free probability in Section 2.5.

For now, though, we shall stick to the standard measure-theoretic approach to probability theory. In this approach, we assume the presence of an ambient sample space Ω, which intuitively is supposed to describe all the possible outcomes of all the sources of randomness that one is studying.
Mathematically, this sample space is a probability space Ω = (Ω, B, P): a set Ω, together with a σ-algebra B of subsets of Ω (the elements of which we will identify with the probabilistic concept of an event), and a probability measure P on the space of events, i.e. an assignment E ↦ P(E) of a real number in [0, 1] to every event E (known as the probability of that event), such that the whole space Ω has probability 1, and such that P is countably additive.

Elements of the sample space Ω will be denoted ω. However, for reasons that will be explained shortly, we will try to avoid actually referring to such elements unless absolutely required to.

If we were studying just a single random process, e.g. rolling a single die, then one could choose a very simple sample space; in this case, one could choose the finite set {1, ..., 6}, with the discrete σ-algebra 2^{1,...,6} := {A : A ⊂ {1, ..., 6}} and the uniform probability measure. But if one later wanted to also study additional random processes (e.g. supposing one later wanted to roll a second die, and then add the two resulting rolls), one would have to change the sample space (e.g. to change it now to the product space {1, ..., 6} × {1, ..., 6}). If one was particularly well organised, one could in principle work out in advance all of the random variables one would ever want or need, and then specify the sample space accordingly, before doing any actual probability theory. In practice, though, it is far more convenient to add new sources of randomness on the fly, if and when they are needed, and extend the sample space as necessary. This point is often glossed over in introductory probability texts, so let us spend a little time on it.

We say that one probability space (Ω′, B′, P′) extends¹ another (Ω, B, P) if there is a surjective map π : Ω′ → Ω which is measurable (i.e. π⁻¹(E) ∈ B′ for every E ∈ B) and probability preserving (i.e. P′(π⁻¹(E)) = P(E) for every E ∈ B).

¹Strictly speaking, it is the pair ((Ω′, B′, P′), π) which is the extension of (Ω, B, P), not just the space (Ω′, B′, P′), but let us abuse notation slightly here.
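To make the definition of an extension concrete, here is a small sketch (in Python; not part of the original text) checking that the two-dice product space extends the single-die space via the first-coordinate projection. With the discrete σ-algebra, measurability is automatic, so the substantive condition is that the projection is surjective and probability preserving on all 2^6 events of the base space.

```python
from fractions import Fraction
from itertools import chain, combinations

# Base space: a single die roll, with the uniform measure.
omega = [1, 2, 3, 4, 5, 6]
p = {w: Fraction(1, 6) for w in omega}

# Extended space: two die rolls, with the uniform product measure.
omega2 = [(a, b) for a in omega for b in omega]
p2 = {w: Fraction(1, 36) for w in omega2}

def pi(w):
    """The projection pi(a, b) = a onto the first roll."""
    return w[0]

def measure(m, event):
    """Probability of an event (a collection of outcomes)."""
    return sum(m[w] for w in event)

def events(space):
    """All subsets of a finite space (the discrete sigma-algebra)."""
    return chain.from_iterable(
        combinations(space, r) for r in range(len(space) + 1))

# pi is probability preserving: P'(pi^{-1}(E)) = P(E) for every event E.
ok = all(
    measure(p2, [w for w in omega2 if pi(w) in set(e)]) == measure(p, e)
    for e in events(omega)
)
print(ok)  # True
```

Any computation involving only the first roll gives the same answer in either space, which is exactly why one can safely enlarge the sample space "on the fly" when a second die is needed.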