Lectures on Random Matrix Theory
Govind Menon and Thomas Trogdon

March 12, 2018

Contents

1 Overview
  1.1 What is a random matrix?
  1.2 The main limit theorems
    1.2.1 The semicircle law
    1.2.2 Fluctuations in the bulk: the sine process
    1.2.3 Fluctuations at the edge: the Airy point process
    1.2.4 Fredholm determinants, Painlevé equations, and integrable systems
    1.2.5 Universality
  1.3 Connections to other areas of mathematics
    1.3.1 Number theory
    1.3.2 Combinatorics and enumerative geometry
    1.3.3 Random permutations
    1.3.4 Spectral and inverse spectral theory of operators

2 Integration on spaces of matrices
  2.1 Integration on O(n) and Symm(n)
  2.2 Weyl's formula on Symm(n)
  2.3 Diagonalization as a change of coordinates
  2.4 Independence and invariance implies Gaussian
  2.5 Integration on Her(n) and U(n)
  2.6 Integration on Quart(n) and USp(n)

3 Jacobi matrices and tridiagonal ensembles
  3.1 Jacobi ensembles
  3.2 Householder tridiagonalization on Symm(n)
  3.3 Tridiagonalization on Her(n) and Quart(n)
  3.4 Inverse spectral theory for Jacobi matrices
  3.5 Jacobians for tridiagonal ensembles
  3.6 Notes

4 Determinantal formulas
  4.1 Probabilities as determinants
  4.2 The m-point correlation function
  4.3 Determinants as generating functions
  4.4 Scaling limits of independent points
  4.5 Scaling limits I: the semicircle law
  4.6 Scaling limits II: the sine kernel
  4.7 Scaling limits III: the Airy kernel
  4.8 The eigenvalues and condition number of GUE
  4.9 Notes on universality and generalizations
    4.9.1 Limit theorems for β = 1, 4
    4.9.2 Universality theorems
  4.10 Notes

5 The equilibrium measure
  5.1 The log-gas
  5.2 Energy minimization for the log-gas
    5.2.1 Case 1: bounded support
    5.2.2 Weak lower semicontinuity
    5.2.3 Strict convexity
    5.2.4 Case 2: Measures on the line
  5.3 Fekete points
  5.4 Exercises
  5.5 Notes

6 Other random matrix ensembles
  6.1 The Ginibre ensembles
    6.1.1 Schur decomposition of GinC(n)
    6.1.2 QR decomposition of GinC(m, n)
    6.1.3 Eigenvalues and eigenvectors of GinR(n)
    6.1.4 QR decomposition of GinR(m, n)
  6.2 SVD and the Laguerre (Wishart) ensembles
    6.2.1 The Cholesky decomposition
    6.2.2 Bidiagonalization of Ginibre
    6.2.3 Limit theorems

7 Sampling random matrices
  7.1 Sampling determinantal point processes
  7.2 Sampling unitary and orthogonal ensembles
  7.3 Brownian bridges and non-intersecting Brownian paths

8 Additional topics
  8.1 Joint distributions at the edge
  8.2 Estimates of U(n)
  8.3 Iterations of the power method

A The Airy function
  A.1 Integral representation
  A.2 Differential equation
  A.3 Asymptotics

B Hermite polynomials
  B.1 Basic formulas
  B.2 Hermite wave functions
  B.3 Small x asymptotics
  B.4 Steepest descent for integrals
  B.5 Plancherel–Rotach asymptotics
    B.5.1 Uniform bounds

C Fredholm determinants
  C.1 Definitions
  C.2 Convergence
    C.2.1 Change of variables and kernel extension
  C.3 Computing Fredholm determinants
  C.4 Separable kernels

D Notation
  D.1 Indices
  D.2 Fields
  D.3 Matrices
  D.4 Lie groups
  D.5 Banach spaces

Chapter 1: Overview

1.1 What is a random matrix?

There are two distinct points of view that one may adopt. On one hand, our intuitive ideas of randomness are intimately tied to the notion of sampling a realization of a random variable.
Thus, given a random number generator, one may build a random Hermitian matrix, $M \in \mathrm{Her}(n)$, by choosing its real diagonal and complex upper-triangular entries independently at random. It is conventional to assume further that all the diagonal entries have the same law, and that all the upper-triangular entries have the same law. For example, we may assume that the entries on the diagonal are $\pm 1$ with probability $1/2$, and that the upper-triangular entries are $\pm 1 \pm i$, each with probability $1/4$. It is also conventional to take the variance of the diagonal entries to be twice that of the off-diagonal entries. Random matrices of this kind are said to be drawn from Wigner ensembles.

On the other hand, one may adopt a more analytic view. The Hilbert–Schmidt inner product of two Hermitian matrices, $\mathrm{Tr}(MN) = \sum_{j,k=1}^{n} M_{jk} \overline{N}_{jk}$, defines a natural metric $\mathrm{Tr}(\mathrm{d}M^2)$ and volume form $DM$ on $\mathrm{Her}(n)$ (see Lecture 2). In this text, unless otherwise stated, $\|M\| = \sqrt{\mathrm{Tr}(M^* M)}$. Thus, each positive function $p \colon \mathrm{Her}(n) \to [0, \infty)$ that decays sufficiently fast as $\|M\| \to \infty$ may be normalized to define a probability measure. A fundamental example is the law of the Gaussian Unitary Ensemble (GUE),
$$ p_{\mathrm{GUE}}(M)\, DM = \frac{1}{Z_n} e^{-\frac{1}{2} \mathrm{Tr}(M^2)}\, DM. \qquad (1.1.1) $$
Here $Z_n$ is a normalization constant that ensures $p_{\mathrm{GUE}}$ is a probability density (we use the same notation for different ensembles; thus the numerical value of $Z_n$ must be inferred from the context). The term GUE was introduced by Freeman Dyson [9], and refers to an important invariance property of $p_{\mathrm{GUE}}$. Each $U \in U(n)$ defines a transformation $\mathrm{Her}(n) \to \mathrm{Her}(n)$, $M \mapsto UMU^*$. It is easily checked that the volume form $DM$ is invariant under the map $M \mapsto UMU^*$, as is the measure $p_{\mathrm{GUE}}(M)\, DM$. More generally, a probability measure on $\mathrm{Her}(n)$ is said to be invariant if $p(M)\, DM$ remains invariant under the map $M \mapsto UMU^*$ for each $U \in U(n)$.
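As a concrete illustration (ours, not part of the original text), a matrix with the GUE law (1.1.1) can be sampled by symmetrizing a complex Gaussian matrix. The sketch below uses NumPy; the function name is our own. With this construction the diagonal entries are $N(0,1)$ and the real and imaginary parts of each off-diagonal entry are independent $N(0, 1/2)$, matching the density $e^{-\frac{1}{2}\mathrm{Tr}(M^2)}$.

```python
import numpy as np

def sample_gue(n, rng=None):
    """Sample an n x n GUE matrix with the normalization (1.1.1):
    diagonal entries N(0, 1); off-diagonal entries with independent
    N(0, 1/2) real and imaginary parts."""
    rng = np.random.default_rng() if rng is None else rng
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.conj().T) / 2   # Hermitian symmetrization

M = sample_gue(4, np.random.default_rng(0))
assert np.allclose(M, M.conj().T)    # Hermitian by construction
eigs = np.linalg.eigvalsh(M)         # eigenvalues of a Hermitian matrix are real
```

A quick sanity check on the variances: averaging `M[0, 0]**2` over many samples gives approximately 1, while averaging `M[0, 1].real**2` gives approximately 1/2.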
Important examples of invariant ensembles are defined by polynomials in one variable of the form
$$ g(x) = a_{2N} x^{2N} + a_{2N-1} x^{2N-1} + \cdots + a_0, \qquad a_j \in \mathbb{R}, \; j = 0, 1, \ldots, 2N, \; a_{2N} > 0. \qquad (1.1.2) $$
Then the following probability measure is invariant:
$$ p(M)\, DM = \frac{1}{Z_n} e^{-\mathrm{Tr}\, g(M)}\, DM. \qquad (1.1.3) $$
We have assumed that all matrices are Hermitian simply to be concrete. The above notions extend to ensembles of matrices from $\mathrm{Symm}(n)$ and $\mathrm{Quart}(n)$. The notion of invariance in each case is distinct: for $\mathrm{Symm}(n)$, the natural transformation is $M \mapsto QMQ^T$, $Q \in O(n)$; for $\mathrm{Quart}(n)$ it is $M \mapsto SMS^D$, $S \in \mathrm{USp}(n)$. The standard Gaussian ensembles in these cases are termed GOE (the Gaussian Orthogonal Ensemble) and GSE (the Gaussian Symplectic Ensemble), and they are normalized as follows:
$$ p_{\mathrm{GOE}}(M)\, dM = \frac{1}{Z_n} e^{-\frac{1}{4} \mathrm{Tr}(M^2)}\, dM, \qquad p_{\mathrm{GSE}}(M)\, DM = \frac{1}{Z_n} e^{-\mathrm{Tr}(M^2)}\, DM. \qquad (1.1.4) $$
The differing normalizations arise from the different volume forms on $\mathrm{Symm}(n)$, $\mathrm{Her}(n)$ and $\mathrm{Quart}(n)$, as will be explained in Lecture 2. For now, let us note that the densities for all the Gaussian ensembles may be written in the unified form
$$ Z_n(\beta)^{-1} e^{-\frac{\beta}{4} \mathrm{Tr}(M^2)}, \qquad (1.1.5) $$
where $\beta = 1$, $2$ and $4$ for GOE, GUE and GSE respectively. While it is true that there are no other ensembles that respect fundamental physical invariance (in the sense of Dyson), many fundamental results of random matrix theory can be established for all $\beta > 0$. These results follow from the existence of ensembles of tridiagonal matrices, whose eigenvalues have a joint distribution that interpolates those of the $\beta = 1$, $2$ and $4$ ensembles to all $\beta > 0$ [8].
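The invariance of (1.1.3) rests on the fact that $\mathrm{Tr}\, g(M)$ is unchanged under conjugation $M \mapsto UMU^*$. This is easy to check numerically; the sketch below (our own illustration, with an arbitrarily chosen even polynomial $g(x) = x^4 + x^2$) draws an approximately Haar-distributed unitary from the QR factorization of a complex Gaussian matrix and compares the two traces.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# A Hermitian test matrix.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = (B + B.conj().T) / 2

# A random unitary: QR of a complex Gaussian matrix, with the phases of
# R's diagonal absorbed into Q so that U is Haar-distributed on U(n).
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, R = np.linalg.qr(G)
U = Q * (np.diag(R) / np.abs(np.diag(R)))

def tr_g(M):
    # g(x) = x^4 + x^2, a polynomial of the form (1.1.2) with N = 2.
    return np.trace(np.linalg.matrix_power(M, 4) + M @ M).real

Mc = U @ M @ U.conj().T              # conjugated matrix
assert np.allclose(tr_g(M), tr_g(Mc))   # Tr g(M) is conjugation-invariant
```

Since the exponent in (1.1.3) depends on $M$ only through $\mathrm{Tr}\, g(M)$, and the volume form $DM$ is itself invariant, the whole measure is invariant.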
The difficulty here is that even if the entries of a random matrix are independent, the eigenvalues are strongly coupled.

Gaussian ensembles play a very special, and important, role in random matrix theory. These are the only ensembles that are both Wigner and invariant (see Theorem 18 below). Pioneering, ingenious calculations by Dyson [9], Gaudin and Mehta [26, 25] on the Gaussian ensembles served to elucidate the fundamental limit theorems of random matrix theory. In this section we outline these theorems, assuming always that the ensemble is GUE. Our purpose is to explain the form of the main questions (and their answers) in the simplest setting. All the results hold in far greater generality, as is briefly outlined at the end of this section.

By the normalization (1.1.1), a GUE matrix has independent standard normal entries on its diagonal (mean zero, variance 1). The off-diagonal entries have mean zero and variance $1/2$. We denote the ordered eigenvalues of the GUE matrix by $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$. A fundamental heuristic for GUE matrices (that will be proven later, and may be easily simulated) is that the largest and smallest eigenvalues have size $O(\sqrt{n})$.
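The $O(\sqrt{n})$ heuristic is simple to simulate. In the sketch below (our own addition), we sample one GUE matrix with the normalization (1.1.1) and rescale its extreme eigenvalues by $\sqrt{n}$; by the semicircle law of Section 1.2.1 the rescaled values concentrate near $\pm 2$, a constant we supply here, not one asserted at this point of the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# One GUE sample with the normalization (1.1.1).
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = (B + B.conj().T) / 2

lam = np.linalg.eigvalsh(M)   # ascending: lam[0] <= ... <= lam[n-1]

# Extreme eigenvalues are O(sqrt(n)); the ratios below are roughly -2 and +2.
print(lam[0] / np.sqrt(n), lam[-1] / np.sqrt(n))
```

Repeating this for several values of $n$ shows the ratios stabilizing, while the unscaled eigenvalues grow without bound.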