Random Matrix Theory
Manjunath Krishnapur
Indian Institute of Science, Bangalore
October 27, 2017

Contents

1  The simplest non-trivial matrices
   Jacobi matrices
   The 1-dimensional discrete Laplacian matrix
   Empirical spectral distribution and related notions
   The oscillator matrix
   A class of Jacobi matrices
   Exercises

2  The simplest non-trivial random matrix
   Parameterizing a Jacobi matrix by its spectral measure at e_1
   The Jacobian determinant
   A class of random Jacobi matrices
   Computation of the Jacobian determinant
   Remarks on the moment problem and Jacobi matrices
   Laguerre Beta ensembles: Another random tridiagonal matrix
   Exercises
   Notes

3  The beta log-gas
   The quadratic beta log-gas
   Mehta integral
   The range of the log-gas
   Exercises

4  The method of moments applied to Jacobi matrices, deterministic and random
   A class of deterministic Jacobi matrices
   The method of moments
   A more general Jacobi matrix
   The limiting distribution of beta log-gases
   Exercises

5  Stieltjes' transform method for Jacobi matrices
   Spectral measure of T_n(f) at e_1
   Limiting spectral distribution of T_n(f)
   One dimensional Anderson model and the method of spectral averaging

6  Gaussian random matrices
   GOE and GUE
   Reduction to a Jacobi matrix
   Eigenvalue distribution of GOE and GUE
   A direct proof by change of variable? Some remarks
   Generalizations of GOE and GUE in two directions
   Exercises

7  Wigner matrices: The semi-circle law
   The invariance principle
   An illustration: The Lindeberg-Feller central limit theorem
   Proof of the invariance principle
   Semicircle law for Wigner matrices

8  Moment method applied to GOE and connections to enumeration problems
   Expected ESD of the GOE matrix
   Semi-circle law for a more general class of random matrices

9  Free probability and random matrices
   Cumulants and moments in classical probability
   Noncommutative probability spaces, free independence
   Moment-cumulant calculus
   Free convolution
   Integral transforms
   Free central limit theorem
   Random matrices and freeness
   Spectrum of the sum of two matrices and free convolution
   Exercises

10 Non-asymptotic questions
   Gaussian matrices
   Rectangular matrices with independent entries

Appendix 1: Weak convergence and techniques to show it
   Probability measures on the real line
   Stieltjes' transform of a probability measure
   Examples
   Bounding Lévy distance in terms of Stieltjes transform
   Method of moments
   Exercises
Appendix 2: Some linear algebra facts
   Bounds on eigenvalues
   Perturbations of eigenvalues
   Block matrix inversion formula
   Shooting description of eigenvectors and eigenvalues of a Jacobi matrix

Appendix 3: Gaussian random variables
   Basics of Gaussians, moments, cumulants
   Comparison inequalities
   Gaussian isoperimetric inequality

Appendix 4: Some combinatorics facts
   The Möbius function of a lattice

Acknowledgments: Lecture notes from a course on random matrix theory in the fall of 2017 at IISc, Bangalore. An earlier version was made during a course in the spring of 2011 at IISc, Bangalore. Thanks to those who attended the course (Rajesh Sundaresan, Tulasi Ram Reddy, Kartick Adhikari, Indrajit Jana and Subhamay Saha the first time). Thanks to Anirban Dasgupta for pointing out some errors in the notes.

Chapter 1
The simplest non-trivial matrices

Random matrix theory is largely the study of eigenvalues and eigenvectors of matrices whose entries are random variables. In this chapter, we shall motivate the kinds of questions studied in random matrix theory, but using deterministic matrices. That will also help us to set up the language in which to phrase the questions and answers.

Spectral quantities have more geometric meaning for symmetric (and Hermitian and normal) matrices than for general matrices. Since there is not much to say about diagonal matrices, we are led to real symmetric tridiagonal matrices as the simplest non-trivial matrices.

Jacobi matrices

Given real numbers a_1, ..., a_n and strictly positive numbers b_1, ..., b_{n-1}, let

$$
T = T_n(a,b) =
\begin{pmatrix}
a_1 & b_1 &        &         &         \\
b_1 & a_2 & b_2    &         &         \\
    & b_2 & \ddots & \ddots  &         \\
    &     & \ddots & a_{n-1} & b_{n-1} \\
    &     &        & b_{n-1} & a_n
\end{pmatrix}. \qquad (1)
$$

This is the real symmetric n × n tridiagonal matrix with diagonal entries T_{k,k} = a_k for 1 ≤ k ≤ n and T_{k,k+1} = T_{k+1,k} = b_k for 1 ≤ k ≤ n − 1.

Why did we assume strict positivity of the b_k? If b_k = 0, the matrix breaks into a direct sum of two matrices, hence we impose the condition b_k ≠ 0 for all k. By conjugating with an appropriate diagonal matrix having ±1 entries on the diagonal, any real symmetric tridiagonal matrix can be transformed into a (unique) Jacobi matrix. Indeed, if D = diag(ε_1, ..., ε_n) where ε_i = ±1, then D T_n(a,b) D^{-1} = T_n(a,c) where c_k = ε_k ε_{k+1} b_k. From this it is clear how to choose the ε_i so that c_k > 0 for all k. As we are interested in eigenvalues, and they don't change under conjugation (and eigenvectors transform in a simple way), we may as well start with the assumption that the b_k are positive.

The 1-dimensional discrete Laplacian matrix

The single most important example is the case a_k = 0 and b_k = 1. In this case,

$$
T_n(a,b) =
\begin{pmatrix}
0 & 1 &        &        &   \\
1 & 0 & 1      &        &   \\
  & 1 & \ddots & \ddots &   \\
  &   & \ddots & 0      & 1 \\
  &   &        & 1      & 0
\end{pmatrix}_{n \times n}.
$$

This matrix arises in innumerable contexts. For example, 2I_n − T_n is the discrete second derivative operator. It is also the Laplacian on the line graph with n vertices,¹ provided we add a loop at 1 and at n. It can also be considered as the generator of the simple random walk on this graph.

[Figure 1.1: Line graph on 8 vertices. For f: V → R, define Δf(k) = 2f(k) − f(k−1) − f(k+1), where f(0) and f(n+1) are interpreted as 0. The matrix of Δ in the standard basis is 2I_n − T_n, where T_n has a_k = 0 and b_k = 1.]

The eigenvalue equation Tv = λv can be written out explicitly as

$$v_{k-1} + v_{k+1} = \lambda v_k \quad \text{for } 1 \le k \le n,$$

¹In general, for a graph G = (V, E), the Laplacian is the linear operator Δ: R^V → R^V defined by Δf(u) = deg(u) f(u) − ∑_{v: v∼u} f(v).
In the standard basis its matrix is D − A, where A is the adjacency matrix and D is the V × V diagonal matrix whose (u,u) entry is deg(u).

Here v = (v_1, ..., v_n)^t and we adopt the convention that v_0 = v_{n+1} = 0. We try v_k = sin(kθ), since

$$\sin((k-1)\theta) + \sin((k+1)\theta) = 2\cos\theta \, \sin(k\theta).$$

To satisfy the boundary conditions v_0 = v_{n+1} = 0, we take θ = θ_ℓ = πℓ/(n+1) for some 1 ≤ ℓ ≤ n. This gives us the eigenvalues λ_ℓ = 2cos(θ_ℓ) with the corresponding eigenvectors (caution: they are not normalized) v_ℓ = (sin(θ_ℓ), sin(2θ_ℓ), ..., sin(nθ_ℓ))^t. Then by the spectral theorem

$$T_n = \lambda_1 v_1 v_1^t + \cdots + \lambda_n v_n v_n^t.$$

Strictly speaking, we should put the subscript n in λ_ℓ, v_ℓ, etc., but for simplicity of notation we don't.

Histogram of the eigenvalues: The eigenvalues are all between −2 and +2. What proportion are inside an interval [a,b] ⊆ [−2,2]? A calculation-free way to find this is to note that the points 2e^{iθ_ℓ}, 1 ≤ ℓ ≤ n, are equispaced points on the top half of the circle of radius 2 centered at the origin in the complex plane. Our eigenvalues are just the real parts of these points. Thus the proportion of eigenvalues in [a,b] must converge to the normalized length of the circular arc between the lines x = a and x = b. This we calculate as (1/π)(arcsin(b/2) − arcsin(a/2)). In other words, as n → ∞, the histogram of eigenvalues approximates the arcsine measure, whose distribution function is (1/π)(arcsin(x/2) + π/2), with arcsin taking values in (−π/2, π/2). By differentiating, we get the density

$$\rho(x) = \frac{1}{\pi\sqrt{4 - x^2}}.$$

Spacing between eigenvalues in the bulk: Observe that

$$\lambda_\ell - \lambda_{\ell+1} = 2\cos\frac{\pi\ell}{n+1} - 2\cos\frac{\pi(\ell+1)}{n+1} = \frac{2\pi}{n+1}\sin(\theta^*_\ell)$$

for some θ*_ℓ ∈ (θ_ℓ, θ_{ℓ+1}), by the mean value theorem. This is of order at most 1/n, which makes sense because if n eigenvalues are packed into an interval of length 4, many of the successive differences must be below 1/n. More precisely, suppose ℓ is such that λ_ℓ is close to a point x ∈ (−2,2). This means that cos(θ*_ℓ) is close to x/2 and hence sin(θ*_ℓ) is close to (1/2)√(4 − x²). Hence

$$\lambda_\ell - \lambda_{\ell+1} \approx \frac{\pi}{n+1}\sqrt{4 - x^2} \approx \frac{1}{n\rho(x)}.$$

In words, the eigenvalues near x look like (1/(nρ(x))) Z, the integer lattice with spacing 1/(nρ(x)). The factor 1/n makes sense as n eigenvalues are packed into an interval of length 4. The factor 1/ρ(x) makes sense because the lower the density, the farther apart the eigenvalues are.
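The three facts above — the explicit eigenvalues 2cos(πℓ/(n+1)), the arcsine proportion in an interval, and the bulk spacing 1/(nρ(x)) — are easy to check numerically. The following is a minimal sketch assuming NumPy; the helper name jacobi_matrix and the particular values of n, a, b, x are my own choices for illustration, not from the notes.

```python
import numpy as np

def jacobi_matrix(a, b):
    """Real symmetric tridiagonal matrix T_n(a, b) with diagonal a, off-diagonal b."""
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

n = 500
T = jacobi_matrix(np.zeros(n), np.ones(n - 1))
eig = np.linalg.eigvalsh(T)  # eigenvalues in ascending order

# Check eig against 2cos(pi*l/(n+1)), l = 1..n (decreasing in l, so sort).
l = np.arange(1, n + 1)
predicted = 2 * np.cos(np.pi * l / (n + 1))
assert np.allclose(eig, np.sort(predicted))

# Proportion of eigenvalues in [a,b] vs (arcsin(b/2) - arcsin(a/2))/pi.
a, b = -1.0, 1.0
prop = np.mean((eig >= a) & (eig <= b))
arcsine = (np.arcsin(b / 2) - np.arcsin(a / 2)) / np.pi
print(abs(prop - arcsine))  # small, of order 1/n

# Nearest-neighbour spacing near x vs 1/(n*rho(x)), rho(x) = 1/(pi*sqrt(4-x^2)).
x = 0.5
k = int(np.argmin(np.abs(eig - x)))
spacing = eig[k + 1] - eig[k]
rho = 1 / (np.pi * np.sqrt(4 - x**2))
print(spacing * n * rho)  # close to 1
```

The first assertion confirms the spectral computation exactly (up to floating-point error); the other two quantities converge at rate O(1/n), so for moderate n they are close but not equal.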