Random Matrices
Estelle Basor
Mathematics Department
California Polytechnic State University
San Luis Obispo, 93407
Winter 1995 and Spring 2007

1 Introduction

These notes are based on a series of lectures given by Estelle Basor in an applied analysis graduate course at Cal Poly, first given in Winter of 1996. They were based on notes taken by the students in the class, typed by Mike Robertson, and then finally edited by Estelle Basor and Jon Jacobsen. The notes were used again in Spring of 2007, when corrections were provided by the students in the class and some other sections were added.

These notes are not meant to be complete or perfect, but rather an informal set of notes describing a new and lively field in mathematics. It is the intention of the author that they provide a heuristic background so that one can begin to understand random matrices and consider future questions in the subject. The notes provide an introduction to the basics of the Gaussian Unitary Ensemble (GUE), a derivation of the semi-circle law, a derivation of the Painlevé equation connected to the basic probabilities of the GUE, and, at the end, a derivation of the distribution function for linear statistics. Many of the tools used are provided in the text, while others are missing. In particular, the asymptotics of the relevant orthogonal polynomials are used but not derived. Several properties of Hilbert spaces, operators, and trace class operators are used but not proved.

2 The probability distribution for the eigenvalues of a random Hermitian matrix

Given a random Hermitian matrix, what can we say about its eigenvalues? Some questions include: What is the largest one? How are they spaced? What is the probability that one does (or does not) lie in a given interval? This section will describe a probability distribution on the space of matrices and show how this distribution induces one on the space of eigenvalues.

To begin, we consider a general 2 × 2 Hermitian matrix
$$H = \begin{bmatrix} x_{11} & x_{12}^{(0)} + i x_{12}^{(1)} \\ x_{12}^{(0)} - i x_{12}^{(1)} & x_{22} \end{bmatrix}.$$
The matrix $H$ has four free parameters, namely $x_{11}, x_{12}^{(0)}, x_{12}^{(1)}, x_{22}$. Thus we may identify the space of 2 × 2 Hermitian matrices with $\mathbb{R}^4$. That is to say, if $A \subseteq \mathbb{R}^4$, then $H \in A$ iff $(x_{11}, x_{12}^{(0)}, x_{12}^{(1)}, x_{22}) \in A$. It is not hard to see that, in the general situation, an $n \times n$ Hermitian matrix has $n^2$ free parameters, and we may similarly identify the space of $n \times n$ Hermitian matrices with $\mathbb{R}^{n^2}$. Given $A \subseteq \mathbb{R}^{n^2}$, we seek the probability that $H \in A$ and also information about the eigenvalues of $H$.

Definition 1. Let $H$ be a Hermitian matrix. We define the probability density $P(H)$ by
$$P(H) = c_n e^{-\operatorname{trace}(H^2)}.$$
This means that the probability of a random $n \times n$ Hermitian matrix occurring in $A \subseteq \mathbb{R}^{n^2}$ is
$$\int \cdots \int_A P(H) \, \prod_{i=1}^{n} dx_{ii} \prod_{1 \le i < j \le n} dx_{ij}^{(0)} \prod_{1 \le i < j \le n} dx_{ij}^{(1)}, \tag{1}$$
where $c_n$ is a normalizing constant satisfying
$$\int \cdots \int_{\mathbb{R}^{n^2}} P(H) \, \prod_{i=1}^{n} dx_{ii} \prod_{1 \le i < j \le n} dx_{ij}^{(0)} \prod_{1 \le i < j \le n} dx_{ij}^{(1)} = 1.$$

For a 2 × 2 matrix $H$ we have
$$P(H) = c_2 \, e^{-\left(x_{11}^2 + 2 \left(x_{12}^{(0)}\right)^2 + 2 \left(x_{12}^{(1)}\right)^2 + x_{22}^2\right)}.$$
Thus the probability of $H$ being in a set $A$ is given by
$$\iiiint_A c_2 \, e^{-\left(x_{11}^2 + 2 \left(x_{12}^{(0)}\right)^2 + 2 \left(x_{12}^{(1)}\right)^2 + x_{22}^2\right)} \, dx_{11} \, dx_{22} \, dx_{12}^{(0)} \, dx_{12}^{(1)}. \tag{2}$$

As it stands this contains too much information to be useful. We need to narrow our view to the eigenvalues of $H$ specifically. Our goal is to apply a change of variables to (2) and let the constant absorb some information, leaving us with a finer look at the eigenvalues themselves.
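To make the density in Definition 1 concrete, here is a minimal sampling sketch in Python with numpy (the helper name `sample_gue` is our own, not from the notes). Reading the exponent $-\operatorname{trace}(H^2)$ as a product of Gaussian factors, each diagonal entry has density proportional to $e^{-x^2}$ (variance 1/2), and each real and imaginary off-diagonal part has density proportional to $e^{-2x^2}$ (variance 1/4), matching the factors of 2 in the 2 × 2 exponent above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gue(n):
    # Sample H with density proportional to exp(-trace(H^2)):
    # diagonal entries are N(0, 1/2); the real and imaginary parts of
    # each off-diagonal entry are N(0, 1/4).
    H = np.zeros((n, n), dtype=complex)
    H[np.diag_indices(n)] = rng.normal(0.0, np.sqrt(0.5), n)
    iu = np.triu_indices(n, k=1)
    m = len(iu[0])
    H[iu] = rng.normal(0.0, 0.5, m) + 1j * rng.normal(0.0, 0.5, m)
    return H + np.triu(H, k=1).conj().T  # mirror to make H Hermitian

H = sample_gue(4)
print(np.allclose(H, H.conj().T))   # True: H is Hermitian
print(np.linalg.eigvalsh(H))        # its real eigenvalues
```

Histogramming the eigenvalues of many such samples, suitably rescaled, is one way to see the semi-circle law derived later in these notes emerge numerically.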
To carry out this change of variables we will need some general facts.

1. The probability given in (1) is invariant under any change of basis.

Proof: Let $H' = M^{-1} H M$ for some invertible matrix $M$. Then
$$(H')^2 = M^{-1} H M M^{-1} H M = M^{-1} H^2 M,$$
and
$$\operatorname{tr}(H'^2) = \operatorname{tr}\left(M^{-1} H^2 M\right) = \operatorname{tr}\left(M^{-1} M H^2\right) = \operatorname{tr}\left(H^2\right),$$
so $P(H') = P(H)$. Thus $P(H)$ is left unchanged by a change of basis, in particular by a unitary change of basis.

2. Every Hermitian matrix is unitarily diagonalizable; that is, if $H$ is Hermitian then $H$ may be written as the product $H = U^{-1} D U$ where $D$ is diagonal and $U$ is unitary ($U^{-1} = U^*$).

3. The eigenvalues of a Hermitian matrix are real.

Proof: Suppose $Hv = \lambda v$ with $v \neq 0$. Using the inner product, notice
$$\langle v, \lambda v \rangle = \bar{\lambda} \, \|v\|^2$$
and
$$\langle v, \lambda v \rangle = \langle v, Hv \rangle = \langle Hv, v \rangle = \langle \lambda v, v \rangle = \lambda \, \|v\|^2.$$
Hence $\lambda = \bar{\lambda}$ and is therefore real. Thus $D$ is a real diagonal matrix containing the eigenvalues of $H$.

Our goal is to apply a change of variables to (1) corresponding to $H \leftrightarrow U^{-1} D U$. For clarity we will restrict our attention to the 2 × 2 case first and then address the $n \times n$ case. A problem arises in that the representation $U^{-1} D U$ is in general not unique: the eigenvalues can be interchanged, and there are many choices for the eigenvectors forming $U$. We need a method of selecting a particular $U$ and $D$ so that they are unique.

Since the entries of $D$ are real we may simply order them, specifying
$$D = \begin{bmatrix} \theta_1 & 0 \\ 0 & \theta_2 \end{bmatrix}$$
where $\theta_1 \le \theta_2$. Since $H$ has four free variables and $D$ has only two, $U$ must depend on two free parameters. Let $U$ consist of two column vectors, $U = \begin{bmatrix} v_1 & v_2 \end{bmatrix}$. We may choose $a$ so that $0 \le a \le 1$ and $v_1 = \begin{bmatrix} a \\ b e^{i\alpha} \end{bmatrix}$. Since $U$ is unitary, $\|v_1\| = 1$, thus $b = \sqrt{1 - a^2}$, and so
$$v_1 = \begin{bmatrix} a \\ \sqrt{1 - a^2} \, e^{i\alpha} \end{bmatrix}.$$
Now since $\langle v_1, v_2 \rangle = 0$ and $\|v_2\| = 1$, $v_2$ is completely determined if we also insist that its first non-zero entry is positive. Suppose $v_2 = \begin{bmatrix} c \\ d e^{i\beta} \end{bmatrix}$ with $c \ge 0$. Since $\|v_2\| = 1$ we have, as above,
$$v_2 = \begin{bmatrix} c \\ \sqrt{1 - c^2} \, e^{i\beta} \end{bmatrix}.$$
Since $\langle v_1, v_2 \rangle = 0$ we have $ac + \sqrt{1 - a^2} \sqrt{1 - c^2} \, e^{i(\alpha - \beta)} = 0$, so
$$e^{i(\alpha - \beta)} = \frac{-ac}{\sqrt{(1 - a^2)(1 - c^2)}}.$$
Thus
1) $e^{i(\alpha - \beta)} \in \mathbb{R}$,
2) $\left| \dfrac{-ac}{\sqrt{(1 - a^2)(1 - c^2)}} \right| = 1$.
Since $a \ge 0$ and $c \ge 0$ it must be that
$$\frac{-ac}{\sqrt{(1 - a^2)(1 - c^2)}} = -1, \qquad \text{i.e.} \qquad \frac{c}{\sqrt{1 - c^2}} = \frac{\sqrt{1 - a^2}}{a}.$$
Thus our choice of $a$ determines $c$. If we let $\dfrac{1 - a^2}{a^2} = k$, then
$$\frac{c^2}{1 - c^2} = k, \qquad c^2 = \frac{k}{1 + k}, \qquad c = \sqrt{\frac{k}{1 + k}}.$$
Furthermore, since $e^{i(\alpha - \beta)} = -1$, given $\alpha$ we can determine $\beta$. Thus we have shown that $a$, $\alpha$, $\theta_1$, and $\theta_2$ completely determine a unique representation for $H$ as $H = U^{-1} D U$, with $U$ and $D$ specified as above. Conversely, given $U$ and $D$ we may find $H$ by multiplication. We have established a one-to-one correspondence between Hermitian matrices and products of the form $U^{-1} D U$, and we can use this correspondence to perform a change of variables
$$f\left(x_{11}, x_{12}^{(0)}, x_{12}^{(1)}, x_{22}\right) = (\theta_1, \theta_2, p_1, p_2), \qquad \theta_1 \le \theta_2, \; p_1 \in [0, 1], \; p_2 \in [0, 2\pi).$$

Now for the general case we cannot be as explicit, but we can say that $H = U^{-1} D U$ where $D$ is diagonal with diagonal elements $\theta_1, \ldots, \theta_n$ satisfying $\theta_1 \le \theta_2 \le \cdots \le \theta_n$, and $U$ is unitary with column vectors of length one whose first non-zero coordinate is positive. Notice that if $u_1$ is the first column vector of $U$, it has $2n - 2$ real free parameters, $u_2$ has $2n - 4$ free parameters, and so on, so that the variables in $U$ account for $n^2 - n$ parameters. We thus have a change of variables given by some function $f$ so that
$$f\left(x_{11}, \ldots, x_{(n-1)n}^{(1)}\right) = \left(\theta_1, \ldots, \theta_n, p_1, \ldots, p_{n^2 - n}\right).$$
Using this change of variables we need to transform the integral
$$\int \cdots \int_A P(H) \, dx_{11} \cdots dx_{(n-1)n}^{(1)} \tag{3}$$
into one in the new variables:
$$\int \cdots \int_{f(A)} P(H) \, |J(\theta_i, p_l)| \, d\theta_1 \cdots dp_{n^2 - n}.$$
Here the expression $|J(\theta_i, p_l)|$ stands for the absolute value of the determinant of the Jacobian matrix.
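Before turning to the Jacobian itself, the following sketch (variable names and parameter values are our own illustrations) checks the 2 × 2 parameterization numerically: given $a$ and $\alpha$, it builds $c$ and $\beta$ from the orthogonality conditions derived above, confirms that $U$ is unitary, and recovers a Hermitian $H = U^{-1} D U$ with the prescribed ordered eigenvalues $\theta_1 \le \theta_2$.

```python
import numpy as np

a, alpha = 0.6, 1.2              # free parameters, 0 <= a <= 1
theta1, theta2 = -1.0, 2.5       # ordered eigenvalues

k = (1 - a**2) / a**2            # k = (1 - a^2)/a^2 as in the text
c = np.sqrt(k / (1 + k))         # forced by <v1, v2> = 0
beta = alpha - np.pi             # so that e^{i(alpha - beta)} = -1

v1 = np.array([a, np.sqrt(1 - a**2) * np.exp(1j * alpha)])
v2 = np.array([c, np.sqrt(1 - c**2) * np.exp(1j * beta)])
U = np.column_stack([v1, v2])
D = np.diag([theta1, theta2])

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary
H = np.linalg.inv(U) @ D @ U                   # H = U^{-1} D U
print(np.allclose(H, H.conj().T))              # True: H is Hermitian
print(np.linalg.eigvalsh(H))                   # [-1.  2.5]
```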
Notice that the $P(H)$ term is easy to convert, since
$$P(H) = c_n e^{-\operatorname{trace} H^2} = c_n e^{-\operatorname{trace} (U^* D U)^2} = c_n e^{-\operatorname{trace} D^2} = c_n e^{-\sum_{i=1}^{n} \theta_i^2}.$$
Before we begin the Jacobian computation we notice the following:

1. Recall that if $A$ is a matrix then
$$\frac{\partial A}{\partial y} = \left( \frac{\partial a_{ij}}{\partial y} \right)_{ij}.$$
(This means simply that the partial derivative of a matrix is defined entrywise, as the matrix of the partial derivatives of its entries.)

2. If $A$ and $B$ are matrices then the product rule $\dfrac{\partial (AB)}{\partial y} = \dfrac{\partial A}{\partial y} B + A \dfrac{\partial B}{\partial y}$ holds true.

3. Since $U U^* = I$ we have that $\dfrac{\partial (U U^*)}{\partial p_i} = 0$, or by the product rule $\dfrac{\partial U}{\partial p_i} U^* + U \dfrac{\partial U^*}{\partial p_i} = 0$, so
$$\frac{\partial U}{\partial p_i} U^* = -U \frac{\partial U^*}{\partial p_i}.$$
Similarly, since $U^* U = I$,
$$\frac{\partial U^*}{\partial p_i} U = -U^* \frac{\partial U}{\partial p_i}.$$

Define $S_i = U \dfrac{\partial U^*}{\partial p_i}$ and note that, since $U$ is independent of the $\theta$ variables, $S_i$ is also. Now, letting $H$ equal $U^* D U$, we find that
$$\frac{\partial H}{\partial p_i} = \frac{\partial (U^* D U)}{\partial p_i} = \frac{\partial U^*}{\partial p_i} D U + U^* \frac{\partial (D U)}{\partial p_i} = \frac{\partial U^*}{\partial p_i} D U + U^* \frac{\partial D}{\partial p_i} U + U^* D \frac{\partial U}{\partial p_i} = \frac{\partial U^*}{\partial p_i} D U + U^* D \frac{\partial U}{\partial p_i},$$
since $\dfrac{\partial D}{\partial p_i} = 0$. This implies
$$U \frac{\partial H}{\partial p_i} U^* = U \frac{\partial U^*}{\partial p_i} D + D \frac{\partial U}{\partial p_i} U^* = S_i D - D S_i.$$
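The commutator identity above can be verified numerically by finite differences. The sketch below is our own construction, not from the notes: it takes $U(p)$ to be a one-parameter rotation (a special case of a unitary family), fixes a diagonal $D$, and checks that $U \, \frac{\partial H}{\partial p} \, U^*$ agrees with $S D - D S$.

```python
import numpy as np

def U(p):
    # A one-parameter family of unitaries (here a real rotation).
    return np.array([[np.cos(p),  np.sin(p)],
                     [-np.sin(p), np.cos(p)]])

D = np.diag([1.0, 3.0])                   # fixed diagonal matrix
H = lambda p: U(p).conj().T @ D @ U(p)    # H = U* D U

p, h = 0.7, 1e-6                          # base point, finite-difference step
dH = (H(p + h) - H(p - h)) / (2 * h)                  # dH/dp
dUstar = (U(p + h) - U(p - h)).conj().T / (2 * h)     # dU*/dp
S = U(p) @ dUstar                                     # S = U dU*/dp

lhs = U(p) @ dH @ U(p).conj().T
rhs = S @ D - D @ S
print(np.allclose(lhs, rhs, atol=1e-6))   # True, up to discretization error
```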