Determinantal Point Processes and Random Matrix Theory in a Nutshell – Part III – (Orthogonal Polynomials and Riemann–Hilbert Method Approach)

Manuela Girotti, based on M. Bertola's lectures from the Les Houches Winter School 2012 and P. Miller's lecture from the thematic semester at SAMSI in 2006.

Contents

1 Orthogonal polynomials
2 Riemann–Hilbert problem
3 Asymptotics
  3.1 The Steepest Descent method
  3.2 Massaging the problem: the g-function
    3.2.1 Repairing the normalization condition
    3.2.2 Logarithmic potential theory and equilibrium measure
    3.2.3 Finding the g-function
  3.3 "Opening lenses"
  3.4 The model problem
    3.4.1 The error
  3.5 Reaping the harvest and back to the kernel

Although the field of Orthogonal Polynomials is extremely vast, we introduce here the concept of OPs purely as a tool to compute meaningful quantities coming from the Random Matrix world. The goal will be to establish a powerful connection between the eigenvalue statistics of the unitary ensemble (represented by the kernel $K(x,y)$) and the theory of OPs.

Recap: consider the Unitary Ensemble, i.e. the set of Hermitian matrices with probability distribution
$$ \mathrm{d}\mu(M) = \frac{1}{Z_n}\, e^{-\Lambda \operatorname{Tr}(V(M))}\, \mathrm{d}M $$
where we have now inserted a scaling parameter $\Lambda$, which we will take to be exactly $n$, the dimension of the matrices in the ensemble (more generally, one can take $\Lambda = n/T$ for some $T > 0$). We saw that the induced joint probability distribution of the eigenvalues of this ensemble is
$$ \mathrm{d}\mu(x_1, \dots, x_n) = \frac{1}{Z_n}\, \Delta(x_1, \dots, x_n)^2 \prod_{j=1}^n e^{-nV(x_j)}\, \mathrm{d}x_1 \cdots \mathrm{d}x_n \qquad (1) $$
with $Z_n = \int_{\mathbb{R}^n} \mathrm{d}\mu(x_1, \dots, x_n)$ a suitable normalization constant (partition function) and a potential $V(x)$ sufficiently smooth and growing sufficiently fast at infinity. For the rest of the notes, we shall choose $V(x)$ to be a polynomial of even degree with positive leading coefficient (e.g. $V(x) = x^2$).

We also saw that the jpdf can be crucially rewritten in a determinantal form:
$$ \frac{1}{Z_n} \prod_{1 \le i < j \le n} (x_i - x_j)^2 \prod_{i=1}^n e^{-nV(x_i)} = \frac{1}{n!} \det\bigl[ K_n(x_i, x_j) \bigr]_{1 \le i,j \le n} \qquad (2) $$
where
$$ K(x,y) = e^{-\frac{n}{2}(V(x)+V(y))} \sum_{j,k=0}^{n-1} x^j \bigl[ M^{-1} \bigr]_{jk}\, y^k \qquad (3) $$
and the matrix $M$ has entries
$$ M_{ab} = \int_{\mathbb{R}} x^{a+b} e^{-nV(x)}\, \mathrm{d}x, \qquad 0 \le a, b \le n-1. \qquad (4) $$

1 Orthogonal polynomials

It can be shown that $M$ is (for any size) positive definite (and symmetric). Consider its Lower–Diagonal–Upper decomposition (taking the symmetry into account)
$$ M = L H L^T $$
where $L$ is a lower unipotent matrix (with ones on the diagonal) and $H = \operatorname{diag}\{h_0, \dots, h_{n-1}\}$; then
$$ K(x,y) = e^{-\frac{n}{2}(V(x)+V(y))} \begin{bmatrix} 1 & x & \dots & x^{n-1} \end{bmatrix} L^{-T} H^{-1} L^{-1} \begin{bmatrix} 1 & y & \dots & y^{n-1} \end{bmatrix}^T \qquad (5) $$

Definition 1. The polynomials
$$ \begin{bmatrix} p_0(x) \\ p_1(x) \\ \vdots \\ p_{n-1}(x) \end{bmatrix} = L^{-1} \begin{bmatrix} 1 \\ x \\ \vdots \\ x^{n-1} \end{bmatrix} \qquad (6) $$
are called orthogonal polynomials (OPs) for the measure $e^{-nV(x)}\,\mathrm{d}x$.

From the above definition and using formula (5), we can rephrase the kernel as
$$ K(x,y) = e^{-\frac{n}{2}(V(x)+V(y))} \sum_{j=0}^{n-1} \frac{p_j(x)\, p_j(y)}{h_j} \qquad (7) $$

Proposition 2. The following properties hold for the OPs $p_n(x)$ and are equivalent to the above definition:

• $\deg p_n(x) = n$ and $p_n(x) = x^n + \dots$ (the polynomials are monic);
• $\int_{\mathbb{R}} p_n(x)\, p_m(x)\, e^{-nV(x)}\, \mathrm{d}x = h_n \delta_{nm}$;
• $\{p_j(x)\}$ solve a three-term recurrence relation:
$$ x\, p_n(x) = p_{n+1}(x) + \alpha_n p_n(x) + \frac{h_n}{h_{n-1}}\, p_{n-1}(x) \qquad \forall\, n. $$

In addition we have

• $h_n > 0$;
• $Z_n = n!\, \det M = n! \prod_{j=0}^{n-1} h_j$.
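Definition 1 and the orthogonality relation in Proposition 2 are easy to check numerically. The following minimal Python sketch (an illustration, not part of the original notes; it assumes $V(x) = x^2$, a small $n$, and numpy/scipy) builds the moment matrix (4), factors it as $M = LHL^T$, reads the OP coefficients off $L^{-1}$ as in (6), and verifies $\int_{\mathbb{R}} p_j p_k\, e^{-nV}\,\mathrm{d}x = h_j \delta_{jk}$.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of Definition 1 / Proposition 2 for the (assumed) potential
# V(x) = x^2 and a small matrix size n.
n = 5
V = lambda x: x**2
w = lambda x: np.exp(-n * V(x))               # weight e^{-n V(x)}

# Moment matrix M_ab = \int_R x^{a+b} e^{-n V(x)} dx, 0 <= a, b <= n-1   (eq. (4))
mom = [quad(lambda x, k=k: x**k * w(x), -np.inf, np.inf)[0] for k in range(2 * n - 1)]
M = np.array([[mom[a + b] for b in range(n)] for a in range(n)])

# Factor M = L H L^T with L lower unipotent and H diagonal,
# obtained here from the Cholesky factor C = L H^{1/2}.
C = np.linalg.cholesky(M)
d = np.diag(C)
L = C / d                                     # unit lower-triangular
H = np.diag(d**2)                             # H = diag(h_0, ..., h_{n-1})

# Coefficients of p_0, ..., p_{n-1} in the monomial basis: rows of L^{-1}   (eq. (6))
coeffs = np.linalg.inv(L)

def p(j, x):
    return sum(coeffs[j, k] * x**k for k in range(n))

# Orthogonality: \int_R p_j(x) p_k(x) e^{-n V(x)} dx = h_j delta_{jk}
G = np.array([[quad(lambda x, j=j, k=k: p(j, x) * p(k, x) * w(x), -np.inf, np.inf)[0]
               for k in range(n)] for j in range(n)])
print(np.allclose(G, H, atol=1e-6))           # True: the Gram matrix is diag(h_0, ..., h_{n-1})
```

The same factorization also returns the partition function with no extra work, since $Z_n = n!\,\det M = n!\prod_{j=0}^{n-1} h_j$ by Proposition 2.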
Paradigm (GUE). In the case of GUE matrices, the kernel is
$$ K(x,y) = e^{-\frac{n}{4}(x^2+y^2)} \sum_{j=0}^{n-1} \frac{p_j(x)\, p_j(y)}{h_j} $$
where the polynomials $p_n(x)$ are (a rescaled version of) the Hermite polynomials (Hermite polynomials can indeed be defined as the set of polynomials that are orthogonal with respect to the measure $e^{-x^2}\,\mathrm{d}x$).

Furthermore, consider $n$ non-intersecting Brownian paths $X_1(t), \dots, X_n(t)$, all starting at $x = 0$ and finishing at $x = 0$ after time $T > 0$. Their transition probability is a Gaussian
$$ p_t(x,y) = \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{(x-y)^2}{2t}} \qquad (8) $$
and, thanks to the Karlin–McGregor theorem, the joint probability distribution of the paths at any time $0 < t < T$ is proportional to
$$ \det\Bigl[ F_{j-1}(x_i)\, e^{-\frac{x_i^2}{2t}} \Bigr]_{i,j=1}^n \det\Bigl[ G_{j-1}(x_i)\, e^{-\frac{x_i^2}{2(T-t)}} \Bigr]_{i,j=1}^n \mathrm{d}x_1 \cdots \mathrm{d}x_n \qquad (9) $$
(up to the normalization constant), where $F_{j-1}$ and $G_{j-1}$ are polynomials of degree $j-1$ obtained by consecutive derivatives of the exponential function. We recognize here an equivalent formulation of the Hermite polynomials. Therefore, the positions of the paths at any time $0 < t < T$ are distributed as an ensemble of GUE eigenvalues (see Figure 1).

[Figure 1: Numerical simulation of 50 non-intersecting Brownian paths in the confluent case with one starting and one ending point.]

We can push the dependency of the kernel on OPs even further and get a simpler formula for the kernel.

Proposition 3 (Christoffel–Darboux formula). For any set of OPs we have
$$ K(x,y) = e^{-\frac{n}{2}(V(x)+V(y))}\, \frac{1}{h_{n-1}}\, \frac{p_n(x)\, p_{n-1}(y) - p_n(y)\, p_{n-1}(x)}{x - y} \qquad (10) $$

Proof. Use the three-term recurrence relation and write the sum (7) as a telescoping sum.

2 Riemann–Hilbert problem

A Riemann–Hilbert problem is a boundary-value problem for a $k \times k$ matrix-valued, piecewise analytic function $Y(z)$.

Riemann–Hilbert problem 4. Let $\Sigma$ be an oriented union of curves and $J(z)$ a (sufficiently smooth) matrix function defined on $\Sigma$, called the jump matrix. Find a function $Y(z)$ such that

1. $Y(z)$ is analytic on $\mathbb{C} \setminus \Sigma$;
2. $\lim_{z \to \infty} Y(z) = \mathbf{1}$ (or some other normalization);
3. denoting by $Y_\pm(z)$ the (non-tangential) boundary values of $Y(z)$ from the left/right of $\Sigma$ (according to the orientation), we have
$$ Y_+(z) = Y_-(z)\, J(z), \qquad \forall\, z \in \Sigma. $$

For the sake of simplicity, let us assume that everything is smooth enough for all the statements that follow and that the curves in $\Sigma$ are either loops or extend to infinity. One would need to add another condition in the case where the curves have endpoints.

In the simple case where the RHP is scalar ($k = 1$), the solution can easily be found by applying the following formula.

Theorem 5 (Sokhotski–Plemelj formula). Let $h(w)$ be $\alpha$-Hölder continuous on $\Sigma$ (for simplicity, we can assume $h$ to be Lipschitz) and set
$$ f(z) := \frac{1}{2\pi i} \int_\Sigma \frac{h(w)}{w - z}\, \mathrm{d}w. $$
Then, for $w \in \Sigma$,
$$ f_+(w) - f_-(w) = h(w), \qquad f_+(w) + f_-(w) = H[h](w), $$
where $H[h](w) = \frac{1}{\pi i}\,\mathrm{p.v.}\!\int_\Sigma \frac{h(s)}{s - w}\, \mathrm{d}s$ exists as a Cauchy principal value.
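As a sanity check of Theorem 5, here is a small numerical sketch (again an illustration, not from the notes). The test density $h(w) = 1/(1+w^2)$ on $\Sigma = \mathbb{R}$, the truncation interval and the offset $\varepsilon$ are ad-hoc choices: the script evaluates the Cauchy transform slightly above and below the real axis and verifies that the jump $f_+ - f_-$ reproduces $h$.

```python
import numpy as np
from scipy.integrate import quad

# Test density on Sigma = R (an arbitrary choice for this sketch): smooth,
# Lipschitz and decaying, h(w) = 1/(1 + w^2).
h = lambda w: 1.0 / (1.0 + w**2)

def cauchy_transform(z, a=-50.0, b=50.0):
    """f(z) = (1/(2 pi i)) \int_R h(w)/(w - z) dw for Im z != 0.
    The line is truncated to [a, b]; h decays like 1/w^2, so the tail is negligible."""
    x, y = z.real, z.imag
    # split 1/(w - z) = ((w - x) + i y) / ((w - x)^2 + y^2) into real and imaginary parts
    re = quad(lambda w: h(w) * (w - x) / ((w - x)**2 + y**2), a, b, points=[x], limit=200)[0]
    im = quad(lambda w: h(w) * y / ((w - x)**2 + y**2), a, b, points=[x], limit=200)[0]
    return (re + 1j * im) / (2j * np.pi)

x0, eps = 0.3, 1e-3                           # approach the boundary point x0 from above/below
f_plus = cauchy_transform(x0 + 1j * eps)
f_minus = cauchy_transform(x0 - 1j * eps)

# Jump relation: f_+(x0) - f_-(x0) = h(x0), up to an O(eps) smoothing error
print(abs((f_plus - f_minus) - h(x0)) < 1e-2)     # True
# f_+(x0) + f_-(x0) approximates the principal-value integral H[h](x0)
print(f_plus + f_minus)
```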
In the '90s, Fokas, Its and Kitaev [3] proved a fundamental theorem establishing the relationship between OPs and RHPs.

Riemann–Hilbert problem 6 (for Orthogonal Polynomials). Find a $2 \times 2$ matrix-valued function $Y(z) = Y_n(z)$ such that

1. $Y(z)$ is analytic for $z \in \mathbb{C}_\pm = \{ \pm\Im(z) > 0 \}$;
2. the boundary values of $Y(z)$ on $\Sigma = \mathbb{R}$ (oriented in the natural direction) satisfy
$$ Y_+(z) = Y_-(z) \begin{bmatrix} 1 & e^{-nV(z)} \\ 0 & 1 \end{bmatrix} \qquad (11) $$
3. in the sectors $\arg(z) \in (0, \pi)$ and $\arg(z) \in (\pi, 2\pi)$, the matrix $Y(z)$ has the following asymptotic expansion
$$ Y(z) = \left( \mathbf{1} + \mathcal{O}\!\left( \frac{1}{z} \right) \right) z^{n\sigma_3}, \qquad \sigma_3 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad (12) $$

The above asymptotic expansion is uniform in the sense that for any $R > 0$ there exists $C > 0$ such that for all $z \in \mathbb{C} \setminus \mathbb{R}$ with $|z| > R$ we have $\bigl\| Y(z)\, z^{-n\sigma_3} - \mathbf{1} \bigr\| \le \frac{C}{|z|}$.

Before stating the solution theorem, let us assume that $V(x)$ is real-analytic and, in particular, consider the case where $V(x)$ is a polynomial of even degree with positive leading coefficient (e.g. $V(x) = x^2$).

Theorem 7 (Fokas, Its, Kitaev). The unique solution to the RHP 6 is
$$ Y_n(z) = \begin{bmatrix} p_n(z) & \displaystyle \int_{\mathbb{R}} \frac{p_n(x)\, e^{-nV(x)}}{x - z}\, \frac{\mathrm{d}x}{2\pi i} \\[2ex] \displaystyle \frac{-2\pi i}{h_{n-1}}\, p_{n-1}(z) & \displaystyle \frac{-1}{h_{n-1}} \int_{\mathbb{R}} \frac{p_{n-1}(x)\, e^{-nV(x)}}{x - z}\, \mathrm{d}x \end{bmatrix} \qquad (13) $$
where $p_n(z)$, $p_{n-1}(z)$ are the OPs for the measure $e^{-nV(x)}\,\mathrm{d}x$ on $\mathbb{R}$ and $h_j$ are the corresponding squared norms.

Proof. To prove uniqueness:

1. show that $\det Y(z)$ has no jump on $\mathbb{R}$ (so it is an entire function);
2. show that $\det Y(z) \to 1$ as $|z| \to \infty$, hence (by Liouville's theorem) it is identically one; thus any solution to the RHP 6 is invertible, with analytic inverse;
3. if $\widetilde{Y}(z)$ is another solution, then show that $R(z) = \widetilde{Y}(z)\, Y^{-1}(z)$ has no jumps on $\mathbb{R}$ and hence it is entire;
4. show that $R(z) \to \mathbf{1}$ as $z \to \infty$, so that by Liouville's theorem $R(z) \equiv \mathbf{1}$, i.e. $\widetilde{Y}(z) \equiv Y(z)$.
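Formula (13) and the first steps of the uniqueness argument can also be probed numerically. The sketch below (once more an illustration under ad-hoc assumptions: $V(x) = x^2$, $n = 3$, numpy/scipy, crude quadrature) assembles $Y_n(z)$ from numerically computed OPs and Cauchy transforms, then checks that $\det Y(z) \approx 1$ off the real axis and that the jump condition (11) holds across $\mathbb{R}$.

```python
import numpy as np
from scipy.integrate import quad

# Assemble Y_n(z) of Theorem 7 numerically, under the (assumed) choices
# V(x) = x^2 and n = 3, then test det Y = 1 and the jump condition (11).
n = 3
w = lambda x: np.exp(-n * x**2)                      # weight e^{-n V(x)}, V(x) = x^2

# Monic OPs p_0, ..., p_n for this weight, via the moment matrix of Section 1
mom = [quad(lambda x, k=k: x**k * w(x), -np.inf, np.inf)[0] for k in range(2 * n + 1)]
M = np.array([[mom[a + b] for b in range(n + 1)] for a in range(n + 1)])
C = np.linalg.cholesky(M)
L = C / np.diag(C)
coeffs = np.linalg.inv(L)                            # row j = monomial coefficients of p_j
h = np.diag(C)**2                                    # squared norms h_0, ..., h_n

def p(j, x):
    return sum(c * x**k for k, c in enumerate(coeffs[j, :j + 1]))

def cauchy(j, z, a=-12.0, b=12.0):
    """(1/(2 pi i)) \int_R p_j(x) e^{-n V(x)} / (x - z) dx, Im z != 0 (tails negligible)."""
    x0, y0 = z.real, z.imag
    f = lambda x: p(j, x) * w(x)
    re = quad(lambda x: f(x) * (x - x0) / ((x - x0)**2 + y0**2), a, b, points=[x0], limit=200)[0]
    im = quad(lambda x: f(x) * y0 / ((x - x0)**2 + y0**2), a, b, points=[x0], limit=200)[0]
    return (re + 1j * im) / (2j * np.pi)

def Y(z):
    # second row carries the factor -2*pi*i/h_{n-1}; since cauchy() already includes
    # 1/(2*pi*i), the (2,2) entry equals -(1/h_{n-1}) * \int p_{n-1} e^{-nV}/(x-z) dx as in (13)
    return np.array([[p(n, z), cauchy(n, z)],
                     [-2j * np.pi / h[n - 1] * p(n - 1, z),
                      -2j * np.pi / h[n - 1] * cauchy(n - 1, z)]])

# Steps 1-2 of the uniqueness argument: det Y(z) is identically 1
print(np.isclose(np.linalg.det(Y(0.7 + 0.5j)), 1.0))            # True

# Jump condition (11): Y_+(x) = Y_-(x) [[1, e^{-n V(x)}], [0, 1]] on R
x0, eps = 0.4, 1e-3
Yp, Ym = Y(x0 + 1j * eps), Y(x0 - 1j * eps)
J = np.array([[1.0, w(x0)], [0.0, 1.0]])
print(np.allclose(Yp, Ym @ J, atol=1e-2))                       # True, up to O(eps)
```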
