Random Matrix Theory for Sample Covariance Matrix


Narae Lee

May 1, 2014

1 Introduction

This paper investigates the statistical behavior of the eigenvalues of real symmetric random matrices, especially sample covariance matrices. A random matrix is a matrix-valued random variable in probability theory. Many complex systems in nature and society show chaotic behavior at the microscopic level and order at the macroscopic level. Investigating the distribution of eigenvalues of random matrices is a way of understanding "the order" at the macroscopic level when a physical system is expressed as a random matrix. In many applications of random matrix theory, the eigenvalues are central to understanding how systems with random elements behave.

2 Motivation - Dimensionality reduction

In machine learning and statistics, dimensionality reduction is the process of reducing the number of random variables while preserving the essential information in the data. Principal Component Analysis (PCA), a widely used technique, is an orthogonal linear transformation that maps the data to a new (projected) coordinate system such that the greatest variance is achieved along the first projected coordinate.

2.1 Principal Component Analysis (PCA)

Let $X$ be an $n \times p$ data matrix. Typically, one thinks of $n$ observations $x_i$ of a $p$-dimensional row vector which has covariance matrix $\Sigma$. We can assume that $X$ has zero empirical mean without loss of generality by constructing the new data $Y = X - \bar{X}$ and applying the following arguments to $Y$. The sample covariance matrix is then defined as
$$S_n = \frac{1}{n} X'X \in \mathbb{R}^{p \times p}.$$
Note that $S_n$ is symmetric and positive semi-definite. If all features of the data are linearly independent, we can assume that $S_n$ has full rank. Let $S_n$ have ordered sample eigenvalues $l_1 \ge l_2 \ge \cdots \ge l_p$. By spectral decomposition, we can factorize
$$S_n = \frac{1}{n} X'X = ULU' = \sum_j l_j u_j u_j'$$
with the eigenvalues in the diagonal matrix $L$ and the orthonormal eigenvectors $\{u_j\}$ collected as the columns of $U$.

Eigenvalues arise naturally in PCA, which is also widely known as the Karhunen-Loève transformation.

Remark. The algorithm that successively finds an orthonormal basis of the ($d$-dimensional) projected coordinate system with the greatest variance is equivalent to finding the eigenvectors corresponding to the first $d$ largest eigenvalues $l_1, l_2, \dots, l_d$ of $S_n$. That is,
$$l_j = \max\{\, u'S_n u \;:\; u \perp u_1, \dots, u_{j-1},\ \|u\| = 1 \,\}.$$
The vector $u_k$ maximizing $u'S_n u$ subject to $u \perp u_1, \dots, u_{k-1}$, $\|u\| = 1$ is an eigenvector associated with $l_k$ and is called the "$k$-th principal component".

We can derive this remark easily using the Lagrange multiplier method. For the first principal component $u_1$, we want to maximize the variance of the projected data $\frac{1}{n}\sum_i (u_1'x_i)^2$:
$$\frac{1}{n}\sum_i (u_1'x_i)^2 = \frac{1}{n}\sum_i u_1' x_i x_i' u_1 = u_1'\Big(\frac{1}{n}\sum_i x_i x_i'\Big)u_1 = u_1' S_n u_1.$$
We can express this problem as $\max_{u_1} u_1' S_n u_1$ subject to $\|u_1\| = 1$. Using a Lagrange multiplier,
$$\mathcal{L}(u_1) = u_1' S_n u_1 + \lambda(1 - \|u_1\|^2), \qquad \frac{\partial \mathcal{L}}{\partial u_1} = 2 S_n u_1 - 2\lambda u_1 = 0 \;\Longrightarrow\; S_n u_1 = \lambda u_1.$$
So $(\lambda, u_1)$ is an eigenvalue of $S_n$ together with its eigenvector. To maximize the objective function $u_1'S_n u_1 = u_1'\lambda u_1 = \lambda$, we choose the largest eigenvalue $\lambda = l_1$ and the eigenvector $u_1$ corresponding to $l_1$. For the other principal components, we find $u_2$ such that $u_1'u_2 = 0$ and $u_2'S_n u_2$ is maximized. This again yields $S_n u_2 = \lambda u_2$ with $u_2'u_1 = 0$, and it is easy to see that the second principal component corresponds to the second largest eigenvalue $l_2$ and its eigenvector $u_2$. This remark illustrates the relation between eigenvalues and PCA.
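As a quick numerical check of the derivation above, the following sketch (not part of the original notes; the sizes $n$, $p$ and the use of NumPy are illustrative assumptions) forms the sample covariance matrix $S_n$, takes its spectral decomposition, and verifies that the variance of the data projected onto the first principal component equals the largest eigenvalue $l_1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 5                        # n observations of a p-dimensional vector (illustrative sizes)
X = rng.normal(size=(n, p))
X = X - X.mean(axis=0)               # center the data: Y = X - X_bar

S_n = X.T @ X / n                    # sample covariance matrix S_n = X'X / n, p x p

# Spectral decomposition S_n = U L U'; eigh returns eigenvalues in ascending order,
# so reverse to obtain l_1 >= l_2 >= ... >= l_p.
eigvals, U = np.linalg.eigh(S_n)
eigvals, U = eigvals[::-1], U[:, ::-1]

# The first principal component u_1 maximizes u' S_n u over unit vectors,
# and the maximized projected variance equals the largest eigenvalue l_1.
u1 = U[:, 0]
projected_variance = np.mean((X @ u1) ** 2)
print(projected_variance, eigvals[0])   # agree up to floating-point error
```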
3 Global Distribution of Eigenvalues

3.1 Definitions and Notations

Definition 3.1. A Wigner random matrix is an $n \times n$ symmetric matrix $A = (A_{ij})$ with i.i.d. entries satisfying
$$A_{ij} = A_{ji}, \qquad E(A_{ij}) = 0, \qquad E(A_{ij}^2) = \frac{1+\delta_{ij}}{2},$$
and all moments of $A_{ij}$ are finite for all $i, j$.

For a Wigner random matrix, if $A_{ij}$ has the normal distribution $N\big(0, \tfrac{1+\delta_{ij}}{2}\big)$, then we call $A$ a Gaussian Orthogonal Ensemble (GOE).

Definition 3.2. A $p \times p$ random matrix $M$ is said to have a Wishart distribution with scale matrix $\Sigma$ and degrees of freedom $n$ if $M = X'X$ where $X \sim N_{n \times p}(\mu, \Sigma)$. This is denoted by $M \sim W_p(n, \Sigma)$.

The Wishart $W_p(n, \Sigma)$ distribution has a density function only when $n \ge p$, and if $M \sim W_p(n, \Sigma)$, it has the density
$$\frac{2^{-np/2}}{\Gamma_p(\tfrac{n}{2})(\det\Sigma)^{n/2}}\,\mathrm{etr}\big(-\tfrac{1}{2}\Sigma^{-1}M\big)\,(\det M)^{(n-p-1)/2} \qquad (1)$$
where $\mathrm{etr}$ stands for the exponential of the trace of a matrix.

Note that if the data matrix $X \sim N_{n\times p}(\mu, \Sigma)$, the sample covariance matrix $S_n = \frac{1}{n}X'X$ has the Wishart distribution $W_p(n-1, \frac{1}{n}\Sigma)$. Hence any general result regarding the eigenvalues of matrices in $W_p(n, \Sigma)$ can easily be applied to the eigenvalues of sample covariance matrices.

Definition 3.3. Let $A$ be a $p \times p$ matrix with eigenvalues $l_1, \dots, l_p$. The empirical (cumulative) distribution function for the eigenvalues of $A$ is
$$l(x) := \frac{1}{p}\sum_{i=1}^{p} \chi(l_i \le x).$$
The corresponding empirical density function is $l'(x) = \frac{1}{p}\sum_{i=1}^{p}\delta(x - l_i)$. Now we are ready to investigate the global and local distribution of the eigenvalues of Wigner matrices and sample covariance matrices.

3.2 Wigner Semi-circle Law

Consider a family of Wigner matrices $A$, of dimension $n$, chosen from some distribution. Like the Central Limit Theorem, the Wigner semi-circle law shows that, depending only on the type of the random matrix and not on the details of the entry distribution, the empirical distribution function converges to a certain non-random law.

Theorem 3.1. Let $A_n$ be a Wigner matrix of dimension $n$, and let $P_n(x)$ be the empirical distribution of the eigenvalues of the normalized matrix $\frac{1}{\sqrt{2n}}A_n$, so that the eigenvalues lie in the interval $[-1, 1]$ (with the entry variances of Definition 3.1, the scaling $\sqrt{2n}$ places the edge of the spectrum at $\pm 1$). Then its empirical density
$$l'(x) = \frac{1}{n}\sum_{i=1}^{n}\delta\Big(x - \frac{l_i}{\sqrt{2n}}\Big) \;\longrightarrow\; P(x) = \frac{2}{\pi}\sqrt{1-x^2}$$
with probability 1 as $n \to \infty$ (see Figure 1).

Proof. The basic idea behind the proof of the semi-circle law is to compare the moments of the distribution of the eigenvalues with those of the semi-circle distribution. This works because the actual distribution is determined by its moments, provided that those moments do not grow too rapidly with $k$.

Let $U(x^k)$ be the $k$-th moment of $l'(x)$:
$$U(x^k) = \int x^k\, l'(x)\,dx = \frac{1}{n}\sum_{j=1}^{n}\Big(\frac{l_j}{\sqrt{2n}}\Big)^k.$$
Compute the expected value of each moment:
$$E\big(U(x^1)\big) = E\Big(\frac{1}{n}\sum_{j=1}^{n}\frac{l_j}{\sqrt{2n}}\Big) = \frac{1}{\sqrt{2}\,n^{3/2}}E\big(\mathrm{Tr}(A)\big) = \frac{1}{\sqrt{2}\,n^{3/2}}\sum_{j=1}^{n}E(A_{jj}) = 0,$$
$$E\big(U(x^2)\big) = E\Big(\frac{1}{n}\sum_{j=1}^{n}\Big(\frac{l_j}{\sqrt{2n}}\Big)^2\Big) = \frac{1}{2n^2}E\big(\mathrm{Tr}(A^2)\big) = \frac{1}{2n^2}\sum_{j=1}^{n}\sum_{k=1}^{n}E(A_{jk}^2) = \frac{n+1}{4n} \;\longrightarrow\; \frac{1}{4},$$
and so on for the higher-order moments.

On the other hand, let $C(x^k)$ be the $k$-th moment of the semicircle density $P(x)$:
$$C(x^k) = \int_{-1}^{1} x^k\, P(x)\,dx = \frac{2}{\pi}\int_{-1}^{1} x^k \sqrt{1-x^2}\,dx.$$
Substituting $x = \sin\theta$,
$$C(x^k) = \frac{2}{\pi}\int_{-\pi/2}^{\pi/2} \sin^k\theta\,\cos^2\theta\,d\theta.$$
These integrals can be evaluated analytically. Define $n!! = 2\cdot 4\cdots n$ if $n$ is even and $n!! = 1\cdot 3\cdots n$ if $n$ is odd. Then, for $k$ even,
$$C(x^k) = \frac{2(k-1)!!}{(k+2)!!},$$
while the odd moments vanish by symmetry. In particular, $C(x^1) = 0$ and $C(x^2) = \tfrac{1}{4}$. These coincide with the moments of the eigenvalue distribution computed above. By extending this comparison to higher moments, one can show that the eigenvalue distribution converges asymptotically to the semicircle.
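To illustrate Theorem 3.1 numerically, the sketch below (not part of the original notes; the matrix size and bin count are arbitrary choices, and matplotlib is assumed for plotting) samples a GOE matrix with the entry variances of Definition 3.1, rescales its eigenvalues by $\sqrt{2n}$, and overlays their histogram with the semicircle density $\frac{2}{\pi}\sqrt{1-x^2}$.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 2000

# GOE sample following Definition 3.1: A symmetric, E(A_ij) = 0,
# Var(A_ij) = (1 + delta_ij)/2, i.e. off-diagonal variance 1/2, diagonal variance 1.
G = rng.normal(size=(n, n))
A = (G + G.T) / 2

evals = np.linalg.eigvalsh(A) / np.sqrt(2 * n)   # rescale so the spectrum fills [-1, 1]

x = np.linspace(-1, 1, 400)
semicircle = (2 / np.pi) * np.sqrt(1 - x ** 2)   # limiting density of Theorem 3.1

plt.hist(evals, bins=60, density=True, alpha=0.5, label="empirical eigenvalues")
plt.plot(x, semicircle, label="semicircle density")
plt.legend()
plt.show()
```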
Figure 1: Distribution of eigenvalues for the Gaussian Orthogonal Ensemble: Semi-circle Law

3.3 Marčenko-Pastur Distribution

The Marčenko-Pastur distribution gives the 'semi-circle'-type law for the sample covariance matrix $S_n$.

Theorem 3.2. Let $S_n \in \mathbb{R}^{p \times p}$ be the sample covariance matrix with ordered eigenvalues $l_1 \ge l_2 \ge \cdots \ge l_p$. Then its empirical density distribution
$$l'(x) = \frac{1}{p}\sum_{i=1}^{p}\delta(x - l_i) \;\longrightarrow\; G'(x) = \frac{\gamma}{2\pi x}\sqrt{(b-x)(x-a)}, \qquad a \le x \le b,$$
almost surely as $n, p \to \infty$ with $n/p \to \gamma$, where $a = (1-\gamma^{-1/2})^2$ and $b = (1+\gamma^{-1/2})^2$ if $\gamma \ge 1$. When $\gamma < 1$, there is an additional mass point at $x = 0$ of weight $(1-\gamma)$.

Sketch of proof. As in the proof of the Semi-Circle Law, this theorem is proved by comparing the moments of two distributions: the empirical distribution of the eigenvalues and the Marčenko-Pastur distribution. The $k$-th moment of the Marčenko-Pastur density $f_\gamma(x)$ is
$$\int x^k f_\gamma(x)\,dx = \sum_{r=0}^{k-1} \frac{\gamma^{-r}}{r+1}\binom{k}{r}\binom{k-1}{r}.$$
It suffices to show that the $k$-th moment of $l'(x)$ satisfies
$$E\big(U(x^k)\big) = E\Big(\frac{1}{p}\mathrm{Tr}(S_n^k)\Big) \;\longrightarrow\; \sum_{r=0}^{k-1} \frac{\gamma^{-r}}{r+1}\binom{k}{r}\binom{k-1}{r}.$$

From this theorem we can also guess the convergence of the largest and smallest eigenvalues $l_1$ and $l_{\min\{n,p\}}$. It is shown that $l_1$ and $l_{\min\{n,p\}}$ converge almost surely to the edges of the support $[a, b]$ of $G(x)$ [Geman '80 and Silverstein '85]:
$$l_1 \to (1+\gamma^{-1/2})^2 \quad \text{almost surely},$$
$$l_{\min\{n,p\}} \to (1-\gamma^{-1/2})^2 \quad \text{almost surely}.$$

4 Local Distribution of Eigenvalues

The rest of this paper aims to describe the local distribution of the eigenvalues of the Wishart ensemble: (i) a representation of the joint density function, and (ii) extraction of the marginal density of the largest eigenvalue of Wishart matrices.

4.1 Joint probability density function for eigenvalues

Theorem 4.1. If $A \sim W_p(n, \lambda I)$ with $n > p - 1$, then the joint density function of the eigenvalues $l_1 > l_2 > \cdots > l_p > 0$ of $A$ is
$$\frac{\pi^{p^2/2}}{(2\lambda)^{np/2}\,\Gamma_p(\tfrac{p}{2})\,\Gamma_p(\tfrac{n}{2})}\;\prod_{i<j}|l_i - l_j|\;\prod_{i=1}^{p} l_i^{(n-p-1)/2}\;\exp\Big(-\frac{1}{2\lambda}\sum_{i} l_i\Big).$$

Proof.
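Returning to the global law of Section 3.3, Theorem 3.2 can be illustrated numerically in the same way as the semi-circle law. The sketch below is an assumption-laden illustration rather than part of the original notes: the true covariance is taken to be $\Sigma = I$ and the sizes $n$, $p$ are arbitrary. It draws an $n \times p$ Gaussian data matrix, forms $S_n = \frac{1}{n}X'X$, and compares the eigenvalue histogram of $S_n$ with the Marčenko-Pastur density $G'(x)$ for $\gamma = n/p$; it also prints the largest eigenvalue next to the right edge $b$, in line with the Geman/Silverstein convergence quoted above.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n, p = 4000, 1000                      # gamma = n/p = 4 >= 1 (illustrative sizes)
gamma = n / p

X = rng.normal(size=(n, p))            # data with true covariance Sigma = I
S_n = X.T @ X / n                      # p x p sample covariance matrix
evals = np.linalg.eigvalsh(S_n)

# Marcenko-Pastur edges and density from Theorem 3.2
a = (1 - gamma ** -0.5) ** 2
b = (1 + gamma ** -0.5) ** 2
x = np.linspace(a, b, 400)
mp_density = gamma / (2 * np.pi * x) * np.sqrt((b - x) * (x - a))

plt.hist(evals, bins=60, density=True, alpha=0.5, label="eigenvalues of S_n")
plt.plot(x, mp_density, label="Marcenko-Pastur density")
plt.legend()
plt.show()

print(evals.max(), b)   # the largest eigenvalue of S_n sits near the right edge b (cf. Geman '80)
```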