
On the structural eigenvalues of block random matrices

Ferenc Juhász

Budapest

GMP-1/1994

Abstract : In the present paper we prove that block random matrices consisting of Wigner type blocks have as many large (structural) eigenvalues as diagonal blocks, and we sharpen the asymptotics of these eigenvalues. This extends an earlier result of [4], proved under the assumption that the corresponding weighted density matrix has linear elementary divisors.

Keywords : Block random matrices, eigenvalues.

Computer and Automation Institute, Hungarian Academy of Sciences, Budapest, XI. Kende u. 13-17. Mail: H-1518 Budapest, POB 63, Hungary. Phone: -361-166-5783. Fax: -361-166-7503. E-Mail: [email protected]

It was proved in [3] that a nonsymmetric random (0,1) matrix has one extreme eigenvalue proportional to the expectation, while the behaviour of the others is determined by the variance of the entries.

This result was generalized to matrices having Wigner type random blocks with different expectations. The theorem was proved under the assumption that the weighted density matrix has linear elementary divisors: a block random matrix has as many extreme eigenvalues as diagonal blocks [4].

Now we are going to eliminate the above mentioned assumption on the weighted density matrix.

Definition 1 : Let $A_n = (a_{ij})$ be an $n \times n$ matrix whose entries are independent random variables. For $m = m(n)$ let $D = D_n = (d_{ij})$ be an $m \times m$ matrix. We suppose that $S = S(n) = \{S_1, \dots, S_m\}$ is a partition of the index set $\{1, \dots, n\} = \bigcup_{k=1}^{m} S_k$, with $S_k \cap S_l = \emptyset$ for $k, l = 1, \dots, m$, $k \ne l$. We assume that for $i \in S_k$ and $j \in S_l$ we have $E a_{ij} = d_{kl}$. Then the matrix $A_n = A_n(m, D, S)$ is said to be a nonsymmetric block random matrix.
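For concreteness, a minimal numerical sketch of Definition 1 (the Gaussian fluctuations, the consecutive index blocks and the helper name block_random_matrix are illustrative choices of ours, not part of the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def block_random_matrix(sizes, D, sigma=1.0):
        """Sample a nonsymmetric block random matrix A_n(m, D, S).

        sizes : block sizes n_1, ..., n_m (S_k are taken as consecutive index ranges)
        D     : m x m matrix of block expectations d_kl
        sigma : common standard deviation of the independent entries
        """
        m = len(sizes)
        # expectation matrix M: E a_ij = d_kl for i in S_k, j in S_l
        M = np.block([[D[k, l] * np.ones((sizes[k], sizes[l]))
                       for l in range(m)] for k in range(m)])
        # independent, identically distributed fluctuations (here: Gaussian)
        return M + sigma * rng.standard_normal(M.shape)

    A = block_random_matrix([2, 3], np.array([[0.5, 0.2],
                                              [0.1, 0.6]]))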

Remark 1 : In contrast to the definition of block random matrices in [4], we allow the partition, the density matrix, and even its size to vary with $n$.

Definition 2 ([4]) : The matrix $Q = Q(n) = (q_{kl})$, $q_{kl} = d_{kl}\sqrt{n_k n_l}$, is said to be the weighted density matrix of the block random matrix $A_n = A_n(m, D, S)$, where $n_k = |S_k|$.
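A minimal numerical sketch of Definition 2 (the sizes, the matrix D and the helper name are illustrative choices of ours); it also shows the fact used later in the proof that the eigenvalues of Q coincide with the nonzero eigenvalues of the expectation matrix M:

    import numpy as np

    def weighted_density_matrix(sizes, D):
        """Q = (q_kl) with q_kl = d_kl * sqrt(n_k * n_l)."""
        root = np.sqrt(np.asarray(sizes, dtype=float))
        return np.asarray(D) * np.outer(root, root)

    sizes = [2, 3]
    D = np.array([[0.5, 0.2],
                  [0.1, 0.6]])
    Q = weighted_density_matrix(sizes, D)
    M = np.block([[D[k, l] * np.ones((sizes[k], sizes[l]))
                   for l in range(2)] for k in range(2)])
    print(np.sort(np.linalg.eigvals(Q).real))     # two eigenvalues of Q
    print(np.sort(np.linalg.eigvals(M).real))     # the same two, padded with zeros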

Denote by $\lVert X\rVert = \sqrt{\lambda_{\max}(X^T X)}$ the 2-norm and by $\kappa(X) = \lVert X\rVert\,\lVert X^{-1}\rVert$ the spectral condition number of the matrix $X$.

Definition 3 ([6]) : $\iota(Q) = \inf_G \kappa(G)$ is called the Jordan condition number of the matrix $Q$, where the infimum is taken over all matrices $G$ such that $G^{-1} Q G$ is the Jordan normal form of $Q$.
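When $Q$ has linear elementary divisors, any matrix $G$ of eigenvectors brings it to diagonal (Jordan) form, so $\kappa(G)$ is an upper bound for $\iota(Q)$. A minimal numerical sketch (the concrete $Q$ and the helper name kappa are illustrative only, and the printed value is merely an upper bound, not the infimum itself):

    import numpy as np

    def kappa(X):
        """Spectral condition number kappa(X) = ||X|| * ||X^{-1}|| (2-norm)."""
        s = np.linalg.svd(X, compute_uv=False)
        return s[0] / s[-1]

    # a diagonalizable Q: kappa of its eigenvector matrix bounds iota(Q) from above
    Q = np.array([[1.0, 0.4],
                  [0.0, 3.0]])
    _, G = np.linalg.eig(Q)
    print(kappa(G))   # >= iota(Q); equality need not hold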

The main result of the paper

Theorem 1 : Let $A_n = (a_{ij}) = A_n(m, D, S)$ be a nonsymmetric block random matrix and let $Q$ be its weighted density matrix. Assume that the $a_{ij} - E(a_{ij})$ are identically distributed random variables. Suppose that the 4th moments of the entries exist; $\sigma^2$ stands for their variance. Denote by $\lambda_1, \lambda_2, \dots, \lambda_n$ the characteristic values of $A_n$, where $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_n$. Then

$A_n$ has $m$ extreme (structural) eigenvalues that are of order $n$, with magnitude derived from the expected values, while the others are of order $\sqrt{n}$, with magnitude proportional to the dispersion, i.e. :

(1) Denote by $\mu_l$, $l = 1, \dots, m$, the eigenvalues of $Q$. Then for the extreme eigenvalues $\lambda_l$, $l = 1, \dots, m$, we have

$$|\lambda_l - \mu_l| \le \sigma\,\iota(Q)\sqrt{n} + \delta\,\iota(Q)^2 \qquad \text{(a.s.)},$$

where $\iota(Q)$ is the Jordan condition number and

$$\delta = \begin{cases} 0 & \text{if } Q \text{ has only linear elementary divisors,}\\ 1 & \text{otherwise.} \end{cases}$$

(2) For the eigenvalues $\lambda_l$, $l > m$,

$$\limsup_{l > m} \lambda_l \le \sigma\,\iota(Q)\sqrt{n} + \delta\,\iota(Q)^2 \qquad \text{(a.s.)}$$

holds.
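Purely as a numerical illustration of the theorem (a sketch; the Gaussian entries, the block sizes and the values of D are our own choices and not part of the statement): the $m$ largest eigenvalues of $A_n$ stay close to the eigenvalues $\mu_l$ of $Q$, which grow like $n$, while the remaining eigenvalues stay on the scale $\sigma\sqrt{n}$.

    import numpy as np

    rng = np.random.default_rng(1)

    sizes = [200, 300]                        # n_1, n_2
    D = np.array([[0.50, 0.20],
                  [0.10, 0.60]])              # block expectations d_kl
    sigma = 1.0
    n = sum(sizes)

    # expectation matrix M and a sample A = M + R with iid centred entries
    M = np.block([[D[k, l] * np.ones((sizes[k], sizes[l]))
                   for l in range(2)] for k in range(2)])
    A = M + sigma * rng.standard_normal((n, n))

    # weighted density matrix Q and its eigenvalues mu_1, ..., mu_m
    Q = D * np.outer(np.sqrt(sizes), np.sqrt(sizes))
    mu = np.sort(np.linalg.eigvals(Q).real)[::-1]

    lam = np.linalg.eigvals(A)
    lam = lam[np.argsort(-np.abs(lam))]       # order by modulus

    print("mu(Q)          :", mu)             # of order n
    print("largest lambdas:", lam[:2].real)   # close to mu(Q)
    print("bulk radius    :", np.abs(lam[2]), " vs  sigma*sqrt(n) =", sigma * np.sqrt(n))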

holds. = Proof : Denote by J the Jordan normal form of the matrix MEa( (ij )) , H −1MH ==+J V L , where V and L contain the (diagonal) eigenvalues and the (lower diagonal) ones of the Jordan matrix J, respectively. It is obvious that rank()M ≤ m . If HVH −1 = B and HLH −1 = C then == +=++ AAMRBCRn

We will use the following perturbation theorem.

Theorem 2 (Ref. 5, p. 87) : Assume that the matrix $B$ has linear elementary divisors (i.e. the Jordan normal form of $B$ is diagonal) and $H^{-1} B H = \operatorname{diag}(\lambda_i)$. If $\lambda$ is a characteristic value of the perturbed matrix $B + P$ then there is an eigenvalue $\lambda_i$ of $B$ such that

$$|\lambda - \lambda_i| \le \kappa(H)\,\lVert P\rVert.$$
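A quick numerical check of Theorem 2 (a sketch; the small matrices B and P are arbitrary illustrative choices): every eigenvalue of the perturbed matrix lies within $\kappa(H)\,\lVert P\rVert$ of some eigenvalue of $B$.

    import numpy as np

    rng = np.random.default_rng(2)

    B = np.array([[2.0, 1.0, 0.0],
                  [0.0, 5.0, 1.0],
                  [0.0, 0.0, 9.0]])           # distinct eigenvalues -> linear elementary divisors
    P = 0.05 * rng.standard_normal((3, 3))    # small perturbation

    evalsB, H = np.linalg.eig(B)              # H^{-1} B H = diag(lambda_i)
    bound = np.linalg.cond(H, 2) * np.linalg.norm(P, 2)

    for lam in np.linalg.eigvals(B + P):
        dist = np.min(np.abs(lam - evalsB))
        assert dist <= bound + 1e-12
        print(f"|lambda - lambda_i| = {dist:.4f}  <=  kappa(H)*||P|| = {bound:.4f}")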

Using Theorem 2 for an eigenvalue $\lambda$ of $A$ we have

$$|\lambda - \lambda_i| \le \kappa(H)\,(\lVert C\rVert + \lVert R\rVert) \le \kappa(H)^2\,\lVert L\rVert + \kappa(H)\,\lVert R\rVert,$$

where $\lambda_i$ is an appropriate eigenvalue of the matrix $B$, which has linear elementary divisors. It is obvious that the characteristic values of $B$ and $M$ are identical. The same is true for the eigenvalues of $Q$ and the nonzero eigenvalues of $M$, where $Q$ is the weighted density matrix of $A$.

We show that $H$ and its inverse $H^{-1}$ can be calculated using $G$ and $G^{-1}$, respectively.

Suppose that $G^{-1} Q G$ is the Jordan normal form of $Q$. Then the transformation matrix $H = (h_{ij})$ consists of the blocks

$$H = H_n = \bigl(H_n(S_k, S_l)\bigr),$$

where

$$H_n(S_k, S_l) = \bigl(h_{ij} : i \in S_k,\ j \in S_l\bigr).$$

The off-diagonal blocks of $H_n$ are

$$H_n(S_k, S_l) = \begin{pmatrix}
\frac{g_{kl}}{\sqrt{n_k}} & 0 & 0 & \cdots & 0\\[1ex]
\frac{g_{kl}}{\sqrt{n_k}} & 0 & 0 & \cdots & 0\\[1ex]
\vdots & \vdots & \vdots & & \vdots\\[1ex]
\frac{g_{kl}}{\sqrt{n_k}} & 0 & 0 & \cdots & 0
\end{pmatrix}, \qquad k \ne l,$$

while the diagonal blocks of $H_n$ are

$$H_n(S_k, S_k) = \begin{pmatrix}
\frac{g_{kk}}{\sqrt{n_k}} & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & \cdots & -\frac{1}{\sqrt{j_k(j_k-1)}} & \cdots & -\frac{1}{\sqrt{n_k(n_k-1)}}\\[1ex]
\frac{g_{kk}}{\sqrt{n_k}} & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}} & \cdots & -\frac{1}{\sqrt{j_k(j_k-1)}} & \cdots & -\frac{1}{\sqrt{n_k(n_k-1)}}\\[1ex]
\frac{g_{kk}}{\sqrt{n_k}} & 0 & \frac{2}{\sqrt{6}} & \cdots & -\frac{1}{\sqrt{j_k(j_k-1)}} & \cdots & -\frac{1}{\sqrt{n_k(n_k-1)}}\\[1ex]
\vdots & \vdots & 0 & \ddots & \vdots & & \vdots\\[1ex]
\frac{g_{kk}}{\sqrt{n_k}} & 0 & 0 & \cdots & \frac{j_k-1}{\sqrt{j_k(j_k-1)}} & \cdots & -\frac{1}{\sqrt{n_k(n_k-1)}}\\[1ex]
\vdots & \vdots & \vdots & & 0 & \ddots & \vdots\\[1ex]
\frac{g_{kk}}{\sqrt{n_k}} & 0 & 0 & \cdots & 0 & \cdots & \frac{n_k-1}{\sqrt{n_k(n_k-1)}}
\end{pmatrix};$$

that is, for $j = 2, \dots, n_k$ the $j$-th column of $H_n(S_k, S_k)$ has the entry $-1/\sqrt{j(j-1)}$ in its first $j-1$ rows, $(j-1)/\sqrt{j(j-1)}$ in row $j$, and $0$ below.

The off-diagonal blocks of $H_n^{-1}$ are

 (−1) (−1) (−1) (−1)   gkl gkl gkl l gkl 

 nl nl nl nl   0 0 0 l 0  H −1(S ,S ) =   , k ≠ l , n k l  0 0 0 l 0   o o o o     0 0 0 l 0 

while the diagonal blocks of $H_n^{-1}$ are

$$H_n^{-1}(S_k, S_k) = \begin{pmatrix}
\frac{g^{(-1)}_{kk}}{\sqrt{n_k}} & \frac{g^{(-1)}_{kk}}{\sqrt{n_k}} & \frac{g^{(-1)}_{kk}}{\sqrt{n_k}} & \cdots & \frac{g^{(-1)}_{kk}}{\sqrt{n_k}} & \cdots & \frac{g^{(-1)}_{kk}}{\sqrt{n_k}}\\[1ex]
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & \cdots & 0 & \cdots & 0\\[1ex]
-\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{6}} & \frac{2}{\sqrt{6}} & 0 & \cdots & \cdots & 0\\[1ex]
\vdots & \vdots & & \ddots & & & \vdots\\[1ex]
-\frac{1}{\sqrt{i_k(i_k-1)}} & -\frac{1}{\sqrt{i_k(i_k-1)}} & \cdots & \cdots & \frac{i_k-1}{\sqrt{i_k(i_k-1)}} & 0 & 0\\[1ex]
\vdots & \vdots & & & 0 & \ddots & \vdots\\[1ex]
-\frac{1}{\sqrt{n_k(n_k-1)}} & -\frac{1}{\sqrt{n_k(n_k-1)}} & \cdots & \cdots & \cdots & -\frac{1}{\sqrt{n_k(n_k-1)}} & \frac{n_k-1}{\sqrt{n_k(n_k-1)}}
\end{pmatrix},$$

i.e. the rows of $H_n^{-1}(S_k, S_k)$ below the first are the transposes of the corresponding columns of $H_n(S_k, S_k)$.

To make this clear, we illustrate it in a $2 \times 2$ block situation, where the sizes of the blocks are $n_1 = 2$ and $n_2 = 3$.

Let us assume that

$$D = \begin{pmatrix} d_{11} & d_{12}\\ d_{21} & d_{22} \end{pmatrix}.$$

Then the matrix of the mean values is

$$M = \begin{pmatrix}
d_{11} & d_{11} & d_{12} & d_{12} & d_{12}\\
d_{11} & d_{11} & d_{12} & d_{12} & d_{12}\\
d_{21} & d_{21} & d_{22} & d_{22} & d_{22}\\
d_{21} & d_{21} & d_{22} & d_{22} & d_{22}\\
d_{21} & d_{21} & d_{22} & d_{22} & d_{22}
\end{pmatrix}.$$
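For this example the weighted density matrix of Definition 2, written out for concreteness, is

$$Q = \begin{pmatrix} \sqrt{n_1 n_1}\,d_{11} & \sqrt{n_1 n_2}\,d_{12}\\ \sqrt{n_2 n_1}\,d_{21} & \sqrt{n_2 n_2}\,d_{22} \end{pmatrix} = \begin{pmatrix} 2\,d_{11} & \sqrt{6}\,d_{12}\\ \sqrt{6}\,d_{21} & 3\,d_{22} \end{pmatrix},$$

and $g_{ij}$, $g^{(-1)}_{ij}$ below denote the entries of a matrix $G$ (and of $G^{-1}$) bringing this $Q$ to Jordan normal form.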

Hence

$$\begin{pmatrix}
\frac{g^{(-1)}_{11}}{\sqrt{n_1}} & \frac{g^{(-1)}_{11}}{\sqrt{n_1}} & \frac{g^{(-1)}_{12}}{\sqrt{n_2}} & \frac{g^{(-1)}_{12}}{\sqrt{n_2}} & \frac{g^{(-1)}_{12}}{\sqrt{n_2}}\\[1ex]
-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 & 0 & 0\\[1ex]
\frac{g^{(-1)}_{21}}{\sqrt{n_1}} & \frac{g^{(-1)}_{21}}{\sqrt{n_1}} & \frac{g^{(-1)}_{22}}{\sqrt{n_2}} & \frac{g^{(-1)}_{22}}{\sqrt{n_2}} & \frac{g^{(-1)}_{22}}{\sqrt{n_2}}\\[1ex]
0 & 0 & -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0\\[1ex]
0 & 0 & -\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{6}} & \frac{2}{\sqrt{6}}
\end{pmatrix}
\; M \;
\begin{pmatrix}
\frac{g_{11}}{\sqrt{n_1}} & -\frac{1}{\sqrt{2}} & \frac{g_{12}}{\sqrt{n_1}} & 0 & 0\\[1ex]
\frac{g_{11}}{\sqrt{n_1}} & \frac{1}{\sqrt{2}} & \frac{g_{12}}{\sqrt{n_1}} & 0 & 0\\[1ex]
\frac{g_{21}}{\sqrt{n_2}} & 0 & \frac{g_{22}}{\sqrt{n_2}} & -\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}}\\[1ex]
\frac{g_{21}}{\sqrt{n_2}} & 0 & \frac{g_{22}}{\sqrt{n_2}} & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{6}}\\[1ex]
\frac{g_{21}}{\sqrt{n_2}} & 0 & \frac{g_{22}}{\sqrt{n_2}} & 0 & \frac{2}{\sqrt{6}}
\end{pmatrix}$$

is the Jordan normal form of $M$. To check this, one can realise that the nonzero entries of $H^{-1} M H$ are exactly those of $G^{-1} Q G$, and the rest of the entries are zeros.
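The construction can also be checked numerically. The following sketch (with an illustrative, diagonalizable D of our own choosing) builds H from G exactly as above for n_1 = 2, n_2 = 3, verifies that the nonzero entries of H^{-1} M H are those of G^{-1} Q G, and confirms the norm identities used below:

    import numpy as np

    sizes = [2, 3]
    D = np.array([[0.5, 0.2],
                  [0.1, 0.6]])
    n = sum(sizes)
    start = np.cumsum([0] + sizes)            # block boundaries: S_1 = {0,1}, S_2 = {2,3,4}

    M = np.block([[D[k, l] * np.ones((sizes[k], sizes[l]))
                   for l in range(2)] for k in range(2)])
    Q = D * np.outer(np.sqrt(sizes), np.sqrt(sizes))
    _, G = np.linalg.eig(Q)                   # here Q is diagonalizable, so G^{-1} Q G is diagonal

    # build H column by column: for each block l, one "constant" column taken from
    # column l of G, followed by the n_l - 1 Helmert-type columns supported on S_l
    cols = []
    for l in range(2):
        c = np.zeros(n)
        for k in range(2):
            c[start[k]:start[k + 1]] = G[k, l] / np.sqrt(sizes[k])
        cols.append(c)
        for j in range(2, sizes[l] + 1):
            h = np.zeros(n)
            h[start[l]:start[l] + j - 1] = -1.0 / np.sqrt(j * (j - 1))
            h[start[l] + j - 1] = (j - 1) / np.sqrt(j * (j - 1))
            cols.append(h)
    H = np.column_stack(cols)

    J = np.linalg.inv(H) @ M @ H              # carries G^{-1} Q G on the "constant" positions
    expected = np.zeros((n, n))
    expected[np.ix_([0, 2], [0, 2])] = np.linalg.inv(G) @ Q @ G
    print(np.allclose(J, expected))                                   # True
    print(np.isclose(np.linalg.norm(H, 2), np.linalg.norm(G, 2)),     # ||H||   = ||G||
          np.isclose(np.linalg.norm(np.linalg.inv(H), 2),
                     np.linalg.norm(np.linalg.inv(G), 2)))            # ||H^-1|| = ||G^-1||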

It is easy to verify that

$$\lVert G\rVert^2 = \lambda_{\max}(G^T G) = \lambda_{\max}(H^T H) = \lVert H\rVert^2,$$

$$\lVert G^{-1}\rVert^2 = \lambda_{\max}\bigl((G^{-1})^T G^{-1}\bigr) = \lambda_{\max}\bigl((H^{-1})^T H^{-1}\bigr) = \lVert H^{-1}\rVert^2.$$

Thus

$$|\lambda - \lambda_i| \le \kappa(G)^2\,\lVert L\rVert + \kappa(G)\,\lVert R\rVert.$$

For the estimation of $\lVert R\rVert$ we use the following theorem.

Theorem 3 ([2], [1]) : Let $R = R_n = (r_{ij})$ be an $n \times n$ Wigner type random matrix with variance $\sigma^2$. Suppose that the 4th moments of the entries exist. Then

$$\limsup_{n \to \infty} \frac{\lVert R_n\rVert}{\sqrt{n}} \le \sigma \qquad \text{(a.s.)}$$
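As a numerical side note (a sketch with Gaussian entries and matrix sizes of our own choosing), the spectral radius studied in [2] indeed settles near $\sigma\sqrt{n}$ for moderately large $n$:

    import numpy as np

    rng = np.random.default_rng(3)
    sigma = 1.0

    for n in (200, 400, 800):
        R = sigma * rng.standard_normal((n, n))        # centred iid entries with variance sigma^2
        rho = np.max(np.abs(np.linalg.eigvals(R)))     # spectral radius of R_n
        print(n, rho / np.sqrt(n))                     # tends to sigma as n grows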

Then, using Theorem 3 and the inequality $\lVert L\rVert \le \delta$ (the nilpotent part $L$ of the Jordan form has 2-norm at most $1$, and it vanishes when the elementary divisors are linear), we get the results of our theorem.

Remark 2 : In the construction of the transformation matrices $H$ and $H^{-1}$, the coordinates of the right and left eigenvectors of $Q$ used in [4] are replaced by the appropriate entries of $G = (g_{ij})$ and $G^{-1} = (g^{(-1)}_{ij})$, respectively.

As a summary of the theorem, one can say that the eigenvalues carry information about the deterministic part of the system as well as about the random effects. These special extreme eigenvalues, which depend on the structure of the matrix, may be called structural eigenvalues.

The theorem also has some consequences for the power method discussed in [4].

References

[1] Bai, Z.D. - Yin, Y.Q. (1986): Limiting behaviour of the norm of products of random matrices and two problems of Geman-Hwang. Probab. Th. Rel. Fields, 73, 555-569.
[2] Geman, S. (1986): The spectral radius of large random matrices. The Annals of Probability, 14(4), 1318-1328.
[3] Juhász, F. (1982): On the asymptotic behaviour of the spectra of non-symmetric random (0,1) matrices. Discrete Mathematics, 41, 161-165.
[4] Juhász, F. (1990): On the characteristic values of nonsymmetric block random matrices. J. Theor. Prob., 3(2), 199-205.
[5] Wilkinson, J.H. (1965): The algebraic eigenvalue problem. Clarendon Press, Oxford.
[6] Young, D.M. (1971): Iterative solution of large linear systems. Academic Press, New York.
