
Electronic Journal of Probability

Vol. 15 (2010), Paper no. 34, pages 1092–1118.

Journal URL http://www.math.washington.edu/~ejpecp/

Permutation matrices and the moments of their characteristic polynomial

Dirk Zeindler
Department of Mathematics, University of Zürich
Winterthurerstrasse 190, 8057 Zürich, CH
Email: [email protected]
Homepage: http://zeindler.dyndns.org

Abstract

In this paper, we are interested in the moments of the characteristic polynomial $Z_n(x)$ of the $n \times n$ permutation matrices with respect to the uniform measure. We use a combinatorial argument to write down the generating function of $\mathbb{E}\bigl[\prod_{k=1}^p Z_n^{s_k}(x_k)\bigr]$ for $s_k \in \mathbb{N}$. We show with this generating function that $\lim_{n\to\infty} \mathbb{E}\bigl[\prod_{k=1}^p Z_n^{s_k}(x_k)\bigr]$ exists for $\max_k |x_k| < 1$ and calculate the growth rate for $p = 2$, $|x_1| = |x_2| = 1$, $x_1 = \overline{x_2}$ and $n \to \infty$.

We also look at the case $s_k \in \mathbb{C}$. We use the Feller coupling to show that for each $|x| < 1$ and $s \in \mathbb{C}$ there exists a random variable $Z_\infty^s(x)$ such that $Z_n^s(x) \xrightarrow{d} Z_\infty^s(x)$ and $\mathbb{E}\bigl[\prod_{k=1}^p Z_n^{s_k}(x_k)\bigr] \to \mathbb{E}\bigl[\prod_{k=1}^p Z_\infty^{s_k}(x_k)\bigr]$ for $\max_k |x_k| < 1$ and $n \to \infty$.

Key words: random permutation matrices, symmetric group, characteristic polynomials, Feller coupling, asymptotic behavior of moments, generating functions.

AMS 2000 Subject Classification: Primary 15B52.

Submitted to EJP on October 29, 2009, final version accepted June 13, 2010.

1 Introduction

The characteristic polynomials of random matrices have been an object of intense interest in recent years. One of the reasons is the paper of Keating and Snaith [11]. They conjectured that the Riemann zeta function on the critical line can be modeled by the characteristic polynomial of a random unitary matrix considered on the unit circle. One of the results in [11] is

Theorem 1.1. Let $x$ be a fixed complex number with $|x| = 1$ and $g_n$ be a unitary matrix chosen at random with respect to the Haar measure. Then
$$\frac{\operatorname{Log}\det(I - x g_n)}{\sqrt{\tfrac{1}{2}\log n}} \xrightarrow{d} \mathcal{N}_1 + i\mathcal{N}_2 \quad \text{for } n \to \infty$$
with $\mathcal{N}_1$, $\mathcal{N}_2$ independent, standard normally distributed random variables.

A probabilistic proof of this result can be found in [2].

A surprising fact, proven in [9], is that a similar result holds for permutation matrices. A permutation matrix is a unitary matrix of the form $(\delta_{i,\sigma(j)})_{1 \leq i,j \leq n}$ with $\sigma \in S_n$ and $S_n$ the symmetric group. It is easy to see that the permutation matrices form a group isomorphic to $S_n$. For simplicity we call both groups $S_n$ and use this identification without mentioning it explicitly. We will, however, use the notation $g \in S_n$ for matrices and $\sigma \in S_n$ for permutations. We define the characteristic polynomial as

$$Z_n(x) = Z_n(x)(g) := \det(I - x g) \quad \text{with } x \in \mathbb{C},\ g \in S_n. \qquad (1.1)$$
We can now state the result in [9]:

Theorem 1.2. Let $x$ be a fixed complex number with $|x| = 1$, not a root of unity and of finite type. Let $g$ be an $n \times n$ permutation matrix chosen uniformly at random. Then both the real and imaginary parts of
$$\frac{\operatorname{Log} Z_n(x)}{\sqrt{\tfrac{\pi^2}{12}\log n}}$$
converge in distribution to a standard normal random variable.

The goal of this paper is to study the moments $\mathbb{E}\bigl[Z_n^s(x)\bigr]$ of $Z_n(x)$ with respect to the uniform measure on $S_n$ in the three cases

(i) $|x| < 1$, $s \in \mathbb{N}$ (section 2),
(ii) $|x| < 1$, $s \in \mathbb{C}$ (section 3),
(iii) $|x| = 1$, $s \in \mathbb{N}$ (section 4).

Keating and Snaith have also studied in [11] the moments of the characteristic polynomial of a random unitary matrix. A particular result is

$$\mathbb{E}_{U(n)}\Bigl[\bigl|\det(I - e^{i\theta} U)\bigr|^{s}\Bigr] = \prod_{j=1}^{n} \frac{\Gamma(j)\,\Gamma(j+s)}{(\Gamma(j+s/2))^2}. \qquad (1.2)$$

Their proof is based on Weyl's integration formula and Selberg's integral. An alternative proof of (1.2) with representation theory can be found in [4]. This proof is based on the Cauchy identity (see [3, chapter 43]). Unfortunately, neither argument works for permutation matrices. We will see that we cannot give a (non-trivial) closed expression for $\mathbb{E}\bigl[Z_n^s(x)\bigr]$, but we can calculate its behavior for $n \to \infty$ and $x$, $s$ fixed.

We use here two types of arguments: a combinatorial one and a probabilistic one. The combinatorial argument (lemma 2.10) allows us to write down the generating function of $\bigl(\mathbb{E}\bigl[Z_n^s(x)\bigr]\bigr)_{n \in \mathbb{N}}$ (see theorem 2.12). We can use this generating function to calculate the behavior of $\mathbb{E}\bigl[Z_n^s(x)\bigr]$ for $n \to \infty$ in the cases (i) and (iii). Both calculations rely on the fact that the generating function is a finite product. This is no longer true for $s \in \mathbb{C} \setminus \mathbb{N}$, so we cannot handle case (ii) in the same way. Here the probabilistic argument comes into play, namely the Feller coupling (section 3.4). The advantage of the Feller coupling is that it allows us to compare $Z_n(x)$ and $Z_{n+1}(x)$ directly. Of course case (ii) includes case (i), but the calculations in section 3 are not as elegant as those in section 2. Another disadvantage of the Feller coupling is that we cannot use it to calculate the behavior in case (iii).

The results in sections 2 and 3 are in fact true for more than one variable. We introduce in section 2.1 a short notation for this and state the theorems in full generality. We give the proofs only for one or two variables, since they are basically the same as in the general case.

2 Expectation of $Z_n^s(x)$

We give in this section some definitions, write down the generating function for $\mathbb{E}\bigl[Z_n^s(x)\bigr]$ and calculate the behavior of $\mathbb{E}\bigl[Z_n^s(x)\bigr]$ for $n \to \infty$.

2.1 Definitions and notation

Let us first introduce a short notation for certain functions defined on $\mathbb{C}^p$.

Definition 2.1. Let $x_1, \dots, x_p$ be complex numbers. We write $x := (x_1, \dots, x_p)$ (similarly for $s$, $k$, $n$). Write $\|x\| := \max_k(|x_k|)$ for the norm of $x$.

Let $f : \mathbb{C}^2 \to \mathbb{C}$ and $s, k \in \mathbb{C}^p$ be given. Then $f(s, k) := \prod_{j=1}^p f(s_j, k_j)$.

The multivariate version of $Z_n(x)$ is

Definition 2.2. We set for $x \in \mathbb{C}^p$, $s \in \mathbb{N}^p$
$$Z_n^s(x) = Z_n^s(x)(g) := \prod_{k=1}^p \det(I - x_k g)^{s_k} \quad \text{for } g \in S_n \qquad (2.1)$$
where $g$ is chosen randomly from $S_n$ with respect to the uniform measure.

2.2 A representation for $\mathbb{E}\bigl[Z_n^s(x)\bigr]$

s ” s — We give now a "good" representation for Zn(x) and E Zn(x) . We use here well known definitions and results on the symmetric group. We therefore restrict ourselves to stating results. More details can be found in [12, chapter I.2 and I.7] or in [3, chapter 39].

We begin with $Z_n^s(x)(g)$ for $g$ a cycle.

Example 2.3. Let $\sigma = (1\,2\,3 \cdots n)$. Then
$$g = (\delta_{i,\sigma(j)})_{1 \leq i,j \leq n} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & & \vdots \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{pmatrix}. \qquad (2.2)$$
Let $\alpha(m) := \exp\bigl(m \tfrac{2\pi i}{n}\bigr)$. Then
$$g \begin{pmatrix} \alpha(m)^{n-1} \\ \alpha(m)^{n-2} \\ \vdots \\ \alpha(m) \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ \alpha(m)^{n-1} \\ \vdots \\ \alpha(m)^{2} \\ \alpha(m) \end{pmatrix} = \alpha(m) \begin{pmatrix} \alpha(m)^{n-1} \\ \alpha(m)^{n-2} \\ \vdots \\ \alpha(m) \\ 1 \end{pmatrix}. \qquad (2.3)$$

Therefore the eigenvalues of $g$ are $\{\alpha(m);\ 0 \leq m \leq n-1\}$ and $\det(I - x g) = \prod_{m=0}^{n-1}\bigl(1 - x\,\alpha(m)\bigr) = 1 - x^n$.
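As a quick numerical illustration of this example (our own sketch, not part of the paper; all helper names are ours), one can check $\det(I - xg) = 1 - x^n$ for small $n$ with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Leibniz-formula determinant; exact for Fraction entries, fine for small n."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign = (-1)^{number of inversions of perm}
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        term = Fraction((-1) ** inv)
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

def cycle_matrix(n):
    """g = (delta_{i, sigma(j)}) for sigma = (1 2 ... n), 0-indexed: sigma(j) = j + 1 mod n."""
    return [[Fraction(1 if i == (j + 1) % n else 0) for j in range(n)] for i in range(n)]

x = Fraction(3, 5)
char_polys = []
for n in range(1, 6):
    g = cycle_matrix(n)
    IxG = [[Fraction(int(i == j)) - x * g[i][j] for j in range(n)] for i in range(n)]
    char_polys.append(det(IxG))

# det(I - x g) = 1 - x^n for an n-cycle
assert char_polys == [1 - x ** n for n in range(1, 6)]
```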

The arbitrary case now follows easily from example 2.3. Fix an element $\sigma = \sigma_1 \sigma_2 \cdots \sigma_l \in S_n$ with $\sigma_i$ disjoint cycles of length $\lambda_i$. Then
$$g = g(\sigma) = \begin{pmatrix} P_1 & & & \\ & P_2 & & \\ & & \ddots & \\ & & & P_l \end{pmatrix} \qquad (2.4)$$
where $P_i$ is the matrix of a cycle as in example 2.3 (after a possible renumbering of the basis). Then

$$Z_n^s(x)(g) = \prod_{k=1}^p \prod_{i=1}^l (1 - x_k^{\lambda_i})^{s_k}. \qquad (2.5)$$

Next, we look at $\mathbb{E}\bigl[Z_n^s(x)\bigr]$. It is easy to see that $Z_n^s(x)(g)$ only depends on the conjugacy class of $g$.

We parameterize the conjugacy classes of $S_n$ with partitions of $n$.

Definition 2.4. A partition $\lambda$ is a sequence of nonnegative integers $(\lambda_1, \lambda_2, \dots)$ with
$$\lambda_1 \geq \lambda_2 \geq \cdots \quad \text{and} \quad \sum_{i=1}^\infty \lambda_i < \infty.$$

The length $l(\lambda)$ and the size $|\lambda|$ of $\lambda$ are defined as
$$l(\lambda) := \max\{i \in \mathbb{N};\ \lambda_i \neq 0\} \quad \text{and} \quad |\lambda| := \sum_{i=1}^\infty \lambda_i.$$
We set $\lambda \vdash n := \{\lambda \text{ partition};\ |\lambda| = n\}$ for $n \in \mathbb{N}$. An element of $\lambda \vdash n$ is called a partition of $n$.

Remark: we only write the non-zero components of a partition.

Choose any $\sigma \in S_n$ and write it as $\sigma_1 \sigma_2 \cdots \sigma_l$ with $\sigma_i$ disjoint cycles of length $\lambda_i$. Since disjoint cycles commute, we can assume that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_l$. Therefore $\lambda := (\lambda_1, \dots, \lambda_l)$ is a partition of $n$.

Definition 2.5. We call the partition $\lambda$ the cycle-type of $\sigma \in S_n$.

Definition 2.6. Let $\lambda$ be a partition of $n$. We define $\mathcal{C}_\lambda \subset S_n$ to be the set of all elements with cycle type $\lambda$.

It follows immediately from (2.5) that $Z_n(x)(\sigma)$ only depends on the cycle type of $\sigma$. If $\sigma, \theta \in S_n$ have different cycle types then $Z_n(x)(\sigma) \neq Z_n(x)(\theta)$. Therefore two elements in $S_n$ can only be conjugate if they have the same cycle type (since $Z_n(x)$ is a class function). One can find in [3, chapter 39] or in [12, chapter I.7, p. 60] that this condition is also sufficient. The cardinality of each $\mathcal{C}_\lambda$ can also be found in [3, chapter 39].

Lemma 2.7. We have $|\mathcal{C}_\lambda| = \dfrac{n!}{z_\lambda}$ with
$$z_\lambda := \prod_{r=1}^n r^{c_r} c_r! \quad \text{and} \quad c_r = c_r(\lambda) := \#\{i \mid \lambda_i = r\}. \qquad (2.6)$$

We put lemma 2.7 and (2.5) together and get

Lemma 2.8. Let $x \in \mathbb{C}^p$ and $s \in \mathbb{N}^p$ be given. Then
$$\mathbb{E}\bigl[Z_n^s(x)\bigr] = \sum_{\lambda \vdash n} \frac{1}{z_\lambda} \prod_{k=1}^p \prod_{m=1}^{l(\lambda)} (1 - x_k^{\lambda_m})^{s_k}. \qquad (2.7)$$
Obviously, it is very difficult to calculate $\mathbb{E}\bigl[Z_n^s(x)\bigr]$ explicitly. It is much easier to write down the generating function.
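Formula (2.7) can be tested directly for small $n$: average $Z_n^s(x)$ over all of $S_n$ using the cycle lengths of each permutation (via (2.5)) and compare with the partition sum weighted by $1/z_\lambda$. This sketch is ours (function names are not from the paper) and uses exact rationals so the two sides must agree exactly:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial, prod

def cycle_type(sigma):
    """Sorted cycle lengths of a permutation of {0, ..., n-1} given as a tuple."""
    seen, lengths = set(), []
    for i in range(len(sigma)):
        if i not in seen:
            j, L = i, 0
            while j not in seen:
                seen.add(j)
                j = sigma[j]
                L += 1
            lengths.append(L)
    return tuple(sorted(lengths, reverse=True))

def partitions(n, largest=None):
    """All partitions of n as non-increasing tuples."""
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def z_of(lam):
    """z_lambda = prod_r r^{c_r} c_r! as in (2.6)."""
    z = 1
    for r in set(lam):
        c = lam.count(r)
        z *= r ** c * factorial(c)
    return z

n, x, s = 4, Fraction(1, 3), 2

# Left: brute-force average of Z_n^s(x) over S_n, evaluating (2.5) per permutation.
lhs = sum(prod((1 - x ** L) ** s for L in cycle_type(sig))
          for sig in permutations(range(n))) / Fraction(factorial(n))

# Right: the partition sum (2.7).
rhs = sum(Fraction(1, z_of(lam)) * prod((1 - x ** L) ** s for L in lam)
          for lam in partitions(n))

assert lhs == rhs
```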

2.3 Generating function of $\mathbb{E}\bigl[Z_n^s(x)\bigr]$

We now give the definition of a generating function, some lemmas, and apply them to the sequence $\bigl(\mathbb{E}\bigl[Z_n^s(x)\bigr]\bigr)_{n \in \mathbb{N}}$.

Definition 2.9. Let $(f_n)_{n \in \mathbb{N}}$ be given. The formal power series $f(t) := \sum_{n=0}^\infty f_n t^n$ is called the generating function of the sequence $(f_n)_{n \in \mathbb{N}}$. If a formal power series $f(t) = \sum_{n \in \mathbb{N}} f_n t^n$ is given, then $[f]_n := f_n$.

We will only look at the case $f_n \in \mathbb{C}$ and $f$ convergent.

Lemma 2.10. Let $(a_m)_{m \in \mathbb{N}}$ be a sequence of complex numbers. Then
$$\sum_{\lambda} \frac{1}{z_\lambda}\, a_\lambda\, t^{|\lambda|} = \exp\Bigl(\sum_{m=1}^\infty \frac{a_m}{m} t^m\Bigr) \quad \text{with } a_\lambda := \prod_{i=1}^{l(\lambda)} a_{\lambda_i}. \qquad (2.8)$$
If the RHS or the LHS of (2.8) is absolutely convergent, then so is the other.

Proof. The proof can be found in [12, chapter I.2, p.16-17] or can be directly verified using the definitions of zλ and the exponential function.
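The identity (2.8) can also be checked numerically for an arbitrary test sequence: expand the exponential as a truncated formal power series and compare each coefficient with the partition sum. This is our own sketch (helper names are ours), with exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

N = 6
a = [Fraction(m * m + 1) for m in range(N + 1)]  # arbitrary test values a_1..a_N (index 0 unused)

def mul(p, q):
    """Product of two series truncated at t^N."""
    r = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            r[i + j] += p[i] * q[j]
    return r

# RHS of (2.8): exp(sum_m a_m t^m / m), truncated at t^N via exp(S) = sum_k S^k / k!.
S = [Fraction(0)] + [a[m] / m for m in range(1, N + 1)]
E = [Fraction(1)] + [Fraction(0)] * N
term = [Fraction(1)] + [Fraction(0)] * N
for k in range(1, N + 1):
    term = mul(term, S)
    for i in range(N + 1):
        E[i] += term[i] / factorial(k)

# LHS of (2.8): sum over partitions of n of a_lambda / z_lambda.
def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def z_of(lam):
    z = 1
    for r in set(lam):
        c = lam.count(r)
        z *= r ** c * factorial(c)
    return z

for n in range(N + 1):
    lhs = Fraction(0)
    for lam in partitions(n):
        p = Fraction(1)
        for part in lam:
            p *= a[part]
        lhs += p / z_of(lam)
    assert lhs == E[n]
```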

In view of (2.7) it is natural to use lemma 2.10 with $a_m = \prod_{j=1}^p (1 - x_j^m)^{s_j}$ to write down the generating function of the sequence $\bigl(\mathbb{E}\bigl[Z_n^s(x)\bigr]\bigr)_{n \in \mathbb{N}}$. We formulate this as a lemma, but with $a_m = P(x^m)$ for $P$ a polynomial. We do this since the calculations are the same as for $a_m = \prod_{j=1}^p (1 - x_j^m)^{s_j}$.

Lemma 2.11. Let $P(x) = \sum_{k \in \mathbb{N}^p} b_k x^k$ be a polynomial with $b_k \in \mathbb{C}$ and $x^k := \prod_{j=1}^p x_j^{k_j}$. We set for a partition $\lambda$

$$P_\lambda(x) := \prod_{m=1}^{l(\lambda)} P(x^{\lambda_m}) \quad \text{with } x^{\lambda_m} = (x_1^{\lambda_m}, \dots, x_p^{\lambda_m}) \quad \text{and} \quad \Omega := \bigl\{(t, x) \in \mathbb{C}^{p+1};\ |t| < 1,\ \|x\| \leq 1\bigr\}.$$
We then have
$$\sum_{\lambda} \frac{1}{z_\lambda} P_\lambda(x)\, t^{|\lambda|} = \prod_{k \in \mathbb{N}^p} (1 - x^k t)^{-b_k} \qquad (2.9)$$
and both sides of (2.9) are holomorphic on $\Omega$. We use the principal branch of the logarithm to define $z^s$ for $z \in \mathbb{C} \setminus \mathbb{R}_-$.

Proof. We only prove the case $p = 2$. The other cases are similar. We use (2.8) with $a_m = P(x_1^m, x_2^m)$ and get
$$\sum_{\lambda} \frac{1}{z_\lambda} \prod_{m=1}^{l(\lambda)} P(x_1^{\lambda_m}, x_2^{\lambda_m})\, t^{|\lambda|} = \exp\Bigl(\sum_{m=1}^\infty \frac{1}{m} P(x_1^m, x_2^m) t^m\Bigr) = \exp\Bigl(\sum_{k_1,k_2=0}^\infty b_{k_1,k_2} \sum_{m=1}^\infty \frac{1}{m} (x_1^{k_1} x_2^{k_2} t)^m\Bigr)$$
$$= \exp\Bigl(-\sum_{k_1,k_2=0}^\infty b_{k_1,k_2} \operatorname{Log}(1 - x_1^{k_1} x_2^{k_2} t)\Bigr)$$

$$= \prod_{k_1,k_2=0}^\infty (1 - x_1^{k_1} x_2^{k_2} t)^{-b_{k_1,k_2}}.$$
The exchange of the sums is allowed since there are only finitely many non-zero $b_{k_1,k_2}$. Since we have used the Taylor expansion of $\operatorname{Log}(1 + z)$ near $0$, we have to assume that $|t\, x_1^{k_1} x_2^{k_2}| < 1$ if $b_{k_1,k_2} \neq 0$.

We now write down the generating function of $\mathbb{E}\bigl[Z_n^s(x)\bigr]$.

Theorem 2.12. Let $s \in \mathbb{N}^p$ and $x \in \mathbb{C}^p$. We set
$$f^{(s)}(x, t) := \prod_{k \in \mathbb{N}^p} (1 - x^k t)^{-(-1)^{k} \binom{s}{k}} \quad \text{with } \binom{s}{k}(-1)^{k} := \prod_{j=1}^p \binom{s_j}{k_j} (-1)^{k_j}. \qquad (2.10)$$
Then
$$\mathbb{E}\bigl[Z_n^s(x)\bigr] = \bigl[f^{(s)}(x, t)\bigr]_n.$$

Proof. This is (2.7) and lemma 2.11 with $P(x) = \prod_{j=1}^p (1 - x_j)^{s_j}$.

Remark: we use the convention $\mathbb{E}\bigl[Z_0^s(x)\bigr] := 1$.

Remark: in an earlier draft version, the above proof was based on representation theory. It was at least twice as long and much more difficult. We wish to thank Paul-Olivier Dehaye, who suggested the above proof and allowed us to use it.
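Theorem 2.12 can be illustrated for $p = 1$, $s = 2$, where (2.10) reads $f^{(2)}(x,t) = (1-t)^{-1}(1-xt)^{2}(1-x^2t)^{-1}$: expand this as a power series in $t$ and compare each coefficient with a brute-force computation of $\mathbb{E}[Z_n^2(x)]$ over $S_n$. This is our own sketch (helper names are ours), again with exact rationals:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

x, N = Fraction(1, 2), 5

def geom(a):
    """Truncated series of 1/(1 - a t) up to t^N."""
    return [a ** n for n in range(N + 1)]

def mul(p, q):
    """Truncated product of two series."""
    r = [Fraction(0)] * (N + 1)
    for i in range(N + 1):
        for j in range(N + 1 - i):
            r[i + j] += p[i] * q[j]
    return r

# f^(2)(x, t) = (1-t)^{-1} (1-xt)^{2} (1-x^2 t)^{-1} per theorem 2.12 with p = 1, s = 2.
f = mul(geom(Fraction(1)), geom(x ** 2))
f = mul(f, [Fraction(1), -2 * x, x ** 2] + [Fraction(0)] * (N - 2))

def moment(n):
    """Brute-force E[Z_n(x)^2] over S_n, using the cycle lengths of each sigma and (2.5)."""
    total = Fraction(0)
    for sigma in permutations(range(n)):
        seen, val = set(), Fraction(1)
        for i in range(n):
            if i not in seen:
                j, L = i, 0
                while j not in seen:
                    seen.add(j)
                    j = sigma[j]
                    L += 1
                val *= (1 - x ** L) ** 2
        total += val
    return total / factorial(n)

# [f^(2)(x,t)]_n = E[Z_n^2(x)] for n = 0, ..., 5 (with the convention E[Z_0^2] := 1).
assert all(f[n] == moment(n) for n in range(N + 1))
```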

Corollary 2.12.1. For $n \geq 1$, $p = s = 1$, we have that
$$\mathbb{E}\bigl[Z_n(x)\bigr] = 1 - x. \qquad (2.11)$$
Proof.
$$\frac{1 - x t}{1 - t} = \sum_{n=0}^\infty t^n - x t \sum_{n=0}^\infty t^n = 1 + (1 - x) \sum_{n=1}^\infty t^n.$$

2.4 Asymptotics for $\|x\| < 1$

Theorem 2.13. Let $s \in \mathbb{N}^p$ and $x \in \mathbb{C}^p$ with $\|x\| < 1$ be fixed. Then
$$\lim_{n \to \infty} \mathbb{E}\bigl[Z_n^s(x)\bigr] = \prod_{k \in \mathbb{N}^p \setminus \{0\}} (1 - x^k)^{-(-1)^{k} \binom{s}{k}}. \qquad (2.12)$$
There is no problem of convergence in the RHS of (2.12) since by definition $\binom{s}{k}(-1)^{k} \neq 0$ only for finitely many $k \in \mathbb{N}^p \setminus \{0\}$.

There exist several ways to prove this theorem. The proof in this section is a simple, direct computation. In section 3, we extend theorem 2.13 to complex $s$ and prove it with probability theory. A third way to prove this theorem is to use theorems IV.1 and IV.3 in [5], which are based on function theory.

We now prove theorem 2.13 with theorem 2.12 and

Lemma 2.14. Let $a, b \in \mathbb{N}$ and $y_1, \dots, y_a, z_1, \dots, z_b$ be complex numbers with $\max\{|y_i|, |z_i|\} < 1$. Then
$$\lim_{n \to \infty} \left[\frac{1}{1 - t} \cdot \frac{\prod_{i=1}^a (1 - y_i t)}{\prod_{i=1}^b (1 - z_i t)}\right]_n = \frac{\prod_{i=1}^a (1 - y_i)}{\prod_{i=1}^b (1 - z_i)}.$$

Proof. We show this by induction on the number of factors.

For $a = b = 0$ there is nothing to do, since $\bigl[\frac{1}{1-t}\bigr]_n = 1$.

Induction $(a, b) \to (a + 1, b)$. We set
$$g(t) := \frac{1}{1 - t} \cdot \frac{\prod_{i=1}^a (1 - y_i t)}{\prod_{i=1}^b (1 - z_i t)}, \qquad \gamma := \frac{\prod_{i=1}^a (1 - y_i)}{\prod_{i=1}^b (1 - z_i)}.$$
We know by induction that $\lim_{n \to \infty} [g(t)]_n = \gamma$. We get
$$\lim_{n \to \infty} \left[\frac{1}{1 - t} \cdot \frac{\prod_{i=1}^{a+1} (1 - y_i t)}{\prod_{i=1}^b (1 - z_i t)}\right]_n = \lim_{n \to \infty} \bigl[g(t)(1 - y_{a+1} t)\bigr]_n = \lim_{n \to \infty} [g(t)]_n - \lim_{n \to \infty} y_{a+1} [g(t)]_{n-1}$$
$$= (1 - y_{a+1})\,\gamma = \frac{\prod_{i=1}^{a+1} (1 - y_i)}{\prod_{i=1}^b (1 - z_i)}.$$
Induction $(a, b) \to (a, b + 1)$. This case is slightly more difficult. We define $g(t)$ and $\gamma$ as above and write for shortness $z = z_{b+1}$. We have
$$\left[\frac{1}{1 - t} \cdot \frac{\prod_{i=1}^a (1 - y_i t)}{\prod_{i=1}^{b+1} (1 - z_i t)}\right]_n = \left[\frac{g(t)}{1 - z t}\right]_n = \left[g(t) \sum_{k=0}^\infty (z t)^k\right]_n = \sum_{k=0}^n z^k [g(t)]_{n-k}.$$
Let $\epsilon > 0$ be arbitrary. Since $[g(t)]_n \to \gamma$, there exists an $n_0 \in \mathbb{N}$ with $\bigl|[g(t)]_n - \gamma\bigr| < \epsilon$ for all $n \geq n_0$. We have
$$\sum_{k=0}^n z^k [g(t)]_{n-k} = \sum_{k=0}^{n - n_0} z^k [g(t)]_{n-k} + \sum_{k=0}^{n_0 - 1} z^{n-k} [g(t)]_k.$$
Since $|z| < 1$, the second sum converges to $0$ as $n \to \infty$. For the first sum,
$$\left|\sum_{k=0}^{n - n_0} \gamma z^k - \sum_{k=0}^{n - n_0} z^k [g(t)]_{n-k}\right| \leq \epsilon \sum_{k=0}^{n - n_0} |z|^k \leq \frac{\epsilon}{1 - |z|}.$$
Since $\epsilon$ was arbitrary, we are done.
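For $p = 1$, $s = 2$ the limit in theorem 2.13 is $(1-x)^2/(1-x^2)$, and the convergence can be seen explicitly from the generating function: the coefficient of $t^n$ in $(1-xt)^2/((1-t)(1-x^2t))$ differs from the limit only by a term of order $x^{2n}$. A small numerical sketch of this (our own, under these assumptions):

```python
x = 0.5

def c(n):
    """Coefficient of t^n in 1/((1-t)(1-x^2 t)) = sum_{j <= n} x^{2j}."""
    return (1 - x ** (2 * (n + 1))) / (1 - x ** 2)

def moment2(n):
    """[f^(2)(x,t)]_n = E[Z_n^2(x)]; the numerator (1-xt)^2 = 1 - 2xt + x^2 t^2."""
    return c(n) - 2 * x * c(n - 1) + x ** 2 * c(n - 2)

limit = (1 - x) ** 2 / (1 - x ** 2)  # the finite product of theorem 2.13 for s = 2

# Convergence is geometric: the error at n is O(x^{2n}).
assert abs(moment2(50) - limit) < 1e-12
assert abs(moment2(10) - limit) < abs(moment2(5) - limit)
```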

Corollary 2.14.1. For each $s \in \mathbb{N}$ and $x \in \mathbb{C}$ with $|x| < 1$, we have
$$\lim_{n \to \infty} \mathbb{E}\bigl[|Z_n(x)|^{2s}\bigr] = \prod_{k=1}^{s} (1 - |x|^{2k})^{-\binom{s}{k}^2} \prod_{0 \leq k_1 < k_2 \leq s} \bigl|1 - x^{k_1} \bar{x}^{k_2}\bigr|^{2(-1)^{k_1 + k_2 + 1} \binom{s}{k_1} \binom{s}{k_2}}.$$

Proof. It follows immediately from (2.5) that $\overline{Z_n(x)} = Z_n(\bar{x})$. We put $p = 2$, $s_1 = s_2 = s$, $x_1 = x$ and $x_2 = \bar{x}$.

Remark: the corollary is in fact true for all $s \in \mathbb{R}$. One simply has to replace theorem 2.13 by theorem 3.1 (see section 3) in the proof.

3 Holomorphicity in $s$

This section is devoted to extending theorem 2.13 to $s \in \mathbb{C}^p$. We do this by showing that all functions appearing in theorem 2.13 are holomorphic functions in $(x, s)$ for $\|x\| < 1$ and then proving point-wise convergence of these functions. We proceed this way since we need the holomorphicity in the direct proof of theorem 3.1 (see section 3.6.2) and since there are only minor changes between $s$ fixed and $s$ as a variable.

We do not introduce here holomorphic functions in more than one variable since we do not need them in the calculations (except in section 3.6.2). A good introduction to holomorphic functions in more than one variable is the book "From holomorphic functions to complex manifolds" [7].

We now state the main theorem of this section.

Theorem 3.1. We have
$$\mathbb{E}\bigl[Z_n^s(x)\bigr] \to \prod_{k \in \mathbb{N}^p \setminus \{0\}} (1 - x^k)^{-(-1)^{k} \binom{s}{k}} \quad \text{for } n \to \infty \text{ and all } x, s \in \mathbb{C}^p \text{ with } \|x\| < 1. \qquad (3.1)$$
We use the principal branch of the logarithm to define $a^b$ for $a \notin \mathbb{R}_{\leq 0}$.

3.1 Corollaries of theorem 3.1

Before we prove theorem 3.1, we give some corollaries.

Corollary 3.1.1. We have for $s_1, s_2, x_1, x_2 \in \mathbb{C}$ with $|x_1| < 1$, $|x_2| < 1$
$$\mathbb{E}\left[\frac{Z_n^{s_1}(x_1)}{Z_n^{s_2}(x_2)}\right] \to \prod_{\substack{k_1, k_2 \in \mathbb{N} \\ k_1 + k_2 \neq 0}} (1 - x_1^{k_1} x_2^{k_2})^{-(-1)^{k_1} \binom{s_1}{k_1} \binom{s_2 + k_2 - 1}{k_2}} \quad (n \to \infty).$$
Proof. We use the definition (3.2) of $\binom{s}{k}$ for $s \in \mathbb{C}$, $k \in \mathbb{N}$ (see later). We apply theorem 3.1 for $p = 2$ and the identity $\binom{-s}{k} = (-1)^k \binom{s + k - 1}{k}$.

Corollary 3.1.2. We have for $x_1, x_2, x_3, x_4, s_1, s_2, s_3, s_4 \in \mathbb{C}$ with $\max_i |x_i| < 1$
$$\mathbb{E}\left[\frac{Z_n^{s_1}(x_1) Z_n^{s_2}(x_2)}{Z_n^{s_3}(x_3) Z_n^{s_4}(x_4)}\right] \to \prod_{\substack{k_1, \dots, k_4 \in \mathbb{N} \\ k_1 + k_2 + k_3 + k_4 \neq 0}} (1 - x_1^{k_1} x_2^{k_2} x_3^{k_3} x_4^{k_4})^{(-1)^{k_1 + k_2 + 1} \binom{s_1}{k_1} \binom{s_2}{k_2} \binom{s_3 + k_3 - 1}{k_3} \binom{s_4 + k_4 - 1}{k_4}}.$$
We can also calculate the limit of the Mellin-Fourier transform of $Z_n(x)$, as Keating and Snaith did in their paper [11] for the unitary group.

Corollary 3.1.3. We have for $s_1, s_2 \in \mathbb{R}$, $x \in \mathbb{C}$ with $|x| < 1$
$$\mathbb{E}\Bigl[|Z_n(x)|^{s_1} e^{i s_2 \arg(Z_n(x))}\Bigr] \to \prod_{\substack{k_1, k_2 \in \mathbb{N} \\ k_1 + k_2 \neq 0}} \bigl(1 - x^{k_1} \bar{x}^{k_2}\bigr)^{(-1)^{k_1 + k_2 + 1} \binom{(s_1 + s_2)/2}{k_1} \binom{(s_1 - s_2)/2}{k_2}}.$$
Proof. We have $|z|^{s_1} = z^{s_1/2} \bar{z}^{s_1/2}$ and $e^{i s_2 \arg(z)} = \dfrac{z^{s_2/2}}{\bar{z}^{s_2/2}}$.

3.2 Easy facts and definitions

We simplify the proof of theorem 3.1 by assuming $p = 1$. We first write $(1 - x^m)$ as $r_m e^{i\varphi_m}$ with $r_m > 0$ and $\varphi_m \in\ ]-\pi, \pi]$ for all $m \in \mathbb{N}$.

Convention: we choose $0 < r < 1$ fixed and prove theorem 3.1 for $|x| < r$. We restrict $x$ to $\{|x| < r\}$ because some inequalities in the next lemma are only true for $r < 1$.

Lemma 3.2. The following hold:

1. $1 - r^m \leq r_m \leq 1 + r^m$ and $|\varphi_m| \leq \alpha_m$, where $\alpha_m$ is defined in figure 1.

Figure 1: Definition of αm

2. One can find a $\beta_1 > 1$ such that $0 \leq \alpha_m \leq \beta_1 r^m$.
3. For $-r < y < r$ one can find a $\beta_2 = \beta_2(r) > 1$ with $|\log(1 + y)| \leq \beta_2 |y|$.
4. There exists a $\beta_3 = \beta_3(r)$ such that for all $m$ and $0 \leq y \leq r$
$$1 \leq \frac{1 + y^m}{1 - y^m} \leq 1 + \beta_3 y^m.$$
5. We have for all $s \in \mathbb{C}$
$$\operatorname{Log}\bigl((1 - x^m)^s\bigr) \equiv s \operatorname{Log}(1 - x^m) \mod 2\pi i$$
with $\operatorname{Log}(\cdot)$ the principal branch of the logarithm.

Proof. The proof is straightforward. We therefore give only an overview.

1. We have $|x^m| < r^m$ and thus $1 - x^m$ lies inside the circle in figure 1. This proves point (1).
2. We have that $\sin(\alpha_m) = r^m$ by definition and $\sin(z) \sim z$ for $z \to 0$. This proves point (2).
3. We have $\log(1 + y) = \log|1 + y| + i \arg(1 + y)$. This point now follows by taking a look at $\log|1 + y|$ and $\arg(1 + y)$ separately.
4. Obvious.

5. Obvious.

Lemma 3.3. Let $Y_m$ be a Poisson distributed random variable with $\mathbb{E}[Y_m] = \frac{1}{m}$. Then

$$\mathbb{E}\bigl[y^{(d Y_m)}\bigr] = \exp\Bigl(\frac{y^d - 1}{m}\Bigr) \quad \text{for } y, d \geq 0.$$
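This identity follows from the Poisson generating function, and it is easy to verify numerically (our own sketch; the names `lhs`/`rhs` are ours) by truncating the Poisson series:

```python
from math import exp, factorial

def lhs(y, d, m, terms=80):
    """E[y^(d Y_m)] for Y_m ~ Poisson(1/m), computed from the (truncated) Poisson series."""
    lam = 1.0 / m
    return sum(exp(-lam) * lam ** k / factorial(k) * y ** (d * k) for k in range(terms))

def rhs(y, d, m):
    """Closed form exp((y^d - 1)/m) from lemma 3.3."""
    return exp((y ** d - 1) / m)

for m in (1, 2, 5):
    for y in (0.3, 0.9, 1.5):
        assert abs(lhs(y, 2, m) - rhs(y, 2, m)) < 1e-10
```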

3.3 Extension of the definitions

The first thing we have to do is to extend the definitions of $Z_n^s(x)$ and $f^{(s)}(x) = \prod_{k=1}^{s} (1 - x^k)^{(-1)^{k+1} \binom{s}{k}}$ to holomorphic functions in $(x, s)$.

3.3.1 Extension of $f^{(s)}(x)$

We first look at $\binom{s}{k}$. We set

$$\binom{s}{k} := \frac{s(s-1)\cdots(s-k+1)}{k!} = \prod_{m=1}^k \frac{s - m + 1}{m}. \qquad (3.2)$$
Obviously, $\binom{s}{k}$ can be extended to a holomorphic function in $s$. We set
$$f^{(s)}(x) := \prod_{k=1}^\infty (1 - x^k)^{(-1)^{k+1} \binom{s}{k}}.$$
The factor $(1 - x^k)^{(\cdot)}$ is well defined because $\operatorname{Re}(1 - x^k) > 0$. This definition agrees with the old one, since $\binom{s}{k} = 0$ for $k > s$ and $s, k \in \mathbb{N}$. Of course we have to show that the infinite product is convergent.
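The extension (3.2) can be tried out numerically (our own sketch; `gen_binom` is our name): for integer $s$ it reproduces the usual binomial coefficient, and for complex $s$ it satisfies Newton's binomial series $(1+x)^s = \sum_k \binom{s}{k} x^k$ for $|x| < 1$, which is used later in (3.9):

```python
import cmath

def gen_binom(s, k):
    """Generalized binomial coefficient binom(s, k) = prod_{m=1}^{k} (s - m + 1)/m, as in (3.2)."""
    out = complex(1)
    for m in range(1, k + 1):
        out *= (s - m + 1) / m
    return out

# For s in N the extension vanishes for k > s, as for the ordinary binomial coefficient.
assert abs(gen_binom(2.0, 3)) < 1e-12

# Newton's binomial series for a complex exponent, compared with the principal branch.
s, x = 0.5 + 1.0j, 0.3
series = sum(gen_binom(s, k) * x ** k for k in range(200))
closed = cmath.exp(s * cmath.log(1 + x))  # principal branch of the logarithm
assert abs(series - closed) < 1e-12
```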

Lemma 3.4. The function $f^{(s)}(x)$ is a holomorphic function in $(x, s)$.

Proof. Choose $s \in K \subset \mathbb{C}$ with $K$ compact. We have

$$\sup_{s \in K} \left|\frac{s - m + 1}{m}\right| \to 1 \quad \text{for } m \to \infty.$$
We obtain that for each $a > 1$ there exists a $C = C(a, K)$ with $\bigl|\binom{s}{k}\bigr| \leq C a^k$ for all $k \in \mathbb{N}$, $s \in K$. We choose an $a > 1$ such that $r a < 1$ and set $\operatorname{Log}(-y) := \log(y) + i\pi$ for $y \in \mathbb{R}_{>0}$. Then
$$\sum_{k=1}^\infty \left|\operatorname{Log}\Bigl((1 - x^k)^{(-1)^{k+1} \binom{s}{k}}\Bigr)\right| = \sum_{k=1}^\infty \left|\binom{s}{k} \operatorname{Log}(1 - x^k)\right| = \sum_{k=1}^\infty \left|\binom{s}{k} \operatorname{Log}(r_k e^{i\varphi_k})\right|$$
$$\leq \sum_{k=1}^\infty \left|\binom{s}{k}\right| \bigl(|\operatorname{Log}(1 - r^k)| + |\alpha_k|\bigr) \leq \sum_{k=1}^\infty C (\beta_1 + \beta_2)(a r)^k < \infty.$$

Since this upper bound is independent of $x$, we can find a $k_0$ such that

$$\arg\Bigl((1 - x^k)^{(-1)^{k+1} \binom{s}{k}}\Bigr) \in [-\pi/2, \pi/2] \quad \forall k \geq k_0 \text{ and } |x| < r.$$
Therefore $\sum_{k=k_0}^\infty \operatorname{Log}\bigl((1 - x^k)^{(-1)^{k+1} \binom{s}{k}}\bigr)$ is a holomorphic function. This proves the holomorphicity of $f^{(s)}(x)$.

3.3.2 Extension of $Z_n^s(x)$

We have found in (2.5) that

$$Z_n(x)(g) = Z_n^1(x)(g) = \prod_{i=1}^{l(\lambda)} (1 - x^{\lambda_i}) \quad \text{for } g \in \mathcal{C}_\lambda.$$
We slightly reformulate this formula.

Definition 3.5. Let $\sigma \in S_n$ be given. We define $C_m = C_m^{(n)} = C_m^{(n)}(\sigma)$ to be the number of cycles of length $m$ of $\sigma$.

The relationship between partitions and $C_m^{(n)}$ is as follows: if $\sigma \in \mathcal{C}_\lambda$ is given then $C_m^{(n)}(\sigma) = \#\{i;\ \lambda_i = m\}$. The function $C_m^{(n)}$ only depends on the cycle-type and is therefore a class function on $S_n$. We get
$$Z_n(x) = Z_n^1(x)(g) = \prod_{m=1}^n (1 - x^m)^{C_m^{(n)}}. \qquad (3.3)$$
Since $\operatorname{Re}(1 - x^m) > 0$, we can use this equation to extend the definition of $Z_n^s(x)$.

Definition 3.6. We set for $s \in \mathbb{C}$ and $|x| < 1$
$$Z_n^s(x) := \prod_{m=1}^n (1 - x^m)^{s C_m^{(n)}}.$$
It is clear that $Z_n^s(x)$ agrees with the old function for $s \in \mathbb{N}$ and is a holomorphic function in $(x, s)$ for all values of $C_m^{(n)}$.

3.4 The Feller-coupling

In this subsection we follow the book of Arratia, Barbour and Tavaré [1]. The first thing we mention is

Lemma 3.7. The random variables $C_m^{(n)}$ converge for each $m \in \mathbb{N}$ in distribution to a Poisson distributed random variable $Y_m$ with $\mathbb{E}[Y_m] = \frac{1}{m}$. In fact, we have for all $b \in \mathbb{N}$
$$(C_1^{(n)}, C_2^{(n)}, \dots, C_b^{(n)}) \xrightarrow{d} (Y_1, Y_2, \dots, Y_b) \quad (n \to \infty)$$
and the random variables $Y_m$ are all independent.

Proof. See [1, theorem 1.3].
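For $m = 1$ (fixed points) this convergence is already visible at very small $n$. The following sketch (ours, not from the paper) enumerates $S_6$ exactly: the mean number of fixed points is exactly $1 = \mathbb{E}[Y_1]$ for every $n \geq 1$, and the law is close to Poisson(1); the tolerance below is loose because $n = 6$ is finite:

```python
from fractions import Fraction
from itertools import permutations
from math import exp, factorial

n = 6
counts = {}
for sigma in permutations(range(n)):
    k = sum(1 for i in range(n) if sigma[i] == i)  # C_1^{(n)}(sigma): number of fixed points
    counts[k] = counts.get(k, 0) + 1

total = factorial(n)

# E[C_1^{(n)}] = 1 exactly, for every n >= 1.
mean = Fraction(sum(k * c for k, c in counts.items()), total)
assert mean == 1

# The distribution is already close to the Poisson(1) probabilities e^{-1}/k!.
for k in range(4):
    assert abs(counts.get(k, 0) / total - exp(-1) / factorial(k)) < 0.01
```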

Since $Z_n(x)$ and $Z_{n+1}(x)$ are defined on different spaces, it is difficult to compare them. This is where the Feller coupling comes into play. The Feller coupling constructs a probability space and new random variables $C_m^{(n)}$ and $Y_m$ which have the same distributions as the $C_m^{(n)}$ and $Y_m$ above and can be compared very well. We give here only a short overview. The details of the Feller coupling can be found in [1, p. 8].

The construction is as follows: let $(\xi_t)_{t=1}^\infty$ be a sequence of independent Bernoulli random variables with $\mathbb{E}[\xi_m] = \frac{1}{m}$. We use the following notation for the sequence

$$\xi = (\xi_1 \xi_2 \xi_3 \xi_4 \xi_5 \cdots).$$
An $m$-spacing is a finite sequence of the form
$$1 \underbrace{0 \cdots 0}_{m-1 \text{ times}} 1.$$

Definition 3.8. Let $C_m^{(n)} = C_m^{(n)}(\xi)$ be the number of $m$-spacings in $1 \xi_2 \cdots \xi_n 1$. We define $Y_m = Y_m(\xi)$ to be the number of $m$-spacings in the whole sequence.

Theorem 3.9. We have

1. The above-constructed $C_m^{(n)}(\xi)$ have the same distribution as the $C_m^{(n)}$ in definition 3.5.
2. $Y_m(\xi)$ is a.s. finite and Poisson distributed with $\mathbb{E}[Y_m] = \frac{1}{m}$.

3. All Ym(ξ) are independent.

4. For fixed $b \in \mathbb{N}$ we have
$$\mathbb{P}\Bigl[\bigl(C_1^{(n)}(\xi), \dots, C_b^{(n)}(\xi)\bigr) \neq \bigl(Y_1(\xi), \dots, Y_b(\xi)\bigr)\Bigr] \to 0 \quad (n \to \infty).$$
Proof. See [1, p. 8-10].
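The coupling is easy to simulate. The following sketch (ours; all names are our own, and $Y_m$ is necessarily truncated at a finite horizon) builds $\xi$, counts the $m$-spacings of the window $1\xi_2\cdots\xi_n 1$ and of the long sequence, and checks two deterministic consequences of the construction: the spacing lengths of the window partition $\{1, \dots, n\}$, and each $C_m^{(n)}$ exceeds $Y_m$ by at most one (cf. lemma 3.10 below):

```python
import random

def feller(n, rng):
    """xi_1 ... xi_n with P(xi_t = 1) = 1/t; note xi_1 = 1 always."""
    return [1 if rng.random() < 1.0 / t else 0 for t in range(1, n + 1)]

def spacings(bits):
    """Counts of m-spacings 1 0^{m-1} 1 in a 0/1 list, keyed by m."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    counts = {}
    for a, b in zip(ones, ones[1:]):
        counts[b - a] = counts.get(b - a, 0) + 1
    return counts

rng = random.Random(0)
n, horizon = 50, 5000
for _ in range(200):
    xi = feller(horizon, rng)
    window = [1] + xi[1:n] + [1]      # 1 xi_2 ... xi_n 1, as in definition 3.8
    C = spacings(window)              # C_m^{(n)}
    Y = spacings(xi)                  # Y_m, truncated at the horizon

    # The m-spacings of the window partition {1, ..., n}: sum of m * C_m^{(n)} = n.
    assert sum(m * c for m, c in C.items()) == n

    # Only the final (artificial) 1 can create a spurious spacing: C_m^{(n)} <= Y_m + 1.
    for m, c in C.items():
        assert c <= Y.get(m, 0) + 1
```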

We use in the rest of this section only the random variables $C_m^{(n)}(\xi)$ and $Y_m(\xi)$. We therefore just write $C_m^{(n)}$ and $Y_m$ for them.

One might guess that $C_m^{(n)} \leq Y_m$, but this is not true. It is possible that $C_m^{(n)} = Y_m + 1$. However, this can only happen if $\xi_{n-m} \cdots \xi_{n+1} = 10 \cdots 0$. For fixed $n$, there is at most one $m$ with $C_m^{(n)} = Y_m + 1$. We set

$$B_m^{(n)} = \bigl\{\xi_{n-m} \cdots \xi_{n+1} = 10 \cdots 0\bigr\}. \qquad (3.4)$$
Lemma 3.10. We have

1. $C_m^{(n)} \leq Y_m + 1_{B_m^{(n)}}$,
2. $\mathbb{P}\bigl[B_m^{(n)}\bigr] = \dfrac{1}{n+1}$,

3. $\mathbb{E}\bigl[|C_m^{(n)} - Y_m|\bigr] \leq \dfrac{2}{n+1}$,
4. $C_m^{(n)}$ does not converge a.s. to $Y_m$.

Proof. The first point follows immediately from the above considerations. The second point is a simple calculation. The proof of the third point can be found in [1, p. 10]. It is based on the idea that $Y_m - C_m^{(n)}$ is (more or less) the number of $m$-spacings appearing after the $n$'th position.

We now look at the last point. Let $\xi_v \xi_{v+1} \cdots \xi_{v+m+1} = 10 \cdots 01$. We then have for $1 \leq m' \leq m - 1$ and $v \leq n \leq v + m + 1$
$$C_{m'}^{(n)} = \begin{cases} C_{m'}^{(v)} + 1, & \text{if } n = v + m', \\ C_{m'}^{(v)}, & \text{if } n \neq v + m'. \end{cases}$$
Since all $Y_m < \infty$ a.s. and $\sum_{m=1}^\infty Y_m = \infty$ a.s., we are done.

3.5 The limit

We have $Z_n^s(x) = \prod_{m=1}^n (1 - x^m)^{s C_m^{(n)}}$ and we know $C_m^{(n)} \xrightarrow{d} Y_m$. Does $Z_n^s(x)$ converge in distribution to a random variable? If yes, then a possible limit is

$$Z_\infty^s(x) := \prod_{m=1}^\infty (1 - x^m)^{s Y_m}. \qquad (3.5)$$

We show indeed in lemma 3.14 that for $s \in \mathbb{C}$ fixed
$$Z_n^s(x) \xrightarrow{d} Z_\infty^s(x) \quad (n \to \infty). \qquad (3.6)$$
We first show that $Z_\infty^s(x)$ is a good candidate for the limit. We prove that $Z_\infty^s(x)$ and $\mathbb{E}\bigl[Z_\infty^s(x)\bigr]$ are holomorphic functions in $(x, s)$. Since $Z_\infty^s(x)$ is defined via an infinite product, we have to prove convergence. The following lemma supplies us with the needed tools.

Lemma 3.11. We have:

1. $\sum_{m=1}^\infty Y_m |r|^m$ is a.s. absolutely convergent for $r < 1$.
2. $\mathbb{E}\bigl[\prod_{m=1}^\infty (1 \pm r^m)^{\sigma Y_m}\bigr]$ is finite for all $\sigma \in \mathbb{R}$.
3. $\mathbb{E}\bigl[\prod_{m=1}^\infty \exp(t \alpha_m Y_m)\bigr]$ is finite for all $t \in \mathbb{R}$.

Proof. First part: we prove the absolute convergence of the sum $\sum_{m=1}^\infty Y_m r^m$ by showing

$$\limsup_{m \to \infty} \sqrt[m]{Y_m |r|^m} < 1 \quad \text{a.s.}$$

We fix an $a$ with $|r| < a < 1$ and set $A_m := \bigl\{\sqrt[m]{Y_m |r|^m} > a\bigr\} = \bigl\{Y_m > (\tfrac{a}{|r|})^m\bigr\}$. Then
$$\mathbb{P}\Bigl[\limsup_{m \to \infty} \sqrt[m]{Y_m |r|^m} < 1\Bigr] \geq 1 - \mathbb{P}\Bigl[\limsup_{m \to \infty} \sqrt[m]{Y_m |r|^m} > a\Bigr] \geq 1 - \mathbb{P}\bigl[\textstyle\bigcap_{n=1}^\infty \bigcup_{m=n}^\infty A_m\bigr] = 1 - \mathbb{P}\bigl[\limsup(A_m)\bigr].$$
We get with Markov's inequality

$$\mathbb{P}\Bigl[Y_m > \Bigl(\frac{a}{r}\Bigr)^m\Bigr] \leq \frac{\mathbb{E}[Y_m]}{(a/r)^m} = \frac{1}{m} \Bigl(\frac{r}{a}\Bigr)^m.$$

Therefore $\sum_m \mathbb{P}\bigl[Y_m > (\tfrac{a}{r})^m\bigr] < \infty$. It follows from the Borel-Cantelli lemma that $\mathbb{P}\bigl[\limsup(A_m)\bigr] = 0$.

Second part, case $\sigma > 0$: we only have to look at the terms with a plus sign, since $(1 - r^m)^{\sigma Y_m} \leq 1$. We get with monotone convergence

   m0  Y∞ m σY Y m σY E  (1 + r )( m) = lim E  (1 + r )( m) m0 m=1 →∞ m=1 m0  m σ  m0 ‚ m σ Œ Y (1 + r ) 1 Y (1 + r )d e 1 = lim exp − lim exp − . m0 m m0 m →∞ m=1 ≤ →∞ m=1 We have for σ fixed and m big enough d e   σ  σ  m σ  m Xd e m k 1 m 1 r 1 r  σ r  2 σ r ( + )d e = + dke ( ) − − d e k=2 ≤ d e It follows

$$\lim_{m_0 \to \infty} \prod_{m=1}^{m_0} \exp\Bigl(\frac{(1 + r^m)^{\lceil\sigma\rceil} - 1}{m}\Bigr) \leq C \prod_{m=1}^\infty \exp(2\lceil\sigma\rceil r^m) < \infty.$$
The constant $C$ only depends on $r$ and $\lceil\sigma\rceil$.

Case $\sigma < 0$: we only have to look at the terms with a minus sign. We have
$$(1 - r^m)^{\sigma Y_m} = \Bigl(\frac{1}{1 - r^m}\Bigr)^{|\sigma| Y_m} \leq (1 + \beta_3 r^m)^{|\sigma| Y_m}.$$
We can now argue as in the case $\sigma > 0$.

Third part: the product $\prod_{m=1}^\infty \exp(t \alpha_m Y_m)$ is a.s. well defined, since
$$\operatorname{Log}\Bigl(\prod_{m=1}^\infty \exp(t \alpha_m Y_m)\Bigr) = \sum_{m=1}^\infty t \alpha_m Y_m \leq t \beta_1 \sum_{m=1}^\infty Y_m r^m < \infty \quad \text{a.s.}$$

We get

$$\mathbb{E}\Bigl[\prod_{m=1}^\infty \exp(t \alpha_m Y_m)\Bigr] \leq \mathbb{E}\Bigl[\prod_{m=1}^\infty \exp(t \beta_1 r^m Y_m)\Bigr] = \prod_{m=1}^\infty \exp\Bigl(\frac{e^{t \beta_1 r^m} - 1}{m}\Bigr).$$

Since $t \beta_1 r^m$ is bounded, we can find a constant $C = C(r, t)$ with $e^{t \beta_1 r^m} - 1 \leq C t \beta_1 r^m$. We get
$$\mathbb{E}\Bigl[\prod_{m=1}^\infty \exp(t \alpha_m Y_m)\Bigr] \leq \prod_{m=1}^\infty \exp\Bigl(\frac{C t \beta_1 r^m}{m}\Bigr) \leq \exp\Bigl(C t \beta_1 \sum_{m=1}^\infty r^m\Bigr) < \infty.$$

Now, we can prove:

Lemma 3.12. $Z_\infty^s(x)$ is a.s. a holomorphic function in $(x, s)$.

Proof. We have

$$\sum_{m=1}^\infty \bigl|\operatorname{Log}\bigl((1 - x^m)^{s Y_m}\bigr)\bigr| \leq \sum_{m=1}^\infty |s|\, Y_m \bigl|\operatorname{Log}(1 - x^m)\bigr| \leq |s| \sum_{m=1}^\infty Y_m \bigl(|\log(r_m)| + |\varphi_m|\bigr)$$

$$\leq |s| \sum_{m=1}^\infty Y_m (\beta_1 + \beta_2)\, r^m < \infty \quad \text{a.s. by lemma 3.11.}$$

Since the sum is a.s. finite, we can a.s. find an $m_0$ such that $\arg\bigl((1 - x^m)^{s Y_m}\bigr) \in [-\pi/2, \pi/2]$ for all $m \geq m_0$. Then $\sum_{m=m_0}^\infty \operatorname{Log}\bigl((1 - x^m)^{s Y_m}\bigr)$ is a holomorphic function and so is $Z_\infty^s(x)$.

Lemma 3.13. All moments of $Z_\infty^s(x)$ exist. $\mathbb{E}\bigl[Z_\infty^s(x)\bigr]$ is holomorphic in $(x, s)$ with $\mathbb{E}\bigl[Z_\infty^s(x)\bigr] = f^{(s)}(x)$.

Proof. Let $s \in K \subset \mathbb{C}$ with $K$ compact.

Step 1: existence of $\mathbb{E}\bigl[Z_\infty^s(x)\bigr]$. Let $s = \sigma + i t$. Then
$$|Z_\infty^s(x)| = \prod_{m=1}^\infty \bigl|(1 - x^m)^{s Y_m}\bigr| = \prod_{m=1}^\infty r_m^{\sigma Y_m} e^{-t Y_m \varphi_m} \leq \Bigl(\prod_{m=1}^\infty (1 + \beta_3 r^m)^{|\sigma| Y_m}\Bigr) \Bigl(\prod_{m=1}^\infty e^{|t| Y_m \alpha_m}\Bigr).$$

We set $\sigma_0 := \sup_{s \in K} |\sigma|$ and $t_0 := \sup_{s \in K} |t|$. We define
$$F(r) = F(r, K) := \Bigl(\prod_{m=1}^\infty (1 + \beta_3 r^m)^{\sigma_0 Y_m}\Bigr) \Bigl(\prod_{m=1}^\infty e^{t_0 Y_m \alpha_m}\Bigr). \qquad (3.7)$$

It follows from the Cauchy-Schwarz inequality (for $L^2$) and lemma 3.11 that $\mathbb{E}[F(r)]$ is finite. Therefore $\mathbb{E}\bigl[Z_\infty^s(x)\bigr]$ exists.

Step 2: the value and the holomorphicity of $\mathbb{E}\bigl[Z_\infty^s(x)\bigr]$. We have
$$\mathbb{E}\bigl[Z_\infty^s(x)\bigr] = \mathbb{E}\Bigl[\prod_{m=1}^\infty (1 - x^m)^{s Y_m}\Bigr] = \prod_{m=1}^\infty \mathbb{E}\bigl[(1 - x^m)^{s Y_m}\bigr] = \prod_{m=1}^\infty \exp\Bigl(\frac{(1 - x^m)^s - 1}{m}\Bigr) = \exp\Bigl(\sum_{m=1}^\infty \frac{(1 - x^m)^s - 1}{m}\Bigr). \qquad (3.8)$$
The exchange of the product and $\mathbb{E}[\cdot]$ is justified by step 1. The exchange of $\exp$ and the product is justified by the following calculation. We need Newton's binomial series to calculate the last expression. We have

$$(1 + x)^s = \sum_{k=0}^\infty \binom{s}{k} x^k \quad \text{for } |x| < 1,\ s \in \mathbb{C} \qquad (3.9)$$
where the sum on the right side is absolutely convergent (see [6, p. 26]). We get
$$\sum_{m=1}^\infty \frac{(1 - x^m)^s - 1}{m} = \sum_{m=1}^\infty \frac{1}{m} \sum_{k=1}^\infty \binom{s}{k} (-x^m)^k = \sum_{k=1}^\infty (-1)^k \binom{s}{k} \sum_{m=1}^\infty \frac{(x^k)^m}{m} = \sum_{k=1}^\infty (-1)^{k+1} \binom{s}{k} \operatorname{Log}(1 - x^k).$$
We have to justify the exchange of the two sums in the first line. We have seen in the proof of lemma 3.4 that for each $a > 1$ there exists a constant $C = C(a, K)$ with $\bigl|\binom{s}{k}\bigr| < C a^k$ for $k \in \mathbb{N}$, $s \in K$. Now choose $a > 1$ with $a r < 1$. We get

$$\sum_{m=1}^\infty \sum_{k=1}^\infty \frac{1}{m} \left|\binom{s}{k}\right| |x^m|^k \leq \sum_{m=1}^\infty \sum_{k=1}^\infty \frac{1}{m} C a^k r^{mk} = C \sum_{m=1}^\infty \frac{1}{m} \frac{a r^m}{1 - a r^m} < \infty.$$
Step 3: holomorphicity of $\mathbb{E}\bigl[Z_\infty^s(x)\bigr]$. We know from step 2 and the definition of $f^{(s)}(x)$ that
$$\mathbb{E}\bigl[Z_\infty^s(x)\bigr] = f^{(s)}(x).$$
Since we have shown in lemma 3.4 the holomorphicity of $f^{(s)}(x)$, we are done.

Step 4: existence of the moments. We have $\bigl(Z_\infty^s(x)\bigr)^m = Z_\infty^{ms}(x)$ for each $m \in \mathbb{N}$ and $\overline{Z_\infty^s(x)} = Z_\infty^{\bar{s}}(\bar{x})$ (use $\overline{e^z} = e^{\bar{z}}$). The existence of the moments now follows from step 1.

3.6 Convergence to the limit

We have so far extended the definitions and found a possible limit. We complete the proof of theorem 3.1 by showing that

$$\mathbb{E}\bigl[Z_n^s(x)\bigr] \to \mathbb{E}\bigl[Z_\infty^s(x)\bigr] = f^{(s)}(x) \quad (n \to \infty)$$
for $|x| < r$ and $s \in \mathbb{C}$.

We give here two different proofs. The idea of the first is to prove $Z_n(x) \xrightarrow{d} Z_\infty(x)$ and then use uniform integrability (see lemma 3.16). The idea of the second is to prove theorem 3.1 for $x, s \in \mathbb{R}$ and then apply the theorem of Montel.

Note that the second proof does not imply $Z_n(x) \xrightarrow{d} Z_\infty(x)$. One would need that $Z_\infty(x)$ is uniquely determined by its moments. Unfortunately we have not been able to prove or disprove this.

3.6.1 First proof of theorem 3.1

Lemma 3.14. We have for all fixed $x$ and $s$

$$\sum_{m=1}^n s\, C_m^{(n)} \operatorname{Log}(1 - x^m) \xrightarrow{d} \sum_{m=1}^\infty s\, Y_m \operatorname{Log}(1 - x^m) \quad (n \to \infty), \qquad (3.10)$$
$$Z_n^s(x) \xrightarrow{d} Z_\infty^s(x) \quad (n \to \infty). \qquad (3.11)$$
Proof. Since the exponential map is continuous, the second part follows immediately from the first part. We know from lemma 3.10 that
$$\mathbb{E}\bigl[|C_m^{(n)} - Y_m|\bigr] \leq \frac{2}{n+1}. \qquad (3.12)$$
We get

 n  n X X ” — s Y C(n) Log 1 x m s Y C(n) Log 1 x m E  ( m m ) ( )  E m m ( ) m=1 − − ≤ | | m=1 | − | | − | n 2 s X 2 s r β1 + β2 β β rm 0 n . n | |1 ( 1 + 2) = n| |1 1 r ( ) ≤ + m 1 + −→ → ∞ = −

Weak convergence does not automatically imply convergence of the moments. One needs some additional properties. We therefore introduce

Definition 3.15. A sequence of (complex valued) random variables $(X_m)_{m \in \mathbb{N}}$ is called uniformly integrable if
$$\sup_{n \in \mathbb{N}} \mathbb{E}\bigl[|X_n|\, 1_{|X_n| > c}\bigr] \to 0 \quad \text{for } c \to \infty.$$
Lemma 3.16. Let $(X_m)_{m \in \mathbb{N}}$ be uniformly integrable and assume that $X_n \xrightarrow{d} X$. Then
$$\mathbb{E}[X_n] \to \mathbb{E}[X].$$
Proof. See [8, chapter 6.10, p. 309].

We finish the proof of theorem 3.1. Let $s \in K \subset \mathbb{C}$ with $K$ compact. We have found in the proof of lemma 3.13
$$|Z_\infty^s(x)| \leq F(r) = \Bigl(\prod_{m=1}^\infty (1 + \beta_3 r^m)^{\sigma_0 Y_m}\Bigr) \Bigl(\prod_{m=1}^\infty e^{t_0 Y_m \alpha_m}\Bigr) \qquad (3.13)$$

with $\mathbb{E}[F(r)] < \infty$. It is possible that $C_m^{(n)} = Y_m + 1$, so the inequality $|Z_n^s(x)| < F(r)$ does not have to be true. We therefore replace $Y_m$ by $Y_m + 1$ in the definition of $F(r)$. This new $F(r)$ fulfills $\mathbb{E}[F(r)] < \infty$ and
$$|Z_n^s(x)| < F(r), \quad |Z_\infty^s(x)| < F(r) \quad \forall n \in \mathbb{N}.$$
We get
$$\sup_{n \in \mathbb{N}} \mathbb{E}\bigl[|Z_n^s(x)|\, 1_{|Z_n^s(x)| > c}\bigr] \leq \mathbb{E}\bigl[F(r)\, 1_{F(r) > c}\bigr] \to 0 \quad (c \to \infty).$$
The sequence $(Z_n^s(x))_{n \in \mathbb{N}}$ is therefore uniformly integrable and theorem 3.1 follows immediately from lemmas 3.14 and 3.16.

3.6.2 Second proof of theorem 3.1

We first prove the convergence in the special case $x \in [0, r[$, $s \in [1, 2]$.

Lemma 3.17. For all $x \in [0, r[$ and $s \in [1, 2]$, we have
$$\mathbb{E}\bigl[Z_n^s(x)\bigr] \to \mathbb{E}\bigl[Z_\infty^s(x)\bigr] \quad (n \to \infty).$$
Proof. Let $x$ and $s$ be fixed.

"$\leq$": Let $m_0$ be arbitrary and fixed. For $n \geq m_0$ we have
$$\mathbb{E}\bigl[Z_n^s(x)\bigr] = \mathbb{E}\Bigl[\prod_{m=1}^{n} (1 - x^m)^{s C_m^{(n)}}\Bigr] \leq \mathbb{E}\Bigl[\prod_{m=1}^{m_0} (1 - x^m)^{s C_m^{(n)}}\Bigr].$$

We know from lemma 3.7 that

$$(C_1^{(n)}, C_2^{(n)}, \dots, C_{m_0}^{(n)}) \xrightarrow{d} (Y_1, Y_2, \dots, Y_{m_0}) \quad (n \to \infty).$$
The function $\prod_{m=1}^{m_0} (1 - x^m)^{s c_m}$ is clearly continuous in $(c_1, \dots, c_{m_0})$ and bounded by 1. We therefore get
$$\mathbb{E}\Bigl[\prod_{m=1}^{m_0} (1 - x^m)^{s C_m^{(n)}}\Bigr] \to \mathbb{E}\Bigl[\prod_{m=1}^{m_0} (1 - x^m)^{s Y_m}\Bigr] \quad (n \to \infty).$$

Since $m_0$ was arbitrary and $(1 - x^m)^{s Y_m} \leq 1$, it follows with dominated convergence that
$$\limsup_{n \to \infty} \mathbb{E}\bigl[Z_n^s(x)\bigr] \leq \inf_{m_0} \mathbb{E}\Bigl[\prod_{m=1}^{m_0} (1 - x^m)^{s Y_m}\Bigr] = \mathbb{E}\Bigl[\prod_{m=1}^{\infty} (1 - x^m)^{s Y_m}\Bigr] = \mathbb{E}\bigl[Z_\infty^s(x)\bigr].$$
"$\geq$": The second part is more difficult. Here we need the Feller coupling. Remember: $B_m^{(n)} = \bigl\{\xi_{n-m} \cdots \xi_{n+1} = 10 \cdots 0\bigr\}$.

If $C_m^{(n)} \leq Y_m$ a.s., we would not have any problems. But it may happen that $C_m^{(n)} = Y_m + 1$. If $\xi_{n+1} = 1$, then $C_m^{(n)} \leq Y_m$. For this reason we have
$$Z_n(x) = \sum_{m=0}^n Z_n(x)\, 1_{B_m^{(n)}} = Z_n(x)\, 1_{B_0^{(n)}} + \sum_{m=1}^n (1 - x^m)\, Z_{n-m}(x)\, 1_{B_m^{(n)}}. \qquad (3.14)$$
We choose $1 > \epsilon > 0$ fixed and $m_0$ large enough such that $(1 - x^m)^s > 1 - \epsilon$ for $m \geq m_0$. Then
$$\mathbb{E}\bigl[Z_n^s(x)\bigr] \geq \mathbb{E}\Bigl[\sum_{m=1}^n (1 - x^m)^s Z_{n-m}^s(x)\, 1_{B_m^{(n)}}\Bigr] \geq \mathbb{E}\Bigl[\sum_{m=1}^n (1 - x^m)^s Z_\infty^s(x)\, 1_{B_m^{(n)}}\Bigr]$$
$$\geq (1 - \epsilon)\,\mathbb{E}\Bigl[\sum_{m=m_0+1}^{n} Z_\infty^s(x)\, 1_{B_m^{(n)}}\Bigr] + \mathbb{E}\Bigl[\sum_{m=1}^{m_0} (1 - x^m)^s Z_\infty^s(x)\, 1_{B_m^{(n)}}\Bigr].$$

The last summand goes to $0$ by the Cauchy–Schwarz inequality, since $P\big[B_m^{(n)}\big] = \frac{1}{n+1}$ and $Z_\infty^s(x)$ has finite expectation for all $s$. We can replace $m_0$ by $0$ in the other summand with the same argument. Since $\sum_{m=0}^{\infty} \mathbf{1}_{B_m^{(n)}} = 1$, we get $\liminf_{n\to\infty} E[Z_n^s(x)] \ge (1-\varepsilon)\,E[Z_\infty^s(x)]$, and the claim follows since $\varepsilon$ was arbitrary.

We have proven theorem 3.1 for some special values of $x$ and $s$. This proof is based on the fact that $0 < (1-x^m) \le 1$ for $x \in [0,1[$. Therefore we cannot use this proof directly for arbitrary $(x,s)$. We could try to modify the proof, but this turns out to be rather complicated. An easier way is to use the theorem of Montel (see [6, p.230] or [7, p.23]).

Suppose that there exist a point $(x_0, s_0)$ and a subsequence $\Lambda$ such that

$$\Big|E\big[Z_n^{s_0}(x_0)\big] - E\big[Z_\infty^{s_0}(x_0)\big]\Big| > \varepsilon \quad \text{for some } \varepsilon \text{ and all } n \in \Lambda.$$

We have found in the first proof in (3.13) (and in the proof of lemma 3.13) an upper bound $F(r)$ for the sequence $\big(E[Z_n^s(x)]\big)_{n\in\Lambda}$ for $|x| < r$. The sequence $\big(E[Z_n^s(x)]\big)_{n\in\Lambda}$ is thus locally bounded and we can apply the theorem of Montel. We therefore can find a subsequence $\Lambda'$ of $\Lambda$ and a holomorphic function $g$ with $E[Z_n^s(x)] \to g$ on $B_r(0) \times K$ (for $n \in \Lambda'$). But $g(x,s)$ has to agree with $E[Z_\infty^s(x)]$ on $[0,r[\, \times [1,2]$. Therefore $g(x,s) = E[Z_\infty^s(x)]$. But this is impossible since $g(x_0,s_0) \neq E[Z_\infty^{s_0}(x_0)]$. This completes the proof of theorem 3.1.
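The convergence in this section can be illustrated numerically. The sketch below is my own illustration, not part of the paper: it samples the cycle type of a uniform permutation using the standard fact that the cycle containing a fixed element has length uniform on $\{1,\ldots,n\}$, samples $Y_m \sim \mathrm{Poisson}(1/m)$ for $Z_\infty$ (truncating the infinite product at a level of my choosing), and compares the empirical means. For $s = 1$ both expectations equal $1-x$ by (2.11), which gives a concrete reference value.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycle_counts(n):
    """Cycle type of a uniform random permutation of n elements: the cycle
    containing a fixed element has length uniform on {1, ..., n}; recurse."""
    counts, left = {}, n
    while left > 0:
        L = int(rng.integers(1, left + 1))
        counts[L] = counts.get(L, 0) + 1
        left -= L
    return counts

def Z_n(x, n):
    # Z_n(x) = prod_m (1 - x^m)^{C_m^{(n)}}
    return np.prod([(1 - x ** m) ** c for m, c in cycle_counts(n).items()])

def Z_inf(x, M=200):
    # Z_infty(x) = prod_m (1 - x^m)^{Y_m}, Y_m ~ Poisson(1/m), truncated at M
    ys = rng.poisson(1.0 / np.arange(1, M + 1))
    return np.prod([(1 - x ** m) ** y for m, y in enumerate(ys, 1) if y > 0])

x, N = 0.5, 4000
mean_n = np.mean([Z_n(x, 50) for _ in range(N)])
mean_inf = np.mean([Z_inf(x) for _ in range(N)])
# both empirical means should be close to E[Z_n(x)] = E[Z_inf(x)] = 1 - x = 0.5
```

Since $Z_n(x) \in (0,1]$ for $x \in [0,1[$, the Monte Carlo error here is small even for moderate sample sizes.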

4 Growth Rates for $|x| = 1$

We consider in this section only the case $p = 2$ and $x = x_1 = \overline{x_2}$. We assume $s_1, s_2 \in \mathbb{N}$, $|x| = 1$ and $x$ not a root of unity, i.e. $x^k \neq 1$ for all $k \in \mathbb{Z}\setminus\{0\}$.

We first calculate the growth rate of $E\big[Z_n^{s_1}(x)\overline{Z_n(x)}^{s_2}\big]$ for $s_2 = 0$ (see lemmas 4.2 and 4.3) and then for $s_2$ arbitrary (see theorem 4.5).

The main results in this section can also be obtained by using theorems VI.3 and VI.5 in [5]. These theorems are based on Cauchy's integral formula and are very general. To apply them, one just has to take a look at the singularities of the generating function. We do not use these theorems since we can reach our target with a much simpler tool: a partial fraction decomposition. The advantage is that we get the asymptotic behavior with a simple calculation (see (4.1), (4.2) and (4.3)), and the calculations after (4.3) are the same in either approach.

4.1 Case $s_2 = 0$

We use the generating function of theorem 2.12. We have to calculate the growth rate of $[t^n]f^{(s)}(x,t)$, the $n$-th coefficient of $f^{(s)}(x,t)$. But $f^{(s)}(x,t)$ is a quite complicated expression and we therefore express it in a different way. Since $f^{(s)}(x,t)$ is a rational function in $t$, we can do a partial fraction decomposition with respect to $t$ (with $x$ fixed).

$$f^{(s)}(x,t) = \prod_{k=0}^{s} \big(1 - x^k t\big)^{(-1)^{k+1}\binom{s}{k}} = P(t) + \sum_{k \text{ even}} \sum_{l=1}^{\binom{s}{k}} \frac{a_{k,l}}{(1-x^k t)^l} \qquad (4.1)$$

where $P$ is a polynomial and $a_{k,l} \in \mathbb{C}$. This formulation can be found in [10, chapter 69.4, p.401]. Note that at this point we need the condition that $x$ is not a root of unity. If $x$ is a root of unity, some factors can be equal and cancel or increase the power. For example, we have $f^{(s)}(1,t) = 1$ for all $s \in \mathbb{C}$.

What is the growth rate of $[t^n]\frac{1}{(1-x^k t)^l}$? We have for $l \in \mathbb{N}$ and $|t| < 1$

$$\frac{1}{(1-t)^l} = \frac{1}{(l-1)!}\sum_{n=0}^{\infty}\Bigg(\prod_{k=1}^{l-1}(n+k)\Bigg)\, t^n. \qquad (4.2)$$

This equation can be shown by differentiating the geometric series $(l-1)$ times. We get

$$[t^n]\frac{1}{(1-x^k t)^l} \sim \frac{n^{l-1}}{(l-1)!}\,(x^k)^n.$$

Recall that $A(n) \sim B(n)$ means $\lim_{n\to\infty} \frac{A(n)}{B(n)} = 1$. Since $|x| = 1$, we have

$$\bigg|[t^n]\frac{1}{(1-x^k t)^l}\bigg| \sim \frac{n^{l-1}}{(l-1)!}. \qquad (4.3)$$

We know from (4.3) the growth rate of each summand in (4.1). Since we have only finitely many summands, only those summands $\frac{a_{k,l}}{(1-x^k t)^l}$ with $l$ maximal and $a_{k,l} \neq 0$ are relevant. The LHS of (4.1) has at $t = x^{-k}$ a pole of order $\binom{s}{k}$ (for $0 \le k \le s$ and $k$ even). We therefore have $a_{k,\binom{s}{k}} \neq 0$, since there is only one summand on the RHS of (4.1) with a pole at $t = x^{-k}$ of order at least $\binom{s}{k}$. Before we can write down the growth rate of $E[Z_n^s(x)]$, we have to define

Definition 4.1. Let $s, k_0 \in \mathbb{N}$ with $0 \le k_0 \le s$ and $k_0$ even. We set

$$C(k_0) := \frac{1}{\big(\binom{s}{k_0}-1\big)!}\prod_{\substack{k=0 \\ k \neq k_0}}^{s}\big(1 - x^{k-k_0}\big)^{(-1)^{k+1}\binom{s}{k}}. \qquad (4.4)$$

We put everything together and get

Lemma 4.2. We have for $s \neq 4m+2$

$$E\big[Z_n^s(x)\big] \sim n^{\binom{s}{k_0}-1}\, C(k_0)\,(x^{k_0})^n \qquad (4.5)$$

with

$$k_0 := \begin{cases} 2m, & \text{for } s = 4m, \\ 2m, & \text{for } s = 4m+1, \\ 2m+2, & \text{for } s = 4m+3. \end{cases}$$

Proof. We have to calculate $M := \max_{k \text{ even}} \binom{s}{k}$. A straightforward verification shows that $M = \binom{s}{k_0}$ and that there is only one summand with exponent $M$ in the case $s \neq 4m+2$. We apply (4.3) and get

$$E\big[Z_n^s(x)\big] \sim n^{\binom{s}{k_0}-1}\,\frac{a_{k_0,\binom{s}{k_0}}}{\big(\binom{s}{k_0}-1\big)!}\,(x^{k_0})^n.$$

By the residue theorem, $a_{k_0,\binom{s}{k_0}} = \prod_{k=0,\, k\neq k_0}^{s}\big(1-x^{k-k_0}\big)^{(-1)^{k+1}\binom{s}{k}}$. This proves (4.5).
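Lemma 4.2 is easy to check numerically. The following sketch is my own verification, not part of the paper: it expands $f^{(s)}(x,t)$ as a power series by multiplying the series of the individual factors, and compares the $n$-th coefficient with $n^{\binom{s}{k_0}-1}C(k_0)(x^{k_0})^n$ for $s = 3$ (so $k_0 = 2$). The ratio should approach $1$.

```python
import numpy as np
from math import comb, factorial

def series(s, x, N):
    """Taylor coefficients in t of f^(s)(x,t) = prod_{k=0}^s (1 - x^k t)^{(-1)^{k+1} binom(s,k)}."""
    f = np.zeros(N + 1, dtype=complex)
    f[0] = 1.0
    for k in range(s + 1):
        e = (-1) ** (k + 1) * comb(s, k)
        if e >= 0:   # polynomial factor (1 - x^k t)^e
            g = np.array([comb(e, j) * (-(x ** k)) ** j for j in range(e + 1)], dtype=complex)
        else:        # series of (1 - x^k t)^e with e = -c < 0
            c = -e
            g = np.array([comb(j + c - 1, c - 1) * (x ** k) ** j for j in range(N + 1)],
                         dtype=complex)
        f = np.convolve(f, g)[: N + 1]
    return f

s, k0 = 3, 2                      # s = 4m + 3 with m = 0, hence k0 = 2m + 2 = 2
x = np.exp(2j)                    # |x| = 1 and x is not a root of unity
M = comb(s, k0)
C = np.prod([(1 - x ** (k - k0)) ** ((-1) ** (k + 1) * comb(s, k))
             for k in range(s + 1) if k != k0]) / factorial(M - 1)
n = 4000
ratio = series(s, x, n)[n] / (n ** (M - 1) * C * x ** (k0 * n))
# ratio should be close to 1 for large n
```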

We see in the next lemma that more than one constant can appear. This is the reason why we write $C(k_0)$ for the constant and not $C$ or $C(s,x)$. The case $s = 4m+2$ is a little bit more difficult, since there are two maximal terms, i.e. $\binom{4m+2}{2m} = \binom{4m+2}{2m+2}$.

Lemma 4.3. If $s = 4m+2$ then

$$E\big[Z_n^s(x)\big] \sim n^{\binom{4m+2}{2m}-1}\Big(C(2m)(x^{2m})^n + C(2m+2)(x^{2m+2})^n\Big) \qquad (4.6)$$

with $C(2m)(x^{2m})^n + C(2m+2)(x^{2m+2})^n = 0$ for at most one $n$.

Proof. A straightforward verification as in lemma 4.2 shows that $M = \binom{4m+2}{2m} = \binom{4m+2}{2m+2}$. Now we have two summands with a maximal $l$.

To prove (4.6), we have to show that $C(2m)(x^{2m})^n + C(2m+2)(x^{2m+2})^n = 0$ holds for at most one $n$. But $C(2m)(x^{2m})^n + C(2m+2)(x^{2m+2})^n = 0$ implies $x^{2n} = -\frac{C(2m)}{C(2m+2)}$. Since $x$ is not a root of unity, all $x^{2n}$ are different, so at most one $n$ can satisfy this equation.
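For $s = 2$ (that is, $4m+2$ with $m = 0$) the two-term behaviour of lemma 4.3 can be made completely explicit, since $f^{(2)}(x,t) = (1-xt)^2/\big((1-t)(1-x^2t)\big)$ has only simple poles. The sketch below is my own check, not from the paper: it computes $[t^n]f^{(2)}(x,t)$ in closed form, compares it with $C(0) + C(2)(x^2)^n$ (the constants from definition 4.1 for $s = 2$), and verifies that this two-term expression stays away from zero over the tested range, as the lemma asserts.

```python
import cmath

x = cmath.exp(0.9j)                          # |x| = 1, not a root of unity
C0 = (1 - x) ** 2 / (1 - x ** 2)             # C(0) for s = 2
C2 = (1 - x ** -1) ** 2 / (1 - x ** -2)      # C(2) for s = 2

def g(n):
    # [t^n] 1 / ((1 - t)(1 - x^2 t)) = sum_{j=0}^n x^{2j}
    return (1 - x ** (2 * (n + 1))) / (1 - x ** 2)

def coeff(n):
    # [t^n] f^(2)(x,t), valid for n >= 2, via f^(2) = (1 - 2xt + x^2 t^2) / ((1-t)(1-x^2 t))
    return g(n) - 2 * x * g(n - 1) + x ** 2 * g(n - 2)

deviation = max(abs(coeff(n) - (C0 + C2 * x ** (2 * n))) for n in range(2, 60))
smallest = min(abs(C0 + C2 * x ** (2 * n)) for n in range(2, 60))
# deviation should be at floating-point level; smallest should be bounded away from 0
```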

4.2 Case with $s_2$ arbitrary

We argue as before. Some factors appearing in $f^{(s_1,s_2)}(x,\bar{x},t)$ (see (2.10)) are equal, so we have to collect them before we can write down the partial fraction decomposition.

$$f^{(s_1,s_2)}(x,\bar{x},t) = \prod_{k_1=0}^{s_1}\prod_{k_2=0}^{s_2}\big(1 - x^{k_1}\bar{x}^{k_2} t\big)^{(-1)^{k_1+k_2+1}\binom{s_1}{k_1}\binom{s_2}{k_2}} = \prod_{k=-s_2}^{s_1}\big(1 - x^k t\big)^{S(k)} \qquad (4.7)$$

with

$$S(k) = \sum_{j=0}^{\infty}\binom{s_1}{k+j}\binom{s_2}{j}(-1)^{k+2j+1},$$

where we have used that $\bar{x} = x^{-1}$ for $|x| = 1$ and collected the factors with $k_1 - k_2 = k$.

To calculate $S(k)$ explicitly, we need Vandermonde's identity for binomial coefficients:

$$\binom{m_1+m_2}{q} = \sum_{j=0}^{\infty}\binom{m_1}{q-j}\binom{m_2}{j}. \qquad (4.8)$$

We get with $m_1 = s_1$, $m_2 = s_2$, $q = s_1 - k$ and $\binom{m}{k} = \binom{m}{m-k}$ that

$$S(k) = (-1)^{k+1}\binom{s_1+s_2}{s_1-k} \qquad (4.9)$$

and therefore

$$f^{(s_1,s_2)}(x,\bar{x},t) = \prod_{k=-s_2}^{s_1}\big(1 - x^k t\big)^{(-1)^{k+1}\binom{s_1+s_2}{s_1-k}} = \prod_{k=-s_2}^{s_1}\big(1 - x^k t\big)^{(-1)^{k+1}\binom{s_1+s_2}{s_2+k}}.$$
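The evaluation of $S(k)$ via Vandermonde's identity is easy to confirm by brute force. The following small check is not from the paper; it compares the defining sum of $S(k)$ in (4.7) with the closed form (4.9) over a range of $s_1, s_2, k$.

```python
from math import comb

def S(k, s1, s2):
    # defining sum of S(k) in (4.7); math.comb needs the guard 0 <= k + j <= s1
    return sum(comb(s1, k + j) * comb(s2, j) * (-1) ** (k + 2 * j + 1)
               for j in range(s2 + 1) if 0 <= k + j <= s1)

ok = all(S(k, s1, s2) == (-1) ** ((k + 1) % 2) * comb(s1 + s2, s1 - k)
         for s1 in range(1, 7) for s2 in range(1, 7)
         for k in range(-s2, s1 + 1))
```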

Before we look at the growth rate of $E\big[Z_n^{s_1}(x)\overline{Z_n(x)}^{s_2}\big]$, we define

Definition 4.4. We set for $s_1, s_2 \in \mathbb{N}$, $k_0 \in \mathbb{Z}$ with $-s_2 \le k_0 \le s_1$ and $k_0$ even

$$C(k_0) = C(s_1,s_2,k_0,x) := \frac{1}{\big(\binom{s_1+s_2}{s_2+k_0}-1\big)!}\prod_{\substack{k=-s_2 \\ k \neq k_0}}^{s_1}\big(1 - x^{k-k_0}\big)^{(-1)^{k+1}\binom{s_1+s_2}{s_2+k}}. \qquad (4.10)$$

Remark: definition 4.1 is a special case of definition 4.4. Therefore there is no danger of confusion and we can write $C(k_0)$ for both of them.

We get

Theorem 4.5.

• If $s_1 - s_2 \neq 4m+2$ then

$$E\big[Z_n^{s_1}(x)\overline{Z_n(x)}^{s_2}\big] \sim n^{\binom{s_1+s_2}{\lfloor (s_1+s_2)/2 \rfloor}-1}\, C(k_0)\,(x^{k_0})^n \qquad (4.11)$$

with

$$k_0 := \begin{cases} \frac{s_1-s_2}{2}, & \text{for } s_1-s_2 = 4m, \\ \frac{s_1-s_2-1}{2}, & \text{for } s_1-s_2 = 4m+1, \\ \frac{s_1-s_2+1}{2}, & \text{for } s_1-s_2 = 4m+3. \end{cases}$$

• If $s_1 - s_2 = 4m+2$ we set $k_0 := \frac{s_1-s_2}{2}$. Then

$$E\big[Z_n^{s_1}(x)\overline{Z_n(x)}^{s_2}\big] \sim n^{\binom{s_1+s_2}{s_2+k_0-1}-1}\Big(C(k_0-1)(x^{k_0-1})^n + C(k_0+1)(x^{k_0+1})^n\Big). \qquad (4.12)$$

Additionally, for every even $k_0$ with $-s_2 \le k_0 \le s_1$,

$$\overline{C(s_1,s_2,k_0)} = C(s_2,s_1,-k_0).$$

Proof. We prove here only the case $s_1 + s_2 = 4p+1$, distinguishing $s_2$ even and odd. The other cases are similar. We have to calculate

$$M := \max_{k \text{ even}} \binom{s_1+s_2}{s_2+k}.$$

We know that $\max_k \binom{4p+1}{k} = \binom{4p+1}{2p} = \binom{4p+1}{2p+1}$. If $s_2$ is even then $k + s_2$ runs through all even numbers between $0$ and $s_1+s_2$. Therefore $M = \binom{4p+1}{2p}$ and the maximum is attained for $k_0 = 2p - s_2 = \frac{s_1+s_2-1}{2} - s_2 = \frac{s_1-s_2-1}{2}$. We have in this case $s_1 - s_2 = 4p+1-2s_2 = 4m+1$ and formula (4.11) follows from (4.3). The argument for $s_2$ odd is similar.

It remains to show that $\overline{C(s_1,s_2,k_0)} = C(s_2,s_1,-k_0)$.

This follows from

$$\overline{C(s_1,s_2,k_0,x)} = \frac{1}{\big(\binom{s_1+s_2}{s_2+k_0}-1\big)!}\prod_{\substack{k=-s_2 \\ k \neq k_0}}^{s_1}\big(1 - x^{-k}x^{k_0}\big)^{(-1)^{k+1}\binom{s_1+s_2}{s_2+k}}$$
$$= \frac{1}{\big(\binom{s_1+s_2}{s_1-k_0}-1\big)!}\prod_{\substack{k=-s_1 \\ k \neq -k_0}}^{s_2}\big(1 - x^{k}x^{k_0}\big)^{(-1)^{k+1}\binom{s_1+s_2}{s_1+k}}$$
$$= C(s_2,s_1,-k_0,x),$$

where we have used $\bar{x} = x^{-1}$, replaced $k$ by $-k$ and used $\binom{s_1+s_2}{s_2+k_0} = \binom{s_1+s_2}{s_1-k_0}$.
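The conjugation identity can likewise be tested numerically. The sketch below is my own check, not part of the paper: it implements $C(s_1,s_2,k_0,x)$ directly from (4.10) and verifies $\overline{C(s_1,s_2,k_0)} = C(s_2,s_1,-k_0)$ for a non-root-of-unity $x$ on the unit circle and all admissible even $k_0$ with small $s_1, s_2$.

```python
import cmath
from math import comb, factorial

def C(s1, s2, k0, x):
    # C(s1, s2, k0, x) as in (4.10)
    prod = 1
    for k in range(-s2, s1 + 1):
        if k == k0:
            continue
        sign = -1 if k % 2 == 0 else 1          # (-1)^(k+1)
        prod *= (1 - x ** (k - k0)) ** (sign * comb(s1 + s2, s2 + k))
    return prod / factorial(comb(s1 + s2, s2 + k0) - 1)

x = cmath.exp(1.1j)
triples = [(s1, s2, k0)
           for s1 in range(1, 5) for s2 in range(1, 5)
           for k0 in range(-s2, s1 + 1) if k0 % 2 == 0]
deviation = max(abs(C(s1, s2, k0, x).conjugate() - C(s2, s1, -k0, x))
                for s1, s2, k0 in triples)
```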

Corollary 4.5.1.

$$E\big[|Z_n(x)|^{2s}\big] \sim n^{\binom{2s}{s}-1}\,\frac{\prod_{k=1}^{s}\big|1-x^k\big|^{2(-1)^{k+1}\binom{2s}{s+k}}}{\big(\binom{2s}{s}-1\big)!}.$$

Proof. Put $s_1 = s_2 = s$ in theorem 4.5 and pair the factors with indices $k$ and $-k$ in $C(0)$.

Corollary 4.5.2.

$$\mathrm{Var}\big[Z_n(x)\big] \sim n\,|1-x|^2.$$

Proof. We have $E\big[|Z_n(x)|^2\big] \sim C(1,1,0,x)\,n = |1-x|^2\, n$ and $E\big[Z_n(x)\big] = 1-x$ (see (2.11)).

4.3 The real and the imaginary part

We mentioned in the introduction the results in [9]. Do we have the same results for $Z_n(x)$? We first look at the expectation and the variance of the real and the imaginary part of $Z_n(x)$. We set

$$R_n(x) := \mathrm{Re}\big(Z_n(x)\big) \quad \text{and} \quad I_n(x) := \mathrm{Im}\big(Z_n(x)\big).$$

We have

Lemma 4.6. We write $x = e^{i\varphi}$. Then

1. $E[R_n(x)] = 1 - \cos(\varphi)$,

2. $E[I_n(x)] = -\sin(\varphi)$,

3. $\mathrm{Var}[R_n(x)] \sim n\,\frac{|1-x|^2}{2}$,

4. $\mathrm{Var}[I_n(x)] \sim n\,\frac{|1-x|^2}{2}$,

5. $\mathrm{Corr}(R_n, I_n) \to 0$ for $n \to \infty$.

Proof. (1) and (2) follow from (2.11). Next we prove (3) and (4). We use the growth rates for $s_1 = s_2 = 1$ and $s_1 = 2$, $s_2 = 0$. We only give the important constants explicitly.

$$E\big[Z_n(x)\overline{Z_n(x)}\big] = E\big[R_n^2 + I_n^2\big] \sim |1-x|^2\, n, \qquad (4.13a)$$
$$E\big[Z_n^2(x)\big] = E\big[R_n^2 + 2iR_nI_n - I_n^2\big] \sim C_1 + C_2(x^2)^n. \qquad (4.13b)$$

We calculate $(4.13a) \pm \mathrm{Re}(4.13b)$ and get

$$E\big[2R_n^2\big] \sim |1-x|^2\, n, \qquad E\big[2I_n^2\big] \sim |1-x|^2\, n.$$

We now prove the last point. We know from (4.13b) that

$$\mathrm{Cov}(R_n, I_n) = E[R_nI_n] + \sin(\varphi)\big(1-\cos(\varphi)\big) \sim C_4 + C_5\sin(2n\varphi) + C_6\cos(2n\varphi).$$

Point (5) now follows from (3) and (4).
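The identities entering this proof can be verified exactly for small $n$ by summing over cycle types, using $P(\lambda) = 1/\prod_m m^{a_m} a_m!$ for the cycle type of a uniform permutation. The sketch below is my own check, not from the paper: it confirms $E[Z_n(x)] = 1-x$ from (2.11), and that $E[R_n^2 + I_n^2] = E[|Z_n(x)|^2]$ equals $n|1-x|^2$ for these small $n$, consistent with (4.13a).

```python
import cmath
from math import factorial

def partitions(n, largest=None):
    # all partitions of n, i.e. cycle types of permutations of n elements
    largest = n if largest is None else largest
    if n == 0:
        yield []
        return
    for part in range(min(n, largest), 0, -1):
        for rest in partitions(n - part, part):
            yield [part] + rest

def expect(n, f):
    # E[f(cycle type)] for a uniform permutation: P(lambda) = 1 / prod_m m^a_m a_m!
    total = 0
    for lam in partitions(n):
        a = {m: lam.count(m) for m in set(lam)}
        p = 1.0
        for m, am in a.items():
            p /= m ** am * factorial(am)
        total += p * f(a)
    return total

def Z(a, x):
    # Z_n(x) = prod_m (1 - x^m)^{a_m} for cycle type a
    v = 1
    for m, am in a.items():
        v *= (1 - x ** m) ** am
    return v

x = cmath.exp(0.7j)                                  # |x| = 1
dev1 = max(abs(expect(n, lambda a: Z(a, x)) - (1 - x)) for n in range(1, 8))
dev2 = max(abs(expect(n, lambda a: abs(Z(a, x)) ** 2) - n * abs(1 - x) ** 2)
           for n in range(1, 8))
```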

What are the growth rates of $E[R_n^s]$ and $E[I_n^s]$? We need the following lemma to answer this question.

Lemma 4.7. Let $s \in \mathbb{N}$ and $z = x + iy$ be given with $x, y \in \mathbb{R}$. Then

$$x^s = \frac{1}{2^s}\sum_{k=0}^{s}\binom{s}{k}\, z^k \bar{z}^{s-k}, \qquad y^s = \frac{1}{(2i)^s}\sum_{k=0}^{s}(-1)^{s+k}\binom{s}{k}\, z^k \bar{z}^{s-k}. \qquad (4.14)$$

Proof. We argue by induction. If $s = 1$ then $x = \frac{1}{2}(z + \bar{z})$. For the step $s \to s+1$ we have

$$x^{s+1} = x \cdot x^s = \frac{z+\bar{z}}{2}\cdot\frac{1}{2^s}\sum_{k=0}^{s}\binom{s}{k} z^k \bar{z}^{s-k} = \frac{1}{2^{s+1}}\sum_{k=0}^{s+1}\binom{s+1}{k} z^k \bar{z}^{(s+1)-k},$$

where we have used Pascal's rule $\binom{s}{k-1} + \binom{s}{k} = \binom{s+1}{k}$. The proof for $y^s$ is similar.
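Identity (4.14) is easy to sanity-check numerically; the following snippet (not from the paper) evaluates both sides for an arbitrary complex number.

```python
from math import comb

def real_power(z, s):
    # right-hand side of (4.14) for x^s, with x = Re(z)
    return sum(comb(s, k) * z ** k * z.conjugate() ** (s - k)
               for k in range(s + 1)) / 2 ** s

def imag_power(z, s):
    # right-hand side of (4.14) for y^s, with y = Im(z)
    return sum((-1) ** (s + k) * comb(s, k) * z ** k * z.conjugate() ** (s - k)
               for k in range(s + 1)) / (2j) ** s

z = 1.3 - 0.7j
ok = all(abs(real_power(z, s) - z.real ** s) < 1e-9 and
         abs(imag_power(z, s) - z.imag ** s) < 1e-9 for s in range(1, 9))
```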

We then have

Theorem 4.8. Choose any $s \in \mathbb{N}$ and write $x = e^{i\varphi}$ with $\varphi \in [0,2\pi] \setminus 2\pi\mathbb{Q}$. Then there exist real constants $a_{2k} = a_{2k}(\varphi,s)$ and $b_{2k} = b_{2k}(\varphi,s)$ for $0 \le k \le \lfloor (s+1)/4 \rfloor$ with

$$E\big[R_n^s\big] \sim n^{\binom{s}{\lfloor s/2 \rfloor}-1}\Bigg(\sum_{k=0}^{\lfloor (s+1)/4 \rfloor} a_{2k}\cos\big(2k\, n\varphi\big)\Bigg), \qquad (4.15)$$

$$E\big[I_n^s\big] \sim n^{\binom{s}{\lfloor s/2 \rfloor}-1}\Bigg(\sum_{k=0}^{\lfloor (s+1)/4 \rfloor} b_{2k}\sin\big(2k\, n\varphi\big)\Bigg). \qquad (4.16)$$

At least one $a_{2k}$ and at least one $b_{2k}$ are not equal to zero.

Proof. We only prove the behavior of $E[R_n^s]$ for $s = 4m$. The other cases are similar. We have

$$E\big[R_n^s\big] = \frac{1}{2^s}\sum_{k=0}^{s}\binom{s}{k}\, E\Big[Z_n^k(x)\overline{Z_n(x)}^{s-k}\Big] = \frac{1}{2^s}\Bigg(\sum_{\substack{k=0 \\ k \text{ even}}}^{s}\binom{s}{k}\, E\Big[Z_n^k(x)\overline{Z_n(x)}^{s-k}\Big] + \sum_{\substack{k=0 \\ k \text{ odd}}}^{s}\binom{s}{k}\, E\Big[Z_n^k(x)\overline{Z_n(x)}^{s-k}\Big]\Bigg).$$

We now apply theorem 4.5. If $k$ is odd then $k - (4m-k) = 4p+2$ for a $p \in \mathbb{Z}$ and the growth rate of $E\big[Z_n^k(x)\overline{Z_n(x)}^{s-k}\big]$ is $n^{\binom{4m}{2m+1}-1}(\ldots)$. If $k$ is even then $k - (4m-k) = 4p$ for a $p \in \mathbb{Z}$ and the growth rate of $E\big[Z_n^k(x)\overline{Z_n(x)}^{s-k}\big]$ is $n^{\binom{4m}{2m}-1}(\ldots)$. It is therefore sufficient to look at the even $k$. We get

$$E\big[R_n^s\big] \sim n^{\binom{4m}{2m}-1}\,\frac{1}{2^s}\sum_{\substack{k=0 \\ k \text{ even}}}^{s}\binom{s}{k}\, C\Big(\tfrac{k-(s-k)}{2}\Big)\Big(x^{\frac{k-(s-k)}{2}}\Big)^n$$
$$= n^{\binom{4m}{2m}-1}\,\frac{1}{2^s}\sum_{k=-m}^{m}\binom{s}{2m+2k}\, C(2k)\,(x^{2k})^n$$
$$= n^{\binom{4m}{2m}-1}\,\frac{1}{2^s}\Bigg(\binom{s}{2m}C(0) + 2\sum_{k=1}^{m}\binom{s}{2m+2k}\Big(\mathrm{Re}\big(C(2k)\big)\cos(2kn\varphi) - \mathrm{Im}\big(C(2k)\big)\sin(2kn\varphi)\Big)\Bigg),$$

where $C(2k)$ stands for $C(2m+2k,\, 2m-2k,\, 2k,\, x)$.

We have used in the last equality that $\overline{C(s_1,s_2,k_0)} = C(s_2,s_1,-k_0)$.

This proves (4.15) if we can show that the last bracket is equal to zero for only finitely many $n$ (with $x$ fixed). We define

$$g(t) := \binom{s}{2m}C(0) + 2\sum_{k=1}^{m}\binom{s}{2m+2k}\Big(\mathrm{Re}\big(C(2k)\big)\cos(2kt) - \mathrm{Im}\big(C(2k)\big)\sin(2kt)\Big).$$

Suppose there are infinitely many (different) $n \in \mathbb{N}$ with $g(n\varphi) = 0$. All numbers $n\varphi$ are different modulo $2\pi$, since $\varphi \notin 2\pi\mathbb{Q}$. Therefore there are infinitely many $t \in [0,2\pi]$ with $g(t) = 0$. But $g(t)$ is a non-trivial linear combination of $\cos(\cdot)$ and $\sin(\cdot)$ and is therefore a holomorphic function in $t$. It follows immediately from the identity theorem (see [6]) that $g \equiv 0$. This is a contradiction since the functions $\cos(m_1 t)$ and $\sin(m_2 t)$ are linearly independent for $m_1 \ge 0$, $m_2 > 0$.

5 Concluding Remarks

The behavior of $E\big[\big(\tfrac{d}{dx}Z_n(x)\big)^s\big]$ for $s \in \mathbb{N}$, $|x| < 1$ is not included in this paper. We have been able to prove a result similar to theorem 2.4. Our proof uses the techniques of section 3 and Hartogs's theorem. Unfortunately we could not give an explicit expression for the limit, and the proof is more difficult than the proof for $E[Z_n^s(x)]$. We therefore decided to omit this result.

We have proven in this paper several results, but there are still open questions:

1. Is $Z_\infty(x)$ uniquely determined by its moments?

2. Are there random variables $R_\infty$ and $I_\infty$ such that $R_n \xrightarrow{d} R_\infty$ and $I_n \xrightarrow{d} I_\infty$?

3. What are the growth rates of $E[Z_n^s(x)]$ for $s \in \mathbb{C}$, $|x| = 1$ and $x$ not a root of unity?

Acknowledgement

I would like to thank Ashkan Nikeghbali and Paul-Olivier Dehaye for their help writing my first paper.

References

[1] Richard Arratia, A.D. Barbour, and Simon Tavaré. Logarithmic combinatorial structures: a probabilistic approach. EMS Monographs in Mathematics. European Mathematical Society (EMS), Zürich, 2003. MR2032426

[2] Paul Bourgade, Chris Hughes, Ashkan Nikeghbali, and Marc Yor. The characteristic polynomial of a random unitary matrix: a probabilistic approach, 2007. MR2451289

[3] Daniel Bump. Lie groups, volume 225 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2004. MR2062813

[4] Daniel Bump and Alex Gamburd. On the averages of characteristic polynomials from classical groups. Comm. Math. Phys., 265:227–274, 2006. MR2217304

[5] Philippe Flajolet and Robert Sedgewick. Analytic Combinatorics. Cambridge University Press, New York, NY, USA, 2009. MR2483235

[6] Eberhard Freitag and Rolf Busam. Complex analysis. Universitext. Springer-Verlag, Berlin, 2005. Translated from the 2005 German edition by Dan Fulea. MR2172762

[7] Klaus Fritzsche and Hans Grauert. From holomorphic functions to complex manifolds, volume 213 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2002. MR1893803

[8] Allan Gut. Probability: a graduate course. Springer Texts in Statistics. Springer, New York, 2005. MR2125120

[9] B.M. Hambly, P. Keevash, N. O’Connell, and D. Stark. The characteristic polynomial of a random permutation matrix. Stochastic Process. Appl., 90(2):335–346, 2000. MR1794543

[10] Harro Heuser. Lehrbuch der Analysis Teil 1. B. G. Teubner, 10 edition, 1993.

[11] J. P. Keating and N. C. Snaith. Random matrix theory and ζ(1/2 + it). Commun. Math. Phys., 214:57–89, 2000. MR1794265

[12] I. G. Macdonald. Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, second edition, 1995. With contributions by A. Zelevinsky, Oxford Science Publications. MR1354144
