
Lectures on A Method in the Theory of Exponential Sums

By M. Jutila

Tata Institute of Fundamental Research, Bombay, 1987

Published for the Tata Institute of Fundamental Research, Bombay
Springer-Verlag Berlin Heidelberg New York Tokyo 1987

Author: M. Jutila, Department of Mathematics, University of Turku, SF-20500 Turku 50, Finland

© Tata Institute of Fundamental Research, 1987

ISBN 3-540-18366-3 Springer-Verlag Berlin Heidelberg New York Tokyo
ISBN 0-387-18366-3 Springer-Verlag New York Heidelberg Berlin Tokyo

No part of this book may be reproduced in any form by print, microfilm or any other means without written permission from the Tata Institute of Fundamental Research, Bombay 400 005

Printed by INSDOC Regional Centre, Indian Institute of Science Campus, Bangalore 560 012 and published by H. Goetze, Springer-Verlag, Heidelberg, West Germany

Printed in India

Preface

These lectures were given at the Tata Institute of Fundamental Research in October - November 1985. It was my first object to present a self-contained introduction to summation and transformation formulae for exponential sums involving either the divisor function d(n) or the Fourier coefficients of a cusp form; these two cases are in fact closely analogous. Secondly, I wished to show how these formulae - in combination with some standard methods - can be applied to the estimation of the exponential sums in question. I would like to thank Professor K. Ramachandra, Professor R. Balasubramanian, Professor S. Raghavan, Professor T.N. Shorey, and Dr. S. Srinivasan for their kind hospitality, and my whole audience for interest and stimulating discussions. In addition, I am grateful to my colleagues D.R. Heath-Brown, M.N. Huxley, A. Ivic, T. Meurman, Y. Motohashi, and many others for valuable remarks concerning the present notes and my earlier work on these topics.

Notation

The following notation, mostly standard, will occur repeatedly in these notes.

$\gamma$  Euler's constant.
$s = \sigma + it$, a complex variable.
$\zeta(s)$  Riemann's zeta-function.
$\Gamma(s)$  The gamma-function.
$\chi(s) = 2^s \pi^{s-1} \Gamma(1-s) \sin(\pi s/2)$.
$J_n(z)$, $Y_n(z)$, $K_n(z)$  Bessel functions.
$e(\alpha) = e^{2\pi i \alpha}$.
$e_k(\alpha) = e^{2\pi i \alpha/k}$.
$\mathrm{Res}(f, a)$  The residue of the function $f$ at the point $a$.
$\int_{(c)} f(s)\,ds$  The integral of the function $f$ over the line $\mathrm{Re}\,s = c$.
$d(n)$  The number of positive divisors of the integer $n$.
$a(n)$  Fourier coefficient of a cusp form.
$\varphi(s) = \sum_{n=1}^{\infty} a(n) n^{-s}$.
$\kappa$  The weight of a cusp form.
$\tilde a(n) = a(n) n^{-(\kappa-1)/2}$.
$r = h/k$, a rational number with $(h, k) = 1$ and $k \ge 1$.
$\bar h$  The residue $\pmod{k}$ defined by $h \bar h \equiv 1 \pmod{k}$.
$E(s, r) = \sum_{n=1}^{\infty} d(n) e(nr) n^{-s}$.
$\varphi(s, r) = \sum_{n=1}^{\infty} a(n) e(nr) n^{-s}$.
$\|\alpha\|$  The distance of $\alpha$ from the nearest integer.
${\sum}'_{n \le x} f(n) = \sum_{1 \le n \le x} f(n)$, except that if $x$ is an integer, then the term $f(x)$ is to be replaced by $\frac12 f(x)$.
${\sum}'_{a \le n \le b} f(n)$  A sum with similar conventions as above if $a$ or $b$ is an integer.


$D(x) = {\sum}'_{n \le x} d(n)$.
$A(x) = {\sum}'_{n \le x} a(n)$.
$D(x, \alpha) = {\sum}'_{n \le x} d(n) e(n\alpha)$.
$A(x, \alpha) = {\sum}'_{n \le x} a(n) e(n\alpha)$.
$D_a(x) = \frac{1}{a!} {\sum}'_{n \le x} d(n)(x - n)^a$.
$A_a(x)$, $D_a(x, \alpha)$, $A_a(x, \alpha)$ are analogously defined.
$\epsilon$  An arbitrarily small positive constant.
$A$  A constant, not necessarily the same at each occurrence.
$C^n[a, b]$  The class of functions having a continuous $n$th derivative in the interval $[a, b]$.
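Since the primed sum and the Riesz means $D_a$ are used constantly later on, a small Python sketch may help fix the conventions; it is our addition, not part of the text, and it checks the elementary fact that for non-integer $x$ one has $D_1(x) = \int_0^x D(t)\,dt$.

```python
import math

def d(n):
    # number of positive divisors of n (brute force; for illustration only)
    return sum(1 for m in range(1, n + 1) if n % m == 0)

def sum_prime(f, x):
    # the primed sum: f(n) summed over 1 <= n <= x, with f(x) halved when x is an integer
    total = sum(f(n) for n in range(1, math.floor(x) + 1))
    if x == math.floor(x):
        total -= 0.5 * f(int(x))
    return total

def D1(x):
    # Riesz mean D_1(x) = (1/1!) Σ'_{n<=x} d(n)(x-n)
    return sum_prime(lambda n: d(n) * (x - n), x)

# For non-integer x, D_1(x) equals the integral of the step function
# D(t) = Σ_{n<=t} d(n) over [0, x]; integrate D piecewise on [0, 10.5]:
D_full = lambda j: sum(d(n) for n in range(1, j + 1))
integral = sum(D_full(j) for j in range(1, 10)) + 0.5 * D_full(10)
print(D1(10.5), integral)  # both print 113.5
```

The halving at integer arguments is precisely what makes the summation formulae of this chapter symmetric at the endpoints.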

The symbols $O(\cdot)$, $\ll$, and $\gg$ are used in their standard meaning. Also, $f \asymp g$ means that $f$ and $g$ are of equal order of magnitude, i.e. that $1 \ll f/g \ll 1$. The constants implied by these notations depend at most on $\epsilon$.

Contents

Preface iv

Notation v

Introduction 1

1 Summation Formulae 7
1.1 The Function $E(s, r)$ 8
1.2 The Function $\varphi(s, r)$ 12
1.3 Asymptotic Formulae for the Gamma Function and Bessel Functions 15
1.4 Evaluation of Some Complex Integrals 18
1.5 Approximate Formulae and Mean Value Estimates for $D(x, r)$ and $A(x, r)$ 21
1.6 Identities for $D_a(x, r)$ and $A_a(x, r)$ 31
1.7 Analysis of the Convergence of the Voronoi Series 34
1.8 Identities for $D(x, r)$ and $A(x, r)$ 38
1.9 The Summation Formulae 41

2 Exponential Integrals 46
2.1 A Saddle-Point Theorem for ... 47
2.2 Smoothed Exponential Integrals without a Saddle Point 60

3 Transformation Formulae for Exponential Sums 63
3.1 Transformation of Exponential Sums 63
3.2 Transformation of Smoothed Exponential Sums 74


4 Applications 80
4.1 Transformation Formulae for Dirichlet Polynomials 80
4.2 On the Order of $\varphi(\kappa/2 + it)$ 88
4.3 Estimation of "Long" Exponential Sums 96
4.4 The Twelfth Moment of ... 106

Introduction

ONE OF THE basic devices (usually called "process B"; see [13], §2.3) in van der Corput's method is to transform an exponential sum into a new shape by an application of van der Corput's lemma and the saddle-point method. An exponential sum

$$(0.1)\qquad \sum_{a < n \le b} e(f(n)),$$

where $f \in C^2[a, b]$, $f''(x) < 0$ in $[a, b]$, $f'(b) = \alpha$, and $f'(a) = \beta$, is first written, by use of van der Corput's lemma, as

$$(0.2)\qquad \sum_{\alpha - \eta \le n \le \beta + \eta} \int_a^b e(f(x) - nx)\,dx + O(\log(\beta - \alpha + 2)),$$

where $\eta \in (0, 1)$ is fixed, and the integrals are then evaluated by the saddle-point method. The same process can be applied to exponential sums of the type

$$(0.3)\qquad \sum_{a \le n \le b} d(n)\,g(n)\,e(f(n))$$

as well. The role of van der Corput’s lemma or Poisson’s summation formula is now played by Voronoi’s summation formula

$$(0.4)\qquad {\sum_{a \le n \le b}}' d(n) f(n) = \int_a^b (\log x + 2\gamma) f(x)\,dx + \sum_{n=1}^{\infty} d(n) \int_a^b f(x)\,\alpha(nx)\,dx,$$

where

$$\alpha(x) = 4K_0(4\pi x^{1/2}) - 2\pi Y_0(4\pi x^{1/2}).$$

The well-known asymptotic formulae for the Bessel functions $K_0$ and $Y_0$ imply an approximation for $\alpha(nx)$ in terms of trigonometric functions, and, when the corresponding exponential integrals in (0.4) - with $g(x) e(f(x))$ in place of $f(x)$ - are treated by the saddle-point method, a certain exponential sum involving $d(n)$ can be singled out, the contribution of the other terms of the series (0.4) being estimated as an error term. The leading integral normally represents the expected value of the sum in question. As a technical device, it may be helpful to provide the sum (0.3) with suitable smooth weights $\eta(n)$ which do not affect the sum too much but which make the series in the Voronoi formula for the sum

$${\sum_{a \le n \le b}}' \eta(n)\,d(n)\,g(n)\,e(f(n))$$

absolutely convergent. Another device, at first sight nothing but a triviality, consists of replacing $f(n)$ in (0.3) by $f(n) + rn$, where $r$ is an integer to be chosen suitably, namely so as to make the function $f'(x) + r$ small in $[a, b]$. This formal modification does not, of course, affect the sum itself in any way, but the outcome of applying Voronoi's summation formula and the saddle-point method takes quite a new shape. The last-mentioned argument appeared for the first time in [16], where a transformation formula for the Dirichlet polynomial

$$(0.5)\qquad S(M_1, M_2) = \sum_{M_1 \le m \le M_2} d(m)\,m^{-1/2-it}$$

was derived. An interesting resemblance between the resulting expression for $S(M_1, M_2)$ and the well-known formula of F.V. Atkinson [2] for the error term $E(T)$ in the asymptotic formula

$$(0.6)\qquad \int_0^T |\zeta(\tfrac12 + it)|^2\,dt = (\log(T/2\pi) + 2\gamma - 1)T + E(T)$$

was clearly visible, especially in the case $r = 1$. This phenomenon has, in fact, a natural explanation. For differentiation of (0.6) with respect to $T$, ignoring the error term $O(\log^2 T)$ in Atkinson's formula for $E(T)$, yields heuristically an expression for $|\zeta(\tfrac12 + it)|^2$, which can be indeed verified, up to a certain error, if

$$|\zeta(\tfrac12 + it)|^2 = \zeta^2(\tfrac12 + it)\,\chi^{-1}(\tfrac12 + it)$$

is suitably rewritten invoking the approximate functional equation for $\zeta^2(s)$ and the transformation formula for $S(M_1, M_2)$ (for details, see Theorem 2 in [16]). The method of [16] also works, with minor modifications, if the coefficients $d(m)$ in (0.5) are replaced by the Fourier coefficients $a(m)$ of a cusp form of weight $\kappa$ for the full modular group; the Dirichlet polynomial is now considered on the critical line $\sigma = \kappa/2$ of the Dirichlet series

$$\varphi(s) = \sum_{n=1}^{\infty} a(n) n^{-s}.$$

This analogy between $d(m)$ and $a(m)$ will prevail throughout these notes, and in order to avoid repetitions, we are not going to give details of the proofs in both cases. As we shall see, the method could be generalized to other cases, related to Dirichlet series satisfying a functional equation of a suitable type. But we are leaving these topics aside here, for those two cases mentioned above seem to be already representative enough. The transformation formula of [16] has found an application in the proof of the mean twelfth power estimate

$$(0.7)\qquad \int_0^T |\zeta(\tfrac12 + it)|^{12}\,dt \ll T^2 \log^{17} T$$

of D.R. Heath-Brown [11]. The original proof by Heath-Brown was based on Atkinson's formula. The details of the alternative approach can be found in [13], §8.3. The formula of [16] is useful only if the Dirichlet polynomial to be transformed is fairly short and the numbers $t(2\pi M_i)^{-1}$ lie near to an integer $r$. But in applications to Dirichlet series it is desirable to be able to deal with "long" sums as well. It is not advisable to transform such a sum by a single formula; but a more practical representation will be obtained if the sum is first split up into segments which are individually transformed using an optimally chosen value of $r$ for each of them. The set of possible values of $r$ can be extended from the integers to the rational numbers if a summation formula of the Voronoi type, to be given in §1.9, for sums

$${\sum_{a \le n \le b}}' b(n)\,e_k(hn)\,f(n), \qquad b(n) = d(n) \text{ or } a(n),$$

is applied. The transformation formulae for Dirichlet polynomials are deduced in §4.1 as consequences of the theorems of Chapter 3 concerning the transformation of more general exponential sums

$$(0.8)\qquad \sum_{M_1 \le m \le M_2} b(m)\,g(m)\,e(f(m))$$

or their smoothed versions. An interesting problem is estimating long exponential sums of the type (0.8). A result of this kind will be given in §4.3, but only under rather restrictive assumptions on the function $f$, for we have to suppose that $f'(x) \approx B x^{\alpha}$. It is of course possible that comparable, or at least nontrivial, estimates can be obtained in concrete cases without this assumption, making use of the special properties of the function $f$. In view of the analogy between $\zeta^2(s)$ and $\varphi(s)$, a mean value result corresponding to Heath-Brown's estimate (0.7) should be an estimate for the sixth moment of $\varphi(\kappa/2 + it)$. However, the proofs of (0.7) given in [11] and [13] utilize special properties of the function $\zeta^2(s)$ and cannot be immediately carried over to $\varphi(s)$. An alternative approach will be presented in §4.4, giving not only (0.7) (up to the logarithmic factor), but also its analogue

$$(0.9)\qquad \int_0^T |\varphi(\kappa/2 + it)|^6\,dt \ll T^{2+\epsilon}.$$

This implies the estimate

$$(0.10)\qquad |\varphi(\kappa/2 + it)| \ll t^{1/3+\epsilon} \quad \text{for } t \ge 1,$$

which is not new but nevertheless essentially the best known presently. In fact, (0.10) is a corollary of the mean value theorem

$$(0.11)\qquad \int_0^T |\varphi(\kappa/2 + it)|^2\,dt = (C_0 \log T + C_1)T + O((T \log T)^{2/3})$$

of A. Good [9]. The estimate (0.10) can also be proved directly in a relatively simple way, as will be shown in §4.2. The plan of these notes is as follows. Chapters 1 and 2 contain the necessary tools - summation formulae of the Voronoi type and theorems on exponential integrals - which are combined in Chapter 3 to yield general transformation formulae for exponential sums involving $d(n)$ or $a(n)$. Chapter 4, the contents of which were briefly outlined above, is devoted to specializations and applications of the results of the preceding chapter. Most of the material in Chapters 3 and 4 is new and appears here for the first time in print. An attempt is made to keep the presentation self-contained, with an adequate amount of detail. The necessary prerequisites include, beside standard complex function theory, hardly anything but familiarity with some well-known properties of the following functions: the Riemann and Hurwitz zeta-functions, the gamma function, Bessel functions, and cusp forms together with their associated Dirichlet series. The method of van der Corput is occasionally used, but only in its simplest form. As we pointed out, the theory of transformations of exponential sums to be presented in these notes can be viewed as a continuation or extension of some fundamental ideas underlying van der Corput's method.

method. A similarity though admittedly of a more formal nature can also be found with the circle method and the large sieve method, namely a 7 judicious choice of a system of rational numbers at the outset. In short, our principal goal is to analyse what can be said about Dirichlet se- ries and related Dirichlet polynomials or exponential sums by appealing only to the functional equation of the allied Dirichlet series involving the exponential factors e(nr) and making only minimal use of the ac- tual structure or properties of the individual coefficients of the Dirichlet series in question. Chapter 1

Summation Formulae

THERE IS AN extensive literature on various summation formulae of the Voronoi type and on different ways to prove such results (see e.g. the series of papers by B.C. Berndt [3] and his survey article [4]). We are going to need such identities for the sums

$${\sum_{a \le n \le b}}' b(n)\,e(nr)\,f(n),$$

where $0 < a < b$, $f \in C^1[a, b]$, $r = h/k$, and $b(n) = d(n)$ or $a(n)$. The case $f(x) = 1$ is actually the important one, for the generalization is easily made by partial summation. So the basic problem is to prove identities for the sums $D(x, r)$ and $A(x, r)$ (see Notation for definitions). In view of their importance and interest, we found it expedient to derive these identities from scratch, with a minimum of background and effort. Our argument proceeds via Riesz means $D_a(x, r)$ and $A_a(x, r)$, where $a \ge 0$ is an integer. We follow A.L. Dixon and W.L. Ferrar [6] with some simplifications. First, in [6] the more general case when $a$ is not necessarily an integer was discussed, and this leads to complications since the final result can be formulated in terms of ordinary Bessel functions only if $a$ is an integer. Secondly, it turned out that for $a = 0$ the case $x \in \mathbb{Z}$, which requires a lengthy separate treatment in [6], can actually be reduced to the case $x \notin \mathbb{Z}$ in a fairly simple way. To get started with the proofs of the main results of this chapter, we


need information on the Dirichlet series $E(s, r)$ and $\varphi(s, r)$, in particular their analytic continuations and functional equations. The necessary facts are provided in §§1.1 and 1.2. Bessel functions emerge in the proofs of the summation formulae when certain complex integrals involving the gamma function are calculated. We could refer here to Watson [29] or Titchmarsh [26], but for convenience, in §1.4, we calculate these integrals directly by the theorem of residues. In practice, it is useful to have besides the identities also approximate and mean value results on $D(x, r)$ and $A(x, r)$, to be given in §1.5. Identities for $D_a(x, r)$ and $A_a(x, r)$ are proved in §§1.6-1.8, first for $a \ge 1$ and then for $a = 0$. The general summation formulae are finally deduced in §1.9.

1.1 The Function E(s, r)

The function

$$(1.1.1)\qquad E(s, r) = \sum_{n=1}^{\infty} d(n)\,e(nr)\,n^{-s} \qquad (\sigma > 1),$$

where $r = h/k$, was investigated by T. Estermann [8], who proved the results of the following lemma. Our proofs are somewhat different in details, for we are making systematic use of the Hurwitz zeta-function $\zeta(s, a)$.

Lemma 1.1. The function E(s, h/k) can be continued analytically to a meromorphic function, which is holomorphic in the whole complex plane up to a double pole at s = 1, satisfies the functional equation

$$(1.1.2)\qquad E(s, h/k) = 2(2\pi)^{2s-2}\,\Gamma^2(1-s)\,k^{1-2s} \{ E(1-s, \bar h/k) - \cos(\pi s)\,E(1-s, -\bar h/k) \},$$

and has at $s = 1$ the Laurent expansion

$$(1.1.3)\qquad E(s, h/k) = k^{-1}(s-1)^{-2} + k^{-1}(2\gamma - 2\log k)(s-1)^{-1} + \cdots$$

Also,

$$(1.1.4)\qquad E(0, h/k) \ll k \log 2k.$$

Proof. The Dirichlet series (1.1.1) converges absolutely and thus defines a holomorphic function in the half-plane $\sigma > 1$. The function $E(s, h/k)$ can be expressed in terms of the Hurwitz zeta-function

$$\zeta(s, a) = \sum_{n=0}^{\infty} (n + a)^{-s} \qquad (\sigma > 1,\ 0 < a \le 1).$$

Indeed, for $\sigma > 1$ we have

$$E(s, h/k) = \sum_{m,n=1}^{\infty} e_k(mnh)(mn)^{-s} = \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h) \sum_{\substack{m \equiv \alpha \ (\mathrm{mod}\ k) \\ n \equiv \beta \ (\mathrm{mod}\ k)}} (mn)^{-s} = \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h) \sum_{\mu,\nu=0}^{\infty} \big((\alpha + \mu k)(\beta + \nu k)\big)^{-s},$$

so that

$$(1.1.5)\qquad E(s, h/k) = k^{-2s} \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h)\,\zeta(s, \alpha/k)\,\zeta(s, \beta/k).$$

This holds, in the first place, for $\sigma > 1$, but since $\zeta(s, a)$ can be analytically continued to a meromorphic function which has a simple pole with residue 1 at $s = 1$ as its only singularity (see [27], p. 37), the equation (1.1.5) gives an analytic continuation of $E(s, h/k)$ to a meromorphic function. Moreover, its only possible pole, of order at most 2, is $s = 1$. To study the behaviour of $E(s, h/k)$ near $s = 1$, let us compare it with the function

$$k^{-2s}\,\zeta(s) \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h)\,\zeta(s, \beta/k) = k^{1-2s}\,\zeta^2(s).$$

The difference of these functions is by (1.1.5) equal to

$$(1.1.6)\qquad k^{-2s} \sum_{\alpha=1}^{k} \Big( \sum_{\beta=1}^{k} e_k(\alpha\beta h)\,\zeta(s, \beta/k) \Big) \big(\zeta(s, \alpha/k) - \zeta(s)\big).$$

Here the factor $\zeta(s, \alpha/k) - \zeta(s)$ is holomorphic at $s = 1$ for all $\alpha$, and vanishes for $\alpha = k$. Since the sum with respect to $\beta$ is also holomorphic at $s = 1$ for $\alpha \ne k$, the function (1.1.6) is holomorphic at $s = 1$. Accordingly, the functions $E(s, h/k)$ and $k^{1-2s}\zeta^2(s)$ have the same principal part at $s = 1$. Because

$$\zeta(s) = \frac{1}{s-1} + \cdots,$$

this principal part is that given in (1.1.3). To prove the functional equation (1.1.2), we utilize the formula ([27], equation (2.17.3))

$$(1.1.7)\qquad \zeta(s, a) = 2(2\pi)^{s-1}\,\Gamma(1-s) \sum_{m=1}^{\infty} \sin(\tfrac12 \pi s + 2\pi m a)\,m^{s-1} \qquad (\sigma < 0).$$

Then the equation (1.1.5) becomes

$$E(s, h/k) = -(2\pi)^{2s-2}\,\Gamma^2(1-s)\,k^{-2s} \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h) \sum_{m,n=1}^{\infty} \big\{ e^{\pi i s} e_k(m\alpha + n\beta) + e^{-\pi i s} e_k(-m\alpha - n\beta) - e_k(m\alpha - n\beta) - e_k(-m\alpha + n\beta) \big\} (mn)^{s-1} \qquad (\sigma < 0).$$

Note that

$$\sum_{\alpha=1}^{k} e_k(\alpha\beta h \mp m\alpha) = \begin{cases} k & \text{if } \beta \equiv \pm m\bar h \pmod{k}, \\ 0 & \text{otherwise.} \end{cases}$$

The functional equation (1.1.2) now follows, first for $\sigma < 0$, but by analytic continuation elsewhere also.

For a proof of (1.1.4), we derive for E(0, h/k) an expression in a closed form. By (1.1.5),

$$(1.1.8)\qquad E(0, h/k) = \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h)\,\zeta(0, \alpha/k)\,\zeta(0, \beta/k).$$

If $0 < a < 1$, then the series in (1.1.7) converges uniformly and thus defines a continuous function for all real $s \le 0$. Hence, by continuity, (1.1.7) remains valid also for $s = 0$ in this case. It follows that

$$\zeta(0, a) = \pi^{-1} \sum_{m=1}^{\infty} \sin(2\pi m a)\,m^{-1}.$$

But the series on the right equals $\pi(1/2 - a)$ for $0 < a < 1$, whence

$$(1.1.9)\qquad \zeta(0, a) = 1/2 - a.$$

Since $\zeta(0, 1) = \zeta(0) = -1/2$, this holds for $a = 1$ as well. Now, by (1.1.8) and (1.1.9),

$$E(0, h/k) = \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h)(1/2 - \alpha/k)(1/2 - \beta/k).$$

From this it follows easily that

$$(1.1.10)\qquad E(0, h/k) = -\frac34 k + k^{-2} \sum_{\alpha,\beta=1}^{k} e_k(\alpha\beta h)\,\alpha\beta.$$

To estimate the double sum on the right, observe that if $1 \le \alpha \le k-1$ and $\beta$ runs over an arbitrary interval, then

$$\sum_{\beta} e_k(\alpha\beta h) \ll \|\alpha h/k\|^{-1}.$$

Thus, by partial summation,

$$\sum_{\alpha=1}^{k-1} \sum_{\beta=1}^{k} e_k(\alpha\beta h)\,\alpha\beta \ll k^2 \sum_{\alpha=1}^{k-1} \|\alpha h/k\|^{-1} \ll k^2 \sum_{1 \le \alpha \le k/2} k/\alpha \ll k^3 \log k,$$

and (1.1.4) follows from (1.1.10).
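The passage from (1.1.8)-(1.1.9) to the closed form (1.1.10) is elementary but easy to get wrong. The following Python check is our addition, not part of the text: it evaluates both finite expressions for a few coprime pairs $(h, k)$ and confirms that they agree.

```python
import cmath

def e_k(a, k):
    # e_k(α) = exp(2πiα/k)
    return cmath.exp(2j * cmath.pi * a / k)

def E0_from_zeta_values(h, k):
    # (1.1.8) with ζ(0, a) = 1/2 - a inserted (the display preceding (1.1.10))
    return sum(e_k(al * be * h, k) * (0.5 - al / k) * (0.5 - be / k)
               for al in range(1, k + 1) for be in range(1, k + 1))

def E0_closed_form(h, k):
    # (1.1.10): E(0, h/k) = -(3/4) k + k^{-2} Σ e_k(αβh) αβ
    s = sum(e_k(al * be * h, k) * al * be
            for al in range(1, k + 1) for be in range(1, k + 1))
    return -0.75 * k + s / k ** 2

for h, k in [(1, 5), (3, 7), (5, 12)]:  # (h, k) = 1 in each case
    assert abs(E0_from_zeta_values(h, k) - E0_closed_form(h, k)) < 1e-9
print("ok")
```

The agreement rests only on the orthogonality of the characters $e_k(\alpha\beta h)$, exactly as in the proof above.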

1.2 The Function ϕ(s, r)

Let $H$ be the upper half-plane $\mathrm{Im}\,\tau > 0$. The mappings

$$\tau \to \frac{a\tau + b}{c\tau + d},$$

where $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is an integral matrix of determinant 1, take $H$ onto itself and constitute the (full) modular group. A function $f$, which is holomorphic in $H$ and not identically zero, is a cusp form of weight $k$ for the modular group if

$$(1.2.1)\qquad f\Big(\frac{a\tau + b}{c\tau + d}\Big) = (c\tau + d)^k f(\tau), \qquad \tau \in H,$$

for all mappings of the modular group, and moreover

$$(1.2.2)\qquad \lim_{\mathrm{Im}\,\tau \to \infty} f(\tau) = 0.$$

It is well-known that $k$ is an even integer at least 12, and that the dimension of the vector space of cusp forms of weight $k$ is $[k/12]$ if $k \not\equiv 2 \pmod{12}$ and $[k/12] - 1$ if $k \equiv 2 \pmod{12}$ (see [1], §§6.3 and 6.5). A special case of (1.2.1) is $f(\tau + 1) = f(\tau)$. Hence, by periodicity, $f$ has a Fourier series, which by (1.2.2) is necessarily of the form

$$(1.2.3)\qquad f(\tau) = \sum_{n=1}^{\infty} a(n)\,e(n\tau).$$

The numbers $a(n)$ are called the Fourier coefficients of the cusp form $f$. The case $k = 12$ is of particular interest, for then $a(n) = \tau(n)$, the Ramanujan function defined by

$$\sum_{n=1}^{\infty} \tau(n)\,x^n = x \prod_{m=1}^{\infty} (1 - x^m)^{24} \qquad (|x| < 1).$$

We are going to need some information on the order of magnitude of the Fourier coefficients a(n). For most purposes, the classical mean value theorem

$$(1.2.4)\qquad \sum_{n \le x} |a(n)|^2 = A x^k + O(x^{k - 2/5})$$

of R.A. Rankin [24] suffices, though sometimes it will be convenient or necessary to refer to the estimate

$$(1.2.5)\qquad |a(n)| \le n^{(k-1)/2}\,d(n).$$

This was known as the Ramanujan-Petersson conjecture, until it became a theorem after having been proved by P. Deligne [5]. In (1.2.5), it should be understood that $f$ is a normalized eigenform (i.e. $a(1) = 1$) of all Hecke operators $T(n)$, but this is not an essential restriction, for a basis of the vector space of cusp forms of a given weight can be constructed of such forms. Now (1.2.4) implies that the estimate (1.2.5), and even more, is true in a mean sense, and since we shall be dealing with expressions involving $a(n)$ for many values of $n$, it will be usually enough to know the order of $a(n)$ on the average. It follows easily from (1.2.4) that the Dirichlet series

$$\varphi(s) = \sum_{n=1}^{\infty} a(n)\,n^{-s},$$

and, more generally, the series

$$\varphi(s, r) = \sum_{n=1}^{\infty} a(n)\,e(nr)\,n^{-s},$$

where $r = h/k$, converges absolutely and defines a holomorphic function in the half-plane $\sigma > (k+1)/2$. It was shown by J.R. Wilton [30], in the case $a(n) = \tau(n)$, that $\varphi(s, r)$ can be continued analytically to an entire function satisfying a functional equation of the Riemann type. But his argument applies as such also in the general case, and the result is as follows.

Lemma 1.2. The function ϕ(s, h/k) can be continued analytically to an entire function satisfying the functional equation

$$(1.2.6)\qquad (k/2\pi)^s\,\Gamma(s)\,\varphi(s, h/k) = (-1)^{k/2} (k/2\pi)^{k-s}\,\Gamma(k-s)\,\varphi(k-s, -\bar h/k).$$

Proof. Let

$$\tau = \frac{h}{k} + \frac{iz}{k}, \qquad \tau' = -\frac{\bar h}{k} + \frac{i}{zk},$$

where $\mathrm{Re}\,z > 0$. Then $\tau, \tau' \in H$, and we show first that

$$(1.2.7)\qquad f(\tau') = (-1)^{k/2} z^k f(\tau).$$

The points $\tau$ and $\tau'$ are equivalent under the modular group, for putting $a = \bar h$, $b = (1 - h\bar h)/k$, $c = -k$, and $d = h$, we have $ad - bc = 1$ and

$$\frac{a\tau + b}{c\tau + d} = \tau'.$$

Also,

$$c\tau + d = -iz,$$

so that (1.2.7) is a consequence of the relation (1.2.1). Now let $\sigma > (k+1)/2$. Then we have

$$(k/2\pi)^s\,\Gamma(s)\,\varphi(s, h/k) = \sum_{n=1}^{\infty} a(n)\,e_k(nh) \int_0^{\infty} x^{s-1} e^{-2\pi n x/k}\,dx = \int_0^{\infty} x^{s-1} f\Big(\frac{h}{k} + \frac{ix}{k}\Big)\,dx.$$

Here the integral over (0,1) can be written by (1.2.7) as

$$(-1)^{k/2} \int_0^1 x^{s-1-k} f\Big(-\frac{\bar h}{k} + \frac{i}{xk}\Big)\,dx = (-1)^{k/2} \int_1^{\infty} x^{k-1-s} f\Big(-\frac{\bar h}{k} + \frac{ix}{k}\Big)\,dx.$$

Hence

$$(1.2.8)\qquad (k/2\pi)^s\,\Gamma(s)\,\varphi(s, h/k) = \int_1^{\infty} \Big\{ x^{s-1} f\Big(\frac{h}{k} + \frac{ix}{k}\Big) + (-1)^{k/2} x^{k-1-s} f\Big(-\frac{\bar h}{k} + \frac{ix}{k}\Big) \Big\}\,dx.$$

But the integral on the right defines an entire function of $s$, for, by (1.2.3), the function $f(\tau)$ decays exponentially as $\mathrm{Im}\,\tau$ tends to infinity. Thus (1.2.8) gives an analytic continuation of $\varphi(s, h/k)$ to an entire function. Moreover, it is immediately seen that the right hand side remains invariant under the transformation $h/k \to -\bar h/k$, $s \to k - s$ if $k/2$ is even, and changes its sign if $k/2$ is odd. Thus the functional equation (1.2.6) holds in any case.
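The relation (1.2.7) can be tested numerically for the weight-12 cusp form with $a(n) = \tau(n)$. The sketch below is our addition: it computes the $\tau(n)$ from the product expansion above (truncated at a level $N$ of our choosing) and checks $f(i/z) = z^{12} f(iz)$, which is (1.2.7) for the denominator $k = 1$, $h = \bar h = 0$.

```python
import math

N = 40  # truncation order of the q-expansion (our choice; ample here)

# coefficients of P(x) = prod_{m=1}^{N} (1 - x^m)^24 modulo x^(N+1)
P = [1] + [0] * N
for m in range(1, N + 1):
    factor = [0] * (N + 1)
    for j in range(0, min(24, N // m) + 1):
        factor[m * j] = (-1) ** j * math.comb(24, j)
    P = [sum(P[i] * factor[c - i] for i in range(c + 1)) for c in range(N + 1)]

tau = {n: P[n - 1] for n in range(1, N + 1)}  # Σ τ(n) x^n = x P(x)

def f_on_imaginary_axis(t):
    # f(it) = Σ τ(n) e^{-2πnt}; rapidly convergent for t bounded away from 0
    return sum(tau[n] * math.exp(-2 * math.pi * n * t) for n in range(1, N + 1))

# (1.2.7) with denominator k = 1 (h = h̄ = 0) and weight 12: f(i/z) = z^12 f(iz)
z = 1.3
lhs = f_on_imaginary_axis(1 / z)
rhs = z ** 12 * f_on_imaginary_axis(z)
print(tau[2], tau[3], abs(lhs - rhs) / abs(rhs))  # -24, 252, and a tiny relative error
```

The exponential decay of the terms $\tau(n) e^{-2\pi n t}$ is the same phenomenon that makes the integral in (1.2.8) entire in $s$.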

REMARK. The special case $k = 1$ of (1.2.6) amounts to Hecke's functional equation

$$(2\pi)^{-s}\,\Gamma(s)\,\varphi(s) = (-1)^{k/2} (2\pi)^{s-k}\,\Gamma(k-s)\,\varphi(k-s).$$

1.3 Asymptotic Formulae for the Gamma Function and Bessel Functions

The special functions that will occur in this text are the gamma function $\Gamma(s)$ and the Bessel functions $J_n(z)$, $Y_n(z)$, $K_n(z)$ of nonnegative integral order $n$. By definition,

$$(1.3.1)\qquad J_n(z) = \sum_{k=0}^{\infty} \frac{(-1)^k (z/2)^{2k+n}}{k!\,(n+k)!},$$

$$(1.3.2)\qquad Y_n(z) = -\pi^{-1} \sum_{k=0}^{n-1} \frac{(n-k-1)!}{k!} (z/2)^{2k-n} + \pi^{-1} \sum_{k=0}^{\infty} \frac{(-1)^k (z/2)^{2k+n}}{k!\,(n+k)!} \big(2\log(z/2) - \psi(k+1) - \psi(k+n+1)\big),$$

and

$$(1.3.3)\qquad K_n(z) = \frac12 \sum_{k=0}^{n-1} \frac{(-1)^k (n-k-1)!}{k!} (z/2)^{2k-n} + \frac{(-1)^{n+1}}{2} \sum_{k=0}^{\infty} \frac{(z/2)^{2k+n}}{k!\,(n+k)!} \big(2\log(z/2) - \psi(k+1) - \psi(k+n+1)\big),$$

where

$$\psi(z) = \frac{\Gamma'}{\Gamma}(z).$$

In particular,

$$\psi(1) = -\gamma, \qquad \psi(n+1) = -\gamma + \sum_{k=1}^{n} k^{-1}, \qquad n = 1, 2, \ldots$$

Repeated use will be made of Stirling's formula for $\Gamma(s)$ and of the asymptotic formulae for Bessel functions. Therefore we recall these well-known results here for the convenience of further reference. The following version of Stirling's formula is precise enough for our purposes.

Lemma 1.3. Let $\delta < \pi$ be a fixed positive number. Then

$$(1.3.4)\qquad \Gamma(s) = \sqrt{2\pi} \exp\{(s - 1/2)\log s - s\}\,(1 + O(|s|^{-1}))$$

in the sector $|\arg s| \le \pi - \delta$, $|s| \ge 1$. Also, in any fixed strip $A_1 \le \sigma \le A_2$ we have for $t \ge 1$

$$(1.3.5)\qquad \Gamma(s) = \sqrt{2\pi}\, t^{s-1/2} \exp\big(-\tfrac12 \pi t - it + \tfrac12 \pi(\sigma - 1/2)i\big)\,(1 + O(t^{-1})),$$

and

$$(1.3.6)\qquad |\Gamma(s)| = \sqrt{2\pi}\, t^{\sigma-1/2} e^{-(\pi/2)t}\,(1 + O(t^{-1})).$$
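On the line $\sigma = 1/2$, (1.3.6) can be compared against the exact identity $|\Gamma(1/2 + it)|^2 = \pi/\cosh(\pi t)$, which follows from the reflection formula. The short check below is our addition, using only the standard library.

```python
import math

def gamma_abs_exact(t):
    # |Γ(1/2 + it)|^2 = π / cosh(πt), by the reflection formula
    return math.sqrt(math.pi / math.cosh(math.pi * t))

def gamma_abs_stirling(t, sigma=0.5):
    # (1.3.6): |Γ(σ + it)| ~ √(2π) t^(σ-1/2) e^(-πt/2)
    return math.sqrt(2 * math.pi) * t ** (sigma - 0.5) * math.exp(-math.pi * t / 2)

for t in (5.0, 10.0):
    exact, approx = gamma_abs_exact(t), gamma_abs_stirling(t)
    # for σ = 1/2 the relative error is in fact of size e^{-2πt}, far below 1e-6
    assert abs(exact - approx) / exact < 1e-6
print("ok")
```

This exponential decay in $t$ is what makes the horizontal integrals in the contour arguments of §§1.4-1.5 negligible.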

The asymptotic formulae for the functions $J_n(z)$, $Y_n(z)$, and $K_n(z)$ can be derived from the analogous results for Hankel functions

$$(1.3.7)\qquad H_n^{(j)}(z) = J_n(z) + (-1)^{j-1} i\,Y_n(z), \qquad j = 1, 2.$$

The variable $z$ is here restricted to the slit complex plane $z \ne 0$, $|\arg z| < \pi$. Obviously,

$$(1.3.8)\qquad J_n(z) = \frac12 \big( H_n^{(1)}(z) + H_n^{(2)}(z) \big),$$

$$(1.3.9)\qquad Y_n(z) = \frac{1}{2i} \big( H_n^{(1)}(z) - H_n^{(2)}(z) \big).$$

The function $K_n(z)$ can also be written in terms of Hankel functions, for (see [29], p. 78)

$$(1.3.10)\qquad K_n(z) = \frac{\pi}{2}\, i^{n+1} H_n^{(1)}(iz) \qquad \text{for } -\pi < \arg z < \pi/2,$$

$$(1.3.11)\qquad K_n(z) = \frac{\pi}{2}\, i^{-n-1} H_n^{(2)}(-iz) \qquad \text{for } -\pi/2 < \arg z < \pi.$$

The asymptotic formulae for Hankel functions are usually derived from appropriate integral representations, and then the asymptotic behaviour of $J_n$, $Y_n$, and $K_n$ can be determined by the relations (1.3.8)-(1.3.11) (see [29], §§7.2, 7.21 and 7.23). The results are as follows.

Lemma 1.4. Let $\delta_1 < \pi$ and $\delta_2$ be fixed positive numbers. Then in the sector

$$(1.3.12)\qquad |\arg z| \le \pi - \delta_1, \qquad |z| \ge \delta_2,$$

we have

$$(1.3.13)\qquad H_n^{(j)}(z) = (2/\pi z)^{1/2} \exp\Big((-1)^{j-1} i\Big(z - \frac12 n\pi - \frac14 \pi\Big)\Big)\,(1 + g_j(z)),$$

where the functions $g_j(z)$ are holomorphic in the slit complex plane $z \ne 0$, $|\arg z| < \pi$, and satisfy

$$(1.3.14)\qquad |g_j(z)| \ll |z|^{-1}$$

in the sector (1.3.12). Also, for real $x \ge \delta_2$,

$$(1.3.15)\qquad J_n(x) = (2/\pi x)^{1/2} \cos\Big(x - \frac12 n\pi - \frac14 \pi\Big) + O(x^{-3/2}),$$

$$(1.3.16)\qquad Y_n(x) = (2/\pi x)^{1/2} \sin\Big(x - \frac12 n\pi - \frac14 \pi\Big) + O(x^{-3/2}),$$

and

$$(1.3.17)\qquad K_n(x) = (\pi/2x)^{1/2} e^{-x}\,\big(1 + O(x^{-1})\big).$$

Strictly speaking, the functions $g_j$ should actually be denoted by $g_{j,n}$, say, because they depend on $n$ as well, but for simplicity we dropped the index $n$, which will always be known from the context.
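For moderate real arguments, (1.3.15) can be compared directly with the defining series (1.3.1). The following check is our addition; the tolerance $x^{-3/2}$ mirrors the error term of the lemma (with a comfortable margin, since the next term of the expansion carries a small coefficient).

```python
import math

def J_series(n, x, terms=60):
    # the defining power series (1.3.1); adequate for moderate x
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(n + k)) for k in range(terms))

def J_asymptotic(n, x):
    # the leading term of (1.3.15)
    return math.sqrt(2 / (math.pi * x)) * math.cos(x - n * math.pi / 2 - math.pi / 4)

for n in (0, 1):
    for x in (20.0, 30.0):
        # the discrepancy should lie within the O(x^{-3/2}) error term
        assert abs(J_series(n, x) - J_asymptotic(n, x)) < x ** (-1.5)
print("ok")
```

For large $x$ the alternating series suffers catastrophic cancellation in floating point, which is one practical reason the asymptotic form (1.3.15) is the useful one in the estimates of this chapter.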

1.4 Evaluation of Some Complex Integrals

Let $a$ be a nonnegative integer, $\sigma_1 \ge -a/2$, $\sigma_2 < -a$, $T > 0$, and let $C_a$ be the contour joining the points $\sigma_1 - i\infty$, $\sigma_1 - Ti$, $\sigma_2 - Ti$, $\sigma_2 + Ti$, $\sigma_1 + Ti$, and $\sigma_1 + i\infty$ by straight lines. Let $X > 0$, $k$ a positive integer, and $c$ a number such that $(k - a - 1)/2 \le c < k$.

$$(1.4.1)\qquad I_1 = \frac{1}{2\pi i} \int_{C_a} \Gamma^2(1-s)\,X^s\,(s(s+1)\cdots(s+a))^{-1}\,ds,$$

$$(1.4.2)\qquad I_2 = \frac{1}{2\pi i} \int_{C_a} \Gamma^2(1-s)\cos(\pi s)\,X^s\,(s(s+1)\cdots(s+a))^{-1}\,ds,$$

and

$$(1.4.3)\qquad I_3 = \frac{1}{2\pi i} \int_{(c)} \Gamma(k-s)\,\Gamma^{-1}(s)\,X^s\,(s(s+1)\cdots(s+a))^{-1}\,ds.$$

Lemma 1.5. We have

$$(1.4.4)\qquad I_1 = 2(-1)^{a+1} X^{(1-a)/2} K_{a+1}(2X^{1/2}),$$

$$(1.4.5)\qquad I_2 = \pi X^{(1-a)/2} Y_{a+1}(2X^{1/2}),$$

and

$$(1.4.6)\qquad I_3 = X^{(k-a)/2} J_{k+a}(2X^{1/2}).$$

Proof. For positive numbers $T_1$ and $T_2$ exceeding $T$, denote by $C_a(T_1, T_2)$ that part of $C_a$ which lies in the strip $-T_1 \le t \le T_2$. The integrals $I_1$ and $I_2$ are understood as limits of the corresponding integrals over $C_a(T_1, T_2)$ as $T_1$ and $T_2$ tend to infinity independently. Similarly, $I_3$ is the limit of the integral over the line segment $[c - iT_1, c + iT_2]$. Let $N$ be a large positive integer, which is kept fixed for a moment. Denote by $\Gamma(T_1, T_2; N)$ the closed contour joining the points $N + 1/2 - iT_1$ and $N + 1/2 + iT_2$ with each other and with the initial and end point of $C_a(T_1, T_2)$, respectively, or with the points $c - iT_1$ and $c + iT_2$ in the case of $I_3$. Then, by the theorem of residues,

$$(1.4.7)\qquad \frac{1}{2\pi i} \int_{\Gamma} (\ldots)\,ds = \sum \mathrm{Res},$$

where $(\ldots)$ means the integrand of the respective $I_j$, whose residues inside $\Gamma = \Gamma(T_1, T_2; N)$ are summed on the right. By (1.3.6) and our assumptions on $\sigma_1$ and $c$, the integrals over those horizontal parts of $\Gamma(T_1, T_2; N)$ lying on the lines $t = -T_1$ and $t = T_2$ are seen to be $\ll (\log T_i)^{-1}$, $i = 1, 2$. Hence these integrals vanish in the limit as the $T_i$ tend to infinity. Then the equation (1.4.7) becomes

$$(1.4.8)\qquad -I_j + \frac{1}{2\pi i} \int_{(N+1/2)} (\ldots)\,ds = \sum \mathrm{Res}.$$

Consider now the integrals over the line $\sigma = N + 1/2$. By a repeated application of the formula $\Gamma(s) = s^{-1}\Gamma(s+1)$, the $\Gamma$-factors in the integrands can be expressed in terms of $\Gamma(1/2 + it)$. Then, by some simple estimations, we find that the integrals in question vanish in the limit as $N$ tends to infinity. Therefore (1.4.8) gives

$$(1.4.9)\qquad I_j = -\sum_{k=0}^{a} \mathrm{Res}(\cdot, -k) - \sum_{k=1}^{\infty} \mathrm{Res}(\cdot, k), \qquad j = 1, 2,$$

$$(1.4.10)\qquad I_3 = -\sum_{h=k}^{\infty} \mathrm{Res}(\cdot, h),$$

where the dot denotes the respective integrand. Consider the integral $I_1$ first. Obviously

$$\mathrm{Res}(\cdot, -h) = (-1)^h\,h!\,((a-h)!)^{-1} X^{-h} \qquad \text{for } h = 0, 1, \ldots, a.$$

The sum of these can be written, on putting $k = a - h$, as

$$(1.4.11)\qquad (-1)^a (X^{1/2})^{1-a} \sum_{k=0}^{a} (-1)^k (a-k)!\,(k!)^{-1} (2X^{1/2}/2)^{2k-a-1}.$$

The integrand has double poles at $s = 1, 2, \ldots$, and the residue at $k$ can be calculated, multiplying (for $s = k + \delta$) the expansions

$$\Gamma^2(1-s) = \delta^{-2}\,\Gamma^2(1-\delta)(k-1+\delta)^{-2}(k-2+\delta)^{-2} \cdots (1+\delta)^{-2} = \delta^{-2}((k-1)!)^{-2}(1 - 2\psi(k)\delta + \cdots),$$

$$(s(s+1)\cdots(s+a))^{-1} = (k-1)!\,((k+a)!)^{-1}\big(1 - (\psi(k+a+1) - \psi(k))\delta + \cdots\big),$$

and

$$X^s = X^k (1 + \delta \log X + \cdots).$$

We obtain

$$\mathrm{Res}(\cdot, k) = X^k ((k+a)!\,(k-1)!)^{-1} (\log X - \psi(k+a+1) - \psi(k)), \qquad k = 1, 2, \ldots$$

Hence, also taking into account (1.4.11) and (1.3.3), we may write the sum of residues as

$$2(-1)^a (X^{1/2})^{1-a} \Big\{ \frac12 \sum_{k=0}^{a} (-1)^k (a-k)!\,(k!)^{-1} (2X^{1/2}/2)^{2k-(a+1)} + \frac{(-1)^{a}}{2} \sum_{k=0}^{\infty} \frac{(2X^{1/2}/2)^{2k+(a+1)}}{k!\,(k+(a+1))!} \big(2\log(2X^{1/2}/2) - \psi(k+1) - \psi(k+(a+1)+1)\big) \Big\} = 2(-1)^a X^{(1-a)/2} K_{a+1}(2X^{1/2}).$$

Now (1.4.4) follows from (1.4.9). The residues of the integrands of $I_1$ and $I_2$ at $s = k$ differ only by the sign $(-1)^k$. The series of residues of the integrand of $I_2$ can be written in terms of the function $Y_{a+1}$, and the assertion (1.4.5) follows by a calculation similar to that above. Finally, the residue of the integrand of $I_3$ at $h \ge k$ is

$$(-1)^{h-k+1} X^h ((h+a)!\,(h-k)!)^{-1},$$

and on putting $j = h - k$ the sum of these terms can be arranged so as to give

$$-X^{(k-a)/2} J_{k+a}(2X^{1/2}).$$

This proves (1.4.6).
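The bookkeeping behind (1.4.6) - the reindexing $j = h - k$ and the power $X^{(k-a)/2}$ - can be confirmed numerically by summing the residue series of (1.4.10) and comparing with the Bessel series (1.3.1). The sketch below is our addition; the sample parameter values are arbitrary.

```python
import math

def J_series(n, x, terms=60):
    # (1.3.1)
    return sum((-1) ** j * (x / 2) ** (2 * j + n)
               / (math.factorial(j) * math.factorial(n + j)) for j in range(terms))

def I3_by_residues(X, k, a, terms=60):
    # (1.4.10): I_3 = -Σ_{h>=k} Res(·, h), with
    # Res(·, h) = (-1)^{h-k+1} X^h ((h+a)! (h-k)!)^{-1}
    return -sum((-1) ** (h - k + 1) * X ** h
                / (math.factorial(h + a) * math.factorial(h - k))
                for h in range(k, k + terms))

for X, k, a in [(2.0, 3, 1), (5.0, 4, 2)]:
    rhs = X ** ((k - a) / 2) * J_series(k + a, 2 * math.sqrt(X))  # (1.4.6)
    assert abs(I3_by_residues(X, k, a) - rhs) < 1e-9
print("ok")
```

Term by term, $X^h = X^{(k-a)/2}(X^{1/2})^{2j+k+a}$ with $h = j + k$, which is exactly the rearrangement claimed in the proof.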

1.5 Approximate Formulae and Mean Value Estimates for D(x, r) and A(x, r)

Our object in this section is to derive approximate formulae of the Voronoi type for the exponential sums

$$D(x, r) = {\sum_{n \le x}}' d(n)\,e(nr)$$

and

$$A(x, r) = {\sum_{n \le x}}' a(n)\,e(nr),$$

and to apply these to the pointwise and mean square estimation of $D(x, r)$ and $A(x, r)$. As before, $r = h/k$ is a rational number. A model of a result like this is the following classical formula for $D(x) = D(x, 1)$:

$$(1.5.1)\qquad D(x) = (\log x + 2\gamma - 1)x + (\pi\sqrt{2})^{-1} x^{1/4} \sum_{n \le N} d(n)\,n^{-3/4} \cos(4\pi\sqrt{nx} - \pi/4) + O(x^{1/2+\epsilon} N^{-1/2}),$$

where $x \ge 1$ and $1 \le N \ll x$ (see [27], p. 269). The corresponding formula for $D(x, r)$ will be of the form

$$(1.5.2)\qquad D(x, h/k) = k^{-1}(\log x + 2\gamma - 1 - 2\log k)x + E(0, h/k) + \Delta(x, h/k),$$

where $\Delta(x, h/k)$ is an error term. The next theorem reveals an analogy between $\Delta(x, r)$ and $A(x, r)$.

THEOREM 1.1. For $x \ge 1$, $k \le x$, and $1 \le N \ll x$ the equation (1.5.2) holds with

$$(1.5.3)\qquad \Delta(x, h/k) = (\pi\sqrt{2})^{-1} k^{1/2} x^{1/4} \sum_{n \le N} d(n)\,e_k(-n\bar h)\,n^{-3/4} \cos(4\pi\sqrt{nx}/k - \pi/4) + O(k x^{1/2+\epsilon} N^{-1/2}).$$

Also,

(1.5.4)

$$A(x, h/k) = (\pi\sqrt{2})^{-1} k^{1/2} x^{\kappa/2 - 1/4} \sum_{n \le N} a(n)\,e_k(-n\bar h)\,n^{-\kappa/2 - 1/4} \cos\Big(\frac{4\pi\sqrt{nx}}{k} - \frac{\pi}{4}\Big) + O(k x^{\kappa/2+\epsilon} N^{-1/2}).$$
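Before turning to the proof, the size of the main term in (1.5.1) (the case $k = 1$ of (1.5.2)) can be illustrated numerically. This sketch is our addition; the divisor sieve and the tolerance $2\sqrt{x}$ (the trivial elementary bound for the remainder) are our own choices.

```python
import math

EULER_GAMMA = 0.5772156649015329

def divisor_counts(N):
    # d[n] = number of divisors of n, by a standard sieve
    d = [0] * (N + 1)
    for m in range(1, N + 1):
        for n in range(m, N + 1, m):
            d[n] += 1
    return d

x = 10_000
d = divisor_counts(x)
D = sum(d[1:]) - 0.5 * d[x]                      # Σ'_{n<=x} d(n); d(10000) = 25
main = (math.log(x) + 2 * EULER_GAMMA - 1) * x   # main term of (1.5.1)
delta = D - main
# the oscillating remainder is far below the trivial O(√x) bound
assert abs(delta) < 2 * math.sqrt(x)
print(round(delta, 2))
```

The remainder printed here is the quantity that the cosine sum in (1.5.1) resolves into individual oscillating terms of amplitude roughly $x^{1/4}$.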

Proof. Consider first the formula (1.5.3). We follow the argument of the proof of (1.5.1) in [27], pp. 266-269, with minor modifications. Let $\delta$ be a small positive number which will be kept fixed throughout the proof. By Perron's formula,

$$(1.5.5)\qquad D(x, r) = \frac{1}{2\pi i} \int_{1+\delta-iT}^{1+\delta+iT} E(s, r)\,x^s s^{-1}\,ds + O(x^{1+\delta} T^{-1}),$$

where $r = h/k$ and $T$ is a parameter such that

1 (1.5.6) 1 T k− x. ≤ ≪ 25 As a preliminary for the next step, which consists of moving the integration in (1.5.5) to the line σ = δ, we need an estimate for E(s, r) − in the strip δ σ 1 + δ for t 1. − ≤ ≤ | |≥ 1.5. Approximate Formulae and... 23

The auxiliary function

(1.5.7)  $\frac{s - 1}{s - 2} \, E(s, r)$

is holomorphic in the strip $-\delta \le \sigma \le 1 + \delta$, and in the part where $|t| \ge 1$ it is of the same order of magnitude as $E(s, r)$. This function is bounded on the line $\sigma = 1 + \delta$, and on the line $\sigma = -\delta$ it is

$\ll (k(|t| + 1))^{1+2\delta}$

by the functional equation (1.1.2) and the estimate (1.3.6) of the gamma function. The convexity principle now gives an estimate for the function (1.5.7), and as a consequence we obtain

(1.5.8)  $E(s, r) \ll (k|t|)^{1 - \sigma + \delta} \quad\text{for } -\delta \le \sigma \le 1 + \delta, \; |t| \ge 1.$

Let $C$ be the rectangular contour with vertices $1 + \delta \pm iT$ and $-\delta \pm iT$. By the theorem of residues, we have

(1.5.9)  $\frac{1}{2\pi i} \int_{C} E(s, r) x^{s} s^{-1} \, ds = k^{-1}(\log x + 2\gamma - 1 - 2 \log k)x + E(0, r),$

where the expansion (1.1.3) has been used in the calculation of the residue at $s = 1$. The integrals over the horizontal parts of $C$ are $\ll x^{1+\delta} T^{-1}$ by (1.5.8) and (1.5.6). Hence (1.5.2), (1.5.5), and (1.5.9) together give

(1.5.10)  $\Delta(x, r) = \frac{1}{2\pi i} \int_{-\delta-iT}^{-\delta+iT} E(s, r) x^{s} s^{-1} \, ds + O(x^{1+\delta} T^{-1}).$

The functional equation (1.1.2) for $E(s, r)$ is now applied. The term involving $E(1 - s, \bar h/k)$ decreases rapidly as $|t|$ increases, and it will be estimated as an error term. Then for $\sigma = -\delta$ we obtain

$E(s, r) = 2(2\pi)^{2s-2} \Gamma^{2}(1 - s) k^{1-2s} \cos(\pi s) \sum_{n=1}^{\infty} d(n) e_k(-n\bar h) n^{s-1}$

$+\, O\bigl( (k(|t| + 1))^{1+2\delta} e^{-\pi |t|} \bigr).$

The contribution of the error term to the integral in (1.5.10) is

$\ll k^{1+2\delta} x^{-\delta} \ll k x^{\delta} \ll x^{1+\delta} T^{-1}.$

Thus we have

(1.5.11)  $\Delta(x, r) = \tfrac{1}{2} \pi^{-2} k \sum_{n=1}^{\infty} d(n) n^{-1} e_k(-n\bar h) j_n + O(x^{1+\delta} T^{-1}),$

where

(1.5.12)  $j_n = \frac{1}{2\pi i} \int_{-\delta-iT}^{-\delta+iT} \Gamma^{2}(1 - s) \cos(\pi s) (4\pi^{2} n x k^{-2})^{s} s^{-1} \, ds.$

At this stage we fix the parameter $T$, putting

(1.5.13)  $T^{2} k^{2} (4\pi^{2} x)^{-1} = N + 1/2,$

where $N$ is an integer such that $1 \le N \ll x$. It is immediately seen that $T \ll k^{-1} x$. In order that the condition (1.5.6) be satisfied, we should also have $T \ge 1$, which presupposes that $N \gg k^{2} x^{-1}$. We may assume this, for otherwise the assertion (1.5.3) holds for trivial reasons. Indeed, if $1 \le N \ll k^{2} x^{-1}$, then (1.5.3) is implied by the estimate $\Delta(x, h/k) \ll x^{1+\epsilon}$, which is definitely true by (1.5.2) and (1.1.4).

Next we dispose of the tail $n > N$ of the series in (1.5.11). The integral $j_n$ splits into three parts, in which $t$ runs respectively over the intervals $[-T, -1]$, $[-1, 1]$, and $[1, T]$. The second integral is clearly $\ll k^{2\delta} n^{-\delta} x^{-\delta}$, and these terms contribute $\ll k x^{\delta}$. The first and third integrals are similar; consider the third one, say $j_n'$. By (1.3.5) we have, for $-\delta \le \sigma \le \delta$ and $t \ge 1$,

(1.5.14)  $\Gamma^{2}(1 - s) \cos(\pi s) (4\pi^{2} n x k^{-2})^{s} s^{-1} = A(\sigma) t^{-2\sigma} (4\pi^{2} n x k^{-2})^{\sigma} e^{iF(t)} (1 + O(t^{-1})),$

where $A(\sigma)$ is bounded and

(1.5.15)  $F(t) = -2t \log t + 2t + t \log(4\pi^{2} n x k^{-2}).$

Thus

(1.5.16)  $j_n' = A k^{2\delta} n^{-\delta} x^{-\delta} \Bigl( \int_{1}^{T} t^{2\delta} e^{iF(t)} \, dt + O(T^{2\delta}) \Bigr).$

The last integral is estimated by the following elementary lemma ([27], Lemma 4.3) on exponential integrals.

Lemma 1.6. Let $F(x)$ and $G(x)$ be real functions in the interval $[a, b]$, where $G(x)$ is continuous and $F(x)$ continuously differentiable. Suppose that $G(x)/F'(x)$ is monotonic and $F'(x)/G(x) \ge m > 0$. Then

(1.5.17)  $\Bigl| \int_{a}^{b} G(x) e^{iF(x)} \, dx \Bigr| \le 4/m.$

Now by (1.5.15) and (1.5.13) we have

(1.5.18)  $F'(t) = \log(4\pi^{2} n x k^{-2} t^{-2}) \ge \log \frac{n}{N + 1/2}$

for $1 \le t \le T$, whence by (1.5.16) and (1.5.17)

$j_n' \ll k^{2\delta} n^{-\delta} T^{2\delta} x^{-\delta} \Bigl( \Bigl( \log \frac{n}{N + 1/2} \Bigr)^{-1} + 1 \Bigr).$

Thus

$\sum_{n \ge 2N} d(n) n^{-1} |j_n'| \ll x^{\delta}, \quad\text{and}\quad \sum_{N < n < 2N} d(n) n^{-1} |j_n'| \ll \sum_{1 \le m \ll N} d(N + m) m^{-1} \ll x^{\delta}.$

Hence the tail $n > N$ of the series can be omitted with

an error $\ll k x^{\delta}$, and taking into account the choice (1.5.13) of $T$, we obtain

(1.5.19)  $\Delta(x, r) = \tfrac{1}{2} \pi^{-2} k \sum_{n \le N} d(n) n^{-1} e_k(-n\bar h) j_n + O(k x^{1/2+\delta} N^{-1/2}).$
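The use of Lemma 1.6 here, and repeatedly below, can be illustrated numerically. In the sketch that follows, the phase $F(x) = x^2$, the amplitude $G \equiv 1$, and the interval $[1, 3]$ are arbitrary choices made for this illustration: then $F'(x)/G(x) = 2x \ge 2$, so the bound (1.5.17) with $m = 2$ predicts that the integral has modulus at most $2$.

```python
import cmath

def osc_integral(F, G, a, b, n=200000):
    # Midpoint rule for the complex integral int_a^b G(x) exp(i F(x)) dx.
    h = (b - a) / n
    total = 0j
    for j in range(n):
        x = a + (j + 0.5) * h
        total += G(x) * cmath.exp(1j * F(x))
    return total * h

# F(x) = x^2, G(x) = 1 on [1, 3]: F'(x)/G(x) = 2x >= m = 2, and G/F' is monotonic.
val = osc_integral(lambda x: x * x, lambda x: 1.0, 1.0, 3.0)
print(abs(val))
assert abs(val) <= 4 / 2  # the first-derivative bound 4/m of (1.5.17)
```

The point of the lemma is exactly this uniformity: the bound depends only on the lower bound $m$ for $F'/G$, not on the length of the interval.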

The remaining integrals $j_n$ will be calculated approximately by Lemma 1.5, and to this end we extend the path of integration in (1.5.12) to the infinite broken line through the points $\delta - i\infty$, $\delta - iT$, $-\delta - iT$, $-\delta + iT$, $\delta + iT$, and $\delta + i\infty$, estimating the error caused when the $j_n$ in (1.5.19) are replaced by the new integrals. First, by (1.5.14) and (1.5.13),

$\sum_{n \le N} d(n) n^{-1} \Bigl| \int_{-\delta+iT}^{\delta+iT} (\cdots) \, ds \Bigr| \ll \sum_{n \le N} d(n) n^{-1} \int_{-\delta}^{\delta} (n/N)^{\sigma} \, d\sigma \ll N^{\delta} \sum_{n \le N} d(n) n^{-1-\delta} \ll x^{\delta},$

where $(\cdots)$ means the integrand of $j_n$. The same estimate holds for the integrals over the line segment $[-\delta - iT, \delta - iT]$.

Next, by (1.5.14), (1.5.18), and Lemma 1.6, we have

$\sum_{n \le N} d(n) n^{-1} \Bigl| \int_{\delta+iT}^{\delta+i\infty} (\cdots) \, ds \Bigr| \ll (k^{-2} x)^{\delta} \sum_{n \le N} d(n) n^{-1+\delta} \Bigl| \int_{T}^{\infty} t^{-2\delta} \bigl( e^{iF(t)} + O(t^{-1}) \bigr) \, dt \Bigr|$

$\ll (k^{-2} T^{-2} x)^{\delta} \sum_{n \le N} d(n) n^{-1+\delta} \Bigl( \Bigl( \log \frac{N + 1/2}{n} \Bigr)^{-1} + 1 \Bigr) \ll \sum_{n \le N/2} d(n) n^{-1} + \sum_{N/2 < n \le N} d(n) (N + 1/2 - n)^{-1} \ll x^{\delta}.$

These estimations show that (1.5.19) remains valid if the $j_n$ are replaced by the modified integrals, which are of the type $I_2$ in (1.4.2) for $a = 0$ and $X = 4\pi^{2} n x k^{-2}$, and thus equal to

$-2\pi^{2} (nx)^{1/2} k^{-1} Y_1(4\pi \sqrt{nx}/k)$

by (1.4.5). The assertion (1.5.3) now follows when $Y_1$ is replaced by its expression (1.3.16) (which holds trivially, with the error term $O(x^{-1})$, even in the interval $(0, \delta_2)$).

The proof of (1.5.4) is quite similar. The starting point is the equation

$A(x, r) = \frac{1}{2\pi i} \int_{(k+1)/2+\delta-iT}^{(k+1)/2+\delta+iT} \varphi(s, r) x^{s} s^{-1} \, ds + O(x^{(k+1)/2+\delta} T^{-1}),$

where $1 \le T \ll k^{-1} x$. It should be noted that Deligne's estimate (1.2.5) is needed here; otherwise the error term would be bigger. The integration is next shifted to the line segment $[(k-1)/2 - \delta - iT, \, (k-1)/2 - \delta + iT]$ with arguments as in the proof of (1.5.10), except that now there are no residue terms. Applying the functional equation (1.2.6) of $\varphi(s, r)$, we obtain

$A(x, r) = (-1)^{k/2} (k/2\pi)^{k} \sum_{n=1}^{\infty} a(n) n^{-k} e_k(-n\bar h) \times \frac{1}{2\pi i} \int_{(k-1)/2-\delta-iT}^{(k-1)/2-\delta+iT} \Gamma(k - s) \Gamma(s)^{-1} (4\pi^{2} n x k^{-2})^{s} s^{-1} \, ds + O(x^{(k+1)/2+\delta} T^{-1}).$

The parameter $T$ is chosen as in (1.5.13) again. Next it is shown, as before, that the tail $n > N$ of the above series can be omitted, and that in the remaining terms the integration can be shifted to the whole line $\sigma = (k-1)/2 + \delta$. The new integrals are evaluated in terms of the function $J_k$ using (1.4.6). Finally $J_k$ is approximated by (1.3.15) (which holds trivially, with the error term $O(x^{-1/2})$, even in the interval $(0, \delta_2)$) to give the formula (1.5.4). The proof of the theorem is now complete.

Choosing $N = k^{2/3} x^{1/3}$ and estimating the sums on the right of (1.5.3) and (1.5.4) by absolute values, one obtains the following estimates for $\Delta(x, r)$ and $A(x, r)$.

COROLLARY. For $x \ge 1$ and $k \le x$ we have

(1.5.20)  $\Delta(x, h/k) \ll k^{2/3} x^{1/3+\epsilon},$

(1.5.21)  $A(x, h/k) \ll k^{2/3} x^{k/2 - 1/6+\epsilon}.$

As another application of Theorem 1.1 we deduce mean value results for $\Delta(x, r)$ and $A(x, r)$.

THEOREM 1.2. For $X \ge 1$ we have

(1.5.22)  $\int_{1}^{X} |\Delta(x, h/k)|^{2} \, dx = c_1 k X^{3/2} + O(k^{2} X^{1+\epsilon}) + O(k^{3/2} X^{5/4+\epsilon})$

and

(1.5.23)  $\int_{1}^{X} |A(x, h/k)|^{2} \, dx = c_2(k) k X^{k+1/2} + O(k^{2} X^{k+\epsilon}) + O(k^{3/2} X^{k+1/4+\epsilon}),$

where

$c_1 = (6\pi^{2})^{-1} \sum_{n=1}^{\infty} d^{2}(n) n^{-3/2}$

and

$c_2(k) = \bigl( (4k + 2) \pi^{2} \bigr)^{-1} \sum_{n=1}^{\infty} |a(n)|^{2} n^{-k-1/2}.$

Proof. The proofs of these assertions are very similar, so it suffices to consider the verification of (1.5.22) as an example. We are actually going to prove the formula

(1.5.24)  $\int_{X}^{2X} |\Delta(x, h/k)|^{2} \, dx = c_1 k \bigl( (2X)^{3/2} - X^{3/2} \bigr)$

$+\, O(k^{2} X^{1+\epsilon}) + O(k^{3/2} X^{5/4+\epsilon})$

for $k \le X$, while for $X \ll k$ we estimate trivially

(1.5.25)  $\int_{1}^{X} |\Delta(x, h/k)|^{2} \, dx \ll k^{2} X^{1+\epsilon},$

noting that $\Delta(x, h/k) \ll k \log 2k$ for $x \ll k$ by (1.5.2) and (1.1.4). Clearly (1.5.22) follows from (1.5.24) and (1.5.25).

Turning to the proof of (1.5.24), let $X \le x \le 2X$, and choose $N = X$ in the formula (1.5.3), which we write as

$\Delta(x, h/k) = S(x, h/k) + O(k x^{\epsilon}).$

We are going to prove that

(1.5.26)  $\int_{X}^{2X} |S(x, h/k)|^{2} \, dx = c_1 k \bigl( (2X)^{3/2} - X^{3/2} \bigr) + O(k^{2} X^{1+\epsilon}),$

which implies (1.5.24) by Cauchy's inequality. Squaring out $|S(x, h/k)|^{2}$ and integrating term by term, we find that

(1.5.27)  $\int_{X}^{2X} |S(x, h/k)|^{2} \, dx = S_0 + O\bigl( k(|S_1| + |S_2|) \bigr),$

where

$S_0 = (4\pi^{2})^{-1} k \sum_{n \le X} d^{2}(n) n^{-3/2} \int_{X}^{2X} x^{1/2} \, dx,$

$S_1 = \sum_{\substack{m, n \le X \\ m \ne n}} d(m) d(n) (mn)^{-3/4} \int_{X}^{2X} x^{1/2} e\bigl( 2(\sqrt{m} - \sqrt{n}) \sqrt{x} / k \bigr) \, dx,$

$S_2 = \sum_{m, n \le X} d(m) d(n) (mn)^{-3/4} \int_{X}^{2X} x^{1/2} e\bigl( 2(\sqrt{m} + \sqrt{n}) \sqrt{x} / k \bigr) \, dx.$

The sum $S_0$ gives the leading term in (1.5.26), for

$S_0 = c_1 k \bigl( (2X)^{3/2} - X^{3/2} \bigr) + O(k X^{1+\epsilon}).$

Further, by Lemma 1.6,

$S_1 \ll k X \sum_{m < n \le X} d(m) d(n) (mn)^{-3/4} (\sqrt{n} - \sqrt{m})^{-1} \ll k X \sum_{m < n \le X} d(m) d(n) m^{-3/4} n^{-1/4} (n - m)^{-1} \ll k X^{1+\epsilon/2} \sum_{m \le X} m^{-1} \ll k X^{1+\epsilon},$

and similarly for $S_2$. Hence (1.5.26) follows from (1.5.27), and the proof of (1.5.22) is complete.

COROLLARY. For $k \ll X^{1/2-\epsilon}$ and $X \to \infty$ we have

(1.5.28)  $\int_{1}^{X} |\Delta(x, h/k)|^{2} \, dx \sim c_1 k X^{3/2},$

(1.5.29)  $\int_{1}^{X} |A(x, h/k)|^{2} \, dx \sim c_2(k) k X^{k+1/2}.$

It is seen that for $k \ll x^{1/2-\epsilon}$ the typical order of $|\Delta(x, h/k)|$ is $k^{1/2} x^{1/4}$, and that of $|A(x, h/k)|$ is $k^{1/2} x^{k/2-1/4}$. This suggests the following

CONJECTURE. For $x \ge 1$ and $k \ll x^{1/2}$

(1.5.30)  $|\Delta(x, h/k)| \ll k^{1/2} x^{1/4+\epsilon},$

(1.5.31)  $|A(x, h/k)| \ll k^{1/2} x^{k/2-1/4+\epsilon}.$

Note that (1.5.30) is a generalization of the old conjecture $|\Delta(x)| \ll x^{1/4+\epsilon}$ in Dirichlet's divisor problem.

1.6 Identities for Da(x, r) and Aa(x, r)

The Case $a \ge 1$.

As generalizations of the sum functions $D(x, r)$ and $A(x, r)$, define the Riesz means

(1.6.1)  $D_a(x, r) = \frac{1}{a!} {\sum_{n \le x}}' \, d(n) e(nr) (x - n)^{a}$

and

(1.6.2)  $A_a(x, r) = \frac{1}{a!} {\sum_{n \le x}}' \, a(n) e(nr) (x - n)^{a},$

where $a$ is a nonnegative integer. Thus $D_0(x, r) = D(x, r)$ and $A_0(x, r) = A(x, r)$. Actually, for our later purposes, only the case $a = 0$ will be of relevance, but just in order to be able to deal with this somewhat delicate case by an induction from $a$ to $a - 1$, we shall need identities for $D_a(x, r)$ and $A_a(x, r)$ as well. These are contained in the following theorem.

THEOREM 1.3. Let $a \ge 0$ be an integer. Then for $x > 0$ we have

(1.6.3)  $D_a(x, h/k) = \frac{x^{1+a}}{(1 + a)! \, k} \Bigl( \log x + 2\gamma - 2 \log k - \sum_{n=1}^{a+1} \frac{1}{n} \Bigr) + \sum_{n=0}^{a} \frac{(-1)^{n}}{n! \, (a - n)!} E(-n, h/k) x^{a-n} + \Delta_a(x, h/k),$

where

(1.6.4)  $\Delta_a(x, h/k) = -(k/2\pi)^{a} x^{(1+a)/2} \sum_{n=1}^{\infty} d(n) n^{-(1+a)/2} \Bigl\{ e_k(-n\bar h) Y_{1+a}(4\pi \sqrt{nx}/k) + (-1)^{a} (2/\pi) e_k(n\bar h) K_{1+a}(4\pi \sqrt{nx}/k) \Bigr\}.$

Also,

(1.6.5)  $A_a(x, h/k) = (-1)^{k/2} (k/2\pi)^{a} x^{(k+a)/2} \sum_{n=1}^{\infty} a(n) n^{-(k+a)/2} e_k(-n\bar h) J_{k+a}(4\pi \sqrt{nx}/k).$

Proof (the case $a \ge 1$). By a well-known summation formula (see [13], p. 487, equation (A.14)), we have for any $c > 1$

$D_a(x, r) = \frac{1}{2\pi i} \int_{(c)} E(s, r) x^{s+a} \bigl( s(s+1) \cdots (s+a) \bigr)^{-1} \, ds.$

First let $a \ge 2$, and move the integration to the broken line $C_a$ joining the points $1/3 - i\infty$, $1/3 - i$, $-(a + 1/2) - i$, $-(a + 1/2) + i$, $1/3 + i$, and $1/3 + i\infty$. The residues at $1, 0, -1, \ldots, -a$ give the initial terms in (1.6.3); the expansion (1.1.3) is used in the calculation of the residue at $s = 1$. Note also that the integrand is $\ll |t|^{-2\sigma-a}$ for $|t| \ge 1$ and $\sigma$ bounded (the implied constant depends on $k$ and $x$), so that the theorem of residues gives

$\Delta_a(x, r) = \frac{1}{2\pi i} \int_{C_a} E(s, r) x^{s+a} \bigl( s(s+1) \cdots (s+a) \bigr)^{-1} \, ds.$

The function $E(s, h/k)$ is now expressed by the functional equation (1.1.2), and the resulting series can be integrated term by term by the last mentioned estimate. The new integrals are of the type $I_1$ and $I_2$ in the notation of §1.4, and (1.6.4) follows, for $a \ge 2$, when these integrals are evaluated by Lemma 1.5.

Next we differentiate both sides of (1.6.3) with respect to $x$. By the definition (1.6.1) we have for $a \ge 2$

(1.6.6)  $D_a'(x, r) = D_{a-1}(x, r),$

and consequently by (1.6.3)

(1.6.7)  $\Delta_a'(x, r) = \Delta_{a-1}(x, r).$


The right-hand side of (1.6.4) shares the same property, for its derivative equals the same expression with $a$ replaced by $a - 1$. Formally, this can be verified by differentiating term by term using the relations

(1.6.8)  $\bigl( x^{n} K_n(x) \bigr)' = -x^{n} K_{n-1}(x)$

and

(1.6.9)  $\bigl( x^{n} Y_n(x) \bigr)' = x^{n} Y_{n-1}(x).$

But by (1.3.16) and (1.3.17) the series in (1.6.4) converges absolutely for $a \ge 1$, and the convergence is uniform in any interval $[x_1, x_2] \subset (0, \infty)$, which justifies the termwise differentiation for $a \ge 2$. This argument proves (1.6.4) for $a = 1$ also.

The identity (1.6.5) is proved in the same way, starting from the formula

$A_a(x, r) = \frac{1}{2\pi i} \int_{(c)} \varphi(s, r) x^{s+a} \bigl( s(s+1) \cdots (s+a) \bigr)^{-1} \, ds,$

where $c > (k + 1)/2$. For $a \ge 2$ the integration can be shifted to the line $\sigma = k/2 - 2/3$, where we use the functional equation (1.2.6) to rewrite $\varphi(s, h/k)$. This leads to integrals of the type $I_3$, which can be expressed in terms of the Bessel function $J_{k+a}$ by (1.4.6). As a result, we obtain the assertion (1.6.5) for $a \ge 2$. The case $a = 1$ is deduced from this by differentiation as above, using the relation

(1.6.10)  $\bigl( x^{n} J_n(x) \bigr)' = x^{n} J_{n-1}(x)$

and the asymptotic formula (1.3.15). We have now proved the theorem in the case $a \ge 1$; the case $a = 0$ is postponed to §1.8.

Estimating the series in (1.6.4) and (1.6.5) by absolute values, one obtains estimates for $\Delta_a(x, r)$ and $A_a(x, r)$. In the case $a = 1$, the result is as follows.

COROLLARY. For $x \gg k^{2}$ we have

(1.6.11)  $|\Delta_1(x, h/k)| \ll k^{3/2} x^{3/4}$

and

(1.6.12)  $|A_1(x, h/k)| \ll k^{3/2} x^{k/2+1/4}.$

REMARK. The error term $\Delta_0(x, r)$ coincides with $\Delta(x, r)$, defined in (1.5.2). The relations (1.6.6) and (1.6.7) remain valid also for $a = 1$ if $x$ is not an integer. Thus, in particular,

(1.6.13)  $\Delta_1'(x, r) = \Delta(x, r) \quad\text{for } x > 0, \; x \notin \mathbb{Z}.$

Together with (1.6.4) for $a = 1$, this yields (1.6.4) for $a = 0$, $x \notin \mathbb{Z}$ as well, if the termwise differentiation of (1.6.4) for $a = 1$ can be justified. This step is not obvious, but requires an analysis which is carried out in the next section. After that, the remaining case $a = 0$, $x \in \mathbb{Z}$ is dealt with in §1.8 by a limiting argument.

In analogy with (1.6.13), we have

(1.6.14)  $A_1'(x, r) = A(x, r) \quad\text{for } x > 0, \; x \notin \mathbb{Z}.$

This relation, which follows immediately from the definition (1.6.2), is the starting point in the proof of (1.6.5) for $a = 0$.
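The Bessel recurrences (1.6.8)–(1.6.10), on which the descent from $a$ to $a - 1$ rests, are easy to confirm numerically. The sketch below checks $(x^n J_n(x))' = x^n J_{n-1}(x)$ by a central difference, computing $J_n$ from its standard integral representation; the test point $x = 1.7$ and the order $n = 2$ are arbitrary choices made for this check.

```python
import math

def bessel_j(n: int, x: float, steps: int = 2000) -> float:
    # J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin t) dt; the trapezoid rule is
    # extremely accurate here because the integrand's odd derivatives vanish
    # at both endpoints.
    h = math.pi / steps
    s = 0.5 * (1.0 + math.cos(n * math.pi))  # endpoint values at t = 0 and t = pi
    for j in range(1, steps):
        t = j * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

n, x, h = 2, 1.7, 1e-4
lhs = ((x + h) ** n * bessel_j(n, x + h) - (x - h) ** n * bessel_j(n, x - h)) / (2 * h)
rhs = x ** n * bessel_j(n - 1, x)
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-6  # (x^n J_n(x))' = x^n J_{n-1}(x)
```

The relations for $Y_n$ and $K_n$ have the same form (with a sign change for $K_n$), which is why differentiating the series (1.6.4) simply lowers the index $a$ by one.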

1.7 Analysis of the Convergence of the Voronoi Series

In this section we are going to study the series (1.6.4) and (1.6.5) for $a = 0$ as a preliminary for the proof of Theorem 1.3 for this remaining value of $a$. In virtue of the analogy between $d(n)$ and $a(n)$, we may restrict ourselves to the analysis of the first mentioned series. Thus, let us consider the series

(1.7.1)  $x^{1/2} \sum_{n=1}^{\infty} d(n) n^{-1/2} \Bigl\{ e_k(-n\bar h) Y_1(4\pi \sqrt{nx}/k) + (2/\pi) e_k(n\bar h) K_1(4\pi \sqrt{nx}/k) \Bigr\}.$

For $k = 1$ this is, up to sign, Voronoi's expression for $\Delta(x)$, and the more general series (1.7.1) will also be called a Voronoi series.

From the point of view of convergence, the factor $x^{1/2}$ in front of the Voronoi series is of course irrelevant, but because we are going to consider $x$ as a variable in the next section, we prefer keeping $x$ explicit all the time. Denote by $\sum(a, b; x)$ that part of the Voronoi series in which the summation is taken over the finite interval $[a, b]$. The following theorem gives an approximate formula for $\sum(a, b; x)$.

THEOREM 1.4. Let $[x_1, x_2] \subset (0, \infty)$ be a fixed interval. Then uniformly for $x \in [x_1, x_2]$ and $2 \le a < b < \infty$ we have

(1.7.2)  $\sum(a, b; x) = A x^{5/4} d(m) m^{-5/4} e_k(mh) \int_{\sqrt{a}}^{\sqrt{b}} u^{-1} \sin\bigl( 4\pi (\sqrt{m} - \sqrt{x}) u / k \bigr) \, du + O(a^{-1/4} \log a),$

where $m$ is the positive integer nearest to $x$ (or any one of the two possibilities if $x > 1$ is half an odd integer), and $A$ is a number depending only on $k$. For the proof, we shall need the following elementary lemma.

Lemma 1.7. Let $f \in C^{2}[a, b]$, where $0 < a < b$. Then

(1.7.3)  ${\sum_{a \le n \le b}}' \, f(n) d(n) e_k(nh) = \Bigl[ \Delta(t, h/k) f(t) - \Delta_1(t, h/k) f'(t) \Bigr]_{a}^{b} + \int_{a}^{b} \Delta_1(t, h/k) f''(t) \, dt + k^{-1} \int_{a}^{b} (\log t + 2\gamma - 2 \log k) f(t) \, dt.$

Proof. According to (1.5.2), the sum under consideration is

$\int_{a}^{b} f(t) \, dD(t, h/k) = k^{-1} \int_{a}^{b} f(t) (\log t + 2\gamma - 2 \log k) \, dt + \int_{a}^{b} f(t) \, d\Delta(t, h/k).$

By repeated integrations by parts and using (1.6.13), we obtain

$\int_{a}^{b} f(t) \, d\Delta(t, h/k) = \Bigl[ f(t) \Delta(t, h/k) \Bigr]_{a}^{b} - \int_{a}^{b} f'(t) \Delta(t, h/k) \, dt = \Bigl[ f(t) \Delta(t, h/k) - \Delta_1(t, h/k) f'(t) \Bigr]_{a}^{b} + \int_{a}^{b} \Delta_1(t, h/k) f''(t) \, dt,$

and the formula (1.7.3) follows.

Proof of Theorem 1.4. Because $h/k$, $x_1$, and $x_2$ will be fixed during the following discussion, we may ignore the dependence of the constants on them. First, by the asymptotic formulae (1.3.16) and (1.3.17) for Bessel functions, we have

(1.7.4)  $\sum(a, b; x) = A x^{1/4} \sum_{a \le n \le b} d(n) n^{-3/4} e_k(-n\bar h) \cos(4\pi \sqrt{nx}/k - \pi/4) + O(a^{-1/4} \log a).$

Lemma 1.7 is now applied to the sum here, with $-\bar h/k$ in place of $h/k$, and with

$f(t) = x^{1/4} t^{-3/4} \cos(4\pi \sqrt{tx}/k - \pi/4).$

The integrated terms in (1.7.3) are $\ll a^{-1/4}$, by (1.5.20) and (1.6.11). Also, by Lemma 1.6, the last term in (1.7.3) is $\ll a^{-1/4} \log a$. Thus it remains to consider the integral

(1.7.5)  $\int_{a}^{b} \Delta_1(t, -\bar h/k) f''(t) \, dt.$

In our case,

$f''(t) = A x^{5/4} t^{-7/4} \cos(4\pi \sqrt{tx}/k - \pi/4) + O(t^{-9/4}).$

The contribution of the error term to (1.7.5) is $\ll a^{-1/2}$. Hence, in place of (1.7.5), it suffices to deal with the integral

(1.7.6)  $x^{5/4} \int_{a}^{b} t^{-7/4} \Delta_1(t, -\bar h/k) \cos(4\pi \sqrt{tx}/k - \pi/4) \, dt.$

For ∆1(t, h¯/k) we have the formula (1.6.4), which gives

$\Delta_1(t, -\bar h/k) = A t^{3/4} \sum_{n=1}^{\infty} d(n) n^{-5/4} e_k(nh) \cos(4\pi \sqrt{nt}/k + \pi/4) + O(t^{1/4}).$

The contribution of the error term to (1.7.6) is $\ll a^{-1/2}$. Thus, the result of all the calculations so far is that

$\sum(a, b; x) = A x^{5/4} \sum_{n=1}^{\infty} d(n) n^{-5/4} e_k(nh) \int_{a}^{b} t^{-1} \cos(4\pi \sqrt{tx}/k - \pi/4) \cos(4\pi \sqrt{nt}/k + \pi/4) \, dt + O(a^{-1/4} \log a).$

Further, when the product of the cosines is written as a sum of two cosines, and the variable $u = \sqrt{t}$ is introduced, this equation takes the shape

$\sum(a, b; x) = A x^{5/4} \sum_{n=1}^{\infty} d(n) n^{-5/4} e_k(nh) \Bigl( \int_{\sqrt{a}}^{\sqrt{b}} u^{-1} \cos\bigl( 4\pi (\sqrt{n} + \sqrt{x}) u / k \bigr) \, du - \int_{\sqrt{a}}^{\sqrt{b}} u^{-1} \sin\bigl( 4\pi (\sqrt{n} - \sqrt{x}) u / k \bigr) \, du \Bigr) + O(a^{-1/4} \log a).$

By Lemma 1.6, the integrals here are $\ll a^{-1/2} |\sqrt{n} \pm \sqrt{x}|^{-1}$. Hence, if the integral standing on the right of (1.7.2) is singled out, the rest can be estimated uniformly as $\ll a^{-1/2}$. This completes the proof.

The problem of the nature of the convergence of the Voronoi series is now reduced to the estimation of an elementary integral, and it is a simple matter to deduce the following

THEOREM 1.5. The series (1.7.1) is boundedly convergent in any interval $[x_1, x_2] \subset (0, \infty)$, and uniformly convergent in any such interval free from integers. The same assertions hold for the series (1.6.5) for $a = 0$.

Proof. The integral in (1.7.2) vanishes if $x = m$, and otherwise it tends to zero as $a$ and $b$ tend to infinity. Thus, in any case, the Voronoi series (1.7.1) converges. Moreover, if the interval $[x_1, x_2]$ contains no integer, then the integral in question is $\ll a^{-1/2}$ uniformly in this interval, where the Voronoi series is therefore uniformly convergent. Finally, to prove the boundedness of the convergence in $[x_1, x_2]$, let $x$ and $m$ be as in Theorem 1.4, and put $x = m + \delta$, $c = \min(\sqrt{b}, \max(\sqrt{a}, 1/|\delta|))$. Then

$\int_{\sqrt{a}}^{\sqrt{b}} u^{-1} \sin\bigl( 4\pi (\sqrt{m} - \sqrt{x}) u / k \bigr) \, du = \int_{\sqrt{a}}^{c} + \int_{c}^{\sqrt{b}} \ll \int_{\sqrt{a}}^{c} |\sqrt{m} - \sqrt{x}| \, du + c^{-1} |\sqrt{m} - \sqrt{x}|^{-1} \ll 1.$

Hence $\sum(a, b; x) \ll 1$ uniformly for all $0 < a < b$ and $x \in [x_1, x_2]$.
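The elementary fact underlying these estimates, namely that $\int_a^b u^{-1} \sin(cu)\,du$ is bounded uniformly in the endpoints and in the frequency $c > 0$ (it equals $\mathrm{Si}(cb) - \mathrm{Si}(ca)$, and the sine integral $\mathrm{Si}$ is bounded), can be checked by brute force; the parameter choices below are illustrative only.

```python
import math

def sine_kernel_integral(a: float, b: float, c: float, n: int = 200000) -> float:
    # Midpoint rule for int_a^b sin(c*u)/u du.
    h = (b - a) / n
    total = 0.0
    for j in range(n):
        u = a + (j + 0.5) * h
        total += math.sin(c * u) / u
    return total * h

for a, b, c in [(0.1, 5.0, 1.0), (1.0, 50.0, 3.0), (0.5, 20.0, 0.01), (2.0, 80.0, 10.0)]:
    val = sine_kernel_integral(a, b, c)
    # sup |Si| = Si(pi) ~ 1.852, so 2 is a safe uniform bound:
    assert abs(val) < 2.0
print("bounded")
```

It is this uniformity in $c = 4\pi(\sqrt{m} - \sqrt{x})/k$ that yields bounded, though not uniform, convergence near the integers.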

1.8 Identities for D(x, r) and A(x, r)

We are now in a position to prove Theorem 1.3 for $a = 0$. For convenience of reference, and because of the importance of this result, we state it separately as a theorem.

THEOREM 1.6. For x > 0 we have

(1.8.1)  $D(x, h/k) = k^{-1}(\log x + 2\gamma - 1 - 2 \log k)x + E(0, h/k) - x^{1/2} \sum_{n=1}^{\infty} d(n) n^{-1/2} \Bigl\{ e_k(-n\bar h) Y_1(4\pi \sqrt{nx}/k) + (2/\pi) e_k(n\bar h) K_1(4\pi \sqrt{nx}/k) \Bigr\}$

and

(1.8.2)  $A(x, h/k) = (-1)^{k/2} x^{k/2} \sum_{n=1}^{\infty} a(n) n^{-k/2} e_k(-n\bar h) J_k(4\pi \sqrt{nx}/k).$

Proof. Consider first the case when $x$ is not an integer. Let $[x_1, x_2]$ be an interval containing $x$ but no integer. Then the series on the right of (1.8.1) converges uniformly in this interval, by Theorem 1.5. Therefore the termwise differentiation of the identity (1.6.4) for $\Delta_1(x, h/k)$ is justified, which gives the formula (1.6.4) for $a = 0$, and thus also the formula (1.8.1) (see the remark at the end of §1.6).

The case when $x = m$ is an integer will now be settled by Theorem 1.4 and the previous case. Let

$S(x) = -x^{1/2} \sum_{n=1}^{\infty} d(n) n^{-1/2} \Bigl\{ e_k(-n\bar h) Y_1(4\pi \sqrt{nx}/k) + (2/\pi) e_k(n\bar h) K_1(4\pi \sqrt{nx}/k) \Bigr\}.$

Then $S(x) = \Delta(x, h/k)$ if $x > 0$ is not an integer, and $S(m)$ is the asserted value of $\Delta(m, h/k)$. We are going to show that

(1.8.3)  $\lim_{\delta \to 0+} \tfrac{1}{2} \bigl( D(m + \delta, h/k) + D(m - \delta, h/k) \bigr) = k^{-1}(\log m + 2\gamma - 1 - 2 \log k)m + E(0, h/k) + S(m).$

Because $\tfrac{1}{2}(D(m + \delta, h/k) + D(m - \delta, h/k))$ equals $D(m, h/k)$ for all $\delta \in (0, 1)$, this implies (1.8.1) for $x = m$.

First, the leading terms of the formulae for $D(m \pm \delta, h/k)$, just proved,

give in the limit the leading terms on the right of (1.8.3). Therefore, it remains to prove that

(1.8.4)  $\lim_{\delta \to 0+} \bigl( S(m + \delta) + S(m - \delta) - 2 S(m) \bigr) = 0.$

Write $S(x) = S_1(x) + S_2(x) + S_3(x)$, where the range of summation in the sum $S_i(x)$ is, respectively, $[1, \delta^{-1})$, $[\delta^{-1}, \delta^{-3}]$, and $(\delta^{-3}, \infty)$. We estimate separately the quantities

$\Delta_i(\delta) = S_i(m + \delta) + S_i(m - \delta) - 2 S_i(m).$

Consider first $\Delta_1(\delta)$, writing

$\Delta_1(\delta) = \sum_{n < \delta^{-1}} d(n) \alpha_n(\delta).$

By the formulae

(1.8.5)  $\bigl( x^{1/2} Y_1(4\pi \sqrt{nx}/k) \bigr)' = 2\pi (\sqrt{n}/k) Y_0(4\pi \sqrt{nx}/k),$

(1.8.6)  $\bigl( x^{1/2} K_1(4\pi \sqrt{nx}/k) \bigr)' = -2\pi (\sqrt{n}/k) K_0(4\pi \sqrt{nx}/k),$

which follow from (1.6.8) and (1.6.9), we find that $\alpha_n(\delta) \ll n^{-1/4} \delta$. Hence

(1.8.7)  $\Delta_1(\delta) \ll \delta^{1/4} \log(1/\delta).$

Next, by definition,

(1.8.8)  $\Delta_2(\delta) = \sum(\delta^{-1}, \delta^{-3}; m + \delta) + \sum(\delta^{-1}, \delta^{-3}; m - \delta) - 2 \sum(\delta^{-1}, \delta^{-3}; m).$

To facilitate comparisons between the sums on the right, we write the factor $(m \pm \delta)^{5/4}$ in front of the formula (1.7.2) for $\sum(\delta^{-1}, \delta^{-3}; m \pm \delta)$ as $m^{5/4} + O(\delta)$. Then, by (1.8.8) and (1.7.2),

$\Delta_2(\delta) \ll \int_{\delta^{-1/2}}^{\delta^{-3/2}} u^{-1} \Bigl| \Bigl\{ \sin\bigl( 4\pi (\sqrt{m} - \sqrt{m + \delta}) u / k \bigr) + \sin\bigl( 4\pi (\sqrt{m} - \sqrt{m - \delta}) u / k \bigr) \Bigr\} \Bigr| \, du + \delta^{1/4} \log(1/\delta).$

The expression in the curly brackets is estimated as follows:

$|\{\cdots\}| = 2 \bigl| \sin\bigl( 2\pi (\sqrt{m + \delta} + \sqrt{m - \delta} - 2\sqrt{m}) u / k \bigr) \bigr| \, \bigl| \cos\bigl( 2\pi (\sqrt{m + \delta} - \sqrt{m - \delta}) u / k \bigr) \bigr| \ll \delta^{2} u.$

Hence

(1.8.9)  $\Delta_2(\delta) \ll \delta^{1/4} \log(1/\delta).$

Finally, by Theorem 1.4 and Lemma 1.6, we have for any $b > \delta^{-3}$

$\sum(\delta^{-3}, b; m \pm \delta) \ll \delta^{3/2} \delta^{-1} + \delta^{3/4} \log(1/\delta) \ll \delta^{1/2},$

and the same estimate holds also for $\sum(\delta^{-3}, b; m)$. Hence

(1.8.10)  $\Delta_3(\delta) \ll \delta^{1/2}.$

Now (1.8.7), (1.8.9), and (1.8.10) together give

$S(m + \delta) + S(m - \delta) - 2 S(m) \ll \delta^{1/4} \log(1/\delta),$

and the assertion (1.8.4) follows. This completes the proof of (1.8.1); the identity (1.8.2) can be proved likewise.

1.9 The Summation Formulae

We are now in a position to deduce the main results of this chapter, the summation formulae of the Voronoi type involving an exponential factor.

THEOREM 1.7. Let $0 < a < b$ and $f \in C^{1}[a, b]$. Then

(1.9.1)  ${\sum_{a \le n \le b}}' \, d(n) e_k(nh) f(n) = k^{-1} \int_{a}^{b} (\log x + 2\gamma - 2 \log k) f(x) \, dx + k^{-1} \sum_{n=1}^{\infty} d(n) \int_{a}^{b} \Bigl\{ -2\pi e_k(-n\bar h) Y_0(4\pi \sqrt{nx}/k) + 4 e_k(n\bar h) K_0(4\pi \sqrt{nx}/k) \Bigr\} f(x) \, dx$

and

(1.9.2)  ${\sum_{a \le n \le b}}' \, a(n) e_k(nh) f(n) = 2\pi k^{-1} (-1)^{k/2} \sum_{n=1}^{\infty} a(n) e_k(-n\bar h) n^{-(k-1)/2} \int_{a}^{b} x^{(k-1)/2} J_{k-1}(4\pi \sqrt{nx}/k) f(x) \, dx.$

The series in (1.9.1) and (1.9.2) are boundedly convergent for $a$ and $b$ lying in any fixed interval $[x_1, x_2] \subset (0, \infty)$.

Proof. We may suppose that $0 < a < 1$, for the general case then follows by subtraction. Accordingly, the sum in (1.9.1) is

${\sum_{n \le b}}' \, d(n) e_k(nh) f(n) = \int_{a}^{b} f(x) \, dD(x, h/k).$

By an integration by parts, this becomes

(1.9.3)  $f(b) D(b, h/k) - \int_{a}^{b} f'(x) D(x, h/k) \, dx.$

We substitute $D(x, h/k)$ from the identity (1.8.1), noting that the resulting series can be integrated term by term because of bounded convergence. Thus

$\int_{a}^{b} f'(x) D(x, h/k) \, dx = \int_{a}^{b} f'(x) \Bigl\{ k^{-1}(\log x + 2\gamma - 1 - 2 \log k)x + E(0, h/k) \Bigr\} \, dx - \sum_{n=1}^{\infty} d(n) n^{-1/2} \int_{a}^{b} f'(x) x^{1/2} \Bigl\{ e_k(-n\bar h) Y_1(4\pi \sqrt{nx}/k) + (2/\pi) e_k(n\bar h) K_1(4\pi \sqrt{nx}/k) \Bigr\} \, dx.$

This is transformed by another integration by parts, using also (1.8.5) and (1.8.6). The integrated terms then yield $f(b) D(b, h/k)$, again by (1.8.1), and the right-hand side of the preceding equation becomes

$f(b) D(b, h/k) - k^{-1} \int_{a}^{b} (\log x + 2\gamma - 2 \log k) f(x) \, dx + 2\pi k^{-1} \sum_{n=1}^{\infty} d(n) \int_{a}^{b} \Bigl\{ e_k(-n\bar h) Y_0(4\pi \sqrt{nx}/k) - (2/\pi) e_k(n\bar h) K_0(4\pi \sqrt{nx}/k) \Bigr\} f(x) \, dx.$

Substituting this into (1.9.3), we obtain the formula (1.9.1). This argument also shows that the series in (1.9.1) is boundedly convergent.

The proof of (1.9.2) is analogous, being based on the identity (1.8.2) and the formula

$\bigl( x^{k/2} J_k(4\pi \sqrt{nx}/k) \bigr)' = 2\pi (\sqrt{n}/k) x^{(k-1)/2} J_{k-1}(4\pi \sqrt{nx}/k),$

which follows from (1.6.10).

Notes

Our estimate (1.1.4) for $E(0, h/k)$ is stronger by a logarithm than the bound $E(0, h/k) \ll k \log^{2} 2k$ of Estermann [8].

The value $\zeta(0) = -1/2$ can also be deduced from (1.1.9) by observing that for fixed $s \ne 1$ the function $\zeta(s, a)$ is continuous in the interval $0 < a \le 1$ (this follows e.g. from the loop integral representation (2.17.2) of $\zeta(s, a)$ in [27]).

The integrals $I_1$, $I_2$, and $I_3$ in §1.4 can also be evaluated by the inversion formula for the Mellin transformation, using the Mellin transform

pairs (see (7.9.11), (7.9.8), and (7.9.1) in [26])

$x^{-\nu} K_\nu(x), \qquad 2^{s-\nu-2} \Gamma\Bigl( \frac{1}{2} s \Bigr) \Gamma\Bigl( \frac{1}{2} s - \nu \Bigr),$

$x^{-\nu} Y_\nu(x), \qquad -2^{s-\nu-1} \pi^{-1} \Gamma\Bigl( \frac{1}{2} s \Bigr) \Gamma\Bigl( \frac{1}{2} s - \nu \Bigr) \cos\Bigl( \Bigl( \frac{1}{2} s - \nu \Bigr) \pi \Bigr),$

$x^{-\nu} J_\nu(x), \qquad 2^{s-\nu-1} \Gamma\Bigl( \frac{1}{2} s \Bigr) \Gamma\Bigl( \nu - \frac{1}{2} s + 1 \Bigr)^{-1}.$

Theorems 1.1, 1.2, 1.6 and 1.7 (for sums involving $d(n)$) appeared in [18]. The error terms in Theorem 1.2 could be improved. In fact, Tong [28] proved that

(*)  $\int_{2}^{X} \Delta^{2}(x) \, dx = C_1 X^{3/2} + O(X \log^{5} X)$

(for a simple proof, see Meurman [22]), and similarly it can be shown that (1.5.22) and (1.5.23) hold with the error terms $O(k^{2} X \log^{5} X)$ and $O(k^{2} X^{k} \log^{5} X)$, respectively. An analogue of (*) for the error term $E(T)$ in (0.6) was obtained by Meurman in the above mentioned paper.

The general summation formulae of Berndt (see [3], in particular part V) cover (1.9.2) but not (1.9.1), because the functional equation (1.1.2) for $E(s, r)$ is not of the form required in Berndt's papers. The novelty of the proof of Theorem 1.6 for integer values of $x$ lies in the equation (1.8.3).

Analogues of the results in this and subsequent chapters can be proved for sums and Dirichlet series involving Fourier coefficients of Maass waves. H. Maass [21] introduced non-holomorphic cusp forms as automorphic functions in the upper half-plane $H$ for the full modular group, which are eigenfunctions of the hyperbolic Laplacian $-y^{2}(\partial_x^{2} + \partial_y^{2})$ and square integrable over the fundamental domain

$\Bigl\{ z = x + yi \;\Big|\; -\frac{1}{2} \le x \le \frac{1}{2}, \; y > 0, \; |z| \ge 1 \Bigr\}$

with respect to the measure $y^{-2} \, dx \, dy$. Such functions, which are moreover orthonormal with respect to the Petersson inner product, eigenfunctions of all Hecke operators $T_n$, and either even or odd as functions of $x$, are called Maass waves. A Maass wave $f$, which is associated with the eigenvalue $1/4 + r^{2}$ ($r \in \mathbb{R}$) of the hyperbolic Laplacian and is an even function of $x$, can be expanded into a Fourier series of the form (see [20])

$f(z) = f(x + yi) = \sum_{n=1}^{\infty} a(n) y^{1/2} K_{ir}(2\pi n y) \cos(2\pi n x).$

It has been conjectured that $a(n) \ll n^{\epsilon}$, but this hypothesis, an analogue of (1.2.5), is still unsettled. The weaker estimate $a(n) \ll n^{1/5+\epsilon}$ has been proved by J.-P. Serre.

As an analogue of the Dirichlet series $\varphi(s)$, one may define the $L$-function

$L(s) = \sum_{n=1}^{\infty} a(n) n^{-s}.$

This can be continued analytically to an entire function satisfying the functional equation (see [7])

$\pi^{-s} L(s) \Gamma\Bigl( \frac{s + ir}{2} \Bigr) \Gamma\Bigl( \frac{s - ir}{2} \Bigr) = \pi^{s-1} L(1 - s) \Gamma\Bigl( \frac{1 - s + ir}{2} \Bigr) \Gamma\Bigl( \frac{1 - s - ir}{2} \Bigr).$

More generally, it can be proved that the function

$L(s, h/k) = \sum_{n=1}^{\infty} a(n) \cos(2\pi n h/k) n^{-s}$

$(k/\pi)^{s} L(s, h/k) \Gamma\Bigl( \frac{s + ir}{2} \Bigr) \Gamma\Bigl( \frac{s - ir}{2} \Bigr) = (k/\pi)^{1-s} L(1 - s, \bar h/k) \Gamma\Bigl( \frac{1 - s + ir}{2} \Bigr) \Gamma\Bigl( \frac{1 - s - ir}{2} \Bigr),$

which is an analogue of (1.2.6). Results of this kind can be proved for "odd" Maass waves as well, and having the necessary functional equations at one's disposal, one may pursue the analogy between holomorphic and non-holomorphic cusp forms further.

Chapter 2

Exponential Integrals

An integral of the type

$\int_{a}^{b} g(x) e(f(x)) \, dx$

is called an exponential integral. The object of various "saddle-point theorems" is to give the value of such an integral approximately in terms of the possible saddle point $x_0 \in (a, b)$ satisfying, by definition, the equation $f'(x_0) = 0$. Results of this kind can be found e.g. in [27], Chapter IV, and in [13], §2.1.

For our purposes, the existing saddle-point theorems are sometimes too crude. However, more precise results can be obtained for smoothed exponential integrals

$\int \eta(x) g(x) e(f(x)) \, dx,$

where $\eta(x)$ is a suitable smooth weight function. The present chapter is devoted to such integrals. The main result of §2.1 is a saddle-point theorem, and §2.2 deals with the case when no saddle point exists.

2.1 A Saddle-Point Theorem for Smoothed Exponential Integrals

It will be convenient to single out a linear part from the function f , writing thus f (x) + αx in place of f (x). Accordingly, our exponential integral reads

(2.1.1)  $I = I(a, b) = \int_{a}^{b} g(x) e(f(x) + \alpha x) \, dx = \int_{a}^{b} h(x) \, dx,$

say, where $\alpha$ is a real number. For a given positive integer $J$ and a given real number $U > 0$, we define the weight function $\eta_J(x)$ by the equation

(2.1.2)  $I_J = I_J(a, b) = U^{-J} \int_{0}^{U} du_1 \cdots \int_{0}^{U} du_J \int_{a+u}^{b-u} h(x) \, dx = \int_{a}^{b} \eta_J(x) h(x) \, dx,$

where $u = u_1 + \cdots + u_J$. We suppose that $JU < (b - a)/2$. Also, we define $I_0 = I$, and interpret $\eta_0(x)$ as the characteristic function of the interval $[a, b]$. Clearly $0 < \eta_J(x) \le 1$ for $x \in (a, b)$, and $\eta_J(x) = 1$ for $a + JU \le x \le b - JU$.

The following lemma gives an alternative expression for the integral $I_J$.

Lemma 2.1. For any $c \in (a + JU, b - JU)$ we have

(2.1.3)  $I_J = (J! \, U^{J})^{-1} \sum_{j=0}^{J} (-1)^{j} \binom{J}{j} \Bigl( \int_{a+jU}^{c} (x - a - jU)^{J} h(x) \, dx + \int_{c}^{b-jU} (b - jU - x)^{J} h(x) \, dx \Bigr).$

Proof. The case J = 0 is trivial, and otherwise the assertion can be verified by induction using the recursion formula

(2.1.4)  $I_J(a, b) = U^{-1} \int_{0}^{U} I_{J-1}(a + u_J, b - u_J) \, du_J.$

For completeness we give some details of the calculations. Supposing that (2.1.3) holds for the index $J - 1$, we have by (2.1.4)

$I_J = U^{-1} \int_{0}^{U} \bigl( (J-1)! \, U^{J-1} \bigr)^{-1} \sum_{j=0}^{J-1} (-1)^{j} \binom{J-1}{j} \Bigl( \int_{a+jU}^{c} \bigl( \max(x - a - jU - u_J, 0) \bigr)^{J-1} h(x) \, dx + \int_{c}^{b-jU} \bigl( \max(b - jU - u_J - x, 0) \bigr)^{J-1} h(x) \, dx \Bigr) \, du_J$

$= (J! \, U^{J})^{-1} \sum_{j=0}^{J-1} (-1)^{j+1} \binom{J-1}{j} \Bigl( \int_{a+(j+1)U}^{c} (x - a - (j+1)U)^{J} h(x) \, dx + \int_{c}^{b-(j+1)U} (b - (j+1)U - x)^{J} h(x) \, dx \Bigr)$

$\quad + (J! \, U^{J})^{-1} \sum_{j=0}^{J-1} (-1)^{j} \binom{J-1}{j} \Bigl( \int_{a+jU}^{c} (x - a - jU)^{J} h(x) \, dx + \int_{c}^{b-jU} (b - jU - x)^{J} h(x) \, dx \Bigr)$

$= (J! \, U^{J})^{-1} \sum_{j=1}^{J-1} \Bigl( \binom{J-1}{j-1} + \binom{J-1}{j} \Bigr) (-1)^{j} \Bigl( \int_{a+jU}^{c} (x - a - jU)^{J} h(x) \, dx + \int_{c}^{b-jU} (b - jU - x)^{J} h(x) \, dx \Bigr)$

$\quad + (J! \, U^{J})^{-1} \Bigl( (-1)^{J} \Bigl( \int_{a+JU}^{c} (x - a - JU)^{J} h(x) \, dx + \int_{c}^{b-JU} (b - JU - x)^{J} h(x) \, dx \Bigr) + \int_{a}^{c} (x - a)^{J} h(x) \, dx + \int_{c}^{b} (b - x)^{J} h(x) \, dx \Bigr),$

which yields (2.1.3) for the index $J$.
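The combinatorial content of this induction is the binomial identity recorded as (2.1.5) in the remark below; since it is just the statement that the $J$-th finite difference of $z^J$ with step $U$ equals $J!\,U^J$, it can be verified exactly in rational arithmetic:

```python
from fractions import Fraction
from math import comb, factorial

def eta_normalisation(J: int, U: Fraction, z: Fraction) -> Fraction:
    # (J! U^J)^{-1} * sum_{j=0}^{J} (-1)^j * C(J, j) * (z - j*U)^J, cf. (2.1.5)
    s = sum((-1) ** j * comb(J, j) * (z - j * U) ** J for j in range(J + 1))
    return s / (factorial(J) * U ** J)

# The identity holds for every z; a few arbitrary rational test values:
for J in (1, 2, 3, 5):
    assert eta_normalisation(J, Fraction(1, 3), Fraction(7, 2)) == 1
    assert eta_normalisation(J, Fraction(2), Fraction(-4)) == 1
print("ok")
```

This is exactly the normalisation $\eta_J(x) = 1$ on the middle interval $[a + JU, b - JU]$.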

Remark. As a corollary of (2.1.3), we obtain the identity

(2.1.5)  $(J! \, U^{J})^{-1} \sum_{j=0}^{J} (-1)^{j} \binom{J}{j} (z - jU)^{J} = 1.$

Indeed, this holds for $z = x - a$ with $a + JU \le x \le c$, since $\eta_J(x) = 1$ in this interval. Then, by analytic continuation, (2.1.5) holds for all complex $z$. Of course, (2.1.5) can also be verified directly in an elementary way.

Before going into the formulations of the saddle-point theorems, it is convenient to list for future reference a number of conditions on the functions $f$ and $g$.

(i) $f(x)$ is real for $a \le x \le b$.

(ii) $f$ and $g$ are holomorphic in the domain

$D = \bigl\{ z \;\big|\; |z - x| < \mu(x) \text{ for some } x \in [a, b] \bigr\},$

where $\mu(x)$ is a positive function, which is continuous and piecewise continuously differentiable in the interval $[a, b]$.

(iii) There are positive functions F(x) and G(x) such that for z x < | − | µ(x) and a x b ≤ ≤ g(z) G(x), | | ≪ 1 f ′(z) F(x)µ(x)− . ≪

(iv) f ′′(x) > 0 and 2 f ′′(x) F(x)µ(x)− ≫ for a x b. ≤ ≤ 50 2. Exponential Integrals

(v) µ (x) 1 for a x b whenever µ (x) exists. ′ ≪ ≤ ≤ ′ (vi) F(x) 1 for a x b. ≫ ≤ ≤

Since f ′(x) + α is monotonically increasing by (iv), it has at most one zero, say at x , in the interval (a, b). Whenever terms involving x occur in the sequel,◦ it should be understood that these terms are to be◦ omitted if x does not exist. ◦ Remark . By Cauchy’s integral formula for the derivatives of a holo- 53 morphic function, it follows from (ii) and (iii) that

(n) n n (2.1.6) f (x) n!2 F(x)µ(x)− for a x b, n = 1, 2,... ≪ ≤ ≤

Hence the conditions (iii) and (iv) together imply that

2 (2.1.7) f ′′(x) F(x)µ(x)− for a x b. ≍ ≤ ≤ Next we state two saddle-point theorems. The former of these, due to F.V. Atkinson ([2], Lemma 1), deals with the integral I, and the latter is its generalization to IJ . Let

1/2 J 1 (2.1.8) EJ(x) = G(x) f ′(x) + α + f ′′(x) − − .   In the next theorem, and also later in this chapter, the unspecified constant A will be supposed to be positive.

Theorem 2.1. Suppose that the conditions (i) - (v) are satisfied, and let I be defined as in (2.1.1). Then

1/2 (2.1.9) I = g(x ) f ′′(x )− e( f (x ) + αx + 1/8) ◦ ◦ ◦ ◦ b + o  G(x) exp( A α µ(x) AF(x)) dx Z − | | −   a     3/2  + o G (x ) µ (x ) F (x )− + o (E (a))+ o (E (b)) .  ◦ ◦ ◦  ◦ ◦ 2.1. A Saddle-Point Theorem for 51

Theorem 2.2. Let U > 0, J a fixed nonnegative integer, JU < (b a)/2, − and suppose that the conditions (i)-(vi) are satisfied. Suppose also that 1/2 (2.1.10) U δ(x )µ(x )F(x )− , ≫ ◦ ◦ ◦ where δ(x) is the characteristic function of the union of the intervals (a, a + JU) and (b JU, b). Let I be defined as in (2.1.2). Then − J 1/2 (2.1.11) IJ = ξJ(x )g(x ) f ′′(x )− e ( f (x ) + αx + 1/8) ◦ ◦ ◦ ◦ ◦ b J + o  (1 + (µ(x)/U) )G(x) exp( A α µ(x) AF(x) dx Z − | | −   a     1/2 3/2  + o  1 + δ(x )F(x ) G(x )µ(x )F(x )−  ◦ ◦ ◦ ◦ ◦  J   J + o U− (E (a + jU) + E (b jU)) ,  J J −   Xj=0      where   54 (2.1.12)

ξJ(x ) = 1 for a + JU < xc < b JU, ◦ − (2.1.13) j1 J 1 J j ν J 2ν ξJ(x ) = J!U − ( 1) cν f ′′(x )− (x a jU) − − ◦ j! − ◦ ◦ − −   Xj=0 Xν J/2 ◦≤ ≤ for a < x a + JU with j1 the largest integer such that a + j1U < x , ◦ ≤ ◦ (2.1.14) j2 J 1 J j ν J 2ν ξJ(x ) = (J!U )− ( 1) cν f ′′(x )− (b jU x ) − ◦ j! − ◦ − − ◦ Xj=0 Xν J/2 ◦≤ ≤ for b JU x < b with j2 the largest integer such that b j2U > x . − ≤ ◦ − ◦ The cν are numerical constants. Proof. We follow the argument of Atkinson [2] with some modifications caused by the smoothing. There are four cases as regards the saddle point x : 1) a + JU < x < b JU, 2) x does not exist, 3) a < x ◦ ◦ − ◦ ◦ ≤ a + JU, 4) b JU x < b. Accordingly the proof will be in four parts. − ≤ ◦ 52 2. Exponential Integrals

1) Suppose first that a + JU < x < b JU. Put λ(x) = βµ(x), where ◦ − β is a small positive constant. Choose c = x in the expression ◦ (2.1.3) for IJ . The intervals of integration in (2.1.3) are replaced by the paths

shown in the figure. Here C1, C3, C3′ , and C5 are, respectively,

55 the line segments [a+ jU, a+ jU (1+i)λ(a+ jU)], [x (1+i)λ(x ), − o− o x ], [x , x + (1 )λ(x )], and [b jU + (1 + i)λ(b jU), b jU]. o o o i o − − − The curve C is defined by z = x (1 + i)λ(x), a + jU x x , 2 − ≤ ≤ o and analogously C is defined by z = x + (1 + i)λ(x), x x b 4 o ≤ ≤ − jU. By the holomorphicity assumption (ii) and Cauchy’s integral theorem, we have

(2.1.15) J J 1 J j J IJ = J!U − ( 1)  (z a jU) h(z) dz j! −  Z − −   Xj=0  C1+C2+C3   + (b jU z)J h(z) dz . Z − −  + +  C3′ C4 C5    To estimate the modulus of h(z), we need an upper bound for Re 2πi( f (z) + αz) . Let z = x + (1 + i)y, where a x b and { } ≤ ≤ y λ(x). By Taylor’s theorem, | |≤ (2.1.16) 2 f (z) + αz = f (x) + αx + ( f ′(x) + α)(1 + i)y + i f ′′(x)y + θ(x, y), 2.1. A Saddle-Point Theorem for 53

where ∞ θ(x, y) = f (n)(x)/n! ((1 + i)y)n. Xn=3   By (2.1.6), we have

3 3 θ(x, y) F(x) y µ(x)− , | | ≪ | | so that by (iv) 56 1 2 θ(x, y) y f ′′(x) | |≤ 2 if β is supposed to be sufficiently small. Then (2.1.16) gives

2 (2.1.17) Re 2πi( f (z) + αz) 2π( f ′(x) + α)y π f ′′(x)y { }≤− − for a x b and y λ(x). ≤ ≤ | |≤ Consider, in particular, the case y = sgn( f ′(x) + α)λ(x), which occurs in the estimation of the integrals over C2 and C4. The right hand side of (2.1.17) is now at most

A f ′(x) + α µ(x) AF(x). − | | − In the cases α 2 f (x) and α < 2 f (x) this is | |≥ | ′ | | | | ′ | A α µ(x) AF(x) ≤− | | − and AF(x) A α µ(x) AF(x), ≤− ≤− | | − respectively. Hence for z C C ∈ 2 ∪ 4 (2.1.18) h(z) G(x) exp( A α µ(x) AF(x)). | | ≪ − | | −

The paths Ci for i = 1, 2, 4 and 5 depend on j, so that for clarity we denote them by Ci( j). Let us first estimate the contribution of the integrals over the C2( j) and C4( j) to IJ . By the identity (2.1.5), the integrands in (2.1.15) combine to give simply h(z) on C ( j) C C C ( j), hence in particular on C ( j) C ( j). Thus, 2 ∪ 3 ∪ 3′ ∪ 4 2 ∪ 4 54 2. Exponential Integrals

by (2.1.18) and the assumption (v), viz. µ (x) 1, the integrals ′ ≪ in (2.1.15) restricted to C2( j) and C4( j) contribute 57

b JU − G(x) exp( A α µ(x) AF(x)) dx. ≪ Z − | | − a+JU

Integrals over the other parts of the C2( j) and C4( j) are estimated similarly, but noting that the function in front of h(z) is now J ≪ 1+(µ(x)/U) . In this way it is seen that the integrals over the C2( j) and C4( j) give together at most the first error term in (2.1.11).

Next we turn to the integrals over the C1( j) and C5( j). By (2.1.17) we have

∞ J J (z a jU) h(z) dz G(a + jU) y exp( 2π f ′(a + jU) Z − − ≪ Z − | C ( j) 1 ◦ 2 + α y π f ′′(a + jU)y ) dy | − E (a + jU), ≪ J and similarly for the integrals over the C5( j). Hence these inte- grals contribute the last error term in (2.1.11).

Finally, as was noted above, the integrals over C3 + C3′ give to- gether the integral

λ(x ) ◦ (2.1.19) K = (1 + i) h(x + (1 + i)y) dy. Z ◦ λ(x ) − ◦ Applying Taylor’s theorem and similar arguments as in the proof of (2.1.17), we find that for y λ(x ) | |≤ ◦ (2.1.20) 2 2 g(xo + (1 + i)y) = g(xo) + g′(xo)(1 + i)y + o G(xo)µ(xo)− y ,   or, more crudely,

1 (2.1.21) g(x + (1 + i)y) = g(x ) + o G(x )µ(x )− y , ◦ ◦ ◦ ◦ | |   2.1. A Saddle-Point Theorem for 55

and analogously

(2.1.22) f (x + (1 + i)y) + α(x + (1 + i)y) = f (x ) + αx ◦ ◦ ◦ ◦ 2 1 3 3 4 4 + i f ′′(x )y + f ′′′(x ) (1 + i) y + o F(x )µ(x )− y ◦ 6 ◦ ◦ ◦   (2.1.23) 2 3 3 = f (x ) + αx + i f ′′(x )y + o F(x )µ(x )− y . ◦ ◦ ◦  ◦ ◦ | |  58 Let

1/3 (2.1.24) v = λ(x )F(x )− , ◦ ◦

and write K = K1 + K2 + K3, where the integrals K1, K2, and K3 are taken over the intervals [ λ(x ), v], [ v, v], and [v, λ(x )], − ◦ − − ◦ respectively. First, by (2.1.17) we have

∞ 2 K1 + K3 G(x ) exp π f ′′(x )y dy ≪ ◦ Z − ◦ v   1 1 2 G(x )v− f ′′(x )− exp πv f ′′(x ) ≪ ◦ ◦ − ◦ 2/3  1/3 G(x )µ(x )F(x )− exp AF(x ) , ≪ ◦ ◦ ◦ − ◦  whence by (vi)

3/2 (2.1.25) K1 + K3 G(x )µ(x )F(x )− . ≪ ◦ ◦ ◦

The integral K2, which will give the saddle-point term is evaluated by applying (2.1.20) and (2.1.22). The latter implies that for y | |≤ v

(2.1.26) e ( f (x + (1 + i)) + α (x + (1 + i)y)) ◦ ◦ 2 = e ( f (x ) + αx ) exp 2π f ′′(x )y ◦ ◦ − ◦ ×   56 2. Exponential Integrals

1 3 3 4 4 1 + πi f ′′(x ) (1 + i) y + o F(x )µ(x )− y 3 ◦ ◦ ◦ × (   2 6 6 + o F(x ) µ(x )− y ;  ◦ ◦  ) note that the last two terms in (2.1.22) are 1 by the choice ≪ 59 (2.1.24) of v. When this equation is multiplied by (2.1.20) and the product is integrated over the interval [ v, v], the integrals of − those explicit terms involving odd powers of y vanish, and we end up with

(2.1.27) v 2 K2 = (1 + i)g(x )e ( f (x ) + αx ) exp 2π f ′′(x )y dy ◦ ◦ ◦ Z − ◦ v   v − 2 2 2 4 4 +o G(x ) exp 2π f ′′(x )y µ(x )− y + F(x )µ(x )− y +  ◦ Z − ◦ ◦ ◦ ◦  v     −  2 6 6  F(x ) µ(x )− y dy . ◦ ◦   In the main term, the integration can be extended to the whole line 3/2 with an error G(x )µ(x )F(x )− , and since ≪ ◦ ◦ ◦ ∞ (2.1.28) exp cy2 dy = (π/c)1/2(c > 0), Z −  −∞ the leading term in (2.1.27) gives the leading term in (2.1.11) with ξ(x ) = 1, in accordance with (2.1.12). Further, as a generaliza- tion◦ of (2.1.28), we have

∞ 2 2ν ν 1/2 (2.1.29) exp cy y dy = dνc− − (c > 0,ν 0) Z −  ≥ −∞ where the dν are certain numerical constants, and by using this the 3/2 error terms in (2.1.27) are seen to be G(x )µ(x )F(x )− . ≪ ◦ ◦ ◦ 2.1. A Saddle-Point Theorem for 57

2) Suppose next that x does not exist. Then f ′(x) + α is of the same sign, say positive, in◦ the whole interval (a, b). Let c be a point in the interval (a + JU, b JU), write I as in (2.1.3), and transform − J the integrals over the intervals [a + jU, c] and [c, b jU] using the − contours shown in the figure, where the curvilinear part is defined by z = x + (1 + i)λ(x) with a + jU x b jU. Observe that the ≤ ≤ − integrals over the segment [c, c + (1 + i)λ(c)] cancel, by (2.1.5). Integrals over the other parts of the contours are estimated as in the preceding case, and these contribute the first and last error 60 term in (2.1.11).

If f ′(x) + α is negative, then an analogous contour is used in the lower half-plane.

3) Consider now the case a < x a + JU. Again choose c ◦ ≤ ∈ (a + JU, b JU). In (2.1.3) the integrals over [c, b jU] are − − written as in the preceding case, and likewise the integrals over [a + jU, c] for j > j1, in which case the saddle point x does not lie in the (open) interval of integration. On the other◦ hand, for j j the contour is of a shape similar to the first case. Only the ≤ 1 last mentioned integrals require a separate treatment; the others give error terms as before. A new complication is that the sum over j j of the integrals ≤ 1 over the line segment L = [x (1 + i)λ(x ), x + (1 + i)λ(x )] ◦ − ◦ ◦ ◦ cannot be written as an integral of h(z), but the integrals have to be evaluated separately. Other parts of the contours do not present any new difficulties.

Thus, consider the integral 61 58 2. Exponential Integrals

J J (2.1.30) K = U− (z a jU) h(z) dz. Z − − L

This is of the same type as the integral K in (2.1.19) - with g(z) replaced by U J(z a jU)Jg(z) - so that in principle it would − − − be possible to apply the result of the previous discussion as such. J J But then the function G(x) would have to be replaced by U− µ(x) G(x), which may become large if J is large. Therefore we modify the argument in order to prevent the error term from becoming impracticably large. But the first step in the treatment of the integral K is as before. Namely, let v be as in (2.1.24), put z = x + (1 + i)y, and let ◦ K1 and K3 be the integrals with respect to y over the intervals [ λ(x ), v] and [v, λ(x )]. Then K1 + K3 can be estimated as be- − ◦ − ◦ fore, except that the extra factor 1+(v/U)J has to be inserted. But since (v/U)J F(x )J/6 by (2.1.10) and (2.1.24), the estimate ≪ ◦ (2.1.25) remains valid even for the new integrals K1 and K3.

The new integral K2, which represents the main part of K, is now v J J K2 = (1 + i)U− (x a jU + (1 + i)y) h(x + (1 + i)y) dy. Z ◦ − − ◦ v − For the function h(x + (1 + i)y) we are going to use a somewhat cruder approximation◦ than before. By (2.1.21) and (2.1.23) we have

(2.1.31) h(x + (1 + i)y) ◦ 1 = g(x )e( f (x ) + αx ) + o G(x )µ(x )− y ◦ ◦ ◦ ◦ ◦ | | n 3 3  2 +o F(x )G(x )µ(x )− y exp 2π f ′′(x )y .  ◦ ◦ ◦ | | o × − ◦  62 Since by (2.1.29), (2.1.10), and (iv) v J J J ν 2 U− U + y y exp 2π f ′′(x )y dy Z | | | | − ◦ v     − 2.1. A Saddle-Point Theorem for 59

(ν+1)/2 J (ν+J+1)/2 f ′′(x )− + U− f ′′(x )− ≪ ◦ ◦ ν+1 (ν+1)/2 µ(x ) F(x )− ≪ ◦ ◦ the contribution of the error terms in (2.1.31) to K2 is G(x ) 1 ≪ ◦ µ(x ) F(x )− . Hence ◦ ◦ v J J K2 = √2g(x )e( f (x ) + αx + 1/8)U− (x a jU + (1 + i)y) ◦ ◦ ◦ Z ◦ − − v − 2 1 exp 2π f ′′(x )y dy + o G(x )µ(x )F(x )− − ◦ ◦ ◦ ◦    J  J 2ν = √2g(x )e ( f (x ) + αx + 1/8) U− (x a jU) − ◦ ◦ ◦ ◦ − − × 0 Xν J/2 ≤ ≤ v 2ν J 2ν 2 (1 + i) y exp 2π f ′′(x )y dy × 2ν! Z − ◦ v   − 1 + o G(x )µ(x )F(x )− .  ◦ ◦ ◦ 

As before, the integrals here can be extended to the whole real line with a negligible error. Then, evaluating the new integrals by (2.1.29) we find that with

ν ν 1/2 2ν J cν = 2− π− − (1 + i) dν 2ν!

and with ξ(x ) as in (2.1.13), the resulting expression for IJ is as in (2.1.11). ◦

4) The remaining case b jU x < b is analogous to the preceding − ≤ ◦ one. 

Remark . If f satisfies the conditions of Theorem 2.1 and 2.2 except that f ′′(x) is negative in the interval [a, b], then the results hold with the 63 minor modifications that in the main term the factor e( f (x )+αx +1/8) ◦ ◦ is to be replaced by e( f (x ) + αx 1/8), and f ′′(x ) should stand in ◦ ◦ − | ◦ | place of f ′′(x ). ◦ 60 2. Exponential Integrals

2.2 Smoothed Exponential Integrals without a Sad- dle Point

Theorem 2.2 covers also the case of exponential integrals IJ without a saddle point. However, in applications, the condition (iv) on f ′′ may not be fulfilled. Nevertheless, if the assumption of f ′ is strengthened, then no assumption on f ′′ is needed. The next theorem is a result of this kind.

Theorem 2.3. Suppose that the functions f and g satisfy the conditions (i) and (ii) in the preceding section, with µ(x) = µ, a constant. Suppose also that

(2.2.1) g(z) G for z D, | | ≪ ∈ (2.2.2) f ′(x) M for a x b, | |≍ ≤ ≤ and

(2.2.3) f ′(z) M for z D. | | ≪ ∈ Let I be as in (2.1.2) with α = 0 and 0 < JU < (b a)/2. Then J −

J J 1 J 1 J AMµ (2.2.4) IJ U− GM− − + µ U − + b a Ge− . ≪  −  Proof. By (2.2.2), the function f ′(x) cannot change its sign in the inter- val [a, b]. Suppose, to be specific, that f ′(x) is positive. By (2.2.3) and Cauchy’s integral formula we have

(k) k+1 f (x) k!M(µ/2)− for k = 1, 2,... and a x b. | | ≪ ≤ ≤ Then, by (2.2.2) and Taylor’s theorem, it is seen that

(2.2.5) Re(2πi f (z)) < AMy − 64 for z = x + yi, a x b, and 0 y βµ = λ, where β is a sufficiently ≤ ≤ ≤ ≤ small positive constant.  2.2. Smoothed Exponential Integrals without a Saddle Point 61

Now, for a proof of (2.2.4), the integral IJ is written as in (2.1.3), where the intervals [a+ jU, c] and [c, b jU] are deformed to rectangular − contours respectively with vertices a + jU, a + jU + iλ, c + iλ, and c, or c, c + iλ, b jU + iλ, and b jU. Then (2.2.4) follows easily by (2.2.1), − − (2.2.5) and (2.1.5).

Notes

In the saddle-point lemma of Atkinson (Lemma 1 in [2]), the assump- tions on the functions F and µ are weaker than those in Theorem 2.2, for the conditions (v) and (vi) are missing. Actually we posed these just for simplicity. On the other hand, one of the conditions in [2] is stronger 1 than ours, for in place of (iv) there is an upper bound for f ′′(z)− for z D. However, in the proof this is needed only on the real interval ∈ [a, b], in which case it coincides with (iv). The complications that arose in Theorem 2.2 when x lies near a or b seem inevitable, for then the integrand is almost stationar◦ y near a or b, and consequently there is not so much to be gained by smoothing. The case J = 1 of Theorem 2.3 is Lemma 2 in [16] and Lemma 2.3 in [13]. Our proof is not a direct generalization of that in [16] which turned out to become somewhat tedious for general J. Theorems 2.2 and 2.3 may be useful in problems in which the stan- 65 dard results (corresponding to J = 0) on exponential integrals are not accurate enough. An example of such an application is the improvement of the error terms in the approximate functional equations for ζ2(s) and ϕ(s) in [19]. The parameters U and J, which determine the smoothing, can be chosen differently at a and b. Such a version of Theorem 2.2 is given in [19], and the proof is practically the same. The corresponding smoothed integrals is of the type

U U V V b v − J K U− V− du1 duJ dv1 dvK h(x) dx, Z ··· Z Z ··· Z Z a+u ◦ ◦ ◦ ◦ 62 2. Exponential Integrals where u = u + + u and v = v + + v . For U = V and J = K, 1 ··· J 1 ··· K this amounts to the integral IJ in (2.1.2). Chapter 3

Transformation Formulae for Exponential Sums

THE BASIC RESULTS of these notes, formulae relating exponential 66 sums b(m)g(m)e( f (m) = d(m) or a(m),

M1 Xm M2 ≤ ≤ or their smoothed versions, to other exponential sums involving the same b(m), are established in this chapter by combining the summation formulae of Chapter 1 with the theorems of Chapter 2 on exponential integrals. The theorems in [16] and [17] concerning Dirichlet polyno- mials (which will be discussed in 4.1) were the first examples of such § results. As will be seen, the methods of these papers work even in the present more general context without any extra effort.

3.1 Transformation of Exponential Sums

To begin with, we derive a transformation formula for the above men- tioned sum with b(m) = d(m). The proof is modelled on that of Theorem 1 in [16]. In the following theorems, δ1, δ2,... denote positive constants which may be supposed to be arbitrarily small. Further, put L = log M1 for short.

63 64 3. Transformation Formulae for Exponential Sums

Theorem 3.1. Let 2 M < M 2M , and let f and g be holomorphic ≤ 1 2 ≤ 1 functions in the domain

(3.1.1) D = z z x < cM for some x [M , M ] , { | − | 1 ∈ 1 2 } 67 where c is a positive constant. Suppose that f (x) is real for M x 1 ≤ ≤ M2. Suppose also that, for some positive numbers F and G,

(3.1.2) g(z) G, | | ≪ 1 (3.1.3) f ′(z) FM− | | ≪ 1 for z D, and that ∈ 2 (3.1.4) (0 <) f ′′(x) FM− for M x M . ≫ 1 1 ≤ ≤ 2 Let r = h/k be a rational number such that

1/2 δ1 (3.1.5) 1 k M − , ≤ ≪ 1 1 (3.1.6) r FM− | |≍ 1 and

(3.1.7) f ′(M(r)) = r

for a certain number M(r) (M , M ). Write ∈ 1 2 M = M(r) + ( 1) jm , j = 1, 2. j − j Suppose that m m , and that 1 ≍ 2

δ2 1/2 1 δ3 (3.1.8) M1 max M1F− , hk m1 M1− .  | | ≪ ≪ Define for j = 1, 2

j 1 (3.1.9) p , (x) = f (x) rx + ( 1) − 2 √nx/k 1/8 , j n − − − 2 2   (3.1.10) n j = r f ′ M j k M j,  −   3.1. Transformation of Exponential Sums 65

and for n < n j let x j,n be the (unique) zero of p′j,n(x) in the interval (M1, M2). Then

(3.1.11) d(m)g(m)e( f (m))

M1 Xm M2 ≤ ≤ 1 1/2 = k− log M(r) + 2γ 2 log k g(M(r)) f ′′(M(r))− e( f (M(r)) −  2 1/2 1/2 j 1 rM(r) + 1/8) ++i2− k− ( 1) − − − Xj=1 1/4 1/4 1/2 d(n)e nh¯ n− x− g x , p′′ x , − k − j,n j n j,n j n × nX

Proof. Suppose, to be specific, that r > 0, and thus h > 0. The proof is similar for r < 0. The assertion (3.1.11) should be understood as an asymptotic result, in which M1 and M2 are large. Then the numbers F and n j are also large. In fact,

(3.1.12) F M1/2+δ1 ≫ 1 and

(3.1.13) n hkM2δ2 . j ≫ 1 For a proof of (3.1.12), note that by (3.1.6) and (3.1.5)

1 1/2+δ1 F M r k− M M . ≫ 1 ≥ 1 ≫ 1

Consider next the order of n j. By (3.1.1) and the holomorphicity of f in the domain (3.1.1) we have f (x) FM 2, which implies together ′′ ≪ 1− with (3.1.4) that

2 (3.1.14) f ′′(x) FM− for M x M . ≍ 1 1 ≤ ≤ 2 66 3. Transformation Formulae for Exponential Sums

Thus, by (3.1.7),

2 (3.1.15) r f ′ M j m jFM1− , −   ≍

so that by (3.1.10) and (3.1.6) we have

2 2 2 3 1 3 1 2 (3.1.16) n F k m M− F− h k− m . j ≍ j 1 ≍ j 69 This gives (3.1.13) owing to the estimates m M1+δ2 F 1/2 and j ≫ 1 − F M r. Also, it follows that n MA, for h M and m M by ≪ 1 j ≪ 1 ≪ 1 j ≪ 1 (3.1.8). The numbers n j are determined by the condition

(3.1.17) p M = 0. ′j,n j j   Then clearly j ( 1) p′j,n M j > 0 for n < n j. −   On the other hand, by (3.1.7)

j 1/2 1/2 1 ( 1) p′ (M(r)) = n M(r)− k− < 0 − j,n − for all positive n. Consequently, for n < n j there is a zero x j,n of p (x) in the interval (M , M ), and moreover x , (M , M(r)), x , ′j,n 1 2 1 n ∈ 1 2 n ∈ (M(r), M2). Also, it is clear that p′j,n has no zero in the interval (M1, M2) if n n . ≥ j To prove the uniqueness of x j,n, we show that p′′j,n is positive and thus p′j,n is increasing in the interval [M1, M2]. In fact,

2 (3.1.18) p′′ (x) FM− for M x M , n 2n , j,n ≍ 1 1 ≤ ≤ 2 ≤ j at least if M1 is supposed to be sufficiently large. For by definition

j 1 1/2 3/2 1 p′′ (x) = f ′′(x) + ( 1) n x− k− , j,n − 2 where by (3.1.16) and (3.1.8)

1/2 3/2 1 3 2 δ3 n x− k− Fm M− FM− − , ≪ 1 1 ≪ 1 3.1. Transformation of Exponential Sums 67

70 so that by (3.1.14) the term f ′′(x) dominates. After these preliminaries we may go into the proof of the formula (3.1.11). Denote by 5 the sum under consideration. Actually it is easier to deal with the smoothed sum U 1 (3.1.19) S ′ = U− S (u) du, Z ◦ where

(3.1.20) S (u) = d(m)g(m)e( f (m)).

M1+uXm M2 u ≤ ≤ − The parameter U will be chosen later in an optimal way; presently we suppose only that 1 (3.1.21) Mδ4 U min (m , m ) . 1 ≪ ≤ 2 1 2 Since (3.1.22) d(n) y log x for xǫ y x, x Xn x+y ≪ ≪ ≪ ≤ ≤ (see [25]), we have

(3.1.23) S S ′ GUL. − ≪ The summation formula (1.9.1) is now applied to the sum S (u), which is first written as S (u) = d(m)g(m)e( f (m) mr)e(mr), − aXm b ≤ ≤ with a = M +u, b = M u. We may assume that neither of the numbers 1 2− a and b is an integer, for the value of S (u) for the finitely many other values of u is irrelevant in the integral (3.1.19). Then by (1.9.1) (3.1.24) b 1 S (u) = k− (log x + 2γ 2 log k)g(x)e( f (x) rx) dx Z − − a 68 3. Transformation Formulae for Exponential Sums

b 1 ∞ + k− d(n) 2πek nh¯ Y 4π √nx/k + 4ek nh¯ Z − − ◦ Xn=1 a n       K 4π √nx/k g(x)e( f (x) rx) dx ◦  o − 1 ∞ = k− I + d(n) ek nh¯ In + ek nh¯ in ,  ◦ −   Xn=1         71 say.   The integrals in are very small and quite negligible. Indeed, by (3.1.5) we have √nM /k √nMδ1 , so that by (1.3.17) 1 ≫ 1

∞ ∞ 1 1 δ1 (3.1.25) k− d(n) i k− GM d(n) exp A √nM | n| ≪ 1 − 1 Xn=1 Xn=1   δ1 G exp AM1 . ≪ −  Consider next the integral I . We apply Theorem 2.1 with α = r ◦ − and µ(x) a constant function M . The assumptions of Theorem 2.1 ≍ 1 are satisfied in virtue of the conditions of our theorem. By (3.1.7), the 1 saddle point is M(r). Hence the saddle-point term for k− I equals the leading term in (3.1.11). ◦ The first error term in (2.1.9) is

LGM exp( AF) ≪ 1 − which is negligible by (3.1.12). The last two error terms contribute (3.1.26) 1 1 1/2 1 − 1/2 1 − GL f ′(a) r + F M1− + f ′(b) r + F M1− . ≪  − −      For same reasons as in (3.1.15), we have

2 f ′(a) r Fm M− , − ≍ 1 1

and likewise for f (b) r . Hence the expression (3.1.26) is F 1 | ′ − | ≪ − 72 Gm 1 M2L, which is further FGm 1r 2L by (3.1.6). The second error 1− 1 ≪ 1− − 3.1. Transformation of Exponential Sums 69

3/2 term in (2.1.9), viz. o(GM1F− L), can be absorbed into this, for

3/2 1 1 2 1 2 M F− r− FM− r− Fm− r− . 1 ≪ ≪ 1 ≪ 1 1 2 1 Hence the error terms for k− I give together o(FGh− km1− L), which is the first error term in (3.1.11).◦ We are now left with the integrals

b

(3.1.27) In = 2π Y 4π √nx/k g(x)e( f (x) rx) dx. − Z ◦ − a   By (1.3.9), the function Y can be written in terms of Hankel func- tions as ◦ 1 (3.1.28) Y (z) = H(1)(z) H(2)(z) , ◦ 2i ◦ − ◦   where by (1.3.13)

1/2 ( j) 2 j 1 1 (3.1.29) H (z) = exp ( 1) − i z π 1 + g j(z) . ◦ πz 4 ! − − !!  

The functions g j(z) are holomorphic in the half-plane Re z > 0, and by (1.3.14)

1 (3.1.30) g (z) z − for z 1, Re z > 0. j ≪ | | | |≥

By (3.1.27) - (3.1.29) we may write

(3.1.31) I = I(1) I(2), n n − n where

(3.1.32) b ( j) 1/2 1/2 1/4 1/4 In = i2− k n− x− g(x) 1 + g j 4π √nx/k e p j,n(x) dx. Z a      70 3. Transformation Formulae for Exponential Sums

( j) For n 2n we apply Theorem 2.1 to I , again with α = r. The 73 ≤ j n − function j 1 f (x) + ( 1) − 2 √nx/k 1/8 − −   now stands for the function f , and moreover µ(x) M and F(x) = F. ≍ 1 The conditions of the theorem are satisfied, in particular the validity of the condition (iv) on f ′′ follows from (3.1.18), and the condition (iii) on f ′ can be checked by (3.1.3) and (3.1.16). The number x j,n is, by ( j) definition, the saddle point for In , and it lies in the interval (M1, M2) ( j) if and only if n < n j. However, in In the interval of integration is [a, b] = [M +u, M u], and x , (a, b) if and only if n < n (u), where 1 2 − j n ∈ j j 1 2 2 j 1 (3.1.33) n (u) = r f ′ M + ( 1) − u k M + ( 1) − u j − j − j −      in analogy with (3.1.10). But for simplicity we count the saddle-point terms for all n < n j, and the number of superfluous terms is then

2 2 3 (3.1.34) 1 + n n (U) 1 + F k m M− U. ≪ j − j ≪ 1 1 1 ( j) The saddle-point term for k− In is

1/2 1/2 1/4 1/4 (3.1.35) i2− k− n− x− g x j,n 1 + g j 4π √nx j,n/k j,n × 1/2      − p′′j,n x j,n e p j,n x j,n + 1/8 . ×       Multiplied by ( 1) j 1d(n)e ( nh¯), these agree, up to g (...), with − − k − j the individual terms of the sums on the right of (3.1.11). The effect of the omission of g j(...) is by (3.1.30), (3.1.16), (3.1.18), and (3.1.5)

1/2 1/2 1/4 3/4 F− Gk M d(n)n− ≪ 1 nXM1 ≪ 1/2 1/2 1/2 Gkm M− L Gm L, ≪ 1 1 ≪ 1 74 which can be absorbed into the second error term in (3.1.11). The extra saddle-point terms, counted in (3.1.34), contribute at most

2 2 3 1/2 1/2 3/4+ǫ 1/4 1 + F k m M− U F− Gk− M n− , ≪ 1 1 1 1   3.1. Transformation of Exponential Sums 71 which, by (3.1.16) and (3.1.6), is

1/2 3/2 1/2 1/2 ǫ 1/2 3/2 1/2 1/2 ǫ (3.1.36) F Gh− k m− M + F− Gh k− m M U. ≪ 1 1 1 1 Now, allowing for these error terms, we have the same saddle-point terms given in (3.1.11), for all sums S (u), and hence by (3.1.19) for S ′, too. ( j) Consider now the error terms when Theorem 2.1 is applied to In for n 2n . The first error term in (2.1.9) is clearly negligible. Further, ≤ j the contribution of the error terms involving x to S (u) is ◦ 3/2 1/2 3/4 1/4 F− Gk− M d(n)n− , ≪ 1 nXn1 ≪ which, by (3.1.16), (3.1.8) and (3.1.5), is

3/2 3/2 1/2 1/2 1/2 Gkm M− L Gkm M− L Gm . ≪ 1 1 ≪ 1 1 ≪ 1 This is smaller than the second error term in (3.1.11). The last two error terms are similar, so it suffices to consider o(E (a)) 1 ( j) ◦ as an example. By (2.1.8) and (3.1.32), this error term for k− In is

1/2 1/4 1/4 1/2 1 Gk− M− n− p′ (a) + p′′ (a) − . ≪ 1 j,n j,n   Consider the case j = 1; the case j = 2 is less critical since p2′ ,n(a) 1 2 | | cannot be small. Now p′ (a) = 0 and p′′ (a) F− r , so it is easily 75 1,n1(u) 1,n ≍ seen that

1/2 1 1/2 2 1/2 1 F r− for n n1(u) F− h m1, p′ (a) + p′′ (a) − | − | ≪ 1,n 1,n  1/2 1/2 1 ≪ kM1 n1 n n1(u) − otherwise    | − |  Note that by (3.1.8) and (3.1.6)

1/2 2 1 2 1+δ2 δ2 F− h m F− h M hkM . 1 ≫ 1 ≫ 1 Hence, by (3.1.22), the mean value of d(n) in the interval n n (u) | − 1 | F 1/2h2m can be estimated as o(L). It is now easily seen that the ≪ − 1 contribution to S (u) of the error terms in question is

Gh1/2k1/2m1/2L2, ≪ 1 72 3. Transformation Formulae for Exponential Sums

which is the second error term in (3.1.11). ( j) The smoothing device was introduced with the integrals In for n > 2n j in mind. By (3.1.19), (3.1.24), (3.1.31), and (3.1.32), their contribu- tion to S ′ is equal to

2 1/2 1/2 j 1 1/4 (3.1.37) i2− k− ( 1) − d(n)e nh¯ n− − k − × Xj=1 nX>2n j  

M2 1/4 η1(x)x− g(x) 1 + g j 4π √nx/k e p j,n(x) dx, × Z M1     

where η1(x) is a weight function in the sense of Chapter 2, with J = 1 and U being the other smoothing parameter. The series in (3.1.24) is boundedly convergent with respect to u, by Theorem 1.7, so that it can be integrated term by term. The smoothed exponential integrals in (3.1.37) are estimated by 76 Theorem 2.3, where p , (z) stands for f (z), and µ m . To begin with, j n ≍ 1 we have to check that the conditions of this theorem are satisfied. We have j 1 1/2 1/2 1 p′ (z) = f ′(z) r + ( 1) − n z− k− . j,n − − Let n > 2n j, and let z lie in the domain D, say D , of Theorem 2.3. Then by (3.1.16) ◦ 1/2 1/2 1 2 n z− k− m FM− . ≫ 1 1

On the other hand, since f (M( r)) r = 0 and f (z) FM 2 for ′ − | ′′ | ≪ 1− z D by (3.1.3) and Cauchy’s integral formula, we also have ∈ ◦ 2 f ′(z) r m FM− . | − | ≪ 1 1 Thus, the condition (2.2.3) holds with

1 1/2 1/2 (3.1.38) M = k− M1− n . Further, to verify the condition (2.2.2), compare p (x) with p (x), ′j,n ′j,n j using (3.1.17) and the fact that p (x) is increasing in the interval ′j,n j [M1, M2]. 3.1. Transformation of Exponential Sums 73

We may now apply the estimate (2.2.4) in (3.1.37). The second term on the right of (2.2.4) is exponentially small, for by (3.1.38), (3.1.16), and (3.1.8)

1 1/2 1/2 1/2 2 2 Mµ k− M− n m (n/n ) Fm M− ≫ 1 1 ≫ 1 1 1 (n/n )1/2 M2δ2 . ≫ 1 1 Hence these terms are negligible. 1 2 The contribution of the terms U− GM− in (2.2.4) to (3.1.37) is 3/2 3/4 1 5/4 Gk M U− d(n)n− ≪ 1 nXn1 ≫ 3/2 3/4 1/4 1 Gk M n− U− L ≪ 1 1 1/2 3/2 1/2 1 GF− kM m− U− L ≪ 1 1 3/2 5/2 1/2 1 GFh− k m− U− L. ≪ 1 77 Combining this with (3.1.23) and (3.1.36), we find that (3.1.11) holds, up to the additional error terms

1/2 3/2 1/2 1/2 ǫ (3.1.39) GUL + F Gh− k m− M ≪ 1 1 1/2 3/2 1/2 1/2 ǫ 3/2 5/2 1/2 1 + F− Gh k− m1 M1U + GFh− k m1− U− L. Here the second term is superseded by the last term in (3.1.11). Fur- ther, the first and last term coincide with the last term in (3.1.11) if we choose

1/2 3/4 5/4 1/4 (3.1.40) U = F h− k m1− . Then, by (3.1.8), the third term in (3.1.39) is

3/4 3/4 1/4 1/2 1/2 ǫ δ2/4 Gh k m M G(hk) m M − , ≪ 1 1 ≪ 1 1 which can be absorbed into the second error term in (3.1.11). It should still be verified that the number U in (3.1.40) satisfies the condition (3.1.21). By (3.1.8) and (3.1.6) we have

1 1 1+δ2 1/2 1/4 1/4 δ2 δ2 Um− U M F− − (hk) m− M− M− 1 ≪ 1 ≪ 1 1 ≪ 1   74 3. Transformation Formulae for Exponential Sums

and also, in the other direction,

1/2 3/4 5/4 1/4+δ3/4 U F h− k M− ≫ 1 1/4+δ3/4 1/4 3/4 δ3/4 M h− k M ≫ 1 ≫ 1 78 Hence (3.1.21) holds, and the proof of the theorem is complete. The next theorem is an analogue of Theorem 3.1 for exponential sums involving the Fourier coefficients a(n) of a cusp form of weight κ. The proof is omitted, because the argument is practically the same; the summation formula (1.9.2) is just applied in place of (1.9.1). Note that in (1.9.2) there is nothing is correspond to the first term in (1.9.1), and consequently in the transformation formula there are no counterparts for the first explicit term and the first error term in (3.1.11).  Theorem 3.2. Suppose that the assumptions of Theorem 3.1 are satis- fied. Then (3.1.41) a(m)g(m)e( f (m))

M1 Xm M2 ≤ ≤ 2 1/2 1/2 j 1 κ/2+1/4 = i2− k− ( 1) − a(n)e nh¯ n − k − × Xj=1 nX

We now give analogues of Theorem 3.1 and 3.2 for smoothed exponen- tial sums provided with weights of the type ηJ(n). We have to pay for the better error terms in these new formulae by allowing certain weights to appear in the transformed sums as well. Theorem 3.3. Suppose that the assumptions of Theorem 3.1 are satis- fied. Let

1/2 1+δ4 1/2 1 δ4 (3.2.1) U F− M F r− M , ≫ 1 ≍ 1 3.2. Transformation of Smoothed Exponential Sums 75 and let J be a fixed positive integer exceeding a certain bound (which 79 depends on δ4). Write for j = 1, 2

j 1 j M′ = M + ( 1) − JU = M(r) + ( 1) m′ , j j − − j and suppose that m m . Let n be as in (3.1.10), and define analo- ′j ≍ j j gously

2 2 (3.2.2) n′ = r f ′ M′ k M′. j − j j    Then, defining the weight function ηJ(x) in the interval [M1, M2] as in (2.1.2), we have

(3.2.3) ηJ(m)d(m)g(m)e( f (m)) M1 Xm M2 ≤ ≤ 1 1/2 = k− log M(r) + 2γ 2 log k g(M(r)) f ′′(M(r))− −  e( f (M(r)) rM(r) + 1/8) − 2 1/2 1/2 j 1 1/4 1/4 + i2− k− ( 1) − w (n)d(n)e nh¯ n− x− − j k − j,n × Xj=1 nX

(3.2.4) w j(n) = 1 for n < n′j, (3.2.5) w (n) 1 for n < n , j ≪ j w j(y) and w′j(y) are piecewise continuous functions in the interval (n′j, n ) with at most J 1 discontinuities, and j − 1 − (3.2.6) w′j(y) n j n′j for n′j < y < n j ≪  −  whenever w′j(y) exists. 76 3. Transformation Formulae for Exponential Sums

Proof. We follow the argument of the proof of Theorem 3.1, using 80 however Theorem 2.2 in place of Theorem 2.1. Denoting by S J the smoothed sum under consideration, we have by (1.9.1), as in (3.1.24),

(3.2.7) M2 1 S J = k− log x + 2γ 2 log k ηJ(x)g(x)e( f (x) rx) dx Z − − M1 

M2 1 ∞ + k− d(n) 2πek nh¯ Y 4π √nx/k + 4ek nh¯ K Z − − ◦ ◦ Xn=1 n       M1

4π √nx/k ηJ(x)g(x)e( f (x) rx) dx  o − 1 ∞ = k− I + d(n)(ek( nh¯)In + ek(nh¯)in) .  ◦ −   Xn=1    As in the proof of Theorem 3.1, the integrals in are negligible. Consider next the integral I . We apply Theorem 2.2 choosing µ(x) M again. The saddle-point◦ is M(r), as before, and the saddle- ≍ 1 point term for I is the same as in the proof of Theorem 3.1. The first error term in (2.1.11)◦ is exponentially small. The error terms involving EJ are also negligible if J is taken sufficiently large (depending on δ4), since 1 1/2 δ4 U− f ′′(x)− M− for M x M . ≪ 1 ≤ ≤ 2 3/2 1 The contribution of the error term o(G(x )µ(x )F(x )− ) to k− I is by (3.2.1) and (3.1.8) ◦ ◦ ◦ ◦

\[ \ll k^{-1} F^{-3/2} G M_1 L \ll F^{-1} G U L \ll F^{-1} G h^{3/2} k^{-1/2} m_1^{1/2} U L, \]
which does not exceed the error term in (3.2.3). Turning to the integrals $I_n$, we write as in (3.1.31) and (3.1.32)

\[ I_n = I_n^{(1)} - I_n^{(2)}, \] where

(3.2.8) \[ I_n^{(j)} = i2^{-1/2} k^{1/2} n^{-1/4} \int_{M_1}^{M_2} \eta_J(x)\, x^{-1/4}\, g(x)\left(1 + g_j\!\left(4\pi\sqrt{nx}/k\right)\right) e\!\left(p_{j,n}(x)\right)\, dx. \]

Suppose first that $n > 2n_j$. As in the proof of Theorem 3.1, we may apply Theorem 2.3 with $\mu \asymp m_1$ and $M$ as in (3.1.38). Observe that by (3.2.1), (3.1.16), and (3.1.8)

\[ U^{-1} M_1^{-1}\left(n_1/n\right)^{1/2} \ll F^{-1/2} m_1^{-1} M_1^{1-\delta_4}\left(n_1/n\right)^{1/2} \ll M_1^{-\delta_2-\delta_4}, \]
whence we may make the corresponding term involving $U^{-J}$ in (2.2.4) negligibly small by taking $J$ large enough. As before, the second term in (2.2.4) is also negligible. The terms for $n \le 2n_j$ are dealt with by Theorem 2.2. The saddle-point terms occur again for $n < n_j$, and they are of the same shape as in (3.1.35) except that there is the additional factor

(3.2.9) \[ w_j(n) = \xi_J\!\left(x_{j,n}\right). \]
The property (3.2.4) of $w_j(n)$ is immediate by (2.1.12), for $M_1 + JU < x_{j,n} < M_2 - JU$ if and only if $n < n_j'$. Further, (3.2.5) follows from (2.1.12) - (2.1.14) by (3.2.1) and (3.1.18). To prove the property (3.2.6), consider \[ w_j(y) = \xi_J\!\left(x_{j,y}\right) \] as a function of the continuous variable $y$ in the interval $(n_j', n_j)$. Here $x_{j,y}$ is the unique zero of $p'_{j,y}(x)$ in the interval $(M_1, M_1 + JU)$ for $j = 1$, and in the interval $(M_2 - JU, M_2)$ for $j = 2$. Thus $z = x_{j,y}$ satisfies \[ f'(z) - r + (-1)^{j-1}\, y^{1/2} z^{-1/2} k^{-1} = 0. \] Hence, by implicit differentiation,

\[ p''_{j,y}(z)\,\frac{dx_{j,y}}{dy} + \frac{1}{2}(-1)^{j-1}\, y^{-1/2} z^{-1/2} k^{-1} = 0, \] which implies that

(3.2.10) \[ \frac{dx_{j,y}}{dy} \asymp F^{-1} k^{-1} M_1^{3/2} n_1^{-1/2} \asymp m_1 n_1^{-1} \quad\text{for } n_j' < y < n_j. \]


The function $\xi_J(x)$ in (2.1.13) and (2.1.14) is continuously differentiable except at the points $a + jU$ and $b - jU$, $j = 1, \dots, J$, where terms appear or disappear. By differentiation, noting that $f'''(x) \ll F M_1^{-3}$, it is easy to verify that

(3.2.11) \[ \xi_J'(x) \ll U^{-1} \]
elsewhere in the intervals $(a, a + JU)$ and $(b - JU, b)$. By (3.1.10), (3.2.2), and (3.1.14) we have

(3.2.12) \[ n_j - n_j' \asymp n_j\, m_1^{-1} U. \]
Now (3.2.6) follows from (3.2.10) - (3.2.12) at those points $y$ for which $x_{j,y}$ is not of the form $a + jU$ or $b - jU$ with $1 \le j \le J - 1$. As in the proof of Theorem 3.1, we may omit $g_j(\cdots)$ in the saddle-point terms with an admissible error

\[ \ll G k\, m_1^{1/2} M_1^{-1/2} L \ll F^{-1} G h^{3/2} k^{-1/2} m_1^{1/2} U, \]
and after that these terms coincide with those in (3.2.3). Consider finally the error terms in (2.1.11) for $I_n^{(j)}$. The first of these is clearly negligible. Also, for the same reason as in the case of $I_0$, the error terms involving $E_J$ can be omitted if $J$ is chosen sufficiently large. Finally, the second error term in (2.1.11) for $k^{-1} I_n^{(j)}$ is

\[ \ll F^{-3/2} G k^{-1/2} M_1^{3/4} n^{-1/4} \quad\text{for } n < n_j' \]
and
\[ \ll F^{-1} G k^{-1/2} M_1^{3/4} n^{-1/4} \quad\text{for } n_j' \le n < n_j. \]
The contribution of these to $S_J$ is

(3.2.13) \[ \ll F^{-3/2} G k^{-1/2} M_1^{3/4} n_1^{3/4} L + F^{-1} G k^{-1/2} M_1^{3/4} n_1^{-1/4}\left(n_j - n_j'\right) L. \]
Here we estimated the mean value of $d(n)$ in the interval $[n_j', n_j)$ by $O(L)$, which is possible, by (3.1.22), for by (3.2.12), (3.1.16), (3.1.8), (3.2.1), and (3.1.6) we have

\[ n_j - n_j' \ll m_1^{-1} n_1 U \ll F^2 k^2 m_1 M_1^{-3} U \]

\[ \ll F^2 k^2 M_1^{1+\delta_2}\left(F^{-1/2} M_1^{-3}\right)\left(F^{-1/2} M_1^{1+\delta_4}\right) \asymp hk\, M_1^{\delta_2+\delta_4}. \]
By (3.2.12), the second term in (3.2.13) is

\[ \asymp F^{-1} G k^{-1/2} m_1^{-1} M_1^{3/4} n_1^{3/4} U L, \]
which is of the same order as the error term in (3.2.3) and dominates the first term, since

\[ m_1^{-1} U \gg F^{-1/2} M_1 \cdot M_1^{-1} = F^{-1/2}. \]
The proof of the theorem is now complete. The analogue of the preceding theorem for exponential sums involving Fourier coefficients $a(n)$ is as follows. The proof is similar and can be omitted.

Theorem 3.4. With the assumptions of Theorem 3.3, we have

(3.2.14) \[ \sum_{M_1 \le m \le M_2} \eta_J(m)\, a(m)\, g(m)\, e(f(m)) = i2^{-1/2} k^{-1/2} \sum_{j=1}^{2} (-1)^{j-1} \sum_{n<n_j} w_j(n)\, a(n)\, e_k(-n\bar h)\, n^{-\kappa/2+1/4} \times \cdots \]

Suppose now that
(3.2.15) \[ U \asymp F^{1/2+\epsilon} r^{-1}. \]
Then the error term in (3.2.3) is

(3.2.16) \[ O\!\left(F^{-1/2+\epsilon} G\, (|h| k)^{1/2}\, m_1^{1/2}\right) \]
and that in (3.2.14) is

(3.2.17) \[ O\!\left(F^{-1/2+\epsilon} G\, (|h| k)^{1/2}\, M_1^{(\kappa-1)/2}\, m_1^{1/2}\right). \]

Chapter 4

Applications

THE THEOREMS OF the preceding chapter show that the short exponential sums in question depend on the rational approximation of $f'(n)$ in the interval of summation. But in long sums the value of $f'(n)$ may vary too much to be approximated accurately by a single rational number, and therefore it is necessary to split up the sum into shorter segments such that in each segment $f'(n)$ lies near a certain fraction $r$. By suitable averaging arguments, it is possible to add these short sums - in a transformed shape - in a non-trivial way. Variations on this theme are given in §§ 4.2 - 4.4. But as a preliminary for §§ 4.2 and 4.4, we first work out in § 4.1 the transformation formulae of Chapter 3 in the special case of Dirichlet polynomials related to $\zeta^2(s)$ and $\varphi(s)$.

4.1 Transformation Formulae for Dirichlet Polyno- mials

The general theorems of the preceding chapter are now applied to Dirichlet polynomials

(4.1.1) \[ S(M_1, M_2) = \sum_{M_1 \le m \le M_2} d(m)\, m^{-1/2-it}, \]
(4.1.2) \[ S_\varphi(M_1, M_2) = \sum_{M_1 \le m \le M_2} a(m)\, m^{-k/2-it}, \]
as well as to their smoothed variants

(4.1.3) \[ \tilde S(M_1, M_2) = \sum_{M_1 \le m \le M_2} \eta_J(m)\, d(m)\, m^{-1/2-it}, \]
(4.1.4) \[ \tilde S_\varphi(M_1, M_2) = \sum_{M_1 \le m \le M_2} \eta_J(m)\, a(m)\, m^{-k/2-it}, \]
where $\eta_J(x)$ is a weight function defined in (2.1.2). We shall suppose for simplicity that $t$ is a sufficiently large positive number, and put $L = \log t$. The function $\chi(s)$ is as in the functional equation $\zeta(s) = \chi(s)\zeta(1-s)$; thus

\[ \chi(s) = 2^s \pi^{s-1} \sin\!\left(\tfrac{1}{2} s\pi\right) \Gamma(1-s). \]

If σ is bounded and t tends to infinity, then (see [27], p. 68)

(4.1.5) \[ \chi(s) = (2\pi/t)^{s-1/2}\, e^{i(t+\pi/4)} \left(1 + O\!\left(t^{-1}\right)\right). \]

Define also
(4.1.6) \[ \phi(x) = \operatorname{ar\,sinh}\!\left(x^{1/2}\right) + \left(x + x^2\right)^{1/2}. \]
As before, $\delta_1, \delta_2, \dots$ will denote positive constants which may be supposed to be arbitrarily small.
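Since $\phi$ carries the phase of every transformed sum below, its derivatives are worth a quick numerical sanity check; a minimal Python sketch (the closed forms $\phi'(x) = (1 + 1/x)^{1/2}$ and $\phi''(x) = -\tfrac{1}{2}x^{-3/2}(1+x)^{-1/2}$, the latter used in § 4.2, follow by direct differentiation of (4.1.6)):

```python
import math

def phi(x):
    # phi(x) = arsinh(sqrt(x)) + sqrt(x + x^2), as in (4.1.6)
    return math.asinh(math.sqrt(x)) + math.sqrt(x + x * x)

def phi1(x):
    # closed form of the first derivative: phi'(x) = sqrt(1 + 1/x)
    return math.sqrt(1.0 + 1.0 / x)

def phi2(x):
    # closed form of the second derivative, used in Section 4.2:
    # phi''(x) = -(1/2) x^(-3/2) (1 + x)^(-1/2)
    return -0.5 * x**-1.5 * (1.0 + x)**-0.5

# independent check by central differences at an arbitrary point
x, h = 0.37, 1e-5
d1 = (phi(x + h) - phi(x - h)) / (2 * h)
d2 = (phi(x + h) - 2 * phi(x) + phi(x - h)) / h**2
assert abs(d1 - phi1(x)) < 1e-8
assert abs(d2 - phi2(x)) < 1e-4
```

The evaluation point is an arbitrary illustration; any $x > 0$ works.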

Theorem 4.1. Let $r = h/k$ be a rational number such that
(4.1.7) \[ M_1 < \frac{t}{2\pi r} < M_2 \]
and

(4.1.8) \[ 1 \le k \ll M_1^{1/2-\delta_1}. \]
Write
(4.1.9) \[ M_j = \frac{t}{2\pi r} + (-1)^j m_j, \]

and suppose also that $m_1 \asymp m_2$ and

(4.1.10) \[ t^{\delta_2} \max\!\left(t^{1/2} r^{-1},\, hk\right) \ll m_1 \ll M_1^{1-\delta_3}. \]
Let

(4.1.11) \[ n_j = h^2 m_j^2 M_j^{-1}. \]

Then
(4.1.12) \[\begin{split} S(M_1, M_2) &= (hk)^{-1/2}\left(\log(t/2\pi) + 2\gamma - \log(hk)\right) r^{it}\chi\!\left(\tfrac12 + it\right) \\ &\quad + \pi^{1/4}(2hkt)^{-1/4} \sum_{j=1}^{2}\sum_{n<n_j} d(n)\, n^{-1/4}\left(1 + \frac{\pi n}{2hkt}\right)^{-1/4} e\!\left(n\left(\frac{\bar h}{k} - \frac{1}{2hk}\right)\right) \\ &\qquad \times \exp\!\left(i(-1)^{j-1}\left(2t\,\phi\!\left(\frac{\pi n}{2hkt}\right) + \frac{\pi}{4}\right)\right) + \cdots \end{split}\]

Proof. We apply Theorem 3.1 with
(4.1.13) \[ f(z) = -(t/2\pi)\log z \]
and

(4.1.14) \[ g(z) = z^{-1/2}. \]

Then the assumptions of the theorem are obviously satisfied with

(4.1.15) \[ F = t, \]
(4.1.16) \[ G = M_1^{-1/2} \asymp r^{1/2} t^{-1/2}, \]
and
(4.1.17) \[ M(-r) = \frac{t}{2\pi r}. \]

Then the number n j in (3.1.10) equals the one in (4.1.11). The leading term on the right of (3.1.11) is

\[ (hk)^{-1/2}\left(\log(t/2\pi) + 2\gamma - \log(hk)\right) r^{it} (2\pi/t)^{it}\, e^{i(t+\pi/4)}, \]
which can also be written, by (4.1.5), as

(4.1.18) \[ (hk)^{-1/2}\left(\log(t/2\pi) + 2\gamma - \log(hk)\right) r^{it}\, \chi\!\left(\tfrac{1}{2} + it\right)\left(1 + O\!\left(t^{-1}\right)\right). \]

The function $p_{j,n}(x)$ reads in the present case
(4.1.19) \[ p_{j,n}(x) = -(t/2\pi)\log x + rx + (-1)^{j-1}\, 2\sqrt{nx}/k - 1/8, \]
and the numbers $x_{j,n}$ are roots of the equation

(4.1.20) \[ p'_{j,n}(x) = -\frac{t}{2\pi x} + r + (-1)^{j-1}\, n^{1/2} x^{-1/2} k^{-1} = 0, \]
and thus roots of the quadratic equation
(4.1.21) \[ x^2 - \left(\frac{t}{\pi r} + \frac{n}{h^2}\right) x + \left(\frac{t}{2\pi r}\right)^2 = 0. \]
Moreover, since $x_{1,n} < x_{2,n}$, we have

(4.1.22) \[ x_{j,n} = \frac{t}{2\pi r} + \frac{n}{2h^2} + (-1)^j h^{-2}\left(\frac{n^2}{4} + \frac{hknt}{2\pi}\right)^{1/2} \]
and

(4.1.23) \[ \left(t/2\pi r\right)^2 x_{j,n}^{-1} = \frac{t}{2\pi r} + \frac{n}{2h^2} - (-1)^j h^{-2}\left(\frac{n^2}{4} + \frac{hknt}{2\pi}\right)^{1/2}. \]
Next we show that
(4.1.24) \[ 2^{-1/2} k^{-1/2}\, x_{j,n}^{-3/4}\, p''_{j,n}\!\left(x_{j,n}\right)^{-1/2} = \pi^{1/4} (2hkt)^{-1/4}\left(1 + \frac{\pi n}{2hkt}\right)^{-1/4}. \]
Indeed, by (4.1.19) we have

\[ 2k\, x_{j,n}^{3/2}\, p''_{j,n}\!\left(x_{j,n}\right) = \pi^{-1} k t\, x_{j,n}^{-1/2} + (-1)^j n^{1/2}, \]

which by (4.1.20) and (4.1.23) is further equal to

\[ (-1)^j h^2 n^{-1/2}\left(x_{j,n} - \left(\frac{t}{2\pi r}\right)^2 x_{j,n}^{-1}\right) = \pi^{-1/2}\left(2hkt\right)^{1/2}\left(1 + \frac{\pi n}{2hkt}\right)^{1/2}. \]
This proves (4.1.24). To complete the calculation of the explicit terms in (3.1.11), we still have to work out $p_{j,n}(x_{j,n})$. Note that by (4.1.22) and (4.1.23)

\[ 2\pi r t^{-1} x_{j,n} = 1 + \frac{\pi n}{hkt} + (-1)^j\left(\frac{2\pi n}{hkt}\left(1 + \frac{\pi n}{2hkt}\right)\right)^{1/2} = \left((-1)^j\left(\frac{\pi n}{2hkt}\right)^{1/2} + \left(1 + \frac{\pi n}{2hkt}\right)^{1/2}\right)^2, \]
whence

(4.1.25) \[ \log\!\left(2\pi r t^{-1} x_{j,n}\right) = (-1)^j\, 2\operatorname{ar\,sinh}\!\left(\left(\frac{\pi n}{2hkt}\right)^{1/2}\right). \]
Also, by (4.1.20) and (4.1.22),

\[ 2\pi r x_{j,n} + 4\pi(-1)^{j-1} n^{1/2} x_{j,n}^{1/2} k^{-1} = 2t - 2\pi r x_{j,n} = t - \frac{\pi n}{hk} + (-1)^{j-1}\, 2t\left(\frac{\pi n}{2hkt} + \left(\frac{\pi n}{2hkt}\right)^2\right)^{1/2}. \]
Together with (4.1.19), (4.1.25), and (4.1.6), this gives

\[ 2\pi\, p_{j,n}\!\left(x_{j,n}\right) = (-1)^{j-1}\, 2t\,\phi\!\left(\frac{\pi n}{2hkt}\right) - t\log(t/2\pi) + t\log r + t - \frac{\pi n}{hk} - \frac{\pi}{4}. \]
Hence, using (4.1.5) again, we have

(4.1.26) \[ i(-1)^{j-1}\, e\!\left(p_{j,n}\!\left(x_{j,n}\right) + 1/8\right) = e\!\left(-\frac{n}{2hk}\right) \exp\!\left(i(-1)^{j-1}\left(2t\,\phi\!\left(\frac{\pi n}{2hkt}\right) + \frac{\pi}{4}\right)\right) r^{it} \times \]

\[ \chi\!\left(\tfrac{1}{2} + it\right)\left(1 + O\!\left(t^{-1}\right)\right). \]
By (4.1.18), (4.1.24), and (4.1.26), we find that the explicit terms on the right of (3.1.11) coincide with those in (4.1.12), up to the factor $1 + O(t^{-1})$. The correction $O(t^{-1})$ can be omitted with an error

\[ \ll t^{-1}\left((hk)^{-1/2} L + (hkt)^{-1/4} n_1^{3/4} L\right), \]
which is

\[ \ll t^{-1}(hk)^{-1/2} L + t^{-2} h^2 k^{-1} m_1^{3/2} L \]
(4.1.27) \[ \ll h\, m_1^{1/2}\, t^{-1} L \]
by (4.1.11), (4.1.7), and (4.1.10). This is clearly negligible in (4.1.12). Finally, the error terms in (3.1.11) give those in (4.1.12) by (4.1.15) and (4.1.16). An application of Theorem 3.2 yields an analogous result for $S_\varphi(M_1, M_2)$.

Theorem 4.2. Suppose that the conditions of Theorem 4.1 are satisfied. Then
(4.1.28) \[ S_\varphi(M_1, M_2) = \pi^{1/4}(2hkt)^{-1/4} \sum_{j=1}^{2}\sum_{n<n_j} a(n)\, e\!\left(n\left(\frac{\bar h}{k} - \frac{1}{2hk}\right)\right) \times \cdots \]
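The saddle points (4.1.22) are easy to check numerically against (4.1.20) and (4.1.21); a minimal Python sketch (the values of $t$, $h$, $k$, $n$ are arbitrary illustrations, not taken from the text):

```python
import math

# illustrative parameters (not from the text): t, r = h/k, n
t, h, k, n = 1000.0, 3, 7, 5
r = h / k

# quadratic (4.1.21): x^2 - (t/(pi r) + n/h^2) x + (t/(2 pi r))^2 = 0
b = t / (math.pi * r) + n / h**2
c = (t / (2 * math.pi * r))**2
disc = math.sqrt(b * b - 4 * c)
x1, x2 = (b - disc) / 2, (b + disc) / 2   # x_{1,n} < x_{2,n}

def dp(j, x):
    # p'_{j,n}(x) from (4.1.20)
    return -t / (2 * math.pi * x) + r + (-1)**(j - 1) * math.sqrt(n / x) / k

# each root annihilates the corresponding p'_{j,n}
assert abs(dp(1, x1)) < 1e-8 and abs(dp(2, x2)) < 1e-8
# the product of the roots is (t/(2 pi r))^2, as used in (4.1.23)
assert abs(x1 * x2 - c) < 1e-6 * c
```

The smaller root belongs to $j = 1$ and the larger to $j = 2$, matching $x_{1,n} < x_{2,n}$.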

Theorem 4.3. Suppose that the conditions of Theorem 4.1 are satisfied, and let
(4.1.29) \[ U \gg r^{-1} t^{1/2+\delta_4} \]

and let J be a fixed positive integer exceeding a certain bound (which depends on δ4). Write for j = 1, 2

\[ M_j' = M_j + (-1)^{j-1} JU = \frac{t}{2\pi r} + (-1)^j m_j', \]

and suppose that $m_j \asymp m_j'$. Define
(4.1.30) \[ n_j' = h^2 \left(m_j'\right)^2 \left(M_j'\right)^{-1}. \]
Then, defining the weight function $\eta_J(x)$ in the interval $[M_1, M_2]$ with the aid of the parameters $U$ and $J$, we have

(4.1.31) \[ \tilde S(M_1, M_2) = (hk)^{-1/2}\left(\log(t/2\pi) + 2\gamma - \log(hk)\right) r^{it}\chi\!\left(\tfrac12 + it\right) + \pi^{1/4}(2hkt)^{-1/4} \sum_{j=1}^{2}\sum_{n<n_j} w_j(n)\, d(n)\, n^{-1/4}\, e\!\left(n\left(\frac{\bar h}{k} - \frac{1}{2hk}\right)\right) \times \cdots \]

(4.1.32) \[ w_j(n) = 1 \quad\text{for } n < n_j', \]
(4.1.33) \[ w_j(n) \ll 1 \quad\text{for } n < n_j, \]

$w_j(y)$ and $w_j'(y)$ are piecewise continuous in the interval $(n_j', n_j)$ with at most $J - 1$ discontinuities, and
(4.1.34) \[ w_j'(y) \ll \left(n_j - n_j'\right)^{-1} \quad\text{for } n_j' < y < n_j \]
whenever $w_j'(y)$ exists.

Proof. We apply Theorem 3.3 to the sum $\tilde S(M_1, M_2)$ with $f$, $g$, $F$, $G$ and $r$ as in the proof of Theorem 4.1; in particular, $F = t$. Hence the condition (3.2.1) on $U$ holds by (4.1.29). The other assumptions of the

theorem are readily verified, and the explicit terms in (4.1.31) were already calculated in the proof of Theorem 4.1, up to the properties of the weight functions $w_j(y)$, which follow from (3.2.4) - (3.2.6). The error term in (3.2.3) gives that in (4.1.31). It should also be noted that, as in the proof of Theorem 4.1, there is an extra error term caused by $O(t^{-1})$ in the formula (4.1.5) for $\chi(1/2 + it)$. This error term is $\ll h\, m_1^{1/2} t^{-1} L$, as was seen in (4.1.27). By (4.1.29) this can be absorbed into the error term in (4.1.31), and the proof of the theorem is complete. The analogue of the preceding theorem for $\tilde S_\varphi(M_1, M_2)$ reads as follows.

Theorem 4.4. With the assumptions of Theorem 4.3, we have

(4.1.35) \[ \tilde S_\varphi(M_1, M_2) = \pi^{1/4}(2hkt)^{-1/4} \sum_{j=1}^{2}\sum_{n<n_j} w_j(n)\, a(n) \times \cdots \]

Remark 1. We have
\[ \sum_{x_1 \le n \le x_2} d(n)\, n^{-1/2-it} \ll \log t \quad\text{for } t \ge 2 \text{ and } |x_i - t/2\pi| \ll t^{2/3}. \]

Remark 2. In Theorems 4.3 and 4.4 the error term is minimal when U is as small as possible, i.e.

(4.1.37) \[ U \asymp r^{-1} t^{1/2+\epsilon}. \]
The error term then becomes

(4.1.38) \[ O\!\left(h\, m_1^{1/2}\, t^{-1+\epsilon}\right). \]

This is significantly smaller than the error terms in Theorems 4.1 and 4.2. For example, if $r = 1$ and $m_1 = t^{3/4}$, then the error in Theorem 4.1 is $\ll t^{-1/8} L^2$, while (4.1.38) is just $\ll t^{-5/8+\epsilon}$. The lengths $n_j$ of the transformed sums are about $t^{1/2}$, which is smaller than the length $\asymp t^{3/4}$ of the original sum. A trivial estimate of the right-hand side of (4.1.12) is $\ll t^{1/8} L$, while a trivial estimate of the original sum is $\ll t^{1/4} L$. Thanks to good error terms, Theorems 4.3 and 4.4 are useful when a number of sums are dealt with and there is a danger of the accumulation of error terms.

Remark 3. With suitable modifications, the theorems of this section hold for negative values of t as well. In this case r (and thus also h)

will be negative. Because $p''_{j,n}$ is now negative, our saddle-point theorems take a slightly different form (see the remark at the end of § 2.1). When the calculations in the proof of Theorem 4.1 are carried out, then instead of (4.1.12) we obtain, for $t < 0$,

\[ S(M_1, M_2) = \left(|h| k\right)^{-1/2}\left(\log(|t|/2\pi) + 2\gamma - \log(|h| k)\right) \cdots + \pi^{1/4}(2hkt)^{-1/4} \sum_{j=1}^{2}\sum_{n<n_j} d(n)\, n^{-1/4}\, e\!\left(n\left(\frac{\bar h}{k} - \frac{1}{2hk}\right)\right) \times \cdots \]

4.2 On the Order of ϕ(k/2 + it)

Dirichlet series are usually estimated by making use of their approximate functional equations. For $\zeta^2(s)$, this result is classical - due to

Hardy-Littlewood and Titchmarsh - and states that for $0 \le \sigma \le 1$ and $t \ge 10$
(4.2.1) \[ \zeta^2(s) = \sum_{n \le x} d(n)\, n^{-s} + \chi^2(s) \sum_{n \le y} d(n)\, n^{s-1} + O\!\left(x^{1/2-\sigma} \log t\right), \]
where $x \ge 1$, $y \ge 1$, and $xy = (t/2\pi)^2$. Analogously, for $\varphi(s)$ we have
(4.2.2) \[ \varphi(s) = \sum_{n \le x} a(n)\, n^{-s} + \psi(s) \sum_{n \le y} a(n)\, n^{k-s} + O\!\left(x^{k/2-\sigma} \log t\right), \]
where

\[ \psi(s) = (-1)^{k/2} (2\pi)^{2s-k}\, \Gamma(k - s)/\Gamma(s). \]
For proofs of (4.2.1) and (4.2.2), see e.g. [19]. The problem of the order of $\varphi(k/2 + it)$ can thus be reduced to estimating sums

(4.2.3) \[ \sum_{n \le x} a(n)\, n^{-k/2-it} \]
for $x \ll t$; we take here $t$ positive, for the case when $t$ is negative is much the same, as was seen in Remark 3 in the preceding section. Estimating the sum (4.2.3) by absolute values, we obtain

\[ \left|\varphi(k/2 + it)\right| \ll t^{1/2} L, \]
which might be called a "trivial" estimate. If there is a certain amount of cancellation in this sum, then one has

(4.2.4) \[ \left|\varphi(k/2 + it)\right| \ll t^{\alpha}, \]
where $\alpha < 1/2$. An analogous problem is estimating the order of $\zeta(1/2 + it)$, and in virtue of the analogy between $\zeta^2(1/2 + it)$ and $\varphi(k/2 + it)$, one would expect that if

(4.2.5) \[ \left|\zeta(1/2 + it)\right| \ll t^{c}, \]

then (4.2.4) holds with $\alpha = 2c$. (Recently it has been shown by E. Bombieri and H. Iwaniec that $c = 9/56 + \epsilon$ is admissible.) In particular, as an analogue of the Lindelöf hypothesis for the zeta-function, one may conjecture that (4.2.4) holds for all positive $\alpha$. A counterpart of the classical exponent $c = 1/6$ would be $\alpha = 1/3$, for which (4.2.4) is indeed known to hold, up to an unimportant logarithmic factor. More precisely, Good [9] proved that

(4.2.6) \[ \left|\varphi(k/2 + it)\right| \ll t^{1/3}\left(\log t\right)^{5/6} \]
as a corollary of his mean value theorem (0.11). The proof of the latter, being based on the spectral theory of the hyperbolic Laplacian, is sophisticated and highly non-elementary. A more elementary approach to $\varphi(k/2 + it)$ via the transformation formulae of the preceding section leads rather easily to an estimate which is essentially the same as (4.2.6).

Theorem 4.5. We have

(4.2.7) \[ \left|\varphi(k/2 + it)\right| \ll \left(|t| + 1\right)^{1/3+\epsilon}. \]
Proof. We shall show that for all large positive values of $t$ and for all numbers $M$, $M'$ with $1 \le M < M' \le t/2\pi$ and $M' \le 2M$ we have

(4.2.8) \[ S_\varphi(M, M') = \sum_{M \le m \le M'} a(m)\, m^{-k/2-it} \ll t^{1/3+\epsilon}. \]

A similar estimate could be proved likewise for negative values of $t$, and the assertion (4.2.7) then follows from the approximate functional equation (4.2.2). Let $\delta$ be a fixed positive number, which may be chosen arbitrarily small. For $M \le t^{2/3+\delta}$ the inequality (4.2.8) is easily verified on estimating the sum by absolute values. Let now

\[ M_0 = t^{2/3+\delta}, \]
(4.2.9) \[ M_0 < M < M' \le t/2\pi, \]

(4.2.10) \[ K = \left(M/M_0\right)^{1/2}, \]
and consider the increasing sequence of reduced fractions $r = h/k$ with $1 \le k \le K$, in other words the Farey sequence of order $K$. The mediant of two consecutive fractions $r = h/k$ and $r' = h'/k'$ is
\[ \rho = \frac{h + h'}{k + k'}. \]

The basic well-known properties of the mediant are: $r < \rho < r'$, and
(4.2.11) \[ \rho - r = \frac{1}{k(k + k')} \asymp \frac{1}{kK}, \qquad r' - \rho = \frac{1}{k'(k + k')} \asymp \frac{1}{k'K}. \]
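These identities are exact consequences of the Farey neighbour relation $h'k - hk' = 1$, and are easy to verify by brute force; a minimal Python sketch (the order $K = 12$ is an arbitrary illustration):

```python
from fractions import Fraction

def farey(K):
    # reduced fractions h/k in (0, 1] with 1 <= k <= K, in increasing order
    return sorted({Fraction(h, k) for k in range(1, K + 1) for h in range(1, k + 1)})

K = 12
fs = farey(K)
for r, rp in zip(fs, fs[1:]):
    # neighbour relation h'k - hk' = 1 for consecutive Farey fractions
    assert rp.numerator * r.denominator - r.numerator * rp.denominator == 1
    # mediant rho = (h + h')/(k + k') and the exact gaps (4.2.11)
    rho = Fraction(r.numerator + rp.numerator, r.denominator + rp.denominator)
    assert r < rho < rp
    assert rho - r == Fraction(1, r.denominator * (r.denominator + rp.denominator))
    assert rp - rho == Fraction(1, rp.denominator * (r.denominator + rp.denominator))
```

Note that the mediant of two Farey neighbours is automatically in lowest terms (any common divisor of $h + h'$ and $k + k'$ divides $h'k - hk' = 1$), so `Fraction` does not alter it.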

Subdivide now the interval $[M, M']$ by all the points
(4.2.12) \[ M(\rho) = \frac{t}{2\pi\rho} \]
lying in this interval; here $\rho$ runs over the mediants. Then the sum $S_\varphi(M, M')$ is accordingly split up into segments, the first and last of which may be incomplete. Thus, the sum $S_\varphi(M, M')$ now becomes a sum of subsums of the type

(4.2.13) \[ S_\varphi\!\left(M(\rho'), M(\rho)\right), \]
up to perhaps one or two incomplete sums. This sum is related to that fraction $r = h/k$ of our system which lies between $\rho$ and $\rho'$. We are going to apply Theorem 4.2 to the sum (4.2.13). The numbers $m_1$ and $m_2$ in the theorem are now $M(r) - M(\rho')$ and $M(\rho) - M(r)$. Hence $m_1 \asymp m_2$ by (4.2.12) and (4.2.11), which imply moreover that
(4.2.14) \[ m_j \asymp t r^{-2} |r - \rho| \asymp k^{-1} K^{-1} M^2 t^{-1} \asymp k^{-1} M^{3/2} t^{-2/3+\delta/2}. \]
This gives further

(4.2.15) \[ M t^{-1/3+\delta} \ll m_j \ll M t^{-1/6+\delta/2}. \]
It follows that the incomplete sums contribute

\[ \ll M^{1/2} t^{-1/6+\delta} \ll t^{1/3+\delta}, \]

which can be omitted. Next we check the conditions of Theorem 4.2, i.e. the conditions (4.1.8) and (4.1.10) of Theorem 4.1. The validity of (4.1.8) is clear by (4.2.10) and (4.2.9). In (4.1.10), the upper bound for $m_1$ follows from (4.2.15). As to the lower bound, note that $m_1 \gg M t^{-1/2+\delta} \asymp t^{1/2+\delta} r^{-1}$ by (4.2.15), and that

\[ hk = rk^2 \ll \left(M^{-1} t\right)\left(M M_0^{-1}\right) \asymp t^{1/3-\delta} \ll m_j\, t^{-3\delta}. \]
The error terms in (4.1.28) can be estimated by (4.2.10), (4.2.12), and (4.2.14). The first of them is $\ll L^2$, and the second is smaller. The number of subsums is

\[ \asymp t M^{-1} K^2 \asymp t^{1/3-\delta}. \]
Hence the contribution of the error terms is $\ll t^{1/3}$. Next we turn to the main terms in (4.1.28). A useful observation will be that the numbers

\[ n_j \asymp m_j^2\, h^2\, M^{-1} \]
are of the same order for all relevant $r$, namely
(4.2.16) \[ n_j \asymp t^{2/3+\delta}. \]
This is easily seen by (4.2.14). To simplify the expression in (4.1.28), we omit the factors
\[ \left(1 + \frac{\pi n}{2hkt}\right)^{-1/4} = 1 + O\!\left(k^{-2} M n\, t^{-2}\right), \]
which can be done with a negligible error $\ll 1$. We now add up the expression in (4.1.28) for different fractions $r$. Putting
\[ \tilde a(n) = a(n)\, n^{-(k-1)/2}, \]
we end up with the problem of estimating the multiple sum
(4.2.17) \[ t^{-1/4} \sum_{h,k} (hk)^{-1/4} (h/k)^{it} \sum_{n<n_j} \tilde a(n)\, n^{-1/4}\, e\!\left(n\left(\frac{\bar h}{k} - \frac{1}{2hk}\right)\right) \times \]

4.2. On the Order of ϕ(k/2 + it) 93

\[ \times \exp\!\left(i(-1)^{j-1}\left(2t\,\phi\!\left(\frac{\pi n}{2hkt}\right) + \pi/4\right)\right). \]

As a matter of fact, the numbers $n_j$ depend also on $r = h/k$, but this does not matter, for only the order of $n_j$ will be of relevance. For convenience we restrict in (4.2.17) the summations to the intervals $K_0 \le k \le K_0'$, $N_0 \le n \le N_0'$, where $K_0' \le \min(2K_0, K)$, and $N_0' \le 2N_0$, $N_0 \ll t^{2/3+\delta}$. Also we take for $j$ one of its two values, say $j = 1$. The whole sum is then a sum of $O(t^\delta)$ such sums. It may happen that some of the $n$-sums are incomplete. In order to have formally complete sums, we replace $\tilde a(n)$ by $\tilde a(n)\,\delta(h, k; n)$, where

\[ \delta(h, k; n) = \begin{cases} 1 & \text{for } n < n_1(h, k), \\ 0 & \text{otherwise}; \end{cases} \]
the dependence of $n_1$ on $h/k$ is here indicated by the notation $n_1(h, k)$. Then, changing in (4.2.17) the order of the summations with respect to $n$ and the pairs $h, k$, followed by applications of Cauchy's inequality and Rankin's estimate (1.2.4), we obtain

\[ \ll t^{-1/4} N_0^{1/4} \left(\sum_n \left|\sum_{h,k} \delta(h, k; n)\, (hk)^{-1/4} (h/k)^{it}\, e\!\left(n\left(\frac{\bar h}{k} - \frac{1}{2hk}\right) + (1/\pi)\, t\,\phi\!\left(\frac{\pi n}{2hkt}\right)\right)\right|^2\right)^{1/2}. \]
Here the square is written out as a double sum with respect to $h_1, k_1$, and $h_2, k_2$, and the order of the summations is inverted. Then, since $N_0 \ll t^{2/3+\delta}$ and
(4.2.18) \[ hk \asymp K_0^2 M^{-1} t, \]
the preceding expression is

(4.2.19) \[ \ll K_0^{-1/2} M^{1/4} t^{-1/3+\delta} \left(\sum_{h_1,k_1} \sum_{h_2,k_2} \left|s(h_1, k_1; h_2, k_2)\right|\right)^{1/2}, \]

where

(4.2.20) \[ s(h_1, k_1; h_2, k_2) = \sum_{N_0 \le n \le N_0'} \delta(h_1, k_1; n)\, \delta(h_2, k_2; n)\, e(f(n)) \]
with

(4.2.21) \[ f(x) = x\left(\frac{\bar h_1}{k_1} - \frac{1}{2h_1k_1} - \frac{\bar h_2}{k_2} + \frac{1}{2h_2k_2}\right) + (t/\pi)\left(\phi\!\left(\frac{\pi x}{2h_1k_1t}\right) - \phi\!\left(\frac{\pi x}{2h_2k_2t}\right)\right). \]

Thus $s(h_1, k_1; h_2, k_2)$ is a sum over a subinterval of $[N_0, N_0']$. It will be estimated trivially for quadruples $(h_1, k_1, h_2, k_2)$ such that $h_1k_1 = h_2k_2$, and otherwise by van der Corput's method, applying the following well-known lemma (see [27], Theorem 5.9).

Lemma 4.1. Let f be a twice differentiable function such that

\[ 0 < \lambda_2 \le f''(x) \le h\lambda_2 \quad\text{or}\quad \lambda_2 \le -f''(x) \le h\lambda_2 \]
throughout the interval $(a, b)$, and $b \ge a + 1$. Then
\[ \sum_{a < n \le b} e(f(n)) \ll h\left(b - a\right)\lambda_2^{1/2} + \lambda_2^{-1/2}. \]
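The second-derivative test is easy to exercise numerically; a minimal Python sketch (the phase $f(x) = x^2/400$ on $(0, 200]$ is an arbitrary illustration with $f'' \equiv \lambda_2 = 1/200$ and $h = 1$; the implied constant of the lemma is taken as $1$, which happens to suffice here):

```python
import cmath

def e(x):
    # e(x) = exp(2 pi i x)
    return cmath.exp(2j * cmath.pi * x)

# f(x) = x^2 / 400, so f''(x) = 1/200 identically: lambda_2 = 1/200, h = 1
a, b = 0, 200
lam2 = 1.0 / 200
S = sum(e(n * n / 400.0) for n in range(a + 1, b + 1))

# van der Corput shape: h (b - a) lambda_2^(1/2) + lambda_2^(-1/2)
bound = 1 * (b - a) * lam2**0.5 + lam2**(-0.5)
assert abs(S) < bound
```

For this quadratic (Gauss-sum-like) phase both terms of the bound are equal, which is the balanced situation the lemma is designed for.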

\[ \phi''(x) = -\frac{1}{2}\, x^{-3/2}\left(1 + x\right)^{-1/2}, \]
so that for our function $f$ in (4.2.21)

\[ f''(x) = -2^{-3/2} \pi^{-1/2}\, t^{1/2}\, x^{-3/2} \left((h_1k_1)^{-1/2}\left(1 + \frac{\pi x}{2h_1k_1t}\right)^{-1/2} - (h_2k_2)^{-1/2}\left(1 + \frac{\pi x}{2h_2k_2t}\right)^{-1/2}\right), \]
and accordingly

\[ \lambda_2 \asymp N_0^{-3/2}\, t^{1/2} \left|\left(h_1k_1\right)^{-1/2} - \left(h_2k_2\right)^{-1/2}\right| \asymp K_0^{-3} M^{3/2} N_0^{-3/2} t^{-1} \left|h_1k_1 - h_2k_2\right|, \]
where we used (4.2.18). Hence, by Lemma 4.1,

\[ s(h_1, k_1; h_2, k_2) \ll K_0^{-3/2} M^{3/4} N_0^{1/4} t^{-1/2} \left|h_1k_1 - h_2k_2\right|^{1/2} + K_0^{3/2} M^{-3/4} N_0^{3/4} t^{1/2} \left|h_1k_1 - h_2k_2\right|^{-1/2} \]
if $h_1k_1 \ne h_2k_2$. By (4.2.18)

\[ \sum_{h_1k_1 \ne h_2k_2} \left|h_1k_1 - h_2k_2\right|^{1/2} \ll \left(K_0^2 M^{-1} t\right)^{5/2} \]
and

\[ \sum_{h_1k_1 \ne h_2k_2} \left|h_1k_1 - h_2k_2\right|^{-1/2} \ll \left(K_0^2 M^{-1} t\right)^{3/2} t^{\delta}. \]
Thus

\[ \sum_{h_1,k_1} \sum_{h_2,k_2} \left|s(h_1, k_1; h_2, k_2)\right| \ll K_0^{7/2} M^{-7/4} N_0^{1/4} t^2 + K_0^{9/2} M^{-9/4} N_0^{3/4} t^{2+\delta} + K_0^2 M^{-1} N_0\, t \ll K_0^{7/2} M^{-7/4} t^{13/6+\delta} + K_0^{9/2} M^{-9/4} t^{5/2+2\delta} + K_0^2 M^{-1} t^{5/3+\delta}. \]
Hence the expression (4.2.19) is

\[ \ll K_0^{5/4} M^{-5/8} t^{3/4+2\delta} + K_0^{7/4} M^{-7/8} t^{11/12+2\delta} + K_0^{1/2} M^{-1/4} t^{1/2+2\delta} \ll t^{1/3+2\delta}, \]
and the proof of the theorem is complete.

Remark. The preceding proof works for $\zeta^2(s)$ as well, and gives

\[ \zeta^2(1/2 + it) \ll t^{1/3+\epsilon}. \]

This is, of course, a known result, but the argument of the proof is new, though there is a van der Corput type estimate (Lemma 4.1) as an element in common with the classical method.

4.3 Estimation of “Long” Exponential Sums

The method of the preceding section is now carried over to more general exponential sums

(4.3.1) \[ \sum_{M \le m \le M'} b(m)\, g(m)\, e(f(m)), \qquad b(m) = d(m) \text{ or } a(m), \]
which are "long" in the sense that the length of a sum may be of the order of $M$. "Short" sums of this type were transformed in Chapter 3 under rather general conditions. Thus the first steps of the proof of Theorem 4.5 - dissection of the sum and transformation of the subsums - can be repeated in the more general context of the sums (4.3.1) without any new assumptions, as compared with those in Chapter 3. But it turned out to be difficult to gain a similar saving in the summation of the transformed sums without more specific assumptions on the function $f$. However, if we suppose that $f'$ is approximately a power, the analogy with the previous case of Dirichlet polynomials will be perfect. The result is as follows.

Theorem 4.6. Let $2 \le M < M' \le 2M$, and let $f$ be a holomorphic function in the domain

(4.3.2) \[ D = \left\{ z : |z - x| < cM \text{ for some } x \in [M, M'] \right\}, \]
where $c$ is a positive constant. Suppose that $f(x)$ is real for $M \le x \le M'$, and that either

(4.3.3) \[ f(z) = Bz^{\alpha}\left(1 + O\!\left(F^{-1/3}\right)\right) \quad\text{for } z \in D, \]
where $\alpha \ne 0, 1$ is a fixed real number, and

(4.3.4) \[ F = |B|\, M^{\alpha}, \]
or

(4.3.5) \[ f(z) = B\log z\left(1 + O\!\left(F^{-1/3}\right)\right) \quad\text{for } z \in D, \]
where

(4.3.6) \[ F = |B|. \]
Let $g \in C^1[M, M']$, and suppose that for $M \le x \le M'$

(4.3.7) \[ |g(x)| \ll G, \qquad |g'(x)| \ll G'. \]
Suppose also that

(4.3.8) \[ M^{3/4} \ll F \ll M^{3/2}. \]
Then

(4.3.9) \[ \sum_{M \le m \le M'} b(m)\, g(m)\, e(f(m)) \ll \left(G + MG'\right) M^{1/2} F^{1/3+\epsilon}, \]
where $b(m) = d(m)$ or $\tilde a(m)$.

Proof. We give the details of the proof for $b(m) = d(m)$ only; the other case is similar, even slightly simpler. It suffices to prove the assertion for $g(x) = 1$, because the general case can be reduced to this by partial summation. The proof follows that of Theorem 4.5, which corresponds to the case $f(z) = -t\log z$. Then $F = t$, so that the condition (4.3.8) states $t^{2/3} \ll M \ll t^{4/3}$. We restricted ourselves to the case $t^{2/3} \ll M \ll t$, which sufficed for the proof of Theorem 4.5, but the method gives, in fact,

\[ \sum_{M \le m \le M'} \tilde a(m)\, m^{-it} \ll M^{1/2} t^{1/3+\epsilon} \quad\text{for } t^{2/3} \ll M \ll t^{4/3}. \]
This follows, by the way, also from the previous case by a "reflection", using the approximate functional equation (4.2.2). The analogy between the number $t$ in Theorem 4.5 and the number $F$ in the present theorem will prevail throughout the proof. Accordingly, we put
\[ M_0 = F^{2/3+\delta} \]

and define, as in (4.2.10),

(4.3.10) \[ K = \left(M/M_0\right)^{1/2}. \]
We may suppose that $M \ge M_0$, for otherwise the assertion to be proved, viz.

(4.3.11) \[ \sum_{M \le m \le M'} d(m)\, e(f(m)) \ll M^{1/2} F^{1/3+\epsilon}, \]
is trivial. Consider the case when $f$ is of the form (4.3.3); the case (4.3.5) is analogous and can be dealt with by obvious modifications. We suppose that $B$ is of a suitable sign, so that $f''(x)$ is positive. The equation (4.3.3) can be formally differentiated once or twice to give correct results, for by Cauchy's integral formula we have

(4.3.12) \[ f'(z) = \alpha B z^{\alpha-1}\left(1 + O\!\left(F^{-1/3}\right)\right) \]
and

(4.3.13) \[ f''(z) = \alpha(\alpha - 1) B z^{\alpha-2}\left(1 + O\!\left(F^{-1/3}\right)\right) \]
for $z$ lying in a region $D'$ of the type (4.3.2) with $c$ replaced by a smaller positive number. Hence, for $z \in D'$,
(4.3.14) \[ \left|f'(z)\right| \asymp F M^{-1}, \]

and

(4.3.15) \[ \left|f''(z)\right| \asymp F M^{-2}, \]

so that the parameter $F$ plays here the same role as in Chapters 2 and 3. The proof now proceeds as in the previous section. The set of the mediants $\rho$ is constructed for the sequence of fractions $r = h/k$ with $1 \le k \le K$, and the interval $[M, M']$ is dissected by the points $M(\rho)$ such that

(4.3.16) \[ f'(M(\rho)) = \rho. \]

The numbers $m_1$ and $m_2$ then have formally the same expressions as before; by (4.3.15) and (4.3.16)

(4.3.17) \[ m_j \asymp k^{-1} K^{-1} M^2 F^{-1} \asymp k^{-1} M^{3/2} F^{-2/3+\delta/2} \]
in analogy with (4.2.14). This implies that

(4.3.18) \[ M F^{-1/3+\delta} \ll m_j \ll \min\!\left(M F^{-1/6+\delta/2},\, M^{1/2} F^{1/3+\delta/2}\right); \]
note that $k \gg F^{-1} M$ for $M \gg F$ by (4.3.16) and (4.3.14). The upper estimate in (4.3.18) shows that the possible incomplete sums in the dissection can be omitted.
The subsums are transformed by Theorem 3.1, the assumptions of which are readily verified as in the proof of Theorem 4.5. Of the three error terms in (3.1.11), the second one is $\ll M^{1/2}\log^2 M$, and the others are smaller. Since the number of subsums is $\asymp F^{1/3-\delta}$, the contribution of these is $\ll M^{1/2} F^{1/3}$.
The leading term in (3.1.11) is $\ll F^{-1/2} k^{-1} M \log F$. For a given $k$, $h$ takes $\ll F M^{-1} k$ values. Hence the contribution of the leading terms is
\[ \ll F^{1/2} K \log F \ll M^{1/2} F^{1/6}. \]

Consider now the sums of length $n_j$ in (3.1.11). By (3.1.16) and (4.3.17) we have (cf. (4.2.16))

(4.3.19) \[ n_j \asymp F^{2/3+\delta}. \]
For convenience, we restrict the triple sum with respect to $j$, $n$, and $h/k$ by the conditions $j = 1$, $N_0 \le n \le N_0'$, and $K_0 \le k \le K_0'$, where $K_0 \asymp K_0'$ and $N_0 \asymp N_0'$. Denote the saddle point $x_{1,n}$ by $x(r, n)$ in order to indicate its dependence on $r$. Then, for given $r$, the sum with respect to $n$ can be written as

(4.3.20) \[ \sum_{n} Q(r, n)\, d(n)\, e\!\left(f_r(n)\right), \]
where

(4.3.21) \[ Q(r, n) = i2^{-1/2} k^{-1/2} n^{-1/4}\, x(r, n)^{-1/4} \left(f''(x(r, n)) - \frac{1}{2}\, k^{-1} n^{1/2}\, x(r, n)^{-3/2}\right)^{-1/2} \]

and

(4.3.22) \[ f_r(n) = -n\bar h/k + f(x(r, n)) - r\, x(r, n) + 2k^{-1} n^{1/2}\, x(r, n)^{1/2}. \]
The range of summation in (4.3.20) is either the whole interval $[N_0, N_0']$, or a subinterval of it if $n_1 \le N_0'$. Since
\[ |Q(r, n)| \asymp F^{-1/2} K_0^{-1/2} M^{3/4} N_0^{-1/4}, \]
we may write

\[ Q(r, n) = F^{-1/2} K_0^{-1/2} M^{3/4} N_0^{-1/4}\, q(r, n), \]
where $|q(r, n)| \asymp 1$. Then, using Cauchy's inequality as in the proof of Theorem 4.5, we obtain

(4.3.23) \[ \sum_{r}\left|\sum_{n} Q(r, n)\, d(n)\, e\!\left(f_r(n)\right)\right| \ll F^{-1/2} K_0^{-1/2+\delta} M^{3/4} N_0^{1/4} \left(\sum_{r_1,r_2} \left|s(r_1, r_2)\right|\right)^{1/2}, \]
where
\[ s(r_1, r_2) = \sum_n q(r_1, n)\, q(r_2, n)\, e\!\left(f_{r_1}(n) - f_{r_2}(n)\right). \]

The saddle point $x(r, n)$ is defined implicitly by the equation
(4.3.24) \[ f'(x(r, n)) - r + k^{-1} n^{1/2}\, x(r, n)^{-1/2} = 0. \]
Therefore, by the implicit function theorem,
(4.3.25) \[ \frac{dx(r, n)}{dn} \asymp \left(K_0^{-1} M^{-1/2} N_0^{-1/2}\right)\left(F M^{-2}\right)^{-1} \asymp F^{-1} K_0^{-1} M^{3/2} N_0^{-1/2}. \]

Then it is easy to verify that

\[ \frac{dq(r, n)}{dn} \ll N_0^{-1} F^{\delta/2}; \]
the assumption $M \ll F^{4/3}$ is needed here. Consequently, if

(4.3.26) \[ \left|\sum_n e\!\left(f_{r_1}(n) - f_{r_2}(n)\right)\right| \le \sigma(r_1, r_2) \]
whenever $n$ runs over a subinterval of $[N_0, N_0']$, then by partial summation
\[ \left|s(r_1, r_2)\right| \ll \sigma(r_1, r_2)\, F^{\delta/2}. \]
Thus, in order to prove that the left-hand side of (4.3.23) is $\ll M^{1/2} F^{1/3+O(\delta)}$, it suffices to show that

(4.3.27) \[ \sum_{r_1,r_2} \sigma(r_1, r_2) \ll F^{5/3+O(\delta)} K_0 M^{-1/2} N_0^{-1/2}. \]
With an application of Lemma 4.1 in mind, we derive bounds for $\frac{d^2}{dn^2}\left(f_{r_1}(n) - f_{r_2}(n)\right)$, where $n$ is again understood for a moment as a continuous variable. First, by (4.3.22) and (4.3.24),

\[ \frac{d f_r(n)}{dn} = -\bar h/k + \left(f'(x(r, n)) - r + k^{-1} n^{1/2}\, x(r, n)^{-1/2}\right)\frac{dx(r, n)}{dn} + k^{-1} n^{-1/2}\, x(r, n)^{1/2} = -\bar h/k + k^{-1} n^{-1/2}\, x(r, n)^{1/2}, \]
and further
(4.3.28) \[ \frac{d^2 f_r(n)}{dn^2} = \frac{1}{2}\, k^{-1} n^{-1/2}\, x(r, n)^{-1/2}\, \frac{dx(r, n)}{dn} - \frac{1}{2}\, k^{-1} n^{-3/2}\, x(r, n)^{1/2}. \]
Here the first term, which is

(4.3.29) \[ \ll F^{-1} K_0^{-2} M N_0^{-1} \]
by (4.3.25), will be less significant. The saddle point $x(r, n)$ is now approximated by the point $M(r)$, which is easier to determine. By definition

\[ f'(M(r)) = r; \]

hence by (4.3.12)
\[ \alpha B\, M(r)^{\alpha-1} = r\left(1 + O\!\left(F^{-1/3}\right)\right), \]
which gives further, by (4.3.4),

(4.3.30) \[ M(r) = \left(\frac{|r|\, M^{\alpha}}{|\alpha|\, F}\right)^{2\beta}\left(1 + O\!\left(F^{-1/3}\right)\right) \]
with
\[ \beta = \frac{1}{2(\alpha - 1)}. \]
But the difference of $M(r)$ and $x(r, n)$ is at most the maximum of $m_1$ and $m_2$, so that by (4.3.17)
\[ x(r, n) = M(r) + O\!\left(F^{-1} K_0^{-1} K^{-1} M^2\right) = M(r)\left(1 + O\!\left(F^{-1} K_0^{-1} K^{-1} M\right)\right). \]
Hence by (4.3.30)

\[ x(r, n) = \left(\frac{|r|\, M^{\alpha}}{|\alpha|\, F}\right)^{2\beta}\left(1 + O\!\left(F^{-1} K_0^{-1} K^{-1} M\right)\right); \]
note that by (4.3.10)
\[ F^{-1} K_0^{-1} K^{-1} M \ge F^{-1} K^{-2} M = F^{-1} M_0 \ge F^{-1/3+\delta}. \]
So the second term in (4.3.28) is

\[ \frac{1}{2}\left(|\alpha|\, F\right)^{-\beta} k^{-1} M^{\alpha\beta} n^{-3/2}\, |r|^{\beta} + O\!\left(F^{-1} K_0^{-2} K^{-1} M^{3/2} N_0^{-3/2}\right). \]
The expression (4.3.29) can be absorbed into the error term here, for $N_0 \ll K^{-2} M$ by (4.3.10) and (4.3.19). Hence (4.3.28) gives
(4.3.31) \[ \frac{d^2}{dn^2}\left(f_{r_1}(n) - f_{r_2}(n)\right) = \frac{1}{2}\left(|\alpha|\, F\right)^{-\beta} M^{\alpha\beta} n^{-3/2}\left(|h_2|^{\beta} k_2^{-1-\beta} - |h_1|^{\beta} k_1^{-1-\beta}\right) + O\!\left(F^{-1} K_0^{-2} K^{-1} M^{3/2} N_0^{-3/2}\right). \]
By the following lemma, the differences $|h_2|^{\beta} k_2^{-1-\beta} - |h_1|^{\beta} k_1^{-1-\beta}$ are distributed as one would expect on statistical grounds.

Lemma 4.2. Let $H \ge 1$, $K \ge 1$, and $0 < \Delta \ll 1$. Let $\alpha$ and $\beta$ be non-zero real numbers. Then the number of quadruples $(h_1, k_1, h_2, k_2)$ such that

(4.3.32) \[ H \le h_i \le 2H, \qquad K \le k_i \le 2K \]
and

(4.3.33) \[ \left|h_1^{\alpha} k_1^{\beta} - h_2^{\alpha} k_2^{\beta}\right| \le \Delta\, H^{\alpha} K^{\beta} \]

is at most
(4.3.34) \[ \ll HK \log^2(2HK) + \Delta H^2 K^2, \]
where the implied constants depend on $\alpha$ and $\beta$.
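The count in (4.3.34) is easy to probe by brute force at small sizes; a minimal Python sketch (the values of $H$, $K$, $\Delta$, $\alpha$, $\beta$ and the implied constant $20$ are arbitrary illustrations, not from the text):

```python
import math
from itertools import product

def count_quadruples(H, K, delta, alpha, beta):
    # count (h1, k1, h2, k2) in [H, 2H] x [K, 2K] with
    # |h1^a k1^b - h2^a k2^b| <= delta * H^a * K^b, as in (4.3.32)-(4.3.33)
    hs, ks = range(H, 2 * H + 1), range(K, 2 * K + 1)
    thresh = delta * H**alpha * K**beta
    vals = [h**alpha * k**beta for h, k in product(hs, ks)]
    return sum(1 for v1, v2 in product(vals, vals) if abs(v1 - v2) <= thresh)

H, K, delta, alpha, beta = 8, 8, 0.01, 0.5, -1.5
count = count_quadruples(H, K, delta, alpha, beta)
# compare against the shape of the bound (4.3.34)
bound = H * K * math.log(2 * H * K)**2 + delta * H**2 * K**2
assert count <= 20 * bound
```

The diagonal quadruples $(h_1, k_1) = (h_2, k_2)$ alone contribute about $HK$, which is why the first term of (4.3.34) cannot be removed.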

We complete first the proof of Theorem 4.6; that of Lemma 4.2 will be given afterwards. In our case, the number of pairs $(r_1, r_2)$ such that

(4.3.35) \[ \left| |h_2|^{\beta} k_2^{-1-\beta} - |h_1|^{\beta} k_1^{-1-\beta} \right| \le \Delta K_0^{-1}\left(F M^{-1}\right)^{\beta} \]
is at most

(4.3.36) \[ \ll F K_0^2 M^{-1} \log^2 F + \Delta F^2 K_0^4 M^{-2} \]
by Lemma 4.2. Let
\[ \Delta_0 = c_0\, F^{-1} K_0^{-1} K^{-1} M, \]
where $c_0$ is a certain positive constant. For those pairs $(r_1, r_2)$ satisfying (4.3.35) with $\Delta = \Delta_0$ we estimate trivially $\sigma(r_1, r_2) \ll N_0$. Then, by (4.3.36), their contribution to the sum in (4.3.27) is

\[ \ll F K_0^2 M^{-1} N_0 \log^2 F \ll F^{5/3+2\delta} K_0 M^{-1/2} N_0^{-1/2}. \]
Let now $\Delta_0 \le \Delta \ll 1$, and consider those pairs $(r_1, r_2)$ for which the expression on the left of (4.3.35) lies in the interval $\left(\Delta K_0^{-1}(FM^{-1})^{\beta},\, 2\Delta K_0^{-1}(FM^{-1})^{\beta}\right]$. If $c_0$ is chosen sufficiently large, then the main term

(of order $\asymp \Delta K_0^{-1} M^{1/2} N_0^{-3/2}$) on the right of (4.3.31) dominates the error term. Then by Lemma 4.1

\[ \sigma(r_1, r_2) \ll \Delta^{1/2} K_0^{-1/2} M^{1/4} N_0^{1/4} + \Delta^{-1/2} K_0^{1/2} M^{-1/4} N_0^{3/4}. \]
The number of the pairs $(r_1, r_2)$ in question is $\ll \Delta F^2 K_0^3 K M^{-2} \log^2 F$ by (4.3.36) and our choice of $\Delta_0$, so that they contribute
\[ \ll \Delta^{3/2} F^{2+\delta} K_0^{5/2} K M^{-7/4} N_0^{1/4} + \Delta^{1/2} F^{2+\delta} K_0^{7/2} K M^{-9/4} N_0^{3/4} \ll F^{2+\delta} K M^{-1/2} N_0^{-1/2}\left(K_0^{5/2} M^{-5/4} N_0^{3/4} + K_0^{7/2} M^{-7/4} N_0^{5/4}\right) \ll F^{5/3+\delta} K_0 M^{-1/2} N_0^{-1/2}. \]
The assertion (4.3.27) is now verified, and the proof of Theorem 4.6 is complete.

Proof of Lemma 4.2. To begin with, we estimate the number of quadruples satisfying, besides (4.3.32) and (4.3.33), also the conditions

(4.3.37) \[ (h_1, h_2) = (k_1, k_2) = 1. \]

By symmetry, we may suppose that $H \ge K$. The condition (4.3.33) can be written as
\[ h_1^{\alpha} k_1^{\beta} = h_2^{\alpha} k_2^{\beta}\left(1 + O(\Delta)\right). \]
Raising both sides to the power $\alpha^{-1}$ and dividing by $h_2 k_1^{\beta/\alpha}$, we obtain
\[ \left|\frac{h_1}{h_2} - \left(\frac{k_2}{k_1}\right)^{\beta/\alpha}\right| \ll \Delta. \]

For given k_1 and k_2, the number of fractions h_1/h_2 satisfying this, (4.3.32), and (4.3.37) is ≪ 1 + ∆H² by the theory of Farey fractions. Summation over the pairs k_1, k_2 in question gives

≪ K² + ∆H²K² ≪ HK + ∆H²K².

Consider next quadruples satisfying (4.3.32) and (4.3.33) but, instead of (4.3.37), the conditions

(h_1, h_2) = h,  (k_1, k_2) = k

for certain fixed integers h and k. Then, writing h_i = h h_i′, k_i = k k_i′, we find that the quadruples (h_1′, k_1′, h_2′, k_2′) satisfy the conditions (4.3.32), (4.3.33), and (4.3.37) with H and K replaced by H/h and K/k. Hence, as was just proved, the number of these quadruples is

≪ HK/(hk) + ∆ H²K² (hk)^{−2}.

Finally, summation with respect to h and k gives (4.3.34).

Example. To illustrate the scope of Theorem 4.6, let us consider the exponential sum

(4.3.38) S = Σ_{M ≤ m ≤ M′} b(m) e(X/m),

where b(m) is d(m) or ã(m). By the theorem, S ≪ M^{1/6} X^{1/3+ε} for M^{7/4} ≪ X ≪ M^{5/2}. Thus, for M ≍ X^{1/2}, one has S ≪ M^{5/6+ε}. In the case b(m) = d(m) it is also possible to interpret S as the double sum

Σ_{m,n ≥ 1, M ≤ mn ≤ M′} e(X/(mn)).

This can be reduced to ordinary exponential sums by fixing first m or n, but it can also be estimated by more sophisticated methods in the theory of multiple exponential sums. For instance, B.R. Srinivasan's theory of n-dimensional exponent pairs gives, for M ≍ X^{1/2} and b(m) = d(m),

(4.3.39) S ≪ M^{1−ℓ_1+ℓ_0},

where (ℓ_0, ℓ_1) is a two-dimensional exponent pair (see [13], §2.4). Of the pairs mentioned in [13], the sharpest result is given by (23/250, 56/250), namely (4.3.39) with the exponent 217/250 = 0.868. The optimal exponent given by this method is 0.86695... (see [10]). If a conjecture concerning one- and two-dimensional exponent pairs (Conjecture P in [10]) is true, then the exponent could be improved to 0.8290..., which is smaller than 5/6. But in any case, for b(m) = ã(m) the sum S seems to be beyond the scope of ad hoc methods because of the complicated structure of the coefficients a(m).
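The size of the sum (4.3.38) can also be probed numerically. The sketch below takes b(m) = d(m), M′ = 2M and X = M² (so M ≍ X^{1/2}); all of these are illustrative choices, and since the bound S ≪ M^{5/6+ε} is asymptotic with an unspecified implied constant, the computation can only exhibit the cancellation, not verify the theorem.

```python
import cmath

def d(n):
    # divisor function d(n) by trial division
    c, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            c += 2 if i * i < n else 1
        i += 1
    return c

def e(x):
    # e(x) = exp(2*pi*i*x)
    return cmath.exp(2j * cmath.pi * x)

M = 200
X = M * M
S = sum(d(m) * e(X / m) for m in range(M, 2 * M + 1))
trivial = sum(d(m) for m in range(M, 2 * M + 1))
print(abs(S), trivial, M ** (5 / 6))
```

Comparing |S| with the trivial bound Σ d(m) shows the oscillation of the phases e(X/m) at work.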

4.4 The Twelfth Moment of ζ(1/2 + it) and Sixth Moment of ϕ(k/2 + it)

In this last section, a unified approach to the mean value theorems 0.7 and 0.9 will be given.

Theorem 4.7. For T ≥ 2 we have

(4.4.1) ∫_0^T |ζ(1/2 + it)|^{12} dt ≪ T^{2+ε}

and

(4.4.2) ∫_0^T |ϕ(k/2 + it)|^6 dt ≪ T^{2+ε}.

Proof. The proofs of these estimates are very similar, so it suffices to consider (4.4.2) as an example, with some comments on (4.4.1). It is enough to prove that

(4.4.3) ∫_T^{2T} |ϕ(k/2 + it)|^6 dt ≪ T^{2+ε}.

Actually we are going to prove a discrete variant of this, namely that

(4.4.4) Σ_ν |ϕ(k/2 + it_ν)|^6 ≪ T^{2+ε}

whenever {t_ν} is a "well-spaced" system of numbers such that

(4.4.5) T ≤ t_ν ≤ 2T,  |t_µ − t_ν| ≥ 1 for µ ≠ ν.
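The passage from the integral (4.4.3) to well-spaced systems is the standard one: split [T, 2T] into unit intervals, pick a (near-)maximizing point in each, and note that the points taken from every second interval form a system satisfying (4.4.5). A toy sketch, with an arbitrary smooth f standing in for |ϕ(k/2 + it)|^6:

```python
import math

T = 50.0
f = lambda t: 2 + math.sin(t) * math.cos(3 * t)   # stand-in for |phi(k/2+it)|^6

# approximate maximizing point of f in each unit interval [T+j, T+j+1]
pts = [max((T + j + k / 100 for k in range(101)), key=f) for j in range(int(T))]

# points from every second interval are mutually >= 1 apart,
# so they split into two well-spaced systems in the sense of (4.4.5)
sys1, sys2 = pts[0::2], pts[1::2]
for s in (sys1, sys2):
    assert all(b - a >= 1 for a, b in zip(s, s[1:]))
    assert all(T <= t <= 2 * T for t in s)

# the integral over [T, 2T] is then at most the sum of the suprema
n = 5000
integral = sum(f(T + (i + 0.5) * (T / n)) * (T / n) for i in range(n))
bound = sum(f(t) for t in sys1) + sum(f(t) for t in sys2)
print(integral, bound)
```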

Obviously this implies (4.4.3). Again, (4.4.4) follows if it is proved that for any V > 0 and for any system {t_ν}, ν = 1, ..., R, satisfying besides (4.4.5) also the condition

(4.4.6) |ϕ(k/2 + it_ν)| ≥ V,

one has

(4.4.7) R ≪ T^{2+ε} V^{−6}.

The last mentioned assertion is easily verified if

(4.4.8) V ≪ T^{1/4+δ},

where δ again stands for a positive constant which may be chosen as small as we please, and which will be kept fixed during the proof. Indeed, one may apply the discrete mean square estimate

(4.4.9) Σ_ν |ϕ(k/2 + it_ν)|² ≪ T^{1+δ},

which is an analogue of the well-known discrete mean fourth power estimate for |ζ(1/2 + it)| (see [13], equation (8.26)) and can be proved in the same way. Now (4.4.9) and (4.4.6) together give

(4.4.10) R ≪ T^{1+δ} V^{−2},

and thus R ≪ T^{2+5δ} V^{−6} if also (4.4.8) holds. Henceforth we may assume that

(4.4.11) V ≫ T^{1/4+δ}.

Then by (4.4.10)

(4.4.12) R ≪ T^{1/2−δ}.

Large values of ϕ(s) on the critical line can be investigated in terms of large values of partial sums of its Dirichlet series, by the approximate functional equation (4.2.2). The partial sums will be decomposed as in the proof of Theorem 4.5. However, in order to have compatible decompositions for different values t ∈ [T, 2T], we define the system of fractions r = h/k in terms of T rather than in terms of t. As a matter of fact, the "order" K of the system will not be a constant, but varies as a certain function K(r) of r. More exactly, write

(4.4.13) M(r, t) = t/(2πr),

and, letting R be the cardinality of the system {t_ν} satisfying (4.4.5), (4.4.6), and (4.4.11), define

(4.4.14) K(r) = M(r, T)^{1/2} T^{−1/3} R^{−1/3}.

We now construct the (finite) set of all fractions r = h/k ≥ 1 satisfying the conditions

(4.4.15) k ≤ K(r),
(4.4.16) K(r) ≥ T^δ,

and arrange these into an increasing sequence. This sequence determines the sequence ρ_1 < ρ_2 < ··· < ρ_P of the mediants, and we define moreover ρ_0 = ρ_1^{−1}. We apply (4.2.2) for σ = k/2, choosing

(4.4.17) x = x(t) = M(ρ_0, t),  y = y(t) = (t/2π)² x^{−1}.

Then, if (4.4.6) and (4.4.11) hold, at least one of the sums of length x(t_ν) and y(t_ν) in (4.2.2) exceeds V/3 in absolute value. Let us suppose that for at least R/2 points t_ν we have

(4.4.18) | Σ_{n ≤ x(t_ν)} ã(n) n^{−1/2−it_ν} | ≥ V/3;

the subsequent argument would be analogous if the other sum were large equally often. The sum in (4.4.18) is split up by the points M(ρ_i, t_ν) as in §4.2. As to the set of points t_ν satisfying (4.4.18), there are now two alternatives: either

(4.4.19) | Σ_{n ≤ M(ρ_P, t_ν)} ã(n) n^{−1/2−it_ν} | ≥ V/6

for ≫ R points, or there are functions M_1(t), M_2(t) of the type M(ρ_i, t) such that M_1(t) ≍ M_2(t) and

(4.4.20) | S_ϕ(M_1(t_ν), M_2(t_ν)) | ≫ V L^{−1},

with L = log T, for at least ≫ R L^{−1} points t_ν. We are going to derive an upper bound for R in each case.

Consider first the former alternative. We apply the following large values theorem of M.N. Huxley for Dirichlet polynomials (for a proof, see [12] or [15]).

Lemma 4.3. Let N be a positive integer,

(4.4.21) f(s) = Σ_{n=N+1}^{2N} a_n n^{−s},

and let s_r = σ_r + it_r, r = 1, ..., R, be a set of complex numbers such that σ_r ≥ 0, 1 ≤ |t_r − t_{r′}| ≤ T for r ≠ r′, and

| f(s_r) | ≥ V > 0.

Put

G = Σ_{n=N+1}^{2N} |a_n|².

Then

(4.4.22) R ≪ ( G N V^{−2} + T G³ N V^{−6} ) (NT)^ε.

This lemma cannot immediately be applied to the Dirichlet polynomial in (4.4.19), for it is not of the type (4.4.21), and the length of the sum moreover depends on t_ν. To avoid the latter difficulty, we express the Dirichlet polynomials in question by Perron's formula using the function

f(w) = Σ_{n ≤ N} ã(n) n^{−w}

with N = M(ρ_P, 2T). Letting Y = N^{1/2} and α = 1/log N, we have

Σ_{n ≤ M(ρ_P, t_ν)} ã(n) n^{−1/2−it_ν} = (2πi)^{−1} ∫_{α−iY}^{α+iY} f(1/2 + it_ν + w) M(ρ_P, t_ν)^w w^{−1} dw + O(T^δ).

Now, in view of (4.4.19), there is a number X ∈ [1, Y] and numbers N_1, N_2 with N_1 < N_2 ≤ min(2N_1, N) such that, writing

f_0(w) = Σ_{N_1 ≤ n ≤ N_2} ã(n) n^{−w},

we have

(4.4.23) ∫_{−X}^{X} | f_0(1/2 + α + i(t_ν + u)) | du ≫ V X L^{−2}

for ≫ R L^{−2} points t_ν. Next we select a sparse set of R_0 numbers t_ν with

(4.4.24) R_0 ≫ (1 + X)^{−1} R L^{−2}

such that (4.4.23) holds for these, and moreover |t_µ − t_ν| ≥ 3X for µ ≠ ν. Further, by (4.4.23) and similar quantitative arguments as above, we conclude that there exist a number W ≫ V L^{−2}, a subset of cardinality ≫ R_0 L^{−1} of the set of the R_0 indices just selected, and for each ν in this subset a set of ≫ V W^{−1} X L^{−3} points u_{ν,µ} ∈ [−X, X] such that

(4.4.25) W ≤ | f_0(1/2 + α + i(t_ν + u_{ν,µ})) | ≤ 2W

and |u_{ν,λ} − u_{ν,µ}| ≥ 1 for λ ≠ µ.

The system {t_ν + u_{ν,µ}} for all relevant pairs ν, µ is well-spaced in the sense that the mutual distance of these numbers is at least 1, and its cardinality is

≫ R_0 V W^{−1} X L^{−4}.

On the other hand, its cardinality is by Lemma 4.3

≪ ( N_1 W^{−2} + T N_1 W^{−6} ) T^δ ≪ W^{−1} ( N V^{−1} + T N V^{−5} ) T^δ L^{10}.

These two estimates give together

R_0 X ≪ ( N V^{−2} + T N V^{−6} ) T^δ L^{14}.

But R_0 X ≫ R L^{−2} by (4.4.24), so finally

(4.4.26) R ≪ ( N V^{−2} + T N V^{−6} ) T^{2δ} ≪ N V^{−2} T^{2δ}

by (4.4.11). This means that a direct application of Lemma 4.3 gives a correct result in the present case, though the conditions of the lemma are not formally satisfied. Since ρ_P was the last mediant, we have by (4.4.14), (4.4.16), and the definition of N

N ≪ T^{2/3+2δ} R^{2/3}.

Together with (4.4.26), this implies

R ≪ T^{2/3+4δ} R^{2/3} V^{−2},

whence

(4.4.27) R ≪ T^{2+12δ} V^{−6}.

We have now proved the desired estimate for R in the case that (4.4.19) holds for ≫ R indices ν.

Turning to the alternative (4.4.20), we write

(4.4.28) S_ϕ(M_1(t), M_2(t)) = Σ_{i=i_1}^{i_2} S_ϕ( M(ρ_{i+1}, t), M(ρ_i, t) )

for T ≤ t ≤ 2T. The sums S_ϕ here are transformed by Theorem 4.2. The unique fraction r = h/k which lies between ρ_i and ρ_{i+1} will be used as the fraction r in the theorem. Write M = M_1(T) and

(4.4.29) K = M^{1/2} T^{−1/3} R^{−1/3}.

Then by (4.4.14) we have K(r) ≍ K for those r related to the sums in (4.4.28). Since for two consecutive fractions r and r′ of our system we have r′ − r ≤ [K(r′)]^{−1}, it is easily seen that |K(r) − K(r′)| < 1. Thus either r and r′ are consecutive fractions in the Farey system of order [K(r)], or exactly one fraction r″ = h″/k″ with K(r′) < k″ ≤ K(r) of this system lies between them. Then, in any case, |r − ρ_j| ≍ (kK)^{−1} for j = i and i + 1, whence as in (4.2.14) we have

(4.4.30) m_j ≍ k^{−1} K^{−1} M² T^{−1} ≍ k^{−1} M^{3/2} T^{−2/3} R^{1/3}.

Hence m_j ≪ M^{1−δ/3} by (4.4.12), so that the upper bound part of the condition (4.1.10) is satisfied. The other conditions of Theorem 4.2 are easily checked as in the proof of Theorem 4.5. The error terms in Theorem 4.2 are now, by (4.4.30) and (4.4.13), O(k^{1/2} K^{−1/2} L²) and O(K^{3/4} k^{1/4} M^{−1/4} L), and the sum of these for different r is

≪ K² M^{−1} T L² + K³ T M^{−5/4} L ≪ T^{1/3} R^{−2/3} L² + M^{1/4} R^{−1} L ≪ T^{1/3} R^{−2/3} L².

If

T^{1/3} R^{−2/3} ≪ V T^{−δ},

then these error terms can be omitted in (4.4.20). Otherwise

R ≪ T^{1/2+3δ/2} V^{−3/2} ≪ ( T^{1/2+3δ/2} V^{−3/2} )⁴ = T^{2+6δ} V^{−6},

and we have nothing to prove. Hence, in any case, we may omit the error terms in (4.1.28).

Consider now the explicit terms in Theorem 4.2. For the numbers n_j we have by (4.1.11), (4.4.30), and (4.4.29)

(4.4.31) n_j ≍ K^{−2} M ≍ T^{2/3} R^{2/3}.

Denote by S_r(t) the explicit part of the right hand side of (4.1.28) for the sum related to the fraction r. Then by (4.4.20), (4.4.28), and the error estimate just made we have

(4.4.32) | Σ_r S_r(t_ν) | ≫ V L^{−1}

for at least ≫ R L^{−1} numbers t_ν.

At this stage we make a brief digression to the proof of the estimate (4.4.1). So far everything we have done for ϕ(s) goes through for ζ²(s) as well, except that in Theorem 4.1 there are the leading explicit term and the first error term, which have no counterpart in Theorem 4.2. The additional explicit term is ≍ (hk)^{−1/2} L, and the sum of these over the relevant fractions r is ≪ T^{1/6} L, which can be omitted by (4.4.11). The additional error term in (4.1.12) is also negligible, for it is dominated in our case by the second one. So the analogy between the proofs of (4.4.1) and (4.4.2) prevails here, as also henceforth.

It will be convenient to restrict the fractions r = h/k in (4.4.32) suitably. Suppose that K_0 ≤ k ≤ K_0′, where K_0 ≍ K_0′ and K_0 ≪ K, and suppose also that for two different fractions r = h/k, r′ = h′/k′ in our system we have

(4.4.33) | r − r′ | ≫ K_0^{−2} T^{−δ}

and

(4.4.34) 0 < | 1/(hk) − 1/(h′k′) | < K_0′^{−2}.

An interval [K_0, K_0′] and a set of fractions of this kind can be found such that

(4.4.35) | Σ_r S_r(t_ν) | ≫ V T^{−2δ}

for at least R_1 ≫ R T^{−2δ} numbers t_ν. The sum over r here is restricted as indicated above. Let

(4.4.36) Z = K_0² M^{−1} T.

There exists a number R_2 such that those intervals [T + pZ, T + (p + 1)Z] containing at least R_2/2 and at most 2R_2 of the R_1 numbers t_ν contain together ≫ R_1 L^{−1} of these. Omit the other numbers t_ν, and suppose henceforth that the t_ν under consideration lie in these ≪ R_1 R_2^{−1} intervals. Summing (4.4.35) with respect to those t_ν lying in the interval [T + pZ, T + (p + 1)Z], we obtain by Cauchy's inequality

(4.4.37) R_2 V T^{−2δ} ≪ R_2^{1/2} ( Σ_ν | Σ_r S_r(t_ν) |² )^{1/2}.

The following inequality of P.X. Gallagher (see [23], Lemma 1.4) is now applied to the sum over t_ν.

Lemma 4.4. Let T_0 and T ≥ δ > 0 be real numbers, and let A be a finite set in the interval [T_0 + δ/2, T_0 + T − δ/2] such that |a′ − a| ≥ δ for any two distinct numbers a, a′ ∈ A. Let S be a continuous complex valued function in [T_0, T_0 + T] with continuous derivative in (T_0, T_0 + T). Then

Σ_{a ∈ A} |S(a)|² ≤ δ^{−1} ∫_{T_0}^{T_0+T} |S(t)|² dt + ( ∫_{T_0}^{T_0+T} |S(t)|² dt )^{1/2} ( ∫_{T_0}^{T_0+T} |S′(t)|² dt )^{1/2}.

The lengths n_j of the sums in S_r(t) depend linearly on t. However, the variation of n_j in the interval T + pZ ≤ t < T + (p + 1)Z is only O(1), so that (4.4.35) and (4.4.37) remain valid if we redefine S_r(t), taking n_j constant in this interval. Lemma 4.4 then gives

(4.4.38) Σ_ν | Σ_r S_r(t_ν) |² ≪ ∫_0^Z | Σ_r S_r(T + pZ + u) |² du
+ ( ∫_0^Z | Σ_r S_r(T + pZ + u) |² du )^{1/2} ( ∫_0^Z | Σ_r S_r′(T + pZ + u) |² du )^{1/2}.

Let η(u) be a weight function of the type η_J(u) such that JU = Z, η(u) = 1 for 0 ≤ u ≤ Z, η(u) = 0 for u ∉ (−Z, 2Z), and J is a large positive integer. Then

(4.4.39) ∫_0^Z | Σ_r S_r(T + pZ + u) |² du ≤ ∫_{−Z}^{2Z} η(u) | Σ_r S_r |² du

= Σ_{r,r′} ∫_{−Z}^{2Z} η(u) S_r S̄_{r′} du.

We now dispose of the nondiagonal terms. Put t(u) = T + pZ + u. When the integral on the right of (4.4.39) is written as a sum of integrals, recalling the definition (4.1.28) of S_r(t), a typical term is

(4.4.40) ∫_{−Z}^{2Z} η(u) g(u) e(f(u)) du,

where

g(u) = π 2^{−1/2} (hkh′k′)^{−1/4} ã(n) ã(n′) (nn′)^{−1/4} e( n( h̄/k − 1/(2hk) ) − n′( h̄′/k′ − 1/(2h′k′) ) ) t(u)^{−1/2}
× ( 1 + πn/(2hk t(u)) )^{−1/4} ( 1 + πn′/(2h′k′ t(u)) )^{−1/4},

f(u) = (−1)^{j−1} { (t(u)/π) φ( πn/(2hk t(u)) ) + 1/8 } − (−1)^{j′−1} { (t(u)/π) φ( πn′/(2h′k′ t(u)) ) + 1/8 } + (t(u)/π) log(r/r′),

r = h/k, r′ = h′/k′, j and j′ are 1 or 2, and n < n_j, n′ < n_{j′}. Now

(d/du) { t(u) log(r/r′) } = log(r/r′) ≫ K_0^{−2} M T^{−1−δ}

by (4.4.33), while by (4.1.6)

(d/du) { t(u) φ( πn/(2hk t(u)) ) } ≍ ( (hkT)^{−1} n )^{1/2} ≍ K_0^{−1} M^{1/2} n^{1/2} T^{−1},

which is by (4.4.31)

≪ K_0^{−1} K^{−1} M T^{−1} ≪ K_0^{−2} M T^{−1}.

Accordingly,

| f′(u) | ≍ | log(r/r′) | ≫ T^{−δ} Z^{−1}.

We may now apply Theorem 2.3 to the integral (4.4.40) with µ ≍ Z, M ≍ |log(r/r′)| ≫ T^{−δ} Z^{−1}, and U ≍ Z. If J ≍ δ^{−2} and δ is small, then this integral is negligible. A similar argument applies to the integral involving S_r′ in (4.4.38). Consequently, it follows from (4.4.37)–(4.4.39) that

R_2 V T^{−2δ} ≪ R_2^{1/2} ( Σ_r ∫_{−Z}^{2Z} |S_r(t(u))|² du
+ ( Σ_r ∫_{−Z}^{2Z} |S_r(t(u))|² du )^{1/2} ( Σ_r ∫_{−Z}^{2Z} |S_r′(t(u))|² du )^{1/2} )^{1/2}.

Summing these inequalities with respect to the ≪ R_1 R_2^{−1} values of p, we obtain by Cauchy's inequality

R_1 L^{−1} V T^{−2δ} ≪ R_1^{1/2} ( Σ_{p,r} ∫_{−Z}^{2Z} |S_r(T + pZ + u)|² du
+ ( Σ_{p,r} ∫_{−Z}^{2Z} |S_r(T + pZ + u)|² du )^{1/2} ( Σ_{p,r} ∫_{−Z}^{2Z} |S_r′(T + pZ + u)|² du )^{1/2} )^{1/2}.

For each p, the integrals here are expressed by the mean value theorem. Then by (4.4.36) this implies (recall that R T^{−2δ} ≪ R_1 ≤ R)

(4.4.41) R V ≪ R^{1/2} K_0 M^{−1/2} T^{1/2+4δ} L ( Σ_{p,r} |S_r(t_p)|²
+ ( Σ_{p,r} |S_r(t_p)|² )^{1/2} ( Σ_{p,r} |S_r′(t′_p)|² )^{1/2} )^{1/2},

where {t_p} is a set of numbers in the interval (T − Z, 2T + 2Z) such that

(4.4.42) | t_p − t_{p′} | ≥ Z for p ≠ p′,

and similarly for {t′_p}.

The rest of the proof will be devoted to the estimation of the double sums on the right of (4.4.41). For convenience we restrict in S_r and S_r′ the summation to an interval N ≤ n ≤ N′, where N ≍ N′, and take j = 1. The notation S_r is still retained for these sums. The original sum can be written as a sum of O(L) new sums. We are going to show that

(4.4.43) Σ_{p,r} |S_r(t_p)|² ≪ ( K_0^{−2} K^{−2} M² T^{−1} + K_0^{−1} M^{1/2} R ) T^{2δ}.

It will be obvious that the argument of the proof of this gives the same estimate for the similar sum involving S_r′ as well. Then the inequality (4.4.41) becomes

R V ≪ ( K^{−1} M^{1/2} R^{1/2} + K_0^{1/2} M^{−1/4} T^{1/2} R ) T^{6δ} ≪ R^{5/6} T^{1/3+6δ};

recall the definition (4.4.29) of K. This gives

R ≪ T^{2+36δ} V^{−6},

as desired.

To prove the crucial inequality (4.4.43), we apply methods of Halász and van der Corput. The following abstract version of Halász's inequality is due to Bombieri (see [23], Lemma 1.5, or [13], p. 494).

Lemma 4.5. If ξ, ϕ_1, ..., ϕ_R are elements of an inner product space over the complex numbers, then

Σ_{r=1}^{R} |(ξ, ϕ_r)|² ≤ ‖ξ‖² max_{1≤r≤R} Σ_{s=1}^{R} |(ϕ_r, ϕ_s)|.
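Lemma 4.5 is an exact inequality, so it can be tested directly on random vectors; a quick numerical check (dimensions and seed are arbitrary):

```python
import random

def inner(a, b):
    # standard complex inner product (a, b) = sum a_n * conj(b_n)
    return sum(x * y.conjugate() for x, y in zip(a, b))

random.seed(1)
dim, R = 12, 6
rand_vec = lambda: [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(dim)]
xi = rand_vec()
phis = [rand_vec() for _ in range(R)]

lhs = sum(abs(inner(xi, phi)) ** 2 for phi in phis)
rhs = inner(xi, xi).real * max(sum(abs(inner(pr, ps)) for ps in phis) for pr in phis)
assert lhs <= rhs + 1e-9   # the inequality of Lemma 4.5
print(lhs, rhs)
```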

Suppose that the numbers N and N′ above are integers, and define the usual inner product for complex vectors a = (a_N, ..., a_{N′}), b = (b_N, ..., b_{N′}) as

(a, b) = Σ_{n=N}^{N′} a_n b̄_n.

Define vectors

ξ = ( ã(n) n^{−1/4} )_{n=N}^{N′},

ϕ_{p,r} = ( ( 1 + πn/(2hk t_p) )^{−1/4} e( n( h̄/k − 1/(2hk) ) + (t_p/π) φ( πn/(2hk t_p) ) ) )_{n=N}^{N′},

with the convention that if n_1 < N′, then in ϕ_{p,r} the components for n_1 ≤ n ≤ N′ are understood as zeros. Then by (4.1.28) we have

S_r(t_p) ≪ K_0^{−1/2} M^{1/4} T^{−1/2} | (ξ, ϕ_{p,r}) |.

Hence, by Lemma 4.5, there is a pair p′, r′ such that

(4.4.44) Σ_{p,r} |S_r(t_p)|² ≪ K_0^{−1} M^{1/2} N^{1/2} T^{−1+δ} Σ_{p,r} | (ϕ_{p,r}, ϕ_{p′,r′}) |.

If now

(4.4.45) Σ_{p,r} | (ϕ_{p,r}, ϕ_{p′,r′}) | ≪ ( K_0^{−1} K^{−1} M + K M^{−1/2} R T ) T^δ,

then (4.4.43) follows from (4.4.44); recall that N ≪ K^{−2} M by (4.4.31). Hence it remains to prove (4.4.45). Let

f_{p,r}(x) = x ( h̄/k − 1/(2hk) ) + (t_p/π) φ( πx/(2hk t_p) ).

The estimation of |(ϕ_{p,r}, ϕ_{p′,r′})| can be reduced, by partial summation, to that of the exponential sums

Σ_n e( f_{p,r}(n) − f_{p′,r′}(n) ).

Namely, if this sum is at most ∆(p, r) in absolute value whenever n runs over a subinterval of [N, N′], then

( ϕ_{p,r}, ϕ_{p′,r′} ) ≪ ∆(p, r).

So in place of (4.4.45) it suffices to show that

(4.4.46) Σ_{p,r} ∆(p, r) ≪ ( K_0^{−1} K^{−1} M + K M^{−1/2} R T ) T^δ.

The quantity ∆(p, r) will be estimated by van der Corput's method. To this end we need the first two derivatives of the function f_{p,r}(x) − f_{p′,r′}(x) in the interval [N, N′]. By the definition (4.1.6) of the function φ(x) we have

φ′(x) = ( 1 + x^{−1} )^{1/2},
φ″(x) = −(1/2) x^{−3/2} (1 + x)^{−1/2}.

Then by a little calculation it is seen that

(4.4.47) f′_{p,r}(x) − f′_{p′,r′}(x) = h̄/k − 1/(2hk) − h̄′/k′ + 1/(2h′k′)
+ B K_0^{−3} M^{3/2} N^{−1/2} ( hk t_p^{−1} − h′k′ t_{p′}^{−1} + (πx/2) ( 1/(h′k′) − 1/(hk) ) (t_p t_{p′})^{−1} ),

where |B| ≍ 1, and

(4.4.48) f″_{p,r}(x) − f″_{p′,r′}(x) ≍ K_0^{−3} M^{3/2} N^{−3/2} ( hk t_p^{−1} − h′k′ t_{p′}^{−1} + (πx/2) ( t_p^{−2} − t_{p′}^{−2} ) ).
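The two derivative formulas for φ quoted above, φ′(x) = (1 + x^{−1})^{1/2} and φ″(x) = −(1/2) x^{−3/2} (1 + x)^{−1/2}, determine each other; a numerical cross-check by central differences (the step size is an arbitrary choice):

```python
def phi_prime(x):
    return (1 + 1 / x) ** 0.5

def phi_second(x):
    return -0.5 * x ** -1.5 * (1 + x) ** -0.5

h = 1e-6
for x in (0.5, 1.0, 3.0, 10.0):
    numeric = (phi_prime(x + h) - phi_prime(x - h)) / (2 * h)
    # the difference quotient of phi' must reproduce the closed form of phi''
    assert abs(numeric - phi_second(x)) < 1e-6
print("derivative formulas consistent")
```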

We shall estimate ∆(p, r) either by Lemma 4.1 or by the following simple lemma (see [27], Lemmas 4.8 and 4.2). We denote by ‖α‖ the distance of α from the nearest integer.

Lemma 4.6. Let f ∈ C¹[a, b] be a real function with f′(x) monotonic and ‖f′(x)‖ ≥ m > 0. Then

Σ_{a < n ≤ b} e(f(n)) ≪ m^{−1}.
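For a linear phase f(x) = θx the conclusion of Lemma 4.6 reduces to the geometric series bound |Σ e(θn)| ≤ 1/(2‖θ‖), uniformly in the length of the sum; a sketch with an arbitrary θ:

```python
import cmath

def e(x):
    return cmath.exp(2j * cmath.pi * x)

theta = 0.237            # here f(x) = theta * x, so ||f'(x)|| = m = 0.237
for length in (10, 100, 5000):
    s = abs(sum(e(theta * n) for n in range(1, length + 1)))
    # geometric series: |sum| = |sin(pi*theta*length)| / sin(pi*theta) <= 1/(2m)
    assert s <= 1 / (2 * theta) + 1e-9
print("bound of Lemma 4.6 verified for a linear phase")
```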

Turning to the proof of (4.4.46), let us first consider the sum over the pairs p, r′. Trivially,

∆(p′, r′) ≪ N ≪ K^{−2} M ≪ K_0^{−1} K^{−1} M.

If p ≠ p′, then by (4.4.47)

| f′_{p,r′}(x) − f′_{p′,r′}(x) | ≍ K_0^{−1} M^{1/2} N^{−1/2} T^{−1} | t_p − t_{p′} |.

We may apply Lemma 4.6 if

| t_p − t_{p′} | ≪ K_0 M^{−1/2} N^{1/2} T,

and the corresponding part of the sum (4.4.46) is

≪ K_0 M^{−1/2} N^{1/2} T Σ_p | t_p − t_{p′} |^{−1} ≪ K_0^{−1} K^{−1} M T^δ;

recall (4.4.42), (4.4.36), and (4.4.31). Otherwise ∆(p, r′) is estimated by Lemma 4.1. Now by (4.4.48)

| f″_{p,r′}(x) − f″_{p′,r′}(x) | ≍ K_0^{−1} M^{1/2} N^{−3/2} T^{−1} | t_p − t_{p′} | ≫ N^{−1},

so that these values of p contribute

≪ Σ_p ( N ( K_0^{−1} M^{1/2} N^{−3/2} )^{1/2} + N^{1/2} ) ≪ ( K_0^{−1/2} M^{1/4} N^{1/4} + N^{1/2} ) R ≪ M^{1/2} R,

which is clearly ≪ K M^{−1/2} R T.

For the remaining pairs p, r in (4.4.46) we have r ≠ r′. Let p be fixed for a moment. Then

(4.4.49) | hk t_p^{−1} − h′k′ t_{p′}^{−1} | ≫ T^{−1},

save perhaps for one "exceptional" fraction r = h/k; note that by the assumption (4.4.34) no two different fractions in our system have the same value of hk. If (4.4.49) holds, then by (4.4.48)

| f″_{p,r}(x) − f″_{p′,r′}(x) | ≍ K_0^{−3} M^{3/2} N^{−3/2} | hk t_p^{−1} − h′k′ t_{p′}^{−1} |.

Then, if r runs over the non-exceptional fractions,

Σ_r | f″_{p,r}(x) − f″_{p′,r′}(x) |^{−1/2} ≪ K_0^{3/2} M^{−3/4} N^{3/4} T^{1/2} Σ_{m ≪ K_0² M^{−1} T} m^{−1/2} ≪ K_0^{5/2} M^{−5/4} N^{3/4} T ≪ K M^{−1/2} T,

and

Σ_r N | f″_{p,r}(x) − f″_{p′,r′}(x) |^{1/2} ≪ K_0^{3/2} M^{−3/4} N^{1/4} T ≪ K M^{−1/2} T.

Hence by Lemma 4.1

(4.4.50) Σ_r ∆(p, r) ≪ K M^{−1/2} T.

Consider finally ∆(p, r) for the exceptional fraction. We shall need the auxiliary result that for any two different fractions h/k and h′/k′ of our system we have

(4.4.51) ‖ h̄/k − 1/(2hk) − h̄′/k′ + 1/(2h′k′) ‖ ≫ K_0^{−2} M² T^{−2}.

For if k ≠ k′, then the left hand side is ≫ K_0^{−2} by the condition (4.4.34), as also in the case k = k′ if h ≢ h′ (mod k). On the other hand, if k = k′ and h ≡ h′ (mod k), then |h − h′| ≫ K_0, and the left hand side is

| 1/(2hk) − 1/(2h′k) | ≫ (hh′)^{−1} ≫ K_0^{−2} M² T^{−2}.

Let r be the exceptional fraction (for given p), and suppose first that for a certain small constant c

(4.4.52) | hk t_p^{−1} − h′k′ t_{p′}^{−1} | ≤ c K_0 M^{1/2} N^{1/2} T^{−2}

in addition to the inequality

(4.4.53) | hk t_p^{−1} − h′k′ t_{p′}^{−1} | ≪ T^{−1},

which defines the exceptionality. Then, by (4.4.51), the first four terms in (4.4.47) dominate, and we have

‖ f′_{p,r}(x) − f′_{p′,r′}(x) ‖ ≫ K_0^{−2} M² T^{−2}.

Hence by Lemma 4.6

∆(p, r) ≪ K_0² M^{−2} T² ≪ K M^{−3/2} T^{5/3} ≪ K M^{−1/2} T,

since K ≪ M^{1/2} T^{−1/3} and M ≫ T^{2/3}.

On the other hand, if (4.4.52) does not hold, then by (4.4.48) and (4.4.53)

K_0^{−3} M^{3/2} N^{−3/2} T^{−1} ≫ | f″_{p,r}(x) − f″_{p′,r′}(x) | ≫ K_0^{−2} M² N^{−1} T^{−2}.

Hence by Lemma 4.1

∆(p, r) ≪ K_0^{−3/2} M^{3/4} N^{1/4} T^{−1/2} + K_0 M^{−1} N^{1/2} T ≪ M T^{−1/2} + M^{−1/2} T ≪ M^{−1/2} T.

Now we sum the last estimations and those in (4.4.50) with respect to p to obtain

Σ_{p,r; r ≠ r′} ∆(p, r) ≪ K M^{−1/2} R T.

Taking also into account the previous estimations in the case r = r′, we complete the proof of (4.4.46), and also that of Theorem 4.7.

Notes

Theorems 4.1 and 4.2 were proved in [16] for integral values of r. The results of §4.1 as they stand were first worked out in [17].

In §§4.2–4.4 we managed (just!) to dispense with weighted versions of transformation formulae. The reason is that in all the problems touched upon, relatively large values of Dirichlet polynomials and exponential sums occurred, and therefore even the comparatively weak error terms of the ordinary transformation formulae were not too large. But in a context involving also small or "expected" values of sums it becomes necessary to switch to smoothed sums in order to reduce error terms. A challenging application of this kind would be proving the mean value theorems

∫_T^{T+T^{2/3}} |ζ(1/2 + it)|⁴ dt ≪ T^{2/3+ε},

∫_T^{T+T^{2/3}} |ϕ(k/2 + it)|² dt ≪ T^{2/3+ε},

respectively due to H. Iwaniec [14] and A. Good [9] (a corollary of (0.11)), in a unified way using methods of this chapter.

The estimate (4.4.2) for the sixth moment of ϕ(k/2 + it) actually gives the estimate for ϕ(k/2 + it) in Theorem 4.5 as a corollary, so that strictly speaking the latter theorem is superfluous. However, we found it expedient to work out the estimate of ϕ(k/2 + it) in a simple way in order to illustrate the basic ideas of the method, and also with the purpose of providing a model or starting point for the more elaborate proofs of Theorems 4.6 and 4.7, and perhaps for other applications to come.

The method of §4.4 can probably be applied to give results to the effect that an exponential sum involving d(n) or a(n) which depends on a parameter X is "seldom" large as a function of X. A typical example is the sum (4.3.38). An analogue of Theorem 4.7 would be

∫_{X_1}^{X_2} | Σ_{M_1 ≤ m ≤ M_2} b(m) e(x/m) |⁶ dx ≪ X_1^{5/2+ε}

for M_1 ≍ M_2, X_1 ≍ X_2, X_1 ≫ M_1², and b(m) = d(m) or ã(m).

Bibliography

[1] T.M. APOSTOL: Modular Functions and Dirichlet Series in Number Theory. Graduate Texts in Math. 41, Springer, 1976.

[2] F.V. ATKINSON: The mean-value of the Riemann zeta-function. Acta Math. 81 (1949), 353–376.

[3] B.C. BERNDT: Identities involving the coefficients of a class of Dirichlet series. I. Trans. Amer. Math. Soc. 137 (1969), 354–359; II. ibid, 361–374; III. ibid, 146 (1969), 323–348; IV. ibid, 149 (1970), 179–185; V. ibid, 160 (1971), 139–156; VI. ibid, 157–167; VII. ibid, 201 (1975), 247–261.

[4] B.C. BERNDT: The Voronoi summation formula. The Theory of Arithmetic Functions, 21–36. Lecture Notes in Math. 251, Springer 1972.

[5] P. DELIGNE: La conjecture de Weil. I. Inst. Hautes Études Sci. Publ. Math. 43 (1974), 273–307.

[6] A.L. DIXON and W.L. FERRAR: Lattice-point summation formulae. Quart. J. Math. Oxford 2 (1931), 31–54.

[7] C. EPSTEIN, J.L. HAFNER and P. SARNAK: Zeros of L-functions attached to Maass forms. Math. Z. 190 (1985), 113–128.

[8] T. ESTERMANN: On the representation of a number as the sum of two products. Proc. London Math. Soc. (2) 31 (1930), 123–133.


[9] A. GOOD: The square mean of Dirichlet series associated with cusp forms. Mathematika 29 (1982), 278–295.

[10] S.W. GRAHAM: An algorithm for computing optimal exponent pairs. J. London Math. Soc. (2) 33 (1986), 203–218.

[11] D.R. HEATH-BROWN: The twelfth power moment of the Riemann zeta-function. Quart. J. Math. Oxford 29 (1978), 443–462.

[12] M.N. HUXLEY: Large values of Dirichlet polynomials III. Acta Arith. 26 (1975), 435–444.

[13] A. IVIC: The Riemann Zeta-function. John Wiley & Sons, New York, 1985.

[14] H. IWANIEC: Fourier Coefficients of Cusp Forms and the Riemann Zeta-function. Séminaire de Théorie des Nombres, Univ. Bordeaux 1979/80, exposé no. 18, 36 pp.

[15] M. JUTILA: Zero-density estimates for L-functions. Acta Arith. 32 (1977), 55–62.

[16] M. JUTILA: Transformation formulae for Dirichlet polynomials. J. Number Theory 18 (1984), 135–156.

[17] M. JUTILA: Transformation formulae for Dirichlet polynomials II. (unpublished)

[18] M. JUTILA: On exponential sums involving the divisor function. J. Reine Angew. Math. 355 (1985), 173–190.

[19] M. JUTILA: On the approximate functional equation for ζ²(s) and other Dirichlet series. Quart. J. Math. Oxford (2) 37 (1986), 193–209.

[20] N.V. KUZNETSOV: Petersson's conjecture for cusp forms of weight zero and Linnik's conjecture. Sums of Kloosterman sums. Mat. Sb. 111 (1980), 334–383. (In Russian.)

[21] H. MAASS: Über eine neue Art von nichtanalytischen automorphen Funktionen und die Bestimmung Dirichletscher Reihen durch Funktionalgleichungen. Math. Ann. 121 (1949), 141–183.

[22] T. MEURMAN: On the mean square of the Riemann zeta-function. Quart. J. Math. Oxford (to appear).

[23] H.L. MONTGOMERY: Topics in Multiplicative Number Theory. Lecture Notes in Math. 227, Springer, 1971.

[24] R.A. RANKIN: Contributions to the theory of Ramanujan’s func- tion τ(n) and similar arithmetical functions II. The order of Fourier coefficients of integral modular forms. Math. Proc. Cambridge Phil. Soc. 35 (1939), 357–372.

[25] P. SHIU: A Brun-Titchmarsh theorem for multiplicative functions. J. Reine Angew. Math. 313 (1980), 161–170.

[26] E.C. TITCHMARSH: Introduction to the Theory of Fourier Integrals. Oxford Univ. Press, 1948.

[27] E.C. TITCHMARSH: The Theory of the Riemann Zeta-function. Oxford Univ. Press, 1951.

[28] K.-C. TONG: On divisor problems III. Acta Math. Sinica 6 (1956), 515–541. (in Chinese)

[29] G.N. WATSON: A Treatise on the Theory of Bessel Functions (2nd ed.). Cambridge Univ. Press, 1944.

[30] J.R. WILTON: A note on Ramanujan's arithmetical function τ(n). Proc. Cambridge Phil. Soc. 25 (1929), 121–129.