On the Atkinson formula for the ζ function
Daniele Dona, Sebastian Zuniga Alterman

June 23, 2021

arXiv:2104.14253v2 [math.NT] 21 Jun 2021

Abstract. Thanks to Atkinson (1938), we know the first two terms of the asymptotic formula for the square mean integral value of the Riemann zeta function $\zeta$ on the critical line. Following both his work and the approach of Titchmarsh (1986), we present an explicit version of the Atkinson formula, improving on a recent bound by Simonič (2020). Moreover, we extend the Atkinson formula to the range $\Re(s) \in \left(\frac{1}{4}, \frac{3}{4}\right)$, giving an explicit bound for the square mean integral value of $\zeta$ and improving on a bound by Helfgott and the authors (2019). We use mostly classical tools, such as the approximate functional equation and the explicit convexity bounds of the zeta function given by Backlund (1918).

1 Introduction

The search for meaningful bounds for $\zeta(s)$ in the range $0 < \Re(s) < 1$ has spanned more than a century. The classical conjecture on $L^\infty$ bounds, called the Lindelöf hypothesis, states that $\left|\zeta\left(\frac{1}{2} + it\right)\right| = o_\varepsilon(|t|^\varepsilon)$ for any $\varepsilon > 0$; by Hadamard's three-line theorem and the functional equation of $\zeta$, this implies in particular that $|\zeta(\tau + it)| = o_\varepsilon(|t|^\varepsilon)$ for $\frac{1}{2} < \tau < 1$ and $|\zeta(\tau + it)| = o_\varepsilon(|t|^{\frac{1}{2} - \tau + \varepsilon})$ for $0 < \tau < \frac{1}{2}$.

Bounds of order $|t|^{\frac{1-\tau}{2} + \varepsilon}$ are called convexity bounds, and bounds with even lower exponent are called subconvexity bounds. The current best bound is due to Bourgain [6], who showed that $\left|\zeta\left(\frac{1}{2} + it\right)\right| = o_\varepsilon(|t|^{\frac{13}{84} + \varepsilon})$. Explicit convex bounds are given in [4] and [15], and explicit subconvex bounds are given in [8] and [20]. The Lindelöf hypothesis itself, however, is still unproved, be it in explicit form or not.

On the other hand, $L^2$ bounds are easier to obtain. Classical non-explicit versions have been known for a long time, at least since Landau (see [13, Vol. 2, 806–819, 905–906]).
2010 Mathematics Subject Classification: 11M06.
Key words and phrases: Riemann zeta function, $L^2$ norm, mean square bounds, explicit bounds, mean value theorem.

Currently, for $\tau = \frac{1}{2}$ we know that

$$\int_0^T \left|\zeta\left(\tfrac{1}{2} + it\right)\right|^2 dt = T\log(T) + (2\gamma - 1 - \log(2\pi))T + \mathcal{E}(T) \tag{1.1}$$

for some function $\mathcal{E}(T) = O(T^{\frac{35}{108}+\varepsilon})$ [10, (15.14)] and $\mathcal{E}(T) = \Omega(T^{\frac{1}{4}})$ [9]. Explicit versions of (1.1) have appeared more recently in [7] and [24], both based on the approximate functional equation for $\zeta$: the error term $\mathcal{E}(T)$ in the latter, of order $T^{\frac{3}{4}}\log(T)$, was the record in the explicit case.

Moreover, for $\frac{1}{2} < \tau < \frac{3}{4}$, Matsumoto [16] proved that

$$\int_0^T |\zeta(\tau + it)|^2 dt = \zeta(2\tau)T + \frac{\zeta(2-2\tau)}{(2-2\tau)(2\pi)^{1-2\tau}} T^{2-2\tau} + \mathcal{E}_\tau(T) \tag{1.2}$$

for $\mathcal{E}_\tau(T) = O_\tau(T^{\frac{1}{1+4\tau}}\log^2(T))$ and $\mathcal{E}_\tau(T) = \Omega_\tau(T^{\frac{3}{4}-\tau})$; later, (1.2) was extended to $\frac{1}{2} < \tau < 1$ by Matsumoto and Meurman [18]. An explicit version of (1.2) has appeared in [7], whose error term of order $\max\{T^{2-2\tau}\log(T), \sqrt{T}\}$ absorbs the second main term, and it was the record in the explicit case. Any bound of the form (1.2) can be extended to the range $0 < \tau < \frac{1}{2}$ using the functional equation of $\zeta(s)$, and vice versa.

In the present paper, we give an explicit version of (1.1) based on a procedure elaborated by Atkinson [2] and Titchmarsh [27, §7.4], improving on the order of $\mathcal{E}(T)$ to $\sqrt{T}\log^2(T)$. Moreover, following the same procedure, we give an explicit version of (1.2) in the range $\frac{1}{4} \le \tau < \frac{1}{2}$ with an error term $\mathcal{E}_\tau(T)$ of order $T^{\frac{3}{2}-2\tau}\log^2(T)$, and then in the range $\frac{1}{2} < \tau \le \frac{3}{4}$ with an error term of order $T^{2\tau-\frac{1}{2}}\log^2(T)$.

We have already mentioned the $O$ notation and its derivatives: for two real-valued functions $f, g$, the notation $f(x) = o(g(x))$ means that for any $C > 0$ there is $x_0$ such that for all $x > x_0$ we have $|f(x)| < Cg(x)$; an indexed $o_\varepsilon$ indicates that the constant $x_0$ may depend on the variable $\varepsilon$.
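As a quick numerical aside (ours, not part of the paper's argument), one can evaluate the two main terms of (1.2) for a sample $\tau$ and observe that the second term, of order $T^{2-2\tau}$, is eventually dominated by the first. The helper `zeta_real` below is a minimal truncated Euler–Maclaurin approximation of $\zeta(s)$ for real $s > 0$, $s \ne 1$; both function names are our own.

```python
import math

def zeta_real(s, N=10**4):
    # Truncated Euler-Maclaurin approximation of zeta(s) for real s > 0, s != 1:
    # zeta(s) ~ sum_{n<=N} n^(-s) + N^(1-s)/(s-1) - N^(-s)/2 + s*N^(-s-1)/12.
    partial = sum(n ** -s for n in range(1, N + 1))
    return partial + N ** (1 - s) / (s - 1) - N ** -s / 2 + s * N ** (-s - 1) / 12

def matsumoto_main_terms(tau, T):
    # The two main terms of (1.2) for 1/2 < tau < 3/4:
    # zeta(2*tau)*T  and  zeta(2-2*tau)/((2-2*tau)*(2*pi)^(1-2*tau)) * T^(2-2*tau).
    first = zeta_real(2 * tau) * T
    second = (zeta_real(2 - 2 * tau)
              / ((2 - 2 * tau) * (2 * math.pi) ** (1 - 2 * tau))
              * T ** (2 - 2 * tau))
    return first, second
```

For instance, at $\tau = 0.6$ the second term has order $T^{0.8}$, so for large $T$ its absolute value falls well below the first term $\zeta(1.2)\,T$.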
Following the Hardy–Littlewood convention, $f(x) = \Omega(g(x))$ means instead that there is $C > 0$ such that for any $x_0$ there is some $x > x_0$ with $|f(x)| > Cg(x)$.

However, for our purposes we shall use more generally the complex $O$ and $O^*$ notation. Let $f : \mathbb{C} \to \mathbb{C}$. We write $f(s) = O(g(\Re(s), \Im(s)))$ as $s \to z$ ($z = \pm\infty$ is allowed) for a real-valued function $g$ such that $g > 0$ in a neighborhood of $(\Re(z), \Im(z))$ to mean that there is an independent constant $C$ such that $|f(s)| \le Cg(\Re(s), \Im(s))$ in that neighborhood. We write $f(s) = O^*(h(\Re(s), \Im(s)))$ as $s \to z$ to indicate that $|f(s)| \le h(\Re(s), \Im(s))$ in a neighborhood of $z$.

With the notation above at hand, our main result reads as follows.

Theorem 1.1. Let $T \ge T_0 = 100$. Then

$$\int_0^T \left|\zeta\left(\tfrac{1}{2} + it\right)\right|^2 dt = T\log(T) + (2\gamma - 1 - \log(2\pi))T + O^*\left(18.169\,\sqrt{T}\log^2(T)\right).$$

Furthermore, if $\frac{1}{4} \le \tau < \frac{1}{2}$, then

$$\int_0^T |\zeta(\tau + it)|^2 dt = \frac{\zeta(2-2\tau)}{(2-2\tau)(2\pi)^{1-2\tau}} T^{2-2\tau} + \zeta(2\tau)T + O^*\left(\frac{2.215}{\frac{1}{2} - \tau}\, T^{\frac{3}{2}-2\tau}\log^2(T)\right),$$

whereas, if $\frac{1}{2} < \tau \le \frac{3}{4}$, then

$$\int_0^T |\zeta(\tau + it)|^2 dt = \zeta(2\tau)T + \frac{(2\pi)^{2\tau-1}\zeta(2-2\tau)}{2-2\tau} T^{2-2\tau} + O^*\left(\frac{4.613}{\tau - \frac{1}{2}}\, T^{2\tau-\frac{1}{2}}\log^2(T)\right).$$

For more precise error terms, see Theorem 3.4 and Corollary 3.5. For quantitatively better error terms with higher values of $T_0$ and for results in a wider range of $\tau$, see §4.

One might potentially improve on the order of the error term by making later works explicit instead. Atkinson's later formula [3] offers an estimate for $\mathcal{E}(T)$ by way of summations, exact up to error $O(\log^2(T))$, based on Voronoï's summation formula for $\sum_{n \le X} d(n)$ [28]: it would be feasible to bound such expressions, at the cost of considerably more effort. One could make the estimate for $\frac{1}{2} < \Re(s) < 1$ of Matsumoto and Meurman [18] explicit too, and retrieve error bounds for $0 < \Re(s) < \frac{1}{2}$ via the functional equation. Other possibilities include following Titchmarsh [26], Balasubramanian [5], or Ivić [10, §15].
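To give a concrete sense of the summatory function $\sum_{n \le X} d(n)$ that enters Voronoï's formula, here is a minimal sketch (our own illustration, not taken from the paper) computing it exactly by Dirichlet's hyperbola method and comparing it against the classical main terms $X\log X + (2\gamma - 1)X$ of Dirichlet's divisor asymptotic.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def divisor_summatory(X):
    # D(X) = sum_{n<=X} d(n), computed exactly via Dirichlet's hyperbola method:
    # D(X) = 2 * sum_{d<=sqrt(X)} floor(X/d) - floor(sqrt(X))^2.
    r = math.isqrt(X)
    return 2 * sum(X // d for d in range(1, r + 1)) - r * r

def divisor_summatory_naive(X):
    # Same sum by the identity D(X) = sum_{d<=X} floor(X/d), for cross-checking:
    # each d <= X divides exactly floor(X/d) of the integers up to X.
    return sum(X // d for d in range(1, X + 1))

X = 10 ** 5
main = X * math.log(X) + (2 * EULER_GAMMA - 1) * X  # Dirichlet's main terms
error = divisor_summatory(X) - main  # O(sqrt(X)); Voronoi-type formulas refine this
```

The hyperbola method needs only about $\sqrt{X}$ operations, which is what makes sums of this shape tractable in explicit estimates.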
For an exposition of some of the aforementioned procedures, we refer the reader to Matsumoto's survey [17].

Added in proof. Shortly after the appearance of the original version of the present paper, Simonič and Starichkova [25] announced that they have given an explicit version of (1.1) with an error term of order $T^{\frac{1}{3}}\log^{\frac{5}{3}}(T)$. Their method follows the route through Atkinson's later paper [3] that we described above: given the constants appearing in their result, the bound we present here for $\Re(s) = \frac{1}{2}$ yields a better error term up to at least $T = 10^{30}$.

1.1 Strategy and layout of the presented work

As already anticipated, our strategy follows the ideas of Atkinson [2] and Titchmarsh [27, §7.4]. At their core, both results use nothing more than an approximate formula for $\zeta$ and several instances of partial summation to estimate a number of weighted sums of the number-of-divisors function $d(n)$. The latter emerge by applying Dirichlet's convolution to rewrite $\zeta^2$, and by appropriately transforming and splitting the integral's contour via the residue theorem.

In particular, we stay closer to Titchmarsh's ideas in some specific choices of contour for intermediate results (such as Lemma 3.9), which in the case $\Re(s) = \frac{1}{2}$ lead to saving a factor of $\log(T)$ in the error term of one of the main integrals that we estimate (see Proposition 3.1). However, later we diverge from Titchmarsh's way, as many simplifications are introduced by applying $d(n) = O_\varepsilon(n^\varepsilon)$, leading to a final error term of order $O(T^{\frac{1}{2}+\varepsilon})$. Indeed, since we aim for an error term of order $O(\sqrt{T}\log^2(T))$, we adopt Atkinson's approach, dealing with $d(n)$ by partial summation.

We follow essentially the same strategy when working in the range $\frac{1}{4} \le \Re(s) < \frac{1}{2}$: in particular, in the above, we work with the generalized sum-of-divisors functions $d_a : n \mapsto \sum_{d \mid n} d^a$ for $a \in \mathbb{R}$, of which the divisor function $d = d_0$ is a particular case.
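The two ingredients just mentioned, the generalized sum-of-divisors function $d_a$ and partial (Abel) summation applied to weighted sums of it, can be sketched as follows. This is our own brute-force illustration of the discrete Abel summation identity, with hypothetical function names; it is not the paper's implementation.

```python
def d_a(n, a):
    # Generalized sum-of-divisors function d_a(n) = sum over d | n of d^a;
    # a = 0 recovers the number-of-divisors function d(n).
    return sum(d ** a for d in range(1, n + 1) if n % d == 0)

def weighted_sum_direct(N, a, f):
    # sum_{n<=N} d_a(n) * f(n), computed term by term.
    return sum(d_a(n, a) * f(n) for n in range(1, N + 1))

def weighted_sum_partial(N, a, f):
    # The same sum via partial (Abel) summation:
    # sum_{n<=N} c_n f(n) = C(N) f(N) - sum_{n<N} C(n) * (f(n+1) - f(n)),
    # where C(x) = sum_{n<=x} c_n and c_n = d_a(n).
    C = 0.0
    acc = 0.0
    for n in range(1, N):
        C += d_a(n, a)
        acc -= C * (f(n + 1) - f(n))
    C += d_a(N, a)
    return C * f(N) + acc
```

The point of the rewriting is that the weighted sum now depends only on the summatory function $C(x)$, for which explicit estimates are available, rather than on the irregular individual values $d_a(n)$.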
Our process shows that Atkinson's and Titchmarsh's ideas can be successfully extended outside of the critical line while yielding error terms of smaller order than the two theoretically predicted main terms. As a matter of fact, the method applies in principle to the whole critical strip; however, the error terms may be larger than one of the main terms, which is why we decided to concentrate on the regions where this does not happen. The numerical estimates improve as well when restricting ourselves to the smaller range $\frac{1}{4} \le \Re(s) < \frac{1}{2}$, when compared to $0 < \Re(s) < \frac{1}{2}$.