arXiv:2104.14253v2 [math.NT] 21 Jun 2021

On the Atkinson formula for the ζ function

Daniele Dona, Sebastian Zuniga Alterman

June 23, 2021

Abstract

Thanks to Atkinson (1938), we know the first two terms of the asymptotic formula for the square mean integral value of the Riemann zeta function ζ on the critical line. Following both his work and the approach of Titchmarsh (1986), we present an explicit version of the Atkinson formula, improving on a recent bound by Simonič (2020). Moreover, we extend the Atkinson formula to the range 1/4 ≤ ℜ(s) ≤ 3/4, giving an explicit bound for the square mean integral value of ζ and improving on a bound by Helfgott and the authors (2019). We use mostly classical tools, such as the approximate functional equation and the explicit convexity bounds of the zeta function given by Backlund (1918).

Key words and phrases: Riemann zeta function, subconvexity bounds, L² norm, mean square bounds, explicit bounds.

2010 Mathematics Subject Classification: 11M06.

1 Introduction

The search for meaningful bounds for ζ(1/2 + it) has spanned more than a century. The classical conjecture on the subject, the Lindelöf hypothesis, states that ζ(1/2 + it) = o_ε(|t|^ε) for any ε > 0; by Hadamard's three-line theorem and the functional equation of ζ, this implies in particular that ζ(τ + it) = o_ε(|t|^ε) in the range 1/2 < τ < 1. The Lindelöf hypothesis itself, however, is still unproved, be it in explicit or in non-explicit form. On the other hand, bounds of order |t|^{1/4+ε} for ζ(1/2 + it), called convexity bounds, and bounds with even lower exponent, called subconvexity bounds, are easier to obtain. Classical non-explicit versions have been known for a long time, at least since Landau (see [13, Vol. 2, 806-819, 905-906]); explicit convexity bounds are given in [4] and [15], and explicit subconvexity bounds in [8] and [20]. The current best bound is due to Bourgain [6], who showed that ζ(1/2 + it) = O_ε(|t|^{13/84+ε}) for any ε > 0.

Thanks to Atkinson [2], we know the first two terms of the asymptotic formula for the mean square of ζ on the critical line:

    ∫₀^T |ζ(1/2 + it)|² dt = T log(T/2π) + (2γ − 1)T + E(T)    (1.1)

for some function E(T) = O(T^{35/108+ε}) [10, (15.14)] and E(T) = Ω(T^{1/4}) [9]. Explicit versions of (1.1) have appeared more recently in [7] and [24], both based on the approximate functional equation for ζ: the error term E(T) in the latter, of order T^{3/4} log(T), was the record in the explicit case.

Moreover, for 1/2 < τ < 3/4, Matsumoto [16] proved that

    ∫₀^T |ζ(τ + it)|² dt = ζ(2τ)T + (ζ(2−2τ)/((2−2τ)(2π)^{1−2τ})) T^{2−2τ} + E_τ(T)    (1.2)

for E_τ(T) = O_τ(T^{1/(1+4τ)} log²(T)) and E_τ(T) = Ω_τ(T^{3/4−τ}); later, (1.2) was extended to 1/2 < τ < 1 by Matsumoto and Meurman [18]. An explicit version of (1.2) has appeared in [7], whose error term of order max{T^{2−2τ} log(T), √T} absorbs the second main term, and it was the record in the explicit case. Any bound of the form (1.2) can be extended to the range 0 < τ < 1/2 using the functional equation of ζ(s), and vice versa.
In the present paper, we give an explicit version of (1.1) based on a procedure elaborated by Atkinson [2] and Titchmarsh [27, §7.4], improving on the order of E(T) to √T log²(T). Moreover, following the same procedure, we give an explicit version of (1.2) in the range 1/4 ≤ τ < 1/2 with an error term E_τ(T) of order T^{3/2−2τ} log²(T), and then in the range 1/2 < τ ≤ 3/4 with an error term of order T^{2τ−1/2} log²(T).

We have already mentioned the O notation and its derivates: for two real-valued functions f, g, the notation f(x) = o(g(x)) means that for any C > 0 there is x₀ such that for all x > x₀ we have |f(x)| < Cg(x); an indexed o_ε indicates that the constant x₀ may depend on the variable ε. Following the Hardy-Littlewood convention, f(x) = Ω(g(x)) means instead that there is C > 0 such that for any x₀ there is some x > x₀ with |f(x)| > Cg(x).

However, for our purposes we shall use more generally the complex O and O* notation. Let f : ℂ → ℂ. We write f(s) = O(g(ℜ(s), ℑ(s))) as s → z (z = ±∞ is allowed) for a real-valued function g such that g > 0 in a neighborhood of (ℜ(z), ℑ(z)) to mean that there is an independent constant C such that |f(s)| ≤ Cg(ℜ(s), ℑ(s)) in that neighborhood. We write f(s) = O*(h(ℜ(s), ℑ(s))) as s → z to indicate that |f(s)| ≤ h(ℜ(s), ℑ(s)) in a neighborhood of z.

With the notation above at hand, our main result reads as follows.

Theorem 1.1. Let T ≥ T₀ = 100. Then

    ∫₀^T |ζ(1/2 + it)|² dt = T log(T) + (2γ − 1 − log(2π))T + O*(18.169 √T log²(T)).
Furthermore, if 1/4 ≤ τ < 1/2, then

    ∫₀^T |ζ(τ + it)|² dt = (ζ(2−2τ)/((2−2τ)(2π)^{1−2τ})) T^{2−2τ} + ζ(2τ)T + O*((2.215/(1/2 − τ)²) T^{3/2−2τ} log²(T)),

whereas, if 1/2 < τ ≤ 3/4, then

    ∫₀^T |ζ(τ + it)|² dt = ζ(2τ)T + ((2π)^{2τ−1} ζ(2−2τ)/(2−2τ)) T^{2−2τ} + O*((4.613/(τ − 1/2)²) T^{2τ−1/2} log²(T)).

For more precise error terms, see Theorem 3.4 and Corollary 3.5. For quantitatively better error terms with higher values of T₀ and for results in a wider range of τ, see §4.

One might potentially improve on the order of the error term by making later works explicit instead. Atkinson's later formula [3] offers an estimate for E(T) by way of summations, exact up to error O(log²(T)), based on Voronoï's summation formula for Σ_{n≤X} d(n) [28]: it would be feasible to bound such expressions, at the cost of considerably more effort. One could make the estimate for 1/2 < ℜ(s) < 1 of Matsumoto and Meurman [18] explicit too, and retrieve error bounds for 0 < ℜ(s) < 1/2 via the functional equation. Other possibilities include following Titchmarsh [26], Balasubramanian [5], or Ivić [10, §15]. For an exposition of some of the aforementioned procedures, we refer the reader to Matsumoto's survey [17].

Added in proof. Shortly after the appearance of the original version of the present paper, Simonič and Starichkova [25] announced that they have given an explicit version of (1.1) with an error term of order T^{1/3} log^{5/3}(T). Their method follows the route through Atkinson's later paper [3] that we described above: given the constants appearing in their result, the bound we present here for ℜ(s) = 1/2 yields a better error term up to at least T = 10^{30}.
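Theorem 1.1 lends itself to a quick numerical illustration. The sketch below is ours and is not part of the paper's rigorous ARB/Sage computations: it evaluates ζ(1/2 + it) through Borwein's alternating-series acceleration of the Dirichlet η function (an assumed standard algorithm, not the method of the paper) and compares a Simpson approximation of the mean square integral at T = T₀ = 100 against the two main terms; all function names and the truncation choice n = 160 are ours.

```python
import math
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def _borwein_coeffs(n):
    # d_k = n * sum_{j<=k} (n+j-1)! 4^j / ((n-j)! (2j)!), computed exactly;
    # returned as the normalized floats (d_n - d_k)/d_n, all in (0, 1].
    d, acc = [], Fraction(0)
    for j in range(n + 1):
        acc += Fraction(math.factorial(n + j - 1) * 4**j,
                        math.factorial(n - j) * math.factorial(2 * j))
        d.append(n * acc)
    return [float((d[n] - dk) / d[n]) for dk in d[:n]]

def zeta(s, n=160):
    # zeta(s) for Re(s) >= 1/2 via Borwein's acceleration of the alternating
    # eta series; n = 160 keeps the truncation error negligible for |Im(s)| <= 100.
    c = _borwein_coeffs(n)
    eta = sum((-1) ** k * c[k] / complex(k + 1) ** s for k in range(n))
    return eta / (1 - 2 ** (1 - s))

def mean_square(T, steps=2000):
    # composite Simpson's rule for the integral of |zeta(1/2+it)|^2 over [0, T]
    h = T / steps
    f = [abs(zeta(complex(0.5, k * h))) ** 2 for k in range(steps + 1)]
    return h / 3 * (f[0] + f[-1] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]))

T = 100.0  # T_0 of Theorem 1.1
EULER_GAMMA = 0.5772156649015329
main_term = T * math.log(T) + (2 * EULER_GAMMA - 1 - math.log(2 * math.pi)) * T
err_bound = 18.169 * math.sqrt(T) * math.log(T) ** 2
integral = mean_square(T)
assert abs(integral - main_term) <= err_bound
```

The discrepancy seen this way stays far below the allowed bound 18.169 √T log²(T) ≈ 3.8·10³ at T = 100, consistent with the small true size of E(T).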
1.1 Strategy and layout of the presented work

As already anticipated, our strategy follows the ideas of Atkinson [2] and Titchmarsh [27, §7.4]. At their core, both results use nothing more than an approximate formula for ζ and several instances of partial summation to estimate a number of weighted sums of the number-of-divisors function d(n). The latter emerge by applying Dirichlet's convolution to rewrite ζ², and by appropriately transforming and splitting the integral's contour via the residue theorem.

In particular, we stay closer to Titchmarsh's ideas in some specific choices of contour for intermediate results (such as Lemma 3.9), which in the case ℜ(s) = 1/2 lead to saving a factor of log(T) in the error term of one of the main integrals that we estimate (see Proposition 3.1). However, later we diverge from Titchmarsh's way, as many simplifications are introduced by applying d(n) = O_ε(n^ε), leading to a final error term of order O(T^{1/2+ε}). Indeed, since we aim for an error term of order O(√T log²(T)), we adopt Atkinson's approach, dealing with d(n) by partial summation.

We follow essentially the same strategy when working in the range 1/4 ≤ ℜ(s) < 1/2: in particular, in the above, we work with the generalized sum-of-divisors functions d_a : n ↦ Σ_{d|n} d^a for a ∈ ℝ, of which the divisor function d = d₀ is a particular case. Our process shows that Atkinson's and Titchmarsh's ideas can be successfully extended outside of the critical line while yielding error terms of smaller order than the theoretically predicted two main terms. As a matter of fact, the method applies in principle to the whole critical strip: however, the error terms may be larger than one of the main terms, which is why we decided to concentrate on the regions where this does not happen. The numerical estimates improve as well when restricting ourselves to the smaller range 1/4 ≤ ℜ(s) < 1/2, when compared to 0 < ℜ(s) < 1/2.

In §2, we collect explicit versions of some classical bounds related to the Riemann ζ function. In §3, we split the integral in Theorem 1.1 into several main pieces; then we estimate each of them in subsequent subsections, in which the relevant weighted sums of d_a(n) are also bounded. We reserve §4 for commenting about our numerical choices and computations: in it, we also report other versions of the multiplicative constant in the error terms of Theorem 1.1 for different choices of T₀, as well as showing a result for the whole range 0 < ℜ(s) < 1/2.

For the sake of rigor, in computing the constants in this article, we have used interval arithmetic implemented by the ARB package [11], which we used via Sage [23]. The necessary code is embedded within the TeX file of the paper itself via SageTeX.
2 Bounds on functions related to the Riemann Zeta function
Let us recall that the Gamma function Γ is defined for all s ∈ ℂ such that ℜ(s) > 0 as Γ : s ↦ ∫₀^∞ t^{s−1} e^{−t} dt. This function can be extended meromorphically to ℂ, with simple poles on the set {0, −1, −2, −3, ...} and vanishing nowhere. Where well-defined, it satisfies the relationship Γ(s+1) = sΓ(s), so one says that Γ extends the factorial function to the complex numbers. Moreover, this function is closely related to the ζ function, by means of the functional equation, valid for all s ∈ ℂ \ {0, 1},

    ζ(1−s) = χ(1−s)ζ(s) = 2(2π)^{−s} cos(πs/2) Γ(s) ζ(s),    (2.1)

where χ can be extended to a meromorphic function with a simple pole at 1.

We will need estimates for the functions involved in the functional equation above. Firstly, concerning the asymptotic behavior of Γ, we have the following.

Theorem 2.1 (Explicit Stirling's formula). Let s = σ + it ∈ ℂ \ (−∞, 0]. We have

    (A1) Γ(s) = √(2π) s^{s−1/2} e^{−s} e^{1/(12s) + O*(1/(60|s|(|s|+σ)))},

    (A2) |Γ(s)| = √(2π) |s|^{σ−1/2} e^{−(π/2)|t| − σ} e^{O*(1_{t≠0}(t)|σ| + σ/(12|s|²) + 1/(60|s|(|s|+σ)))},

    (A3) ℑ(log(Γ(s))) = t log(|s|) − t + sgn(t)(π/2)(σ − 1/2) − t/(12|s|²)
             + O*(1_{t≠0}(t) |σ − 1/2| |σ|/|t| + 1/(60|s|(|s|+σ))).

Moreover, if |arg(s)| ≤ π − θ, 0 < θ < π, where arg corresponds to the principal argument of s, then we have

    (B1) Γ(s) = √(2π) s^{s−1/2} e^{−s} e^{1/(12s) + O*(F_θ/|s|³)},

    (B2) |Γ(s)| = √(2π) |s|^{σ−1/2} e^{−(π/2)|t|} e^{−σ(1 − 1_{t≠0}(t))} e^{O*(1_{t≠0}(t)|σ|³/(3t²) + σ/(12|s|²) + F_θ/|s|³)},

    (B3) ℑ(log(Γ(s))) = t log(|s|) − t + 1_{t≠0}(t)(σ − 1/2)(sgn(t) π/2 − σ/t) − t/(12|s|²)
             + O*(1_{t≠0}(t) |σ − 1/2| |σ|³/(3|t|³) + F_θ/|s|³),

where F_θ = 1/(360 sin⁴(θ/2)).
Proof. (A1) is given in [22, §2.5 (3″)]; moreover, since s ∈ ℂ \ {0}, |s| + σ ≠ 0, the estimation is well defined.

Furthermore, by taking real and imaginary parts of the logarithm of Γ, defined through (A1) under the principal complex logarithm log, we derive

    ℜ(log(Γ(s))) = log(2π)/2 + (σ − 1/2) log(σ² + t²)/2 − t arg(σ + it) − σ + σ/(12(σ² + t²)) + Ξ_σ^t,    (2.2)

    ℑ(log(Γ(s))) = (t/2) log(σ² + t²) + (σ − 1/2) arg(σ + it) − t − t/(12(σ² + t²)) + Ξ′_σ^t,    (2.3)

where |Ξ_σ^t|, |Ξ′_σ^t| ≤ 1/(60|s|(|s| + σ)) and where arg corresponds to the principal argument function, which satisfies the identity arg(σ + it) = sgn(t) 1_{σ<0}(σ) π + sgn(t/σ) arctan(|t/σ|). Here, sgn corresponds to the sign function and we adopt the conventions sgn(1/0) = sgn(+∞) = 1 and arctan(1/0) = arctan(+∞) = π/2.

Now, the estimation arctan(x) = π/2 − ∫_x^∞ dt/(t² + 1) = 1_{x≠0}(x) π/2 + O*(1/|x|), x ≥ 0, gives arg(s) = sgn(t) 1_{σ<0}(σ) π + 1_{t≠0}(t) sgn(t/σ) π/2 + O*(|σ/t|). Thereupon, it is not difficult to verify that, for any s ∈ ℂ \ (−∞, 0], sgn(t) 1_{σ<0}(σ) π + 1_{t≠0}(t) sgn(t/σ) π/2 = sgn(t) π/2, so that t arg(s) = |t| π/2 + O*(1_{t≠0}(t) |σ|). By using this estimation in (2.2) and (2.3) (and exponentiating (2.2)), we derive (A2) and (A3), respectively.

On the other hand, set k = 1 in [22, §2.5 (3)] and then observe that μ₂ = μ₃ can be bounded in ℂ \ (−∞, 0] by taking n = 2 in [22, §2.6 (1)] (where φ = arg(s)). Now, if |arg(s)| ≤ π − θ, 0 < θ < π, then |cos(arg(s)/2)| = cos(|arg(s)|/2) ≥ cos((π − θ)/2) = sin(θ/2), whence the estimation (B1).

Moreover, we can derive (2.2) and (2.3) from (B1), with |Ξ_σ^t|, |Ξ′_σ^t| ≤ F_θ/|s|³. Finally, by using the refined estimation arctan(x) = 1_{x≠0}(x) π/2 − 1/x + O*(1/(3|x|³)), x ≥ 0, and proceeding similarly to the obtention of (A2) and (A3), we derive (B2) and (B3), respectively.

Secondly, with respect to the complex cosine, we have the following estimation.

Proposition 2.2. For s = σ + it ∈ ℂ, we may write

    |cos(πs/2)| = (e^{π|t|/2}/2)(1 + O*(1/e^{π|t|})),

where cos is the complex cosine function.
Proof. For every complex number z, we have the identity |sin(z)|² = cosh²(ℑ(z)) − cos²(ℜ(z)) (for example, combine 4.5.7 and 4.5.54 in [1]). Therefore,

    |cos(πs/2)|² = cosh²(πt/2) − sin²(πσ/2) = (e^{π|t|}/4)(1 + (4cos²(πσ/2) − 2)/e^{π|t|} + 1/e^{2π|t|})
        = (e^{π|t|}/4)(1 + O*(2/e^{π|t|} + 1/e^{2π|t|})) = (e^{π|t|}/4)(1 + O*(1/e^{π|t|}))²,    (2.4)

where we have used that |2 − 4cos²(πσ/2)| ≤ 2. The result is concluded by taking square roots in (2.4).
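Since the proof shows that the squared ratio lies exactly between (1 − e^{−π|t|})² and (1 + e^{−π|t|})², Proposition 2.2 can be spot-checked directly with the standard library. The sketch below is ours and purely illustrative; the helper name is hypothetical.

```python
import cmath
import math

def cos_ratio(sigma, t):
    # |cos(pi*s/2)| divided by its main term e^{pi|t|/2}/2
    s = complex(sigma, t)
    return abs(cmath.cos(cmath.pi * s / 2)) * 2 / math.exp(math.pi * abs(t) / 2)

# By (2.4), ratio^2 = 1 + (4cos^2(pi*sigma/2) - 2)e^{-pi|t|} + e^{-2pi|t|},
# which lies between (1 - e^{-pi|t|})^2 and (1 + e^{-pi|t|})^2, so the
# O*(1/e^{pi|t|}) bound of Proposition 2.2 holds with no extra slack.
for sigma in (0.0, 0.25, 0.5, 1.0, 3.0):
    for t in (1.0, 2.5, 5.0, -3.0):
        assert abs(cos_ratio(sigma, t) - 1) <= math.exp(-math.pi * abs(t)) + 1e-12
```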
On the other hand, with respect to ζ itself, Backlund, in equations (53), (54), (56) and (76) of [4], has given an explicit version of a convexity bound for it. It reads as follows.

Theorem 2.3 (Explicit convexity bounds of ζ). Let s = σ + it, where t ≥ 50. Then

    |ζ(s)| ≤ log(t) − 0.048                          if σ ≥ 1,
    |ζ(s)| ≤ (t²/(t² − 4)) (t/2π)^{(1−σ)/2} log(t)   if 0 ≤ σ ≤ 1,
    |ζ(s)| ≤ (t/2π)^{1/2−σ} log(t)                   if −1/2 ≤ σ ≤ 0.

As t ↦ t²/(t² − 4) is a decreasing function for t > 2, we immediately deduce

Corollary 2.4. Let s = σ + it such that t ≥ T₀ = 100 and σ ≥ 0. Then

    |ζ(s)| ≤ log(t)                          if σ ≥ 1,
    |ζ(s)| ≤ ω (t/2π)^{(1−σ)/2} log(t)       if 0 ≤ σ ≤ 1,

where ω = ω(T₀) = T₀²/(T₀² − 4) ≤ 1.001.

Furthermore, we have the following two explicit estimations for ζ evaluated at positive real arguments.

Proposition 2.5. For any α > 0 with α ≠ 1 we have

    1/(α − 1) < ζ(α) < α/(α − 1).

Proof. See [19, Cor. 1.14].

Lemma 2.6. Let α ∈ ℝ⁺ and X ≥ 1. Then

    (i) Σ_{n≤X} 1/n^α = ζ(α) − 1/((α − 1)X^{α−1}) + O*(1/X^α), if α > 0 and α ≠ 1,

    (ii) Σ_{n≤X} 1/n^α = log(X) + γ + O*(2/(3X)), if α = 1,

    (iii) Σ_{n≤X} n^α = X^{α+1}/(α + 1) + O*(X^α), if α ≥ 0.

Proof. By [7, Lemma 2.9], [7, Lemma 2.8] and [21, Lemma 3.1] we derive (i), (ii), (iii), respectively.

Finally, we introduce the elementary bounds below, proved by means of Taylor expansions.

Lemma 2.7. Let t ≥ 1 and 0 < α < 1. Then

    (a) t^α ≤ 1 + α(t − 1),    (b) (t − 1)/log(t) ≤ t,    (c) t/2 + log(1 + 1/t) ≥ 1,

where in (b) we mean for the inequalities to hold for t → 1⁺.
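The estimates of Lemma 2.6 are easy to spot-check numerically. The minimal sketch below is ours, testing (i) with α = 2 (so ζ(2) = π²/6), (ii), and (iii) with α = 1/2 at a few values of X:

```python
import math

EULER_GAMMA = 0.5772156649015329

def partial_sum(alpha, X):
    # sum_{n <= X} n^(-alpha), accumulated with compensated summation
    return math.fsum(n ** (-alpha) for n in range(1, int(X) + 1))

# (i) with alpha = 2: error at most 1/X^2
for X in (10, 100, 1000):
    assert abs(partial_sum(2, X) - (math.pi ** 2 / 6 - 1 / X)) <= 1 / X ** 2

# (ii): harmonic sums against log(X) + gamma, error at most 2/(3X)
for X in (10, 100, 1000):
    assert abs(partial_sum(1, X) - math.log(X) - EULER_GAMMA) <= 2 / (3 * X)

# (iii) with alpha = 1/2: sum_{n<=X} n^(1/2) = X^(3/2)/(3/2) + O*(X^(1/2))
for X in (10, 100, 1000):
    assert abs(partial_sum(-0.5, X) - X ** 1.5 / 1.5) <= math.sqrt(X)
```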
3 The mean value of the Zeta function in [1/4, 3/4] + iℝ

In order to derive our main result, we proceed as in [2]. Let τ ∈ (0, 1/2]; as ζ(s̄) = conj(ζ(s)), we have that

    ∫₀^T |ζ(τ + it)|² dt = ∫_{−T}^0 |ζ(τ + it)|² dt.

Therefore, we can write

    ∫₀^T |ζ(τ + it)|² dt = (1/2) ∫_{−T}^T |ζ(τ + it)|² dt = (1/2i) ∫_{1−τ−iT}^{1−τ+iT} ζ(1−s) ζ(2τ−1+s) ds.

Let 0 < λ < 1/2 be a parameter. Denote by C the contour formed by the three lines joining the points 1−τ−iT, 2−2τ+λ−iT, 2−2τ+λ+iT, 1−τ+iT. The function s ↦ ζ(1−s)ζ(2τ−1+s) is meromorphic and has simple poles at s = 2−2τ and at s = 0, with residues ζ(2τ−1) and −ζ(2τ−1) respectively. The only pole inside the region defined by C ∪ [1−τ−iT, 1−τ+iT] corresponds to s = 2−2τ. Hence, by the residue theorem,

    ∫₀^T |ζ(τ + it)|² dt = −πζ(2τ−1) + (1/2i) ∫_C ζ(1−s) ζ(2τ−1+s) ds.    (3.1)

Now, by (2.1), the integral in the right hand side of (3.1) may be written as
    (1/2i) ∫_C ζ(1−s) ζ(2τ−1+s) ds = (1/2i) ∫_C χ(1−s) ζ(s) ζ(2τ−1+s) ds.    (3.2)

Moreover, as 2−2τ ≥ 1, for all s such that ℜ(s) > 2−2τ, we have the identity

    ζ(s)ζ(2τ−1+s) = (Σ_n 1/n^s)(Σ_n 1/n^{2τ−1+s}) = Σ_n d_{1−2τ}(n)/n^s.    (3.3)

On the other hand, let X ≥ 1 be a parameter. The right hand side of (3.2) can be expressed as I + J, where

    I = (1/2i) ∫_C χ(1−s) Σ_{n≤X} d_{1−2τ}(n)/n^s ds,    (3.4)

    J = (1/2i) ∫_C χ(1−s) (ζ(s)ζ(2τ−1+s) − Σ_{n≤X} d_{1−2τ}(n)/n^s) ds.    (3.5)

In the course of our reasoning, we shall choose X = T/2π. The estimation we present for (3.4) is the following.

Proposition 3.1. Assume that T ≥ T₀ = 100 and X = T/2π. If τ = 1/2, then

    I = T log(T) + (2γ − 1 − log(2π))T + O*(19.275 √T log(T)),

whereas, if 1/4 ≤ τ < 1/2, then

    I = (ζ(2−2τ)/((2−2τ)(2π)^{1−2τ})) T^{2−2τ} + ζ(2τ)T + O*((1.99/(1/2 − τ) + 35.354) T^{3/2−2τ} log(T)).

The reader should remark that the error term in Proposition 3.1 introduces a saving of a factor of log(T) with respect to the corresponding estimation presented in [2] when τ = 1/2.

As for the integral J in (3.5), we may write it as

    J = J₁ + J₂ + K,    (3.6)

where J₁ and J₂ are the integrals of (1/2i) χ(1−s)(ζ(s)ζ(2τ−1+s) − Σ_{n≤X} d_{1−2τ}(n)/n^s) along the segments [1−τ−iT, 2−2τ+λ−iT] and [2−2τ+λ+iT, 1−τ+iT] respectively, and where

    K = (1/2i) ∫_{2−2τ+λ−iT}^{2−2τ+λ+iT} χ(1−s) Σ_{n>X} d_{1−2τ}(n)/n^s ds = Σ_{n>X} d_{1−2τ}(n) K_n,

    K_n = (1/2i) ∫_{2−2τ+λ−iT}^{2−2τ+λ+iT} χ(1−s)/n^s ds.

The expression for K has been derived with the help of identity (3.3) in the first equality, and the dominated convergence theorem in the second (as s ↦ χ(1−s) is continuous in the compact set [2−2τ+λ−iT, 2−2τ+λ+iT] ⊂ ℂ and |Σ_{n>X} d_{1−2τ}(n)/n^s| ≤ ζ(1+λ)ζ(2−2τ+λ) in that set).

With the definitions above, it will become clear in §3.2 why we will end up selecting λ of order 1/log(T). With that choice, J₁, J₂ and K are estimated as follows.
Proposition 3.2. Assume that T ≥ T₀ = 100, X = T/2π and λ = 1.501/log(T). If τ = 1/2, then

    |J₂| ≤ 0.47 √T log²(T) + 2.825 √T log(T),

whereas, if 1/4 ≤ τ < 1/2, then

    |J₂| ≤ (0.173/(1/2 − τ)² + 6.76) T^{3/2−2τ} log(T) + 0.1 T^{1−τ} log²(T).

On replacing T by −T, the same bound may be derived for J₁.

Proposition 3.3. Assume that T ≥ T₀ = 100, X = T/2π and λ = 1.501/log(T). If τ = 1/2, then

    |K| ≤ 3.097 √T log²(T) + 40.116 √T log(T),

whereas, if 1/4 ≤ τ < 1/2, then

    |K| ≤ (0.4/(1/2 − τ) + 20.072) T^{3/2−2τ} log²(T).

The following sections consist of the proofs of Propositions 3.1, 3.2 and 3.3: we will analyze I in §3.1, J₁ and J₂ in §3.2, and K in §3.3. Consequently, by combining them, we derive our main result, which reads as follows.

Theorem 3.4. Assume that T ≥ T₀ = 100. Then

    ∫₀^T |ζ(1/2 + it)|² dt = T log(T) + (2γ − 1 − log(2π))T + O*(4.037 √T log²(T) + 65.076 √T log(T)),

and if 1/4 ≤ τ < 1/2, then

    ∫₀^T |ζ(τ + it)|² dt = (ζ(2−2τ)/((2−2τ)(2π)^{1−2τ})) T^{2−2τ} + ζ(2τ)T + O*((0.284/(1/2 − τ)² + 30.893) T^{3/2−2τ} log²(T)).

By using the functional equation of ζ, we can derive a bound in the other half of the critical strip.

Corollary 3.5. Assume that T ≥ T₀ = 100. Then, if 1/2 < τ ≤ 3/4,

    ∫₀^T |ζ(τ + it)|² dt = ζ(2τ)T + ((2π)^{2τ−1} ζ(2−2τ)/(2−2τ)) T^{2−2τ} + O*((0.627/(τ − 1/2)² + 63.767) T^{2τ−1/2} log²(T)).

Proof. By expressing τ = 1 − τ′, with 1/4 ≤ τ′ < 1/2, observing that ζ(s̄) = conj(ζ(s)) and recalling (2.1), we derive

    ∫₀^T |ζ(τ + it)|² dt = ∫₀^T |ζ(1 − τ′ − it)|² dt = ∫₀^T |χ(1 − τ′ − it) ζ(τ′ + it)|² dt.    (3.7)

Note that τ′ + i[T₀, T] belongs to the angular sector defined by |arg(s)| < π/2. We can then use Theorem 2.1(B2) with t > 0 and θ = π/2 (F_θ = 1/90) and the estimation log(|s|) = log(t) + τ′²/(2t²) + O*(τ′⁴/(4t⁴)) to obtain that
    |Γ(τ′ + it)| = √(2π) t^{τ′−1/2} e^{−(π/2)t} e^{O*(V(τ′,T₀)/t²)},    (3.8)

where, by using that 1/|s|² ≤ 1/t², 1/t³ ≤ 1/(T₀t²) and that 1/4 ≤ τ′ < 1/2,

    V(τ′, T₀) = |τ′ − 1/2| (τ′²/2 + τ′⁴/(4T₀²)) + τ′/12 + τ′³/3 + 1/(90T₀) < V′(T₀) = 0.115.

Moreover, observe that e^{O*(2V′(T₀)/t²)} = 1 + O*(W(T₀)/t²), where W(T₀) = T₀²(e^{2V′(T₀)/T₀²} − 1).

Thus, from Proposition 2.2 and (3.8), we have

    |χ(1 − τ′ − it)|² = (2π)^{1−2τ′} t^{2τ′−1} (1 + O*(1/e^{πt}))² (1 + O*(W(T₀)/t²))
        = (2π)^{1−2τ′} t^{2τ′−1} (1 + O*(Z(T₀)/t²)),

where we have used that t ≥ T₀ and that the function t ↦ t²/e^{πt} is decreasing, defining Z(T₀) = 0.459 ≥ 2(W(T₀) + T₀²/e^{πT₀}) + (1/T₀²)(W(T₀) + T₀²/e^{πT₀})².

We conclude from (3.7) that ∫₀^T |ζ(τ + it)|² dt equals

    ∫₀^{T₀} |ζ(τ + it)|² dt + (2π)^{1−2τ′} ∫_{T₀}^T t^{2τ′−1} |ζ(τ′ + it)|² (1 + O*(Z(T₀)/t²)) dt.    (3.9)

By a rigorous numerical estimation, we have ∫₀^{T₀} |ζ(τ + it)|² dt ≤ i₀ = 319.388 (see §4 for more details).

For the second term in (3.9), since ζ is holomorphic in ℂ \ {1} and t ↦ |ζ(τ′ + it)|² is continuous in [T₀, T], by the fundamental theorem of calculus we have that for any υ,

    ∫_{T₀}^T t^υ |ζ(τ′ + it)|² dt = ∫_{T₀}^T t^υ (d/dt)(∫_{T₀}^t |ζ(τ′ + it′)|² dt′) dt
        = T^υ ∫_{T₀}^T |ζ(τ′ + it′)|² dt′ − ∫_{T₀}^T υ t^{υ−1} (∫_{T₀}^t |ζ(τ′ + it′)|² dt′) dt.    (3.10)

Furthermore, by Theorem 3.4, for any t ≥ T₀, we can write ∫_{T₀}^t |ζ(τ′ + it′)|² dt′ = M_{τ′}(t) − M_{τ′}(T₀) + O*(2E_{τ′}(t)), where

    M_{τ′}(t) = (ζ(2−2τ′)/((2−2τ′)(2π)^{1−2τ′})) t^{2−2τ′} + ζ(2τ′) t,

    E_{τ′}(t) = (0.284/(1/2 − τ′)² + 30.893) t^{3/2−2τ′} log²(t).

Therefore, by assuming that υ < 0, υ ≠ −(2−2τ′), υ ≠ −1 and using integration by parts, we conclude from (3.10) that

    ∫_{T₀}^T t^υ |ζ(τ′ + it)|² dt = ∫_{T₀}^T t^υ M_{τ′}′(t) dt + O*(2T^υ E_{τ′}(T) + |υ| ∫_{T₀}^T t^{υ−1} E_{τ′}(t) dt)
        = ∫_{T₀}^T t^υ M_{τ′}′(t) dt + O*(2T₀^υ E_{τ′}(T))
        = (ζ(2−2τ′)/((2−2τ′+υ)(2π)^{1−2τ′})) (T^{2−2τ′+υ} − T₀^{2−2τ′+υ}) + (ζ(2τ′)/(υ+1)) (T^{υ+1} − T₀^{υ+1}) + O*(2T₀^υ E_{τ′}(T)),

where we have used that |υ| = −υ, that E_{τ′} is an increasing function and that T^υ ≤ T₀^υ.

In particular, when υ ∈ {2τ′ − 1, 2τ′ − 3}, we can estimate (3.9) as

    ζ(2−2τ′) T + ((2π)^{1−2τ′} ζ(2τ′)/(2τ′)) T^{2τ′} + O*(2(2π)^{1−2τ′} (1 + Z(T₀)/T₀²) E_{τ′}(T)/T₀^{1−2τ′})
        + O*((i₀ + (Z(T₀)/T₀²)(|z_{2τ′}| + |z_{2−2τ′}|)) T^{3/2−2τ′} log²(T)/(T₀^{3/2−2τ′} log²(T₀))),    (3.11)

since t ↦ t^{3/2−2τ′} log²(t) is increasing in [T₀, T], where we have defined

    z_υ = ζ(2−2τ′) T₀ + ((2π)^{1−2τ′} ζ(2τ′)/υ) T₀^{2τ′}.

Using Proposition 2.5, τ′ ∈ [1/4, 1/2), and T₀ ≥ 2π, we have

    z_{2τ′} ≤ z_{2−2τ′} < ζ(2−2τ′) T₀ < (1 + 1/(8(1/2 − τ′)²)) T₀,

    z_{2τ′} ≥ (ζ(2−2τ′) + ζ(2τ′)/(2τ′)) T₀ > −(1/(2τ′)) T₀ > −(1 + 1/(8(1/2 − τ′)²)) T₀,

so we have a bound on |z_{2τ′}|, |z_{2−2τ′}| that we can plug into (3.11). The result is concluded by replacing τ′ by 1 − τ and calculating the constants in the error term.
3.1 The integral I

We readily derive that

    I = (1/2i) ∫_C χ(1−s) Σ_{n≤T/2π} d_{1−2τ}(n)/n^s ds = Σ_{n≤T/2π} d_{1−2τ}(n) I_n,

    I_n = (1/2i) ∫_{1−τ−iT}^{1−τ+iT} χ(1−s)/n^s ds,

where in the first equality we have used the finiteness of the summation and in the second that the function s ↦ χ(1−s)/n^s is holomorphic, so that its residues vanish.

First, we establish here some results about the average of arithmetical functions involving the function d_a.

Proposition 3.6. Let X ≥ 1 and 0 < σ < 1. Then

    Σ_{n≤X} d_{1−2σ}(n) = ζ(2σ)X + (ζ(2−2σ)/(2−2σ)) X^{2−2σ} + O*(A_σ X^{1−σ})    if σ ≠ 1/2,

    Σ_{n≤X} d₀(n) = X log(X) + (2γ − 1)X + O*(A_{1/2} √X)    if σ = 1/2,

where A_σ = 4 + (1+2σ)/(σ(2−2σ)) for σ ≠ 1/2, and A_{1/2} = 16/3. In particular, A_σ ≤ 8 for σ ∈ [1/4, 1/2].

Proof. By the hyperbola method, we have
    Σ_{n≤X} d_a(n) = Σ_{n≤X} Σ_{m|n, m≤√X} m^a + Σ_{n≤X} Σ_{m|n, m>√X} m^a
        = Σ_{m≤√X} m^a Σ_{d≤X/m} 1 + Σ_{d≤√X} Σ_{√X<m≤X/d} m^a.    (3.12)

Let a = 1 − 2σ with 0 < σ < 1/2. By Lemma 2.6, the first term of (3.12) equals

    Σ_{m≤√X} m^{1−2σ} Σ_{d≤X/m} 1 = X Σ_{m≤√X} 1/m^{2σ} + O*(Σ_{m≤√X} m^{1−2σ})
        = ζ(2σ)X − X^{(3/2)−σ}/(2σ−1) + O*(X^{1−σ}) + O*(X^{1−σ}/(2−2σ) + X^{(1/2)−σ})
        = ζ(2σ)X − X^{(3/2)−σ}/(2σ−1) + O*(((5−4σ)/(2−2σ)) X^{1−σ}),    (3.13)

where the last line holds for X ≥ 1. For 1/2 < σ < 1, by Lemma 2.6(i)-(iii) we obtain the same estimate as in (3.13): in fact one more term ζ(2σ−1) emerges, but it is negative and bounded in absolute value by X^{1−σ}/(2−2σ) by Proposition 2.5 and X ≥ 1. For σ = 1/2, by Lemma 2.6(ii), we get instead

    Σ_{m≤√X} (1/m) Σ_{d≤X/m} 1 = X Σ_{m≤√X} 1/m + O*(√X) = (1/2) X log(X) + γX + O*((5/3)√X).    (3.14)

We also use Lemma 2.6 for the inner sum of the second term of (3.12), and derive

    Σ_{√X<m≤X/d} m^{1−2σ} = (1/(2−2σ))((X/d)^{2−2σ} − X^{1−σ}) + O*((X/d)^{1−2σ} + X^{(1/2)−σ}).    (3.15)

Summing (3.15) over d ≤ √X and applying Lemma 2.6 once more, the second term of (3.12) becomes

    Σ_{d≤√X} Σ_{√X<m≤X/d} m^{1−2σ} = (ζ(2−2σ)/(2−2σ)) X^{2−2σ} + X^{(3/2)−σ}/(2σ−1) + O*(B_σ X^{1−σ}),    (3.17)

so that the term X^{(3/2)−σ}/(2σ−1) cancels with its opposite in (3.13).
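Proposition 3.6, and the hyperbola method driving its proof, can be illustrated computationally in the case σ = 1/2: the splitting of the divisors at √X also makes Σ_{n≤X} d(n) computable in O(√X) steps, and the observed error sits well within A_{1/2}√X = (16/3)√X. The code below is a sketch of ours, not the paper's.

```python
import math

EULER_GAMMA = 0.5772156649015329

def divisor_sum(X):
    # sum_{n <= X} d(n) by the Dirichlet hyperbola method, in O(sqrt(X)) steps:
    # 2 * sum_{m <= sqrt(X)} floor(X/m) - floor(sqrt(X))^2
    m = math.isqrt(X)
    return 2 * sum(X // k for k in range(1, m + 1)) - m * m

def divisor_sum_naive(X):
    # brute-force cross-check: count every divisor of every n <= X
    counts = [0] * (X + 1)
    for d in range(1, X + 1):
        for n in range(d, X + 1, d):
            counts[n] += 1
    return sum(counts[1:])

assert divisor_sum(1000) == divisor_sum_naive(1000)

# Proposition 3.6, case sigma = 1/2: error at most (16/3) * sqrt(X)
for X in (10 ** 3, 10 ** 5, 10 ** 6):
    main = X * math.log(X) + (2 * EULER_GAMMA - 1) * X
    assert abs(divisor_sum(X) - main) <= 16 / 3 * math.sqrt(X)
```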