
On factorizations into coprime parts

Matthew Just and Noah Lebowitz-Lockard August 11, 2019

Abstract. Let $F(n)$ and $G(n)$ denote the number of unordered and ordered factorizations of $n$ into pairwise coprime integers larger than one. We establish the average orders of $F(n)^{\beta}$ and $G(n)^{\beta}$ for all real $\beta$.

1 Introduction

Let $f(n)$ and $g(n)$ be the number of unordered and ordered factorizations of $n$ into integers larger than 1. In 1927, Oppenheim [9] found an asymptotic formula for the sum of $f(n)$, namely
\[ \sum_{n \le x} f(n) \sim \frac{x\exp(2\sqrt{\log x})}{2\sqrt{\pi}\,(\log x)^{3/4}}, \]
which Szekeres and Turán [15] rediscovered a few years later. Soon afterwards, Kalmár [6] proved that
\[ \sum_{n \le x} g(n) \sim -\frac{1}{\rho\zeta'(\rho)}x^{\rho}, \]
where $\zeta$ is the Riemann zeta function and $\rho \approx 1.73$ is the unique solution in $(1, \infty)$ to $\zeta(\rho) = 2$. More precisely, Hwang [5] proved that
\[ \sum_{n \le x} g(n) = -\frac{1}{\rho\zeta'(\rho)}x^{\rho} + O\!\left(x^{\rho}\exp\!\left(-c(\log x)^{(3/2)-\epsilon}\right)\right) \]

for a positive constant $c$ and all positive $\epsilon$. (We use $\log_k x$ to refer to the $k$th iterate of the natural logarithm.) We may also put restrictions on our factorizations. Let $F(n)$ and $G(n)$ be the number of unordered and ordered factorizations of $n$ into pairwise coprime integers larger than one. Though Warlimont did not find asymptotic formulae for the sums of $F(n)$ and $G(n)$, he did establish upper and lower bounds [16]. He proved that there exist positive constants $c_1, c_2, c_3, c_4$ such that
\[ x\exp\!\left(c_1\sqrt{\frac{\log x}{\log_2 x}}\right) \ll \sum_{n \le x} F(n) \ll x\exp\!\left(c_2\sqrt{\frac{\log x}{\log_2 x}}\right), \]

\[ x\exp\!\left(c_3\frac{\log x}{\log_2 x}\right) \ll \sum_{n \le x} G(n) \ll x\exp\!\left(c_4\frac{\log x}{\log_2 x}\right). \]
We refine Warlimont's estimates and obtain the following results.
Theorem 1.1. We have
\[ \sum_{n \le x} F(n) = x\exp\!\left((c + o(1))\sqrt{\frac{\log x}{\log_2 x}}\right), \]

where
\[ c = 2\sqrt{2e^{-\gamma}} \approx 2.12, \]
with $\gamma \approx 0.577$ referring to the Euler-Mascheroni constant. In order to state the corresponding result for $G$, we must define two functions. Let $W$ be the Lambert $W$ function, i.e. the inverse of the function $h(z) = ze^{z}$. Define the incomplete $\Gamma$ function as
\[ \Gamma(s, x) = \int_x^{\infty} t^{s-1}e^{-t}\,dt. \]
Theorem 1.2. Let $w = W(1/(e\log 2))$. We have

\[ \sum_{n \le x} G(n) = x\exp\!\left((c + o(1))\frac{\log x}{\log_2 x}\right), \]
where
\[ c = w\left(1 - e^{w}\Gamma(0, w)(w + \log w + \log_2 2)\right) \approx 0.771. \]
In addition, we consider the $\beta$-th moments of these functions for all $\beta > 0$. Pollack [10] recently found the moments of $f$ (the corresponding moments of $g$ are still unknown). Let

\[ L(x) = \exp\!\left(\frac{\log x\log_3 x}{\log_2 x}\right). \]
Theorem 1.3. For $\beta > 1$,
\[ \sum_{n \le x} f(n)^{\beta} = \frac{x^{\beta}}{L(x)^{\beta+o(1)}}. \]
Theorem 1.4. For $\beta \in (0, 1)$,

\[ \sum_{n \le x} f(n)^{\beta} = x\exp\!\left(((1-\beta) + o(1))\left(\frac{\log_2 x}{(\log_3 x)^{\beta}}\right)^{1/(1-\beta)}\right). \]
We extend Pollack's results to $F$ and $G$. We state their positive moments, then their negative ones.
Theorem 1.5. For $\beta > 1$,
\[ \sum_{n \le x} F(n)^{\beta} = \frac{x^{\beta}}{L(x)^{2\beta+o(1)}}. \]

Theorem 1.6. For $\beta \in (0, 1)$,

\[ \sum_{n \le x} F(n)^{\beta} = x\exp\!\left(((1-\beta) + o(1))\left(\frac{\log_2 x}{(\log_3 x)^{\beta}}\right)^{1/(1-\beta)}\right). \]

Theorem 1.7. For $\beta > 1$,
\[ \sum_{n \le x} G(n)^{\beta} = \frac{x^{\beta}}{L(x)^{\beta+o(1)}}. \]
Theorem 1.8. For $\beta \in (0, 1)$,

\[ \sum_{n \le x} G(n)^{\beta} = x\exp\!\left((1 + o(1))\frac{1-\beta}{(\log 2)^{\beta/(1-\beta)}}(\log_2 x)^{1/(1-\beta)}\right). \]

In the process of obtaining the negative moments of F and G, we also obtain the negative moments of f and g.

Theorem 1.9. For $\beta > 0$, the sums
\[ \sum_{n \le x} f(n)^{-\beta}, \qquad \sum_{n \le x} F(n)^{-\beta} \]

are both
\[ \frac{x}{\log x}\exp\!\left((1 + o(1))\left((1+\beta)(\log_2 x)(\log_3 x)^{\beta}\right)^{1/(1+\beta)}\right). \]
Theorem 1.10. For $\beta > 0$, the sums
\[ \sum_{n \le x} g(n)^{-\beta}, \qquad \sum_{n \le x} G(n)^{-\beta} \]

are both
\[ \frac{x}{\log x}\exp\!\left((1 + o(1))(1+\beta)\left((\log 2)^{\beta}\log_2 x\right)^{1/(1+\beta)}\right). \]
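As a numerical aside, the constants appearing in this introduction are effectively computable. For instance, Kalmár's exponent $\rho$, the solution of $\zeta(\rho) = 2$, can be located by bisection; the sketch below is our own illustration (not part of the proofs), with a helper `zeta` built from a truncated Dirichlet series plus Euler-Maclaurin tail corrections.

```python
def zeta(s, N=200):
    # Truncated Dirichlet series with Euler-Maclaurin tail corrections:
    # zeta(s) ~ sum_{n<=N} n^-s + N^(1-s)/(s-1) + N^-s/2 + s*N^(-s-1)/12.
    tail = N ** (1 - s) / (s - 1) + 0.5 * N ** (-s) + s * N ** (-s - 1) / 12
    return sum(n ** -s for n in range(1, N + 1)) + tail

# Bisection for the unique root of zeta(s) = 2 on (1, infinity);
# zeta is decreasing there, with zeta(1.5) > 2 > zeta(2).
lo, hi = 1.5, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if zeta(mid) > 2:
        lo = mid
    else:
        hi = mid
rho = (lo + hi) / 2
print(rho)  # approximately 1.73
```

The same bisection template works for the other implicitly defined constants below.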

2 Bounds on π(x, k)

Let $\omega(n)$ be the number of distinct prime factors of $n$ and let $\pi(x, k)$ be the number of $n \le x$ satisfying $\omega(n) = k$. Throughout this paper, we use several bounds for $\pi(x, k)$, each of which corresponds to a distinct set of possible values of $k$. We collect those bounds here. The first result is the Hardy-Ramanujan Theorem.

Theorem 2.1 ([3]). There exist constants $c_1$ and $c_2$ such that for all $x, k$,

\[ \pi(x, k) \le c_1\frac{x}{\log x}\cdot\frac{(\log_2 x + c_2)^{k-1}}{(k-1)!}. \]

Sathe improved the Hardy-Ramanujan Theorem for small values of $k$ and Selberg simplified his proof. This result applies to $\pi_0(x, k)$, which is the number of squarefree $n \le x$ satisfying $\omega(n) = k$.

Theorem 2.2 ([12, 13]). If $k \ll \log_2 x$, then
\[ \pi_0(x, k) \sim \frac{x}{\log x}\cdot\frac{(\log_2 x)^{k-1}}{(k-1)!}. \]

Though the Hardy-Ramanujan Theorem is the most widely applicable formula for $\pi(x, k)$, it is also the least precise. Each of the following results applies to a smaller set of possible values of $k$, but gives a more precise estimate. From here on, let $L_0 = \log_2 x - \log k - \log_2 k$.

Theorem 2.3 ([11, Thm. 6.1]). For $k = \lfloor c\log x/\log_2 x\rfloor$ with $c \in [1/3, 1 - (1/\log_2 x)]$, we have
\[ \pi(x, k) = xk^{-k}(1-c)^{k}e^{O(k)}. \]

If $c > 1 - (1/\log_2 x)$, then $\pi(x, k) = e^{O(k)}$.
Theorem 2.4 ([11, Thms. 3.1, 4.1]). For

\[ (\log_2 x)^2 \le k \le \frac{\log x}{3\log_2 x}, \]
we have
\[ \pi(x, k) = \frac{x}{k!\log x}\exp(k\log L_0 + o(k)). \]
The following theorem is more precise than the previous one, but is less widely applicable.

Theorem 2.5 ([4, Cor. 2]). For $(\log_2 x)^2 \ll k \ll (\log x)/(\log_2 x)^2$, we have
\[ \pi(x, k) = \frac{x}{k!\log x}\exp\!\left(k\log M + \frac{k}{M} + O(kR)\right), \]
with

\[ M = \log\xi + \log_2\xi - \log L_0 - \gamma, \qquad R = \frac{1}{L_0}\left(\frac{1}{\log y} + \frac{1}{L_0}\right), \qquad y = \frac{k}{L_0}, \qquad \xi = \frac{\log x}{y\log y}. \]
We close this section with a theorem that applies to all values of $k$ that are neither too large nor too small.

Theorem 2.6 ([7, Section 1.2.3, Cor. 1]). Fix $c \in (\epsilon, 1-\epsilon)$. Let $H(t)$ be the inverse function of $te^{t}\Gamma(0, t)$. Define
\[ S(t) = \frac{t}{\pi}\sin\frac{\pi}{t}. \]
Let
\[ \beta = 1 + (H(c) + o(1))\frac{1}{\log_2 x}. \]

For $k = (c + o(1))\log x/\log_2 x$, we have
\[ \pi(x, k) = x^{\beta}\left(\frac{e}{S(\beta)\log x}\right)^{\beta k}\exp\!\left(O\!\left(\frac{k}{(\log_2 x)^{1/4}}\right)\right). \]

3 Positive moments of F

Let $n = p_1^{e_1}\cdots p_k^{e_k}$. Suppose we write $n$ as a product $a_1\cdots a_{\ell}$, where $(a_i, a_j) = 1$ for all $i \ne j$. For each $i \le k$, there exists a unique $j \le \ell$ such that $p_i^{e_i} \| a_j$. So, the values of the $e_i$'s are irrelevant to $F(n)$. Indeed, $F(n)$ is simply the number of set partitions of a $k$-element set. The number of such partitions is $B_k$, the $k$th Bell number. In general, we have

\[ F(n) = B_{\omega(n)}. \]
Using this equation, we may rewrite the sum of $F(n)^{\beta}$. For any $k$, let $\pi(x, k)$ be the number of $n \le x$ satisfying $\omega(n) = k$. We have

\[ \sum_{n \le x} F(n)^{\beta} = \sum_{n \le x} B_{\omega(n)}^{\beta} = \sum_{k} B_k^{\beta}\pi(x, k). \]
A simple lower bound for this sum is
\[ \sum_{n \le x} F(n)^{\beta} \ge \max_{k} B_k^{\beta}\pi(x, k). \]
From [1], we see that the maximum value of $\omega(n)$ for any $n \le x$ is
\[ \frac{\log x}{\log_2 x}\left(1 + O\!\left(\frac{1}{\log_2 x}\right)\right). \]
This result allows us to put an upper bound on our sum:

\[ \sum_{n \le x} F(n)^{\beta} \le \sum_{k \le (1+o(1))\log x/\log_2 x} B_k^{\beta}\pi(x, k) \ll \frac{\log x}{\log_2 x}\max_{k} B_k^{\beta}\pi(x, k). \]
Putting this together gives us

\[ \max_{k} B_k^{\beta}\pi(x, k) \ll \sum_{n \le x} F(n)^{\beta} \ll \frac{\log x}{\log_2 x}\max_{k} B_k^{\beta}\pi(x, k), \]

i.e.,
\[ \sum_{n \le x} F(n)^{\beta} = \exp(O(\log_2 x))\max_{k} B_k^{\beta}\pi(x, k). \]
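As a concrete sanity check on the identity $F(n) = B_{\omega(n)}$, one can count the coprime factorizations of a small $n$ by brute force and compare with the Bell number. The sketch below uses our own hypothetical helpers (`count_coprime_factorizations`, `bell`), assuming the convention that every part exceeds 1.

```python
from math import gcd

def count_coprime_factorizations(n, start=2):
    # Unordered factorizations of n into pairwise coprime parts > 1.
    # Choose the smallest part d; gcd(d, n // d) == 1 guarantees d is
    # coprime to every remaining part, since those parts divide n // d.
    total = 1 if n >= start else 0   # n itself as the final single part
    d = start
    while d * d <= n:
        if n % d == 0 and gcd(d, n // d) == 1:
            total += count_coprime_factorizations(n // d, d + 1)
        d += 1
    return total

def bell(k):
    # k-th Bell number via the Bell triangle.
    row = [1]
    for _ in range(k):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return row[0]

# 360 = 2^3 * 3^2 * 5 has omega = 3, and 2310 = 2*3*5*7*11 has omega = 5.
print(count_coprime_factorizations(360), bell(3))
print(count_coprime_factorizations(2310), bell(5))
```

For $n = 360$ the five factorizations are $360$, $5\cdot 72$, $8\cdot 45$, $9\cdot 40$, and $5\cdot 8\cdot 9$, matching $B_3 = 5$.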

Note that in Theorems 1.1, 1.5, and 1.6, $\exp(O(\log_2 x))$ is smaller than the error terms we are trying to obtain, rendering the $\exp(O(\log_2 x))$ factor irrelevant. In order to estimate the sum of $F(n)^{\beta}$, we simply need to determine the maximum value of $B_k^{\beta}\pi(x, k)$. Before moving further, we provide a short proof of Theorem 1.6. It is clear that the right-hand side is an upper bound for the left because $F(n) \le f(n)$ and we already have the sum of $f(n)^{\beta}$ from Theorem 1.4. As for showing that it is a lower bound, we note that Pollack actually proved Theorem 1.4 by demonstrating that

\[ \max_{k} B_k^{\beta}\pi(x, k) = x\exp\!\left(((1-\beta) + o(1))\frac{(\log_2 x)^{1/(1-\beta)}}{(\log_3 x)^{\beta/(1-\beta)}}\right). \]

Throughout this section and Section 5, we use the following formula for $B_k$ to varying degrees of precision.

Theorem 3.1 ([2, Eq. (6.27)]). We have

\[ B_k = \exp\!\left(k\log k - k\log_2 k - k + \frac{k\log_2 k}{\log k} + \frac{k}{\log k} + O\!\left(\frac{k(\log_2 k)^2}{(\log k)^2}\right)\right). \]
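As a rough numerical illustration (our own check, not part of the paper's argument), one can compare $\ln B_k$, computed exactly from the Bell triangle, against the main terms of Theorem 3.1; at $k = 200$ the two agree to within the stated error term.

```python
import math

def log_bell(k):
    # Exact ln B_k via the Bell triangle (integer arithmetic throughout).
    row = [1]
    for _ in range(k):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return math.log(row[0])

k = 200
l1, l2 = math.log(k), math.log(math.log(k))
main = k * l1 - k * l2 - k + k * l2 / l1 + k / l1
err = k * l2 ** 2 / l1 ** 2          # size of the O-term in Theorem 3.1
print(log_bell(k) - main, err)       # the difference is comparable to err
```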

3.1 The β > 1 case
We prove Theorem 1.5, which we restate below. Our proof is very similar to the proof of Theorem 1.3.

Theorem 3.2. For $\beta > 1$,
\[ \sum_{n \le x} F(n)^{\beta} = \frac{x^{\beta}}{L(x)^{2\beta+o(1)}}. \]
Proof. The lower bound comes from a straightforward argument. We note that
\[ \sum_{n \le x} F(n)^{\beta} \ge \max_{n \le x} F(n)^{\beta}. \]

The maximum value of $F(n)$ is $B_k$, where $k$ is the largest possible value of $\omega(n)$ for any $n \le x$. We wrote earlier that this value of $k$ is
\[ \frac{\log x}{\log_2 x} + O\!\left(\frac{\log x}{(\log_2 x)^2}\right). \]
In this case,

\[ \log k = \log_2 x - (1 + o(1))\log_3 x, \]

\[ \log_2 k = (1 + o(1))\log_3 x, \]

and
\[ B_k = \exp(k\log k - (1+o(1))k\log_2 k) = \exp\!\left(\log x - (2+o(1))\frac{\log x\log_3 x}{\log_2 x}\right) = \frac{x}{L(x)^{2+o(1)}}. \]
Therefore,
\[ \sum_{n \le x} F(n)^{\beta} \ge \frac{x^{\beta}}{L(x)^{2\beta+o(1)}}. \]
We now establish the right-hand side as an upper bound. Let $S_1$ be the set of $n \le x$ satisfying $F(n) \le x/L(x)^{2\beta/(\beta-1)}$. We have

\[ \sum_{n \in S_1} F(n)^{\beta} \le \left(\max_{n \in S_1} F(n)^{\beta-1}\right)\sum_{n \le x} F(n) \le \frac{x^{\beta-1}}{L(x)^{2\beta+o(1)}}\cdot\frac{x}{L(x)^{o(1)}} = \frac{x^{\beta}}{L(x)^{2\beta+o(1)}}. \]

Let $S_2$ be all other $n \le x$. Fix $\epsilon > 0$. For $k < (1-\epsilon)(\log x/\log_2 x)$, we have
\[ B_k < \exp\!\left((1+o(1))(1-\epsilon)\frac{\log x}{\log_2 x}\log_2 x\right) = x^{1-\epsilon+o(1)}. \]

Because $n \in S_2$, we may assume that $\omega(n) = (1+o(1))(\log x/\log_2 x)$. Under this assumption, we only need to evaluate

\[ \max_{k > (1-\epsilon)\log x/\log_2 x} B_k^{\beta}\pi(x, k). \]

Suppose $k > \frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)$. Then,

\[ \pi(x, k) = e^{O(k)} = e^{O(\log x/\log_2 x)} = L(x)^{o(1)}. \]

In addition, $B_k^{\beta} \le x^{\beta}/L(x)^{2\beta+o(1)}$. Therefore, $B_k^{\beta}\pi(x, k) = x^{\beta}/L(x)^{2\beta+o(1)}$ as well. If $k \le \frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)$, then

\[ B_k^{\beta}\pi(x, k) = xe^{O(k)}\exp(\beta k\log k - (\beta+o(1))k\log_2 k - k\log k + k\log(1-c)) = xL(x)^{o(1)}\exp((\beta-1)k\log k - (\beta+o(1))k\log_2 k + k\log(1-c)). \]

Note that $|\log(1-c)| \le |\log(1/\log_2 x)| = \log_3 x = o(\log k)$. We maximize $B_k^{\beta}\pi(x, k)$ by maximizing $k$. Let $k = \frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)$. We have
\[ \exp((\beta-1)k\log k) = \exp\!\left((\beta-1)\frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)(\log_2 x - (1+o(1))\log_3 x)\right) = \frac{x^{\beta-1}}{L(x)^{\beta-1+o(1)}}, \]
\[ \exp((\beta+o(1))k\log_2 k) = \exp\!\left((\beta+o(1))\frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)\log_3 x\right) = L(x)^{\beta+o(1)}, \]

\[ \exp(k\log(1-c)) = \exp\!\left(-(1+o(1))\frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)\log_3 x\right) = L(x)^{-1+o(1)}. \]
Putting all this together gives us

\[ \max_{k > (1-\epsilon)\log x/\log_2 x} B_k^{\beta}\pi(x, k) = \frac{x^{\beta}}{L(x)^{2\beta+o(1)}}. \]

3.2 The β = 1 case
First we show that the sum of $F(n)$ over $n$ having many prime factors is negligible. Before doing so, we write a new formula for $\pi(x, k)$ for certain values of $k$.

Lemma 3.3. For all $k \gg (\log x)/(\log_2 x)^2$,
\[ B_k\pi(x, k) = o\!\left(x\exp\sqrt{\frac{\log x}{\log_2 x}}\right). \]

Proof. We split the proof into three cases:
1. $k \ge \frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)$,
2. $\frac{\log x}{3\log_2 x} < k < \frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)$,
3. $\frac{\log x}{(\log_2 x)^2} \ll k \le \frac{\log x}{3\log_2 x}$.
We show each of these cases has the correct upper bound.

Suppose $k \ge \frac{\log x}{\log_2 x}\left(1 - \frac{1}{\log_2 x}\right)$. By Theorem 2.3, $\pi(x, k) = e^{O(k)}$.

We have

\[ B_k\pi(x, k) = \exp(k\log k - (1+o(1))k\log_2 k) \ll \exp(k\log k) \ll \exp\!\left(\frac{\log x}{\log_2 x}\left(1 + O\!\left(\frac{1}{\log_2 x}\right)\right)(\log_2 x - (1+o(1))\log_3 x)\right) = \frac{x}{L(x)^{1+o(1)}}. \]

Suppose $k = \lfloor c\log x/\log_2 x\rfloor$ with
\[ \frac13 < c < 1 - \frac{1}{\log_2 x}. \]
By Theorem 2.3,

\[ \pi(x, k) = xk^{-k}(1-c)^{k}e^{O(k)} \ll xk^{-k}e^{O(k)} = x\exp(-k\log k + O(k)). \]

Therefore,

\[ B_k\pi(x, k) \ll x\exp(-(1+o(1))k\log_2 k) \ll x. \]
Suppose
\[ \frac{\log x}{(\log_2 x)^2} \ll k \le \frac{\log x}{3\log_2 x}. \]
By Theorem 2.4,
\[ \pi(x, k) = \frac{x}{k!\log x}\exp(k\log L_0 + o(k)), \]
with $L_0 = \log_2 x - \log k - \log_2 k$. Recall that

\[ B_k = \exp(k\log k - k\log_2 k + O(k)). \]

Thus,

\[ B_k\pi(x, k) = x\exp(k\log L_0 - k\log_2 k + O(k)). \]
Note that $\log L_0 - \log_2 k$ decreases as $k$ increases. For $k = \log x/(\log_2 x)^2$,

\[ L_0 = \log_2 x - (\log_2 x - 2\log_3 x) - (1+o(1))\log_3 x = (1+o(1))\log_3 x, \]

\[ \log L_0 = O(\log_4 x), \]

\[ \log_2 k = (1+o(1))\log_3 x. \]
Therefore,

\[ B_k\pi(x, k) \ll x\exp(-(\log_3 x + o(1))k) \ll x. \]

Before proving the main result, we state a few preliminary lemmas.

Lemma 3.4 ([8]). For h = O(1), we have

\[ B_{k+h} = \frac{(k+h)!}{W(k)^{k+h}}\cdot\frac{\exp(e^{W(k)} - 1)}{(2\pi(W(k)^2 + W(k))e^{W(k)})^{1/2}}\cdot(1 + O(e^{-W(k)})). \]

Corollary 3.5. We have
\[ \frac{B_{k+1}}{B_k} = \frac{k+1}{W(k)}(1 + O(e^{-W(k)})). \]
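The ratio in Corollary 3.5 is easy to check numerically. The sketch below is our own verification; `lambert_w` and `bell_list` are hypothetical helpers (Newton iteration for $W$, and the Bell triangle).

```python
import math

def lambert_w(x):
    # Principal branch of W via Newton iteration on w * e^w = x (x > 0).
    w = math.log(1 + x)
    for _ in range(60):
        w -= (w * math.exp(w) - x) / (math.exp(w) * (1 + w))
    return w

def bell_list(k):
    # Bell numbers B_0, ..., B_k via the Bell triangle.
    bells, row = [1], [1]
    for _ in range(k):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
        bells.append(row[0])
    return bells

k = 100
B = bell_list(k + 1)
print(B[k + 1] / B[k], (k + 1) / lambert_w(k))  # agree within O(e^{-W(k)})
```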

Corollary 3.6. For all $k \ll \log x/(\log_2 x)^2$,
\[ \frac{B_{k+1}\pi(x, k+1)}{B_k\pi(x, k)} = \frac{L_0}{W(k)}\left(1 + \frac1k\right)\left(1 + O\!\left(e^{-W(k)} + \frac{\log L_0}{L_0}\right)\right). \]
Proof. By [4, Cor. 3],
\[ \frac{\pi(x, k+1)}{\pi(x, k)} = \frac{L_0}{k}\left(1 + O\!\left(\frac{\log L_0}{L_0}\right)\right) \]
for $k \ll (\log x)/(\log_2 x)^2$. Combining this with the previous corollary gives us the desired result.
Theorem 3.7. We have
\[ \sum_{n \le x} F(n) = x\exp\!\left(\left(2\sqrt{\frac{2}{e^{\gamma}}} + o(1)\right)\sqrt{\frac{\log x}{\log_2 x}}\right). \]

Proof. Because of the previous result, we may assume that $\omega(n) \ll \log x/(\log_2 x)^2$. We maximize $B_k\pi(x, k)$ by selecting an optimal value of $k$. We use the previous corollary to show that the maximum value of $B_k\pi(x, k)$ occurs at $k = (\log x)^{1/2}(\log_2 x)^{C+o(1)}$ for some constant $C$. For an optimal value of $k$,
\[ \frac{B_{k+1}\pi(x, k+1)}{B_k\pi(x, k)} \sim 1, \]
which occurs when $L_0 \sim W(k)$. Recall that

\[ L_0 = \log_2 x - \log k - \log_2 k, \]

\[ W(k) = \log k - \log_2 k + o(1). \]
Therefore,

\[ \log_2 x = 2\log k + o(1), \]
which implies that $k = (\log x)^{(1/2)+o(1)}$. We may refine this estimate further. Let $k = (\log x)^{1/2}(\log_2 x)^{C}$ with $C = o(\log_2 x/\log_3 x)$. We obtain
\[ L_0 = \log_2 x - \left(\frac12\log_2 x + C\log_3 x\right) - \left(\log_3 x - \log 2 + O\!\left(\frac{C\log_3 x}{\log_2 x}\right)\right) = \frac12\log_2 x - (C+1)\log_3 x + \log 2 + o(1), \]
\[ W(k) = \frac12\log_2 x + C\log_3 x - (\log_3 x - \log 2 + o(1)) + o(1) = \frac12\log_2 x + (C-1)\log_3 x + \log 2 + o(1). \]
Hence,
\[ \frac{L_0}{W(k)} = 1 - (4C + o(1))\frac{\log_3 x}{\log_2 x}. \]
Note that
\[ \left(1 + \frac1k\right)\left(1 + O\!\left(e^{-W(k)} + \frac{\log L_0}{L_0}\right)\right) = 1 + O\!\left(\frac{\log_3 x}{\log_2 x}\right). \]
Therefore, the optimal value of $C$ is a constant. We may assume that $k = (\log x)^{1/2}(\log_2 x)^{C}$ with $C \ll 1$. Using Theorem 2.5, we prove that if

$k = (\log x)^{1/2}(\log_2 x)^{C}$, then
\[ B_k\pi(x, k) = x\exp\!\left((2-4C)(\log x)^{1/2}(\log_2 x)^{C-1}\log_3 x + 2(2-\gamma-\log 2+o(1))(\log x)^{1/2}(\log_2 x)^{C-1}\right). \]
The maximum value of this function occurs at $C = 1/2$, at which point we obtain the desired formula. We find the values of the variables used in Theorem 2.5:
\[ L_0 = \frac12\log_2 x - (C+1)\log_3 x + \log 2 + o(1), \]

\[ y = \frac{k}{L_0} = 2(\log x)^{1/2}(\log_2 x)^{C-1} + O\!\left((\log x)^{1/2}(\log_2 x)^{C-2}\log_3 x\right), \]

\[ \xi = \frac{\log x}{y\log y} = \frac{\log x}{2(\log x)^{1/2}(\log_2 x)^{C-1}\cdot\frac12\log_2 x\left(1 + O\!\left(\frac{\log_3 x}{\log_2 x}\right)\right)} = \frac{(\log x)^{1/2}}{(\log_2 x)^{C}} + O\!\left(\frac{(\log x)^{1/2}\log_3 x}{(\log_2 x)^{C+1}}\right), \]

\[ M = \log\xi + \log_2\xi - \log L_0 - \gamma = \left(\frac12\log_2 x - C\log_3 x\right) + (\log_3 x - \log 2) - (\log_3 x - \log 2) - \gamma + O\!\left(\frac{\log_3 x}{\log_2 x}\right) = \frac12\log_2 x - C\log_3 x - \gamma + O\!\left(\frac{\log_3 x}{\log_2 x}\right), \]

\[ \log M = \log_3 x - \log 2 - \frac{2C\log_3 x}{\log_2 x} - \frac{2\gamma}{\log_2 x} + O\!\left(\frac{(\log_3 x)^2}{(\log_2 x)^2}\right), \]

\[ R = \frac{1}{L_0}\left(\frac{1}{\log y} + \frac{1}{L_0}\right) = O\!\left(\frac{1}{(\log_2 x)^2}\right). \]
We now apply Theorem 2.5:
\[ \pi(x, k) = \frac{x}{k!\log x}\exp\!\left(k\log M + \frac{k}{M} + O(kR)\right) = x\exp\!\left(k\log M + \frac{k}{M} - k\log k + k + O\!\left(\frac{k}{(\log_2 x)^2}\right)\right). \]
Recall that
\[ B_k = \exp\!\left(k\log k - k\log_2 k - k + \frac{k\log_2 k}{\log k} + \frac{k}{\log k} + O\!\left(\frac{k(\log_2 k)^2}{(\log k)^2}\right)\right). \]

Therefore,

\[ B_k\pi(x, k) = x\exp\!\left(k\log M - k\log_2 k + \frac{k}{M} + \frac{k\log_2 k}{\log k} + \frac{k}{\log k} + O\!\left(\frac{k(\log_2 k)^2}{(\log k)^2}\right)\right). \]

Note that

\[ \log M - \log_2 k = \left(\log_3 x - \log 2 - \frac{2C\log_3 x}{\log_2 x} - \frac{2\gamma}{\log_2 x}\right) - \left(\log_3 x - \log 2 + \frac{2C\log_3 x}{\log_2 x}\right) + O\!\left(\frac{(\log_3 x)^2}{(\log_2 x)^2}\right) = -\frac{4C\log_3 x}{\log_2 x} - \frac{2\gamma}{\log_2 x} + O\!\left(\frac{(\log_3 x)^2}{(\log_2 x)^2}\right). \]
In addition,
\[ \frac{1}{M} = \frac{2}{\log_2 x} + O\!\left(\frac{\log_3 x}{(\log_2 x)^2}\right), \qquad \frac{1}{\log k} = \frac{2}{\log_2 x} + O\!\left(\frac{\log_3 x}{(\log_2 x)^2}\right), \qquad \frac{\log_2 k}{\log k} = \frac{2\log_3 x}{\log_2 x} - \frac{2\log 2}{\log_2 x} + O\!\left(\frac{(\log_3 x)^2}{(\log_2 x)^2}\right). \]
Putting everything together gives us

\[ B_k\pi(x, k) = x\exp\!\left(k\left(\frac{(2-4C)\log_3 x}{\log_2 x} + \frac{4-2\gamma-2\log 2}{\log_2 x} + O\!\left(\frac{(\log_3 x)^2}{(\log_2 x)^2}\right)\right)\right). \]
Hence,
\[ B_k\pi(x, k) = x\exp\!\left((2-4C)(\log x)^{1/2}(\log_2 x)^{C-1}\log_3 x + 2(2-\gamma-\log 2+o(1))(\log x)^{1/2}(\log_2 x)^{C-1}\right). \]

We choose $C$ to maximize this function. Specifically, we factor out various terms which are independent of $C$ and maximize

\[ (\log_2 x)^{C}\left((1-2C)\log_3 x + 2 - \gamma - \log 2\right). \]
Its derivative with respect to $C$ is

\[ (\log_2 x)^{C}\log_3 x\left((1-2C)\log_3 x - (\gamma + \log 2)\right). \]
Setting this quantity equal to 0 gives us
\[ C = \frac12 - \frac{\gamma + \log 2}{2}\cdot\frac{1}{\log_3 x}. \]

Plugging this value of $C$ into our formula for $B_k\pi(x, k)$ gives us our desired result.
We note here that Warlimont obtained his lower bound by setting $k \sim \sqrt{\log x\log_2 x}$, whereas we have used $k \sim \sqrt{(1/2e^{\gamma})\log x\log_2 x}$. In addition, we can bound the error term more precisely:
\[ \sum_{n \le x} F(n) = x\exp\!\left(c\sqrt{\frac{\log x}{\log_2 x}}\left(1 + O\!\left(\frac{(\log_3 x)^2}{\log_2 x}\right)\right)\right). \]
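The constant can be evaluated directly; this quick check (our own, not part of the argument) confirms that $2\sqrt{2/e^{\gamma}}$ from Theorem 3.7 matches the value $c \approx 2.12$ quoted in Theorem 1.1.

```python
import math

gamma = 0.5772156649015329               # Euler-Mascheroni constant
c = 2 * math.sqrt(2 * math.exp(-gamma))  # = 2 * sqrt(2 / e^gamma)
print(c)  # about 2.119, i.e. c ~ 2.12
```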

4 Positive moments of G

The techniques we used for $F$ also hold for $G$. Here, we use the ordered Bell numbers, which we denote $a_k$, instead of the unordered Bell numbers. We now have $G(n) = a_{\omega(n)}$. Thus,

\[ \sum_{n \le x} G(n)^{\beta} = \sum_{k} a_k^{\beta}\pi(x, k) = e^{O(\log_2 x)}\max_{k} a_k^{\beta}\pi(x, k). \]

Once again, the $e^{O(\log_2 x)}$ term is negligible, allowing us to focus entirely on maximizing $a_k^{\beta}\pi(x, k)$.
Theorem 4.1 ([14]). We have
\[ a_k \sim \frac{1}{2\log 2}\cdot\frac{k!}{(\log 2)^{k}} = \exp(k\log k - (1+\log_2 2)k + O(\log k)). \]
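Sklar's asymptotic is accurate already for modest $k$; the following sketch (our own check; `fubini` is a hypothetical helper using the standard binomial recurrence for ordered Bell numbers) compares the exact values with $k!/(2\log 2\,(\log 2)^{k})$.

```python
import math

def fubini(n):
    # Ordered Bell numbers: a_m = sum_{j=1}^m C(m, j) * a_{m-j}, a_0 = 1.
    a = [1]
    for m in range(1, n + 1):
        a.append(sum(math.comb(m, j) * a[m - j] for j in range(1, m + 1)))
    return a[n]

k = 12
approx = math.factorial(k) / (2 * math.log(2) * math.log(2) ** k)
print(fubini(k), approx / fubini(k))  # the ratio is already very close to 1
```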

4.1 The β > 1 case
In the previous section, we established that if $\beta > 1$, then

\[ \sum_{n \le x} F(n)^{\beta} = \frac{x^{\beta}}{L(x)^{2\beta+o(1)}}. \]
Already having this bound allows us to write a short proof for the sum of $G(n)^{\beta}$.

Theorem 4.2. For $\beta > 1$,
\[ \sum_{n \le x} G(n)^{\beta} = \frac{x^{\beta}}{L(x)^{\beta+o(1)}}. \]
Proof. First, we establish the right-hand side as a lower bound. Note that
\[ \sum_{n \le x} G(n)^{\beta} \ge \max_{n \le x} G(n)^{\beta} = \max_{n \le x} a_{\omega(n)}^{\beta}. \]

Recall that the maximum value of $k = \omega(n)$ is $(1 + O(1/\log_2 x))\log x/\log_2 x$. We have
\[ a_k = \exp(k\log k + O(k)) \le \exp\!\left(\frac{\log x}{\log_2 x}(\log_2 x - \log_3 x) + O\!\left(\frac{\log x}{\log_2 x}\right)\right) = \frac{x}{L(x)^{1+o(1)}}. \]
Therefore,
\[ \sum_{n \le x} G(n)^{\beta} \ge \frac{x^{\beta}}{L(x)^{\beta+o(1)}}. \]
For the upper bound, we want to find the maximum value of $a_k^{\beta}\pi(x, k)$. Note that
\[ \max_{k} a_k^{\beta}\pi(x, k) \le \max_{k}\left(\frac{a_k}{B_k}\right)^{\beta}B_k^{\beta}\pi(x, k) \le \left(\max_{k}\frac{a_k}{B_k}\right)^{\beta}\max_{k}B_k^{\beta}\pi(x, k). \]
However,
\[ \max_{k} B_k^{\beta}\pi(x, k) = L(x)^{o(1)}\sum_{n \le x} F(n)^{\beta} = \frac{x^{\beta}}{L(x)^{2\beta+o(1)}}. \]
Recall that

\[ a_k = \exp(k\log k + o(k\log_2 k)), \]

\[ B_k = \exp(k\log k - (1+o(1))k\log_2 k). \]
Therefore,

\[ a_k/B_k = \exp((1+o(1))k\log_2 k), \]
which implies that $a_k/B_k \le L(x)^{1+o(1)}$ for all $k \le (1+o(1))\log x/\log_2 x$. Putting all this together gives us

\[ \sum_{n \le x} G(n)^{\beta} = \frac{x^{\beta}}{L(x)^{\beta+o(1)}}. \]

4.2 The β < 1 case
We prove Theorem 1.8, which gives the sum of $G(n)^{\beta}$ for $\beta \in (0, 1)$.

Theorem 4.3. For all $\beta \in (0, 1)$,
\[ \sum_{n \le x} G(n)^{\beta} = x\exp\!\left((1+o(1))e^{-\beta\log_2 2/(1-\beta)}(1-\beta)(\log_2 x)^{1/(1-\beta)}\right). \]

Proof. Our goal is to maximize $a_k^{\beta}\pi(x, k)$ for $k \le (1+o(1))(\log x/\log_2 x)$. By the Hardy-Ramanujan Theorem,

\[ \pi(x, k) \ll \frac{x}{\log x}\cdot\frac{(\log_2 x)^{k}}{(k-1)!} = x\exp(k\log_3 x - k\log k + k + O(\log_2 x)). \]
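The passage from $(\log_2 x)^{k}/(k-1)!$ to $\exp(k\log_3 x - k\log k + k)$ is Stirling's formula, and this manipulation recurs below. A quick numerical spot-check (our own illustration, with `L2` standing in for $\log_2 x$, so that $\ln L2$ plays the role of $\log_3 x$):

```python
import math

L2, k = 30.0, 50                            # L2 plays the role of log_2 x
exact = k * math.log(L2) - math.lgamma(k)   # ln( L2**k / (k-1)! )
approx = k * math.log(L2) - k * math.log(k) + k
print(exact - approx)  # O(log k), absorbed into the error term above
```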

In addition,
\[ a_k^{\beta} = \exp(\beta k\log k - \beta(1+\log_2 2)k + O(\log k)). \]
Therefore,

\[ a_k^{\beta}\pi(x, k) \ll x\exp(k\log_3 x - (1-\beta)k\log k + (1-\beta-\beta\log_2 2)k + O(\log_2 x)). \]

If $k \ll (\log_2 x)^{(1/(1-\beta))-\epsilon}$ for some $\epsilon > 0$, then

\[ a_k^{\beta}\pi(x, k) \ll x\exp(k\log_3 x) \ll x\exp\!\left((\log_2 x)^{(1/(1-\beta))-\epsilon+o(1)}\right) = x\exp\!\left(o\!\left((\log_2 x)^{1/(1-\beta)}\right)\right). \]

If $k \gg (\log_2 x)^{(1/(1-\beta))+\epsilon}$ for some $\epsilon$, then

\[ \log_3 x - (1-\beta)\log k \le -(1-\beta)\epsilon\log_3 x + O(1), \]

which implies that $a_k^{\beta}\pi(x, k) = o(x)$. We may assume that $k = (\log_2 x)^{(1/(1-\beta))+o(1)}$. We write $k = C(\log_2 x)^{1/(1-\beta)}$ with $C = (\log_2 x)^{o(1)}$ and optimize $C$. Note that we currently have

\[ a_k^{\beta}\pi(x, k) \ll x\exp\!\left(C\left((1-\beta)(1-\log C) - \beta\log_2 2\right)(\log_2 x)^{1/(1-\beta)} + O(\log_2 x)\right). \]

We maximize $C((1-\beta)(1-\log C) - \beta\log_2 2)$. As $C$ approaches 0, this quantity approaches 0, and as $C$ approaches $\infty$, it tends to $-\infty$. At the maximum,
\[ \frac{d}{dC}\left(C((1-\beta)(1-\log C) - \beta\log_2 2)\right) = -(1-\beta)\log C - \beta\log_2 2 = 0. \]

Therefore, $C = e^{-\beta(\log_2 2)/(1-\beta)}$. We have

\[ a_k^{\beta}\pi(x, k) \ll x\exp\!\left((1+o(1))e^{-\frac{\beta\log_2 2}{1-\beta}}(1-\beta)(\log_2 x)^{\frac{1}{1-\beta}}\right). \]
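The optimization over $C$ is elementary but easy to get wrong; a quick numerical check (our own, for the sample value $\beta = 1/2$, where the closed form reduces to $C = 1/\log 2$) confirms the maximizer:

```python
import math

beta = 0.5
ll2 = math.log(math.log(2))   # log_2 2 (about -0.3665) in the paper's notation

def objective(C):
    return C * ((1 - beta) * (1 - math.log(C)) - beta * ll2)

C_grid = max((i / 1000 for i in range(1, 5000)), key=objective)
C_closed = math.exp(-beta * ll2 / (1 - beta))
print(C_grid, C_closed)  # both near 1/log 2 = 1.4427 for beta = 1/2
```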

We now establish the right-hand side as a lower bound. By Theorem 2.4,
\[ \pi(x, k) = \frac{x}{k!\log x}\exp(k\log L_0 + o(k)), \]
where

\[ L_0 = \log_2 x - \log k - \log_2 k. \]
In this case,
\[ L_0 = \log_2 x - \frac{1}{1-\beta}\log_3 x - (1+o(1))\log_4 x, \]

\[ \log L_0 = \log_3 x + o(1). \]
So,

\[ \pi(x, k) = x\exp(k\log_3 x - k\log k + k + o(k) + O(\log_2 x)). \]
Taking $k = C(\log_2 x)^{1/(1-\beta)}$ with $C = e^{-\beta\log_2 2/(1-\beta)}$, the computation above shows that this choice of $k$ attains the upper bound, which completes the proof.

4.3 The β = 1 case

Suppose $\beta = 1$. Warlimont proved that there exist constants $c_1$ and $c_2$ such that

\[ x\exp\!\left(c_1\frac{\log x}{\log_2 x}\right) \ll \sum_{n \le x} G(n) \ll x\exp\!\left(c_2\frac{\log x}{\log_2 x}\right). \]

His lower bound comes from a bound on $a_k\pi(x, k)$ for $k = \lambda\log x/\log_2 x$ for a small value of $\lambda$, and the upper bound comes from the Hardy-Ramanujan Theorem. We obtain an asymptotic formula for the sum. We sum over $a_k\pi(x, k)$. First, we bound the sum for
\[ \epsilon\frac{\log x}{\log_2 x} < k < (1-\epsilon)\frac{\log x}{\log_2 x}. \]
Then, we bound it for large and small values of $k$. We now prove the main result.
Theorem 4.4. Letting $w = W(1/(e\log 2))$, we have

\[ \sum_{n \le x} G(n) = x\exp\!\left((c + o(1))\frac{\log x}{\log_2 x}\right) \]
with
\[ c = w(1 - e^{w}\Gamma(0, w)(w + \log w + \log_2 2)) \approx 0.771. \]

Proof. Fix $\epsilon > 0$ and $c \in (\epsilon, 1-\epsilon)$. Let $k = c\log x/\log_2 x$. We evaluate $\pi(x, k)$ using Theorem 2.6. First, note that
\[ \exp\!\left(O\!\left(\frac{k}{(\log_2 x)^{1/4}}\right)\right) = \exp\!\left(O\!\left(\frac{\log x}{(\log_2 x)^{5/4}}\right)\right) = \exp\!\left(o\!\left(\frac{\log x}{\log_2 x}\right)\right), \]

making it negligible. We have
\[ x^{\beta} = x^{1+(H(c)+o(1))/\log_2 x} = x\exp\!\left((H(c)+o(1))\frac{\log x}{\log_2 x}\right), \]
\[ e^{\beta k} = \exp\!\left(\beta c\frac{\log x}{\log_2 x}\right) = \exp\!\left((c+o(1))\frac{\log x}{\log_2 x}\right), \]
\[ (\log x)^{k} = \exp(k\log_2 x) = \exp(c\log x) = x^{c}, \]
\[ (\log x)^{\beta k} = x^{\beta c} = x^{c}\exp\!\left((cH(c)+o(1))\frac{\log x}{\log_2 x}\right). \]
We still need to evaluate $S(\beta)^{k}$. Note that $\beta = 1 + o(1)$. For $t$ sufficiently close to 1,

\[ \sin(\pi/t) = \pi(t-1) + O((t-1)^2). \]

Therefore,
\[ S(\beta) = \frac{\beta}{\pi}\left(\pi(\beta-1) + O((\beta-1)^2)\right) = \beta(\beta-1) + O(\beta(\beta-1)^2) = (H(c)+o(1))\frac{1}{\log_2 x}. \]
So,

\[ S(\beta)^{k} = \left((1+o(1))\frac{H(c)}{\log_2 x}\right)^{k} = \exp\!\left(-c\frac{\log x\log_3 x}{\log_2 x} + (c\log H(c)+o(1))\frac{\log x}{\log_2 x}\right). \]
Putting all this together gives us
\[ \pi(x, k) = x^{1-c}\exp\!\left(c\frac{\log x\log_3 x}{\log_2 x} + (c + H(c) - cH(c) - c\log H(c) + o(1))\frac{\log x}{\log_2 x}\right). \]
We also calculate the corresponding ordered Bell number term:

\[ a_k = \exp(k\log k - (1 + \log_2 2 + o(1))k). \]
Here,
\[ k\log k = c\frac{\log x}{\log_2 x}(\log_2 x - \log_3 x) = c\log x - c\frac{\log x\log_3 x}{\log_2 x}, \]
\[ (1 + \log_2 2 + o(1))k = (c(1+\log_2 2) + o(1))\frac{\log x}{\log_2 x}. \]
Thus,
\[ a_k = x^{c}\exp\!\left(-c\frac{\log x\log_3 x}{\log_2 x} - (c(1+\log_2 2)+o(1))\frac{\log x}{\log_2 x}\right). \]
We now have
\[ a_k\pi(x, k) = x\exp\!\left((H(c) - c(H(c) + \log H(c) + \log_2 2) + o(1))\frac{\log x}{\log_2 x}\right). \]

We maximize the coefficient of $\log x/\log_2 x$ by determining when its derivative is zero. The derivative is

\[ H'(c) - H(c) - \log H(c) - \log_2 2 - cH'(c) - \frac{cH'(c)}{H(c)}. \]
However, $H'(c)$ satisfies the following differential equation [7, p. 8]:
\[ \frac{1}{H'(c)} = \frac{c}{H(c)} + c - 1. \]
Therefore, we want to solve

\[ H(c) + \log H(c) = -(1 + \log_2 2). \]
This occurs at $H(c) = W(1/(e\log 2))$. In order to find $c$, we simply take $H^{-1}(H(c))$ and obtain $c = H^{-1}(w) = we^{w}\Gamma(0, w)$ with $w = W(1/(e\log 2))$. Therefore,
\[ a_k\pi(x, k) = x\exp\!\left((w - we^{w}\Gamma(0, w)(w + \log w + \log_2 2) + o(1))\frac{\log x}{\log_2 x}\right). \]
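The constant in Theorem 4.4 can be evaluated numerically. The sketch below is our own verification (the helpers `lambert_w` and `gamma0` are hypothetical, implementing Newton's method for $W$ and the standard series for $\Gamma(0, x) = E_1(x)$); note that $we^{w} = 1/(e\log 2)$ forces $w + \log w + \log_2 2 = -1$ exactly.

```python
import math

def lambert_w(x):
    # Principal branch of W via Newton iteration on w * e^w = x (x > 0).
    w = math.log(1 + x)
    for _ in range(60):
        w -= (w * math.exp(w) - x) / (math.exp(w) * (1 + w))
    return w

def gamma0(x, terms=40):
    # Gamma(0, x) = E_1(x) = -gamma - ln x + sum_{n>=1} (-1)^(n+1) x^n/(n*n!)
    g = 0.5772156649015329
    s = sum((-1) ** (n + 1) * x ** n / (n * math.factorial(n))
            for n in range(1, terms))
    return -g - math.log(x) + s

ll2 = math.log(math.log(2))
w = lambert_w(1 / (math.e * math.log(2)))
c = w * (1 - math.exp(w) * gamma0(w) * (w + math.log(w) + ll2))
print(w, c)  # w is about 0.367 and c about 0.771
```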

5 Negative moments of F and f

We establish the negative moments of $F$, then show that our argument holds for $f$ as well.
Theorem 5.1. For $\beta > 0$,
\[ \sum_{n \le x} F(n)^{-\beta} = \frac{x}{\log x}\exp\!\left((1+o(1))\left((1+\beta)(\log_2 x)(\log_3 x)^{\beta}\right)^{1/(1+\beta)}\right). \]
For the negative moments of $F$, we must refine our techniques. In Section 3, we showed that if $\beta$ is positive, then

\[ \sum_{n \le x} F(n)^{\beta} = \exp(O(\log_2 x))\max_{k} B_k^{\beta}\pi(x, k). \]

The same argument holds for $F(n)^{-\beta}$. However, the $\exp(O(\log_2 x))$ term is now larger than our error term. In order to handle this issue, we must obtain upper and lower bounds for

\[ \sum_{n \le x} F(n)^{-\beta} = \sum_{k} B_k^{-\beta}\pi(x, k) \]
separately.

5.1 Upper bound
Recall that the Hardy-Ramanujan Theorem states that there exists a constant $C$ such that

\[ \pi(x, k) \ll \frac{x}{\log x}\cdot\frac{(\log_2 x + C)^{k-1}}{(k-1)!}. \]

Therefore,
\[ \sum_{n \le x} F(n)^{-\beta} \ll \frac{x}{\log x}\sum_{k} B_k^{-\beta}\frac{(\log_2 x + C)^{k-1}}{(k-1)!}. \]

Let $b_k(x) = B_k^{-\beta}(\log_2 x + C)^{k-1}/(k-1)!$ so that
\[ \sum_{n \le x} F(n)^{-\beta} \ll \frac{x}{\log x}\sum_{k} b_k(x). \]

We maximize $b_k(x)$ by determining when
\[ \frac{b_{k+1}(x)}{b_k(x)} = 1. \]
By Corollary 3.5,

\[ \frac{b_{k+1}(x)}{b_k(x)} = \left(\frac{B_{k+1}}{B_k}\right)^{-\beta}\frac{\log_2 x + C}{k} \sim \left(\frac{\log k}{k}\right)^{\beta}\frac{\log_2 x}{k} = (\log_2 x)\frac{(\log k)^{\beta}}{k^{1+\beta}}. \]
This ratio is 1 when
\[ k \sim \left(\frac{1}{(1+\beta)^{\beta}}(\log_2 x)(\log_3 x)^{\beta}\right)^{1/(1+\beta)}. \]
Call this quantity $k^*$. Note that the maximum value of $b_k(x)$ is $e^{o(k^*)}b_{k^*}(x)$. We also have $b_{k+1}(x)/b_k(x) < 1/2$ for $k > 2^{1/(1+\beta)}k^*$. Therefore,
\[ \sum_{n \le x} F(n)^{-\beta} \ll \frac{x}{\log x}\sum_{k} b_k(x) = \frac{x}{\log x}\left(\sum_{k \le 2^{1/(1+\beta)}k^*} b_k(x) + \sum_{k > 2^{1/(1+\beta)}k^*} b_k(x)\right) \le \frac{x}{\log x}\left(e^{o(k^*)}2^{1/(1+\beta)}k^*\,b_{k^*}(x) + b_{k^*}(x)\right) \ll \frac{x}{\log x}e^{o(k^*)}b_{k^*}(x). \]
Note that
\[ b_{k^*}(x) = B_{k^*}^{-\beta}\frac{(\log_2 x + C)^{k^*-1}}{(k^*-1)!}. \]

We have

\[ (\log_2 x + C)^{k^*-1} = \exp((k^*-1)\log(\log_2 x + C)) = \exp(k^*\log(\log_2 x + C) + O(\log_3 x)) = \exp\!\left(k^*\log_3 x + O\!\left(\frac{k^*}{\log_2 x} + \log_3 x\right)\right) = \exp(k^*\log_3 x + o(k^*)), \]
\[ (k^*-1)! = \frac{k^*!}{k^*} = \exp(k^*\log k^* - k^* + O(\log k^*)) = \exp(k^*\log k^* - (1+o(1))k^*), \]
\[ B_{k^*} = \exp(k^*\log k^* - k^*\log_2 k^* - (1+o(1))k^*). \]

Therefore,

\[ b_{k^*}(x) = \exp\!\left(k^*\left(\log_3 x - (\log k^* - 1 + o(1)) - \beta(\log k^* - \log_2 k^* - 1 + o(1))\right)\right) = \exp\!\left(k^*\left(\log_3 x - (1+\beta)\log k^* + \beta\log_2 k^* + (1+\beta) + o(1)\right)\right). \]

Note that
\[ \log k^* = \frac{1}{1+\beta}\log_3 x + \frac{\beta}{1+\beta}\log_4 x - \frac{\beta}{1+\beta}\log(1+\beta), \]
\[ \log_2 k^* = \log_4 x - \log(1+\beta) + o(1). \]
Hence,
\[ b_{k^*}(x) = \exp((1+o(1))(1+\beta)k^*). \]
Substituting our expression for $k^*$ into this equation and multiplying it by $x/\log x$ gives us our desired upper bound.

5.2 Lower bound and f
We obtain a lower bound with the same formula as our upper bound. Theorem 2.2 states that if $k \ll \log_2 x$, then
\[ \pi_0(x, k) \sim \frac{x}{\log x}\cdot\frac{(\log_2 x)^{k-1}}{(k-1)!}, \]
where $\pi_0(x, k)$ is the number of squarefree $n \le x$ satisfying $\omega(n) = k$. If we plug $k^*$ into this formula, we obtain the correct bound.

Theorem 5.2. For $\beta > 0$,
\[ \sum_{n \le x} f(n)^{-\beta} = \frac{x}{\log x}\exp\!\left((1+o(1))\left((1+\beta)(\log_2 x)(\log_3 x)^{\beta}\right)^{1/(1+\beta)}\right). \]

Proof. By definition, $f(n) \ge F(n)$ for all $n$. Therefore,
\[ \sum_{n \le x} f(n)^{-\beta} \le \sum_{n \le x} F(n)^{-\beta}. \]

However, our lower bound argument for F also applies to f due to the fact that if n is squarefree, then f(n) = F (n).

6 Negative moments of G and g

We establish upper bounds for the negative moments of G using the same techniques that we used in the previous section. Arguments similar to those found in the previous subsection establish that these bounds are the correct asymptotic formulae for the negative moments of G and g.

Theorem 6.1. For $\beta > 0$, the sums
\[ \sum_{n \le x} G(n)^{-\beta}, \qquad \sum_{n \le x} g(n)^{-\beta} \]
are both
\[ \frac{x}{\log x}\exp\!\left((1+o(1))(1+\beta)\left((\log 2)^{\beta}\log_2 x\right)^{1/(1+\beta)}\right). \]
Proof. We now have

\[ \sum_{n \le x} G(n)^{-\beta} \ll \frac{x}{\log x}\sum_{k} a_k^{-\beta}\frac{(\log_2 x + C)^{k-1}}{(k-1)!}, \]

for some constant $C$. Let $A_k(x) = a_k^{-\beta}(\log_2 x + C)^{k-1}/(k-1)!$ so that
\[ \sum_{n \le x} G(n)^{-\beta} \ll \frac{x}{\log x}\sum_{k} A_k(x). \]

We determine when $A_{k+1}(x)/A_k(x) \sim 1$. Recall that
\[ a_k \sim \frac{1}{2\log 2}\cdot\frac{k!}{(\log 2)^{k}}. \]

Hence,
\[ \frac{A_{k+1}(x)}{A_k(x)} \sim \left(\frac{\log 2}{k}\right)^{\beta}\frac{\log_2 x}{k} = (\log 2)^{\beta}\frac{\log_2 x}{k^{1+\beta}}. \]
This function is equal to 1 when

\[ k = (\log 2)^{\beta/(1+\beta)}(\log_2 x)^{1/(1+\beta)}. \]

Call this quantity $k^*$. Note that if $k > 2^{1/(1+\beta)}k^*$, then $A_k(x) < A_{k^*}(x)/2$. In this case, we simply need to determine $A_{k^*}(x)$. We have

\[ (\log_2 x + C)^{k^*-1} = \exp(k^*\log_3 x + o(k^*)), \]
\[ (k^*-1)! = \exp(k^*\log k^* - (1+o(1))k^*), \]
\[ a_{k^*} = \exp(k^*\log k^* - (1 + \log_2 2 + o(1))k^*). \]

Therefore,

\[ A_{k^*}(x) = \exp\!\left(k^*\left(\log_3 x - (1+\beta)\log k^* + \beta(1+\log_2 2) + 1 + o(1)\right)\right). \]

In this case,
\[ \log k^* = \frac{1}{1+\beta}\log_3 x + \frac{\beta\log_2 2}{1+\beta}, \]
which implies that
\[ A_{k^*}(x) = \exp(k^*(1+\beta+o(1))). \]

References

[1] M. Balazard, Unimodalité de la distribution du nombre de diviseurs premiers d'un entier, Ann. Inst. Fourier (Grenoble) 40:2 (1990), 255–270.

[2] N. G. de Bruijn, Asymptotic methods in analysis, 3rd ed., Dover Publications, Inc., New York, 1981.

[3] G. H. Hardy and S. Ramanujan, The normal number of prime factors of a number n, Quart. J. Math. 48 (1917), 76–92.

[4] A. Hildebrand and G. Tenenbaum, On the number of prime factors of an integer, Duke Math. J. 56:3 (1988), 471–501.

[5] H.-K. Hwang, Distribution of the number of factors in random ordered factorizations of integers, J. Number Theory 81 (2000), 61–92.

[6] L. Kalmár, Über die mittlere Anzahl der Produktdarstellungen der Zahlen, erste Mitteilung, Acta Litt. Sci. Szeged 5 (1930–1932), 95–107.

[7] S. Kerner, Répartition d'entiers avec contraintes sur les diviseurs, PhD thesis, Université Henri Poincaré Nancy 1, 2002.

[8] L. Moser and M. Wyman, An asymptotic formula for the Bell numbers, Trans. Roy. Soc. Canada 49 (1955), 49–54.

[9] A. Oppenheim, On an arithmetic function (II), J. London Math. Soc. 2 (1927), 123–130.

[10] P. Pollack, The distribution of numbers with many factorizations, submitted for publication.

[11] C. Pomerance, On the distribution of round numbers, in Number Theory: Proceedings of the 4th Matscience Conference Held at Ootacamund, India, ed. K. Alladi, Lecture Notes in Math. 1122 (1985), 173–200.

[12] L. G. Sathe, On a problem of Hardy on the distribution of integers having a small number of prime factors, I-IV, J. Indian Math. Soc. 17 (1953), 63–141.

[13] A. Selberg, Note on a paper by L. G. Sathe, J. Indian Math. Soc. 18:1 (1954), 83–87.

[14] A. Sklar, On the factorization of squarefree integers, Proc. Amer. Math. Soc. 3 (1952), 701–705.

[15] G. Szekeres and P. Turán, Über das zweite Hauptproblem der "Factorisatio numerorum", Acta Litt. Sci. Szeged 6 (1933), 143–154.

[16] R. Warlimont, Factorisatio Numerorum with constraints, J. Number Theory 45 (1993), 186–199.
