On an Erd\H{o}s--Kac-type conjecture of Elliott

Elliott and Halberstam proved that $\sum_{p<n} 2^{\omega(n-p)}$ is asymptotic to $\phi(n)$. In analogy with the Erd\H{o}s--Kac Theorem, Elliott conjectured that if one restricts the summation to primes $p$ with $\omega(n-p)\le 2 \log \log n+\lambda(2\log \log n)^{1/2}$, then the sum is asymptotic to $\phi(n)\int_{-\infty}^{\lambda} e^{-t^2/2}\,dt/\sqrt{2\pi}$. We show that this conjecture follows from the Bombieri--Vinogradov Theorem. We further prove a related result involving the Poisson--Dirichlet distribution, employing deeper level-of-distribution results for the primes.


Introduction and motivation
Let $\omega(m) = \sum_{p \mid m} 1$ be the prime divisor function. Elliott and Halberstam [9] proved that (1.1) holds for $n \ge 3$. Here and later $p$ denotes a prime and $\phi$ is Euler's totient function. Elliott gave a talk at the meeting held in Urbana-Champaign, June 5--7, 2014, in memory of Paul and Felice Bateman and Heini Halberstam, where he revisited his works with Halberstam. A telegraphic representation of the talk was published, which contains the following conjecture [10, Conj. B]: for each real $\lambda$,
$$\sum_{\substack{p<n\\ \omega(n-p)\le 2\log\log n+\lambda(2\log\log n)^{1/2}}} 2^{\omega(n-p)} \sim \frac{\phi(n)}{\sqrt{2\pi}}\int_{-\infty}^{\lambda} e^{-t^2/2}\,dt \tag{1.2}$$
as $n \to \infty$. In this note we establish the conjecture.

Theorem 1.1. The asymptotic relation (1.2) holds for each real $\lambda$.
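As a purely numerical illustration of (1.2) (not part of the proof, and with helper names that are ours), one can compare the restricted sum with the full sum for a moderate $n$; the $\log\log$ normalization converges very slowly, so only crude agreement with the conjectured limit can be expected at computable sizes.

```python
import math

def smallest_prime_factor_sieve(N):
    """spf[m] = smallest prime factor of m, for 2 <= m <= N."""
    spf = list(range(N + 1))
    for p in range(2, int(N ** 0.5) + 1):
        if spf[p] == p:
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    return spf

def omega(m, spf):
    """Number of distinct prime factors of m (omega(1) = 0)."""
    count, last = 0, 0
    while m > 1:
        p = spf[m]
        if p != last:
            count, last = count + 1, p
        m //= p
    return count

def elliott_sums(n, lam):
    """Full sum over primes p < n of 2^omega(n-p), and its restriction to
    omega(n-p) <= 2 log log n + lam * (2 log log n)^(1/2), as in (1.2)."""
    spf = smallest_prime_factor_sieve(n)
    L = math.log(math.log(n))
    cutoff = 2 * L + lam * math.sqrt(2 * L)
    full = restricted = 0
    for p in range(2, n):
        if spf[p] == p:  # p is prime
            w = omega(n - p, spf)
            full += 2 ** w
            if w <= cutoff:
                restricted += 2 ** w
    return full, restricted
```

For $\lambda = 0$ the conjecture predicts that the restricted sum is eventually about half of the full sum; at small $n$ the ratio is still far from its limit.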
Theorem 1.1 can be restated as follows. Let $X_n$ be a random variable taking the value $m \in \{1, 2, \ldots, n\}$ with probability proportional to $2^{\omega(m)} \mathbf{1}_{n-m \text{ prime}}$. Then
$$\frac{\omega(X_n) - 2\log\log n}{\sqrt{2\log\log n}} \xrightarrow{d} N(0,1). \tag{1.3}$$
The reduction of Theorem 1.1 to Proposition 1.2 is explained in §3. The proof of Proposition 1.2 is given in §4 and uses the classical tools utilized in [9] (the Brun--Titchmarsh and Bombieri--Vinogradov Theorems). Proposition 1.2 has little content when $\log R \asymp \log n$. This is remedied by the following proposition, proved using recent work on primes in arithmetic progressions [6].
Proposition 1.3. There is an absolute constant $\eta > 0$ with the following property. For every $k \ge 1$, (1.4) holds, where $C_k$ is a positive constant depending only on $k$.

Proposition 1.3 is proved in §5. The lower-order term in (1.4) leads to the following application. Given an integer $m > 1$ we write it as $m = p_1(m) p_2(m) \cdots p_{\Omega(m)}(m)$, where $\Omega(m)$ is the number of prime factors of $m$ (multiplicity counted) and $p_1(m) \ge \ldots \ge p_{\Omega(m)}(m)$. We define
$$V(m) := \left( \frac{\log p_1(m)}{\log m}, \frac{\log p_2(m)}{\log m}, \ldots \right),$$
where $p_i(m) := 1$ for $i > \Omega(m)$. This vector sums to $1$ and its entries are nonincreasing. Let $\mathrm{PD}(\theta)$ be the Poisson--Dirichlet distribution with parameter $\theta$ (see Chapter 1 of [5] for its definition).
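The definition of $V(m)$ can be made concrete with a short sketch (helper names are ours): the entries $\log p_i(m)/\log m$, listed with multiplicity in nonincreasing order and padded by zeros, always sum to exactly $1$.

```python
import math

def prime_factors_with_multiplicity(m):
    """Prime factors of m, multiplicity counted, in nonincreasing order."""
    factors, d = [], 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return sorted(factors, reverse=True)

def V(m, length=10):
    """First `length` entries of V(m) = (log p_i(m)/log m)_i, with
    p_i(m) := 1 for i > Omega(m); the full vector sums to 1."""
    ps = prime_factors_with_multiplicity(m)
    ps += [1] * max(0, length - len(ps))
    return [math.log(p) / math.log(m) for p in ps[:length]]
```

For example, $V(12)$ starts $(\log 3/\log 12, \log 2/\log 12, \log 2/\log 12, 0, \ldots)$.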
Let $\tilde{X}_n$ be the random variable taking the value $m \in \{1, 2, \ldots, n\}$ with probability proportional to $2^{\omega(m)}$. As observed by Elliott [10], (1.8) holds; see §6 for a self-contained proof of (1.8). As shown in [8], $V(\tilde{X}_n)$ tends to $\mathrm{PD}(2)$. We expect Theorem 1.1 and Corollary 1.4 to hold because $X_n$ is distributed like $\tilde{X}_n$, only with support restricted to shifted primes, which are believed to behave in most ways like random integers. The structure of the paper is as follows. In the next short section we show how Corollary 1.4 follows from Proposition 1.3. In §3 we reduce Theorem 1.1, the conjecture of Elliott that we prove, to Proposition 1.2. In §4 we show how Proposition 1.2 follows from the classical Bombieri--Vinogradov and Brun--Titchmarsh Theorems; at this point the proof of Theorem 1.1 is complete. In the penultimate §5 we go beyond the Bombieri--Vinogradov range by employing a result from [6], based on the spectral theory of automorphic forms, to prove Proposition 1.3. We conclude the paper with a short proof of (1.8).

Proof of Corollary 1.4
We now show how Corollary 1.4 follows from Proposition 1.3. Suppose $V(X_n) \xrightarrow{d} \mathrm{PD}(\theta)$. Since $\mathbf{P}(X_n \le T) \ll T/n^{1-o(1)}$, $\log X_n/\log n$ tends to $1$ in probability. This implies that, for all $0 < a < b < 1$, the sum $\sum_{i \ge 1} \mathbf{P}(V(X_n)_i \in (a,b))$ converges to the corresponding expectation under $\mathrm{PD}(\theta)$ (the sum appears infinite but can be truncated at $i \le \lfloor 1/a \rfloor$ since $\sum_{i \ge 1} V(X_n)_i \le 1$). By Proposition 1.3 this holds with $\theta = 2$, where $(L_1, L_2, \ldots)$ is distributed according to $\mathrm{PD}(2)$. In the language of Bharadwaj and Rodgers [3], this means we can compute the correlation functions of large prime factors of $X_n$ and show that they tend to those of the Poisson--Dirichlet process with parameter $\theta = 2$, at least for test functions with restricted support.
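The truncation step above admits a direct numerical check: since the entries of $V(m)$ are nonincreasing and sum to $1$, at most $\lfloor 1/a \rfloor$ of them can exceed $a$. A small sketch, with helper names of our own choosing:

```python
import math

def normalized_factor_vector(m):
    """Entries log p / log m over the prime factors p of m (with
    multiplicity), in nonincreasing order; they sum to 1."""
    out, d, mm = [], 2, m
    while d * d <= mm:
        while mm % d == 0:
            out.append(math.log(d) / math.log(m))
            mm //= d
        d += 1
    if mm > 1:
        out.append(math.log(mm) / math.log(m))
    return sorted(out, reverse=True)

def count_in_interval(m, a, b):
    """Number of indices i with V(m)_i in (a, b); at most floor(1/a)."""
    return sum(1 for x in normalized_factor_vector(m) if a < x < b)
```

The bound $\lfloor 1/a \rfloor$ holds for every $m$, which is exactly why the apparently infinite sum over $i$ can be truncated.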

Reduction
We now reduce Theorem 1.1 to Proposition 1.2. In other words, we make precise the well-known principle (see for example [11]) that Erdős--Kac-type results can be deduced from level-of-distribution information. Let $C_k$, $C_k'$ be defined by (3.1), where $X \sim N(0,1)$. It is known that for (1.3) to hold it suffices to establish the moment estimates of the following proposition.

Proposition 3.1. Uniformly in the stated ranges, the moment asymptotics hold with $C_k$, $C_k'$ given by (3.1).

For $f \equiv 1$, Proposition 3.1 is [11, Prop. 3], and the proof works the same way for general $f \ge 0$. We derive from Proposition 3.1 the following criterion, to be proved below.

Corollary 3.2. For every sufficiently large $n$ let $Y_n$ be a random variable taking values in $\{1, 2, \ldots, n\}$ and set
$$I_n := \big( \exp(\exp(\log^{1/3}\log n)),\, n\exp(-\log^{1/3}\log n) \big), \qquad T := n^{1/\log^{1/3}\log n}, \qquad P := \prod_{p \in I_n} p.$$
Fix $k \ge 1$. Suppose that (3.3) holds as $n \to \infty$, and that for some multiplicative function $g \colon \mathbb{N} \to [0,1]$, (3.4) holds as $n \to \infty$, the sum being over $a \mid P$ with $\omega(a) \le k$. If $\sum_{p \le x} g(p) = \theta \log\log x + O(1)$ for some $\theta > 0$ and $\sum_{p \le x} g^2(p) = O(1)$, then (3.5) holds.

Proof. We consider a given $n$ and apply Proposition 3.1 with a suitable choice of $f$. We find that (3.6) holds for every fixed $k \ge 1$ (implied constants may depend on $k$). We claim (3.7); indeed, this follows from our assumptions on $\sum_{p \le x} g(p)$ and $\sum_{p \le x} g^2(p)$. We express (3.6) as (3.8). By (3.4), the right-hand side of (3.8) goes to $0$ as $n \to \infty$. We have an identity decomposing the statistic into terms $A_1, \ldots, A_5$, for $T$ and $I_n$ as in the statement of the corollary. By (3.7), the (constant) random variables $A_1$ and $A_3$ tend to $1$ and $0$, respectively. We have observed that $\mathbf{E}A_2^k \to C_k$ by (3.8). An integer $m$ can have at most $\log m/\log T$ prime factors of size at least $T$, implying $A_5 \ll (\log\log n)^{-1/6}$ and so $\mathbf{E}A_5^k \to 0$ as $n \to \infty$. To handle $A_4$ we use (3.3), which yields $\mathbf{E}A_4^k = o(1)$.
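The text of (3.1) is not reproduced above; if, as is standard in moment-method arguments of this kind, $C_k$ denotes the $k$-th moment of $X \sim N(0,1)$ (an assumption of ours), these constants are $0$ for odd $k$ and $(k-1)!! $ for even $k$, via the recursion $\mathbf{E}X^k = (k-1)\mathbf{E}X^{k-2}$:

```python
def gaussian_moment(k):
    """k-th moment of a standard Gaussian: 0 for odd k, (k-1)!! for even
    k, computed with the recursion E X^k = (k - 1) * E X^(k-2)."""
    if k == 0:
        return 1
    if k == 1:
        return 0
    return (k - 1) * gaussian_moment(k - 2)
```

So the even moments grow as $1, 3, 15, 105, \ldots$, which is what the limiting relations $\mathbf{E}A_2^k \to C_k$ would converge to under this reading of (3.1).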
Corollary 3.2 may also be proved by generalizing an argument of Billingsley [4]. To deduce Theorem 1.1 from Corollary 3.2 we apply it with $Y_n = X_n$, $g(m) = 2^{\omega(m)}/m$ and $\theta = 2$. This reduces matters to showing that (3.9) and (3.10) hold as $n \to \infty$ for every $k \ge 1$, where $T$ and $P$ are defined in Corollary 3.2. We explain why Proposition 1.2 implies (3.9)--(3.10). Applying Proposition 1.2 with $S = \{a : a \text{ divides } \prod_{p < T} p/P\}$, $B = 1$ and $R = T^k$, we obtain (3.11); the triangle inequality, combined with the bound that holds uniformly for $a$ with $(a, n) > 1$, then gives (3.9). To obtain (3.10) we apply Proposition 1.2 with $S = \{a : a \text{ divides } P\}$, $B = 1$ and $R = (\max I_n)^k$ (for $I_n$ defined in Corollary 3.2), obtaining (3.12). We conclude since $\log R/\log n \ll_k \exp(-\log^{1/3}\log n)$ holds by the definition of $R$, and the bound on the sum over $a \mid P$ with $\omega(a) \le k$ holds by Mertens' theorem [13, Thm. 2.7(d)]. The condition $(a, n) = 1$ in (3.12) is harmless by (3.11) and the fact that $\sum_{p \mid (P, n)} 1/p$ goes to $0$ faster than any power of $\log\log n$ by the construction of $P$.
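For $g(m) = 2^{\omega(m)}/m$ the hypotheses of Corollary 3.2 on prime sums are exactly Mertens-type estimates: $g(p) = 2/p$, so $\sum_{p \le x} g(p) = 2\log\log x + O(1)$ and $\sum_{p \le x} g(p)^2 \le 4\sum_p 1/p^2 = O(1)$. A quick numerical check (the window $0.3 < \text{diff} < 0.8$ reflects $2M \approx 0.52$ with Mertens' constant $M$, and is our choice, not the paper's):

```python
import math

def primes_up_to(N):
    """Simple sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (N + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(N ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [p for p in range(2, N + 1) if is_prime[p]]

def mertens_check(x):
    """Return sum_{p<=x} 2/p - 2 log log x and sum_{p<=x} (2/p)^2."""
    ps = primes_up_to(x)
    s1 = sum(2 / p for p in ps)
    s2 = sum((2 / p) ** 2 for p in ps)
    return s1 - 2 * math.log(math.log(x)), s2
```

The first return value stabilizes near $2M \approx 0.52$ while the second stays bounded, matching the two hypotheses with $\theta = 2$.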

Proof of Proposition 1.2
We have $\mathbf{P}(a \text{ divides } X_n) = A_a(n) / \sum_{p<n} 2^{\omega(n-p)}$. Using (1.1) we see that Proposition 1.2 will follow if we show (4.2). We write $2^{\omega(n-p)}$ as a divisor sum, using
$$2^{\omega(m)} = \sum_{d \mid m} \mu^2(d), \qquad \mu^2(d) = \sum_{e^2 \mid d} \mu(e),$$
and separate into different ranges, with the choices of $D$ and $V$ in (4.3) and with $m$ equal to $n - p$. We split $A_a(n)$ accordingly. Writing $\pi(x; q, b)$ for the number of primes in $[1, x]$ that are congruent to $b$ modulo $q$, we obtain the corresponding decomposition (4.4). We first observe some basic properties of $\phi$: (4.5) and (4.6) hold for every $n_1, n_2 \ge 1$, and we have (4.7). The following lemmas estimate the contributions of the $S_i$ to the left-hand side of (4.2) and will be proved below.
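The two identities driving the decomposition, $2^{\omega(m)} = \sum_{d \mid m} \mu^2(d)$ and $\mu^2(d) = \sum_{e^2 \mid d} \mu(e)$, can be verified mechanically for small arguments (helper names ours):

```python
def mobius(n):
    """Moebius function via trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def omega(m):
    """Number of distinct prime factors of m."""
    count, d = 0, 2
    while d * d <= m:
        if m % d == 0:
            count += 1
            while m % d == 0:
                m //= d
        d += 1
    return count + (1 if m > 1 else 0)

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]
```

Both identities are multiplicative, so checking them on a range of integers checks them on all prime powers occurring there.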
Note that $M_0(a)$ coincides with $\mathrm{Li}(n-1)$ times the left-hand side of (4.9). Applying the last four lemmas and recalling our choice of $D$ in (4.3), we obtain (4.10). The contribution of the lower-order term $\mathrm{Li}(n-1) \cdot 2 D_a \phi(n)/(n\phi(a))$ can be absorbed into the right-hand side of (4.10) using (1.6), giving (4.2). This completes the proof of Proposition 1.2.
To bound $T_2$, note that $\pi(n-1; [a,s], n) \le \sum_{p \mid n} 1 \ll \log n$ holds for $(s, n) > 1$, and so $T_2 \ll n/(a \log n)$ by our choice of $D$ and the assumption $a \le R$. To bound $T_1$ we use the Brun--Titchmarsh Theorem.
The theorem saves $\log n$ since $[a, s] < Rn/D \le n^{3/4}$ by our choice of $D$ and the assumption $a \le R$. To estimate the sum in (4.11) we let $g = (a, s)$ and change the order of summation, where we used (4.5) in the first inequality and (4.6) in the second one. By [13, p. 43], (4.12) holds for $A \ge B \ge 2$. Applying (4.12) with $(A, B) = (n/(Dg), D/g)$, and using that $a$ is squarefree with $\omega(a) \le k$, we find a bound which is acceptable by our choice of $D$.

It remains to estimate $S_3(a)$. We consider separately $(se, n) = 1$ and $(se, n) > 1$. To bound $T_4$ we observe that $\pi(n-1; [se^2, a], n) \le \sum_{p \mid n} 1 \ll \log n$ holds when $(se, n) > 1$, which leads to a bound $\ll n/(a \log n) \ll \phi(n)(\log\log n)^2/(a \log n)$, due to our choice of $D$ and since $a \le R$. To bound $T_3$ we use the Brun--Titchmarsh Theorem when $e \le V \log n$, and otherwise use $\pi(n-1; m, n) \le n/m \le n/\phi(m)$. The theorem saves $\log n$ when $e \le V \log n$ because $[se^2, a] \le D(V \log n)^2 R \le n^{1/2}$ by our choices of $V$ and $D$, and since $a \le R$.

Let $g = (a, se^2)$ and observe that there are coprime $g_1$ and $g_2$ such that $g = g_1 g_2$, $g_1 \mid s$ and $g_2 \mid e$. It follows that (4.13) holds for every $V' \ge 1$; here we used (4.5)--(4.8). We apply (4.13) with a suitable $V'$, where the resulting sums are controlled by (4.5)--(4.6). Since $a$ is squarefree, if we let $g = (se^2, a)$, then there are coprime $g_1, g_2$ (not necessarily unique) such that $g = g_1 g_2$, $g_1 \mid e$ and $g_2 \mid s$, and so, using (4.7)--(4.8), we obtain the required bound. Using the same $g$, $g_1$ and $g_2$ as before, we bound the sum over $D/e^2 < s \le D$ with $g_2 \mid s$, where we used (4.5)--(4.8).
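The Brun--Titchmarsh Theorem used repeatedly above states that $\pi(x; q, b) \le 2x/(\phi(q)\log(x/q))$ for $q < x$ and $(b, q) = 1$. A brute-force check on small parameters (illustrative only; the proof applies it with much larger moduli, and all names below are ours):

```python
import math

def euler_phi(q):
    """Euler's totient function via trial factorization."""
    result, n, d = q, q, 2
    while d * d <= n:
        if n % d == 0:
            result -= result // d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        result -= result // n
    return result

def pi_progression(x, q, b):
    """pi(x; q, b): number of primes p <= x with p = b (mod q)."""
    def is_prime(p):
        if p < 2:
            return False
        return all(p % d for d in range(2, int(p ** 0.5) + 1))
    return sum(1 for p in range(2, x + 1) if p % q == b % q and is_prime(p))
```

For each tested modulus the count stays below the Brun--Titchmarsh bound, typically by roughly a factor of $2$, which is the "saving of $\log n$" exploited in the proof.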

4.4. Proof of Lemma 4.4. For $a$ coprime to $n$ let $f_{n,a}(d)$ be given by (4.14), which is multiplicative in $d$.
for $n \le D^5$. We have, for $a$ coprime to $n$, $f_{n,a} = f_{n,1} * g_a$, where $g_a(d) = g(d) \mathbf{1}_{d \mid a^\infty}$ and $g$ is defined in (1.5). Hence, by (4.14), we obtain an expression which implies that, under our assumptions, $E/\phi(a) \ll_k (\log\log n)^2/a$. With the same $E$ as in (4.16), we express the sum on the right-hand side of (4.15) as a product over $p$ times a sum over $d$: the $p$-product is $2^{\omega(a)} \phi(a)/a$ and the $d$-sum is $D_a$.

Proof of Proposition 1.3
As in the proof of Proposition 1.2, we work with $A_a(n)$ (defined in (4.1)) and reduce the proposition to proving (5.1).
We write $2^{\omega(n-p)}$ as a divisor sum, separate the contribution of large square divisors, and apply divisor switching. We obtain, for $m = n - p$, the corresponding decomposition, and then, given $a \ge 1$, we split $A_a(n)$ accordingly. The rest of the proof is separated into four lemmas, proved below, which together establish (5.1). This completes the proof of Proposition 1.3.
which implies the result.
The conclusion follows by applying Cauchy-Schwarz and the Bombieri-Vinogradov Theorem [9, Lemma] (cf. the proof of Lemma 4.2).

5.3. Proof of Lemma 5.2. As $E_1(a)$ requires us to break the $1/2$-barrier in the level of distribution, it is the more delicate error term: in the previous two lemmas $\eta$ can be close to $1$, but not here. We substitute $d = bd'$. Let $E_{1,1}$ denote the contribution to the resulting sum with $b \le J = n^{2\eta}$, and $E_{1,2}$ the remainder. For $E_{1,1}$ we apply the triangle inequality. To make the summation range of $d'$ independent of $b$, we apply a standard finer-than-dyadic decomposition to the $d'$ summation (see e.g. the proof of [2, Thm. 1.1]). More precisely, set $\lambda = 1 + (\log n)^{-B-100}$ and decompose $E_{1,1}$ into the contribution of $d' \le D/J$ and at most $(\log n)^{B+101}$ sums with the additional condition $(D/J)\lambda^k < d' \le (D/J)\lambda^{k+1}$ for some $k \ge 0$. Thus, by the triangle inequality, we obtain the stated bound. Here $b'$ stands for $be^2$; in the second inequality we used $\phi(m) \gg m/\log\log m$ [13, Thm. 2.9], and in the third we used $\sum_{a \le T,\, b' \le S} (b', a)\tau(b') \ll TS(\log(TS + 2))^3$.
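The finer-than-dyadic decomposition can be sketched generically: with $\lambda = 1 + \delta$, a range $(M, D]$ is covered by about $\log(D/M)/\log\lambda$ intervals $(M\lambda^k, M\lambda^{k+1}]$, so taking $\delta = (\log n)^{-B-100}$ produces $\ll (\log n)^{B+101}$ pieces. The numbers in the sketch below are ours, not the paper's:

```python
def finer_than_dyadic_intervals(M, D, lam):
    """Half-open intervals (M*lam^k, M*lam^{k+1}] covering (M, D], lam > 1.
    Their count is ceil(log(D/M) / log(lam))."""
    intervals, lo = [], M
    while lo < D:
        hi = lo * lam
        intervals.append((lo, min(hi, D)))
        lo = hi
    return intervals
```

With $\lambda$ close to $1$ each piece has nearly constant $d'$, which is what makes the $d'$ range effectively independent of $b$ at an acceptable cost of $(\log n)^{B+101}$ pieces.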
Consequently, after some simple reshuffling of the terms with $k < k_0(b)$ with the help of the triangle inequality, we have reduced the required $E_{1,1}$ estimate for Lemma 5.2 to showing (5.2) for any $D' \le D$. If $[be^2, a] = g$ then $a$, $b$ and $e$ divide $g$, and the condition $(d', ae) = 1$ is equivalent to $(d', g) = 1$. We quote [6, Prop. 6.1], specialised to $a_2 = c = c_0 = 1$.

Proposition 5.5. There is an absolute constant $\varpi > 0$ with the following property. Suppose the stated hypotheses hold. Then for any $B > 0$ the stated estimate holds.

As commented after [6, Prop. 6.1], this holds with the characteristic function of the primes replacing the von Mangoldt function, and so implies (5.2). To prove Lemma 5.2 it remains to estimate $E_{1,2}$, for which we claim (5.3).
Since $[be^2, a] = be^2 a/(be^2, a)$, by dyadic decomposition it suffices, in order to obtain (5.3), to show (5.4) for all $\varepsilon > 0$. Here $C_\varepsilon > 0$ depends only on $\varepsilon$ (later occurrences of $C_\varepsilon$ might have a different value), $a \sim A$ means $a \in [A, 2A)$, and $e \sim V$, $b \sim J$ should be interpreted in the same way. We can bound the left-hand side of (5.4) for every $\varepsilon > 0$, which proves (5.4) and completes the proof of Lemma 5.2.
5.4. Proof of Lemma 5.4. Let $1 \le a \le n^\eta$ be squarefree with $(a, n) = 1$. The expression $N_1(a)$ matches $M_2(a)$ defined in (4.4), so it can be estimated up to an error $O(n/(a \log n))$ by Lemma 4.3, its proof, and our choice of $V$. The $d$-sum is estimated in Lemma 4.4, where we used (1.6). It remains to deal with $N_2(a)$, which we handle by completing the $e$-range.

Proof of (1.8)
The characteristic function of the normalized statistic converges to $\mathbf{E} e^{itX} = e^{-t^2/2}$, $t \in \mathbb{R}$, a relation that is readily verified using the Selberg--Delange method, as observed by Elliott [10, p. 1364].