Two point function for critical points of a random plane wave

The random plane wave is conjectured to be a universal model for high-energy eigenfunctions of the Laplace operator on generic compact Riemannian manifolds. This is known to be true on average. In the present paper we discuss one of the most important geometric observables: critical points. We first compute the one-point function for the critical point process; in particular, we compute the expected number of critical points inside any open set. After that we compute the short-range asymptotic behaviour of the two-point function. This gives the unexpected result that the second factorial moment of the number of critical points in a small disc scales as the fourth power of the radius.


Introduction and main results
1.1. Random Gaussian functions. Studying Laplace eigenfunctions and their geometry is a classical subject going back at least to the nineteenth century. It is most important to understand the behaviour of eigenfunctions in the high-energy limit. For a given domain this is a difficult question, and we have only limited information about it outside the few cases where the eigenfunctions are given explicitly.
For a generic chaotic domain (i.e. one where the billiard dynamics is chaotic) it was conjectured by Berry [3] that the high-energy eigenfunctions behave like a random superposition of monochromatic plane waves propagating in different (random) directions, usually referred to as the random plane wave and rigorously defined below. As the comparison between the two lacks mathematical rigour, one may understand it in different ways.
Berry's conjecture seems to be out of reach of modern analytic techniques; a similar statement for a random linear combination of eigenfunctions with close eigenvalues can be proved, though. Namely, for a compact Riemannian manifold M we consider an orthonormal basis of eigenfunctions φ_i satisfying ∆φ_i + t_i² φ_i = 0 with t_0 ≤ t_1 ≤ ..., and define the band-limited functions f_T as linear combinations of the φ_i with t_i in a short band around T, where the coefficients c_i are i.i.d. standard normal random variables. It is known [10, 9, 14, 6] that the local scaling limit of f_T is the random plane wave. The above conjectures and results show that the random plane wave is a universal object, and they motivate its further study; here we are interested in its geometry.

As usual, a Gaussian random field can be defined or constructed in two different ways: on the one hand, we may define it as a concrete random series; on the other hand, we may describe it as uniquely determined by its covariance function via Kolmogorov's theorem. As a concrete random series, we define the random plane wave with energy E = k² to be

(1.1)  Ψ(z) = Ψ(r, θ) = Re ( Σ_{n ∈ Z} c_n J_{|n|}(kr) e^{inθ} ),

where the c_n are i.i.d. standard complex Gaussian random variables and J_n is the Bessel function of the order n. The series (1.1) is almost surely convergent, absolutely and uniformly on any compact set, and hence the sum is a real analytic function. By the definition (1.1), Ψ is a centred Gaussian random field, therefore its law is prescribed by the covariance function ψ(z, w) := E[Ψ(z) · Ψ(w)], for z, w ∈ R². It is then easy to evaluate ψ explicitly as

ψ(z, w) = J_0(k|z − w|),

where J_0 is the Bessel J function of order 0. From this representation it follows that Ψ is stationary (i.e. its law is translation invariant) and isotropic (i.e. its law is invariant under rotations); by the standard abuse of notation we write ψ(z, w) = ψ(z − w).
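A convenient way to sample an approximation to the random plane wave, and to sanity-check the covariance J_0(k|z − w|), is a finite superposition of plane waves with uniform random directions and phases; the covariance of this approximation is exactly J_0 for any number of waves. A minimal sketch (the truncation level, sample sizes, and Monte Carlo check are our own illustrative choices, not the paper's construction):

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)

def sample_rpw(points, k=1.0, n_waves=64):
    """One sample of an approximate random plane wave at given 2D points:
    a superposition of n_waves monochromatic waves with i.i.d. uniform
    directions and phases, normalized so that Var Psi = 1."""
    theta = rng.uniform(0, 2 * np.pi, n_waves)       # propagation directions
    phase = rng.uniform(0, 2 * np.pi, n_waves)       # random phases
    dirs = np.stack([np.cos(theta), np.sin(theta)])  # shape (2, n_waves)
    proj = points @ dirs                             # shape (n_pts, n_waves)
    return np.sqrt(2.0 / n_waves) * np.cos(k * proj + phase).sum(axis=1)

# Empirical covariance at separation r = 1 should be close to J0(1) ~ 0.7652.
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
samples = np.array([sample_rpw(pts) for _ in range(20000)])
emp = (samples[:, 0] * samples[:, 1]).mean()
print(emp, j0(1.0))
```

Each wave contributes covariance (1/n_waves)·J_0, so the sum has covariance J_0 exactly; only higher moments depend on the truncation level.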
It also follows directly from (1.1) that Ψ is an a.s. solution of the Helmholtz equation

(1.2)  (∆ + k²)Ψ = 0,

i.e. Ψ is an a.s. eigenfunction of −∆ with eigenvalue k²; we are interested in the geometry of random (or "typical") solutions Ψ of (1.2). The geometric properties considered below are related to the nodal lines (i.e. Ψ⁻¹(0)), nodal domains (i.e. connected components of the complement of the nodal set), as well as the level curves (Ψ⁻¹(c)) and excursion sets (connected components of {z : Ψ(z) > c}). The geometry of these sets is closely related to that of the set of critical points of Ψ. Critical points, critical values, and their applications appear frequently in the literature on nodal domains of random plane waves and, more generally, smooth Gaussian fields ([11, 13, 12, 8, 7], to mention a few).

Critical points.
There are several intriguing questions on the critical points of random fields. From our perspective, among the most important are those on the distribution of the number of critical points and of the corresponding critical values. This general question can be made concrete in different ways, most basically by evaluating the expected number of critical points inside a given domain; the latter admits a precise answer in a more general setting. In the analogous case of random spherical harmonics (which converge to Ψ as a scaling limit), Cammarota, Marinucci and Wigman [4] evaluated the expected number of minima, maxima and saddles whose value falls into a given window, and also determined the order of magnitude of the corresponding variance for a "generic" window, which does not include the total number of critical points. In a subsequent work, Cammarota and Wigman [5] resolved this outstanding case by showing the variance of the total number of critical points to be of lower order than in the generic case.

It is important to understand the finer aspects of the structure of critical points. Upon looking at Figure 1 (left), it is evident that the structure of critical points is very "rigid" or "regular"; however, it is not entirely clear how to formulate or quantify this statement with mathematical rigour. One can compare this to two other well-known translation invariant processes: Figure 1 (centre) shows the Poisson point process, and Figure 1 (right) is the corresponding picture for the Ginibre point process; both are scaled to have the same intensity as the critical points in Figure 1 (left).
For all three point processes depicted in Figure 1, the expected number of points in a square of side length n is c·n², where c = 1/(2√3 π). This value of c is the natural intensity of critical points of Ψ (see Proposition 1.1), and the other two point processes are rescaled to match it. The fluctuations of the total number of points in a square depend strongly on the point process. Though formally stated for random spherical harmonics (which are only equivalent to Ψ in the limit, under a natural scaling), it is likely that one may deduce from [5] that the variance for the critical points scales like n² log n, whereas for the Poisson point process it is asymptotic to c·n² (with the same c as above), and for the Ginibre ensemble it is of order n.
On the local scale, the probability that there is at least one point in a small disc of radius ρ is the same for all three processes, due to the translation invariance and our choice of normalization. The respective probabilities that there are exactly two points in a small disc are very different, though. For the Poisson point process it is the probability that a Poisson random variable with mean cπρ² equals 2, which by definition is

e^{−cπρ²} (cπρ²)² / 2,

whereas for the Ginibre ensemble (which is a determinantal point process) this probability is of order ρ⁶. This means that the points of the Ginibre ensemble repel each other, inducing their visible regularity or rigidity. Our principal result (Theorem 1.2) is the evaluation of the second factorial moment of the number of critical points of Ψ in a disc of radius ρ, asymptotically as ρ → 0. This suggests (Corollary 1.3 and Conjecture 1.4) that the probability of having precisely two critical points in the disc is of the same order of magnitude (with leading constant smaller by the factor 8√3 ≈ 13.86) as the probability of finding two points in the same disc for the Poisson point process. This minor difference cannot account for the striking difference in the appearance of the two processes, which is highly regular for the critical points of Ψ. It is also worth noting that, despite the fact that the critical points are more "lattice like" than the Ginibre ensemble, clustering is significantly more likely for the critical points (see Figure 2).
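The Poisson benchmark above is elementary to evaluate numerically. A short sketch computing the two-point probability e^{−λ}λ²/2 with λ = cπρ² (the radius ρ = 0.1 is an arbitrary illustrative choice):

```python
import numpy as np

c = 1 / (2 * np.sqrt(3) * np.pi)        # intensity of the critical point process
rho = 0.1                               # illustrative small radius
lam = c * np.pi * rho**2                # mean number of Poisson points in B(rho)
p2_poisson = np.exp(-lam) * lam**2 / 2  # P(exactly 2 points) ~ lam^2/2 ~ rho^4 / 24
print(p2_poisson)

# The leading constant for critical points is smaller by the factor 8*sqrt(3):
print(8 * np.sqrt(3))                   # ~ 13.86
```

Note that λ = ρ²/(2√3), so the Poisson two-point probability already exhibits the ρ⁴ scaling against which Theorem 1.2 is compared.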
To formulate our main results we introduce the notation N_ρ^c for the number of critical points of a random plane wave Ψ in a disc B(ρ) of radius ρ > 0; the numbers N_ρ^{saddle}, N_ρ^{min}, N_ρ^{max}, and N_ρ^e of saddles, minima, maxima, and extrema respectively may be defined in the same way.
Since the law of Ψ is translation invariant, the above random variables are independent of the centre of the disc, so for simplicity we may assume that it is centred at the origin. Another useful observation is that the random plane waves are scale invariant (that is, the law of Ψ with arbitrary k on B(1) is, up to homothety, equivalent to the law of Ψ with k = 1 on B(k)); hence, with no loss of generality, we may assume that k = 1, as we do for the rest of this manuscript. The following principal results of this manuscript evaluate the expectation and the second factorial moment of N_ρ^c for small values of ρ. The first result, consistent with a similar statement in [4, Proposition 1.1], is for the expectation: for an open set Ω, the expected number of critical points of Ψ in Ω is equal to Area(Ω)/(2√3 π).

Evaluating the second moment is more involved, and we were unable to obtain a precise expression. Instead, we show how it behaves asymptotically as the radius ρ → 0.

Theorem 1.2. As ρ → 0, the second factorial moment of the number of critical points N_ρ^c admits the expansion (1.5); for N_ρ^{saddle}, N_ρ^{min}, N_ρ^{max}, and N_ρ^e, the numbers of saddles, local minima, local maxima, and local extrema in a disc of radius ρ, we have the estimates (1.6)-(1.9).

There is no evidence that the estimates (1.6), (1.8), and (1.9) are sharp. In fact, it seems quite likely that they are not, particularly (1.9). Since the extremum-saddle covariance (1.10) gives the main contribution to (1.5), the last formula (1.10) is an asymptotic equality and as such gives a precise decay rate. For integer-valued random variables it is more natural to consider factorial moments instead of the usual moments. The asymptotic behaviour of the variance, dominated by the expectation (and hence less useful), can easily be obtained by combining (1.3) and (1.5). Since all our random variables N_ρ are integer valued, the first and second factorial moments yield asymptotics for the probabilities of the events N_ρ = 1 and N_ρ ≥ 2, as follows.

Corollary 1.3.
As ρ → 0 we have the asymptotic formulas (1.11) for the probabilities of having exactly one point, the estimates (1.12) for the probabilities of having at least two points, and, finally, the estimate (1.13) for the probability of having three points.

The proof is straightforward. For the sake of notational convenience we write N = N_ρ^c. The expectation and the second factorial moment can be written as the series

E[N] = Σ_{k≥1} k P(N = k),    E[N(N − 1)] = Σ_{k≥2} k(k − 1) P(N = k).

Comparing the coefficients in front of P(N = k), we see that if the second factorial moment is o-small of the expectation, then the expectation is dominated by P(N = 1). In this case P(N = 1) has the same leading term as the expectation, and the error term is of the same order as the second factorial moment. This, combined with the results of Proposition 1.1, proves the formulas (1.11). The estimates (1.12) are obtained by applying Markov's inequality to the results of Theorem 1.2. Finally, to prove (1.13), we notice that the event N^c ≥ 3 is majorised by N^e ≥ 2 or N^{saddle} ≥ 2.
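The bookkeeping between factorial moments and point-count probabilities used in this proof can be sanity-checked on any integer-valued random variable with known factorial moments; a Poisson variable (our own illustrative stand-in, not the critical point count) is the simplest, since its nth factorial moment is λⁿ:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.3
N = rng.poisson(lam, size=10**6)

m1 = N.mean()              # E[N]      -> lam   = 0.3
m2 = (N * (N - 1)).mean()  # E[N(N-1)] -> lam^2 = 0.09
p_eq1 = (N == 1).mean()
p_ge2 = (N >= 2).mean()

# Markov-type bound on the second factorial moment: P(N >= 2) <= E[N(N-1)]/2.
print(p_ge2, m2 / 2)
# E[N] - P(N = 1) = sum_{k>=2} k P(N=k) <= E[N(N-1)], so P(N = 1)
# matches E[N] up to an error of the order of the second factorial moment.
print(m1 - p_eq1, m2)
```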
Note that the third factorial moment should be dominated by the event N = 3 which, by (1.13), has probability O(ρ⁷ log(1/ρ)). This gives strong evidence that the third factorial moment is of the same order. Assuming that this is indeed true, we can repeat the argument above, compare the coefficients in the second and third factorial moments, and obtain the corresponding asymptotics for the probabilities of having exactly two and exactly three points.

1.3.
Outline of the proofs. The proofs of Proposition 1.1 and Theorem 1.2 are based on the Kac-Rice formula applied to the gradient of Ψ. The Kac-Rice formula is a standard tool for computing the expected number and the higher (factorial) moments of the zero set of a Gaussian field. In general, under some non-degeneracy conditions on the given random field, for every n ≥ 1 the factorial moments are given by

(1.14)  E[N(N − 1)···(N − n + 1)] = ∫ K_n(z̄) dz̄,

where z̄ = (z_1, ..., z_n) ∈ B(ρ) × ··· × B(ρ) ⊂ R^{2n}, and K_n is the n-point correlation function defined as the conditional Gaussian expectation

K_n(z̄) = φ_{(∇Ψ(z_1),...,∇Ψ(z_n))}(0, ..., 0) · E[ ∏_{i=1}^n |det H_Ψ(z_i)| | ∇Ψ(z_1) = ··· = ∇Ψ(z_n) = 0 ],

where φ_{(∇Ψ(z_1),...,∇Ψ(z_n))}(0, ..., 0) is the density function of the Gaussian vector (∇Ψ(z_1), ..., ∇Ψ(z_n)) evaluated at (0, ..., 0), and H_Ψ(z_i) is the Hessian matrix of Ψ at z_i. The Kac-Rice formula (1.14) holds under the condition that the Gaussian vector (∇Ψ(z_1), ..., ∇Ψ(z_n)) is non-degenerate.
For n = 1 the computation of K_1 is straightforward; it is essentially the same as in [4]. We give it below since it demonstrates the use of the Kac-Rice formula. The case n = 2 is more involved, and the asymptotic behaviour of K_2(z_1, z_2) as z_2 → z_1 (which governs the second factorial moment) was entirely unexpected. Based on the computer simulations above, one would expect the critical points to repel, i.e. K_2(z_1, z_2) → 0 as z_2 → z_1. That would have indicated that the second factorial moment is o(ρ⁴), with plausible true order of decay ρ⁵. To our surprise, a precise analysis of the relevant Gaussian integrals has shown that K_2 does not vanish on the diagonal; it has a finite, non-zero limit. Hence, for small ρ, the second factorial moment behaves like a constant times the square of the area of B(ρ), i.e. a constant times ρ⁴.
It is theoretically possible to compute the behaviour of the higher correlation functions K_n near the diagonal, i.e. when z_i → z_j for i ≠ j, but this seems extremely demanding technically. On the other hand, it is easy to believe that K_n should stay bounded. Taking this as given and using the same argument as in the proof of Corollary 1.3, we obtain the following conjecture.
The conjecture gives a bound on the nth factorial moment and, correspondingly, on the probability of having exactly n points in a small ball. We believe that this estimate holds, but we know that it is not sharp, since already for n = 3 we have a better estimate.

Expected number of critical points

The Kac-Rice formula allows one to compute the expected number of zeros of a Gaussian field, and its higher moments, by expressing the nth (factorial) moment in terms of an n-dimensional integral. In this section we apply the Kac-Rice formula to compute the expected value of N_ρ^c. Counting the critical points of Ψ in the ball B(ρ) is equivalent to counting the zeros of the map B(ρ) → R² given by z ↦ ∇Ψ(z). One defines the zero density K_1 : B(ρ) → R of Ψ as

K_1(z) = φ_{∇Ψ(z)}(0, 0) · E[ |det H_Ψ(z)| | ∇Ψ(z) = 0 ],

where φ_{∇Ψ(z)} is the Gaussian probability density of the 2-dimensional vector ∇Ψ(z) ∈ R², evaluated at (0, 0), and H_Ψ(z) is the Hessian matrix of Ψ at z. By the Kac-Rice formula, if ∇Ψ(z) is non-degenerate for all z ∈ B(ρ), then

(2.1)  E[N_ρ^c] = ∫_{B(ρ)} K_1(z) dz.

Proof of Proposition 1.1. We first observe that in our case the zero density K_1 is independent of z because Ψ is isotropic; hence the Kac-Rice formula (2.1) states that

(2.2)  E[N_ρ^c] = K_1 · Area(B(ρ)).

Moreover, as we are dealing with a smooth Gaussian field, it is possible to write an analytic expression for K_1 in terms of the covariance function ψ and its derivatives; to derive such an expression we evaluate the covariance matrix Σ of the 5-dimensional centred jointly Gaussian vector (∇Ψ(z), ∇²Ψ(z)), where ∇²Ψ(z) is the vectorized Hessian evaluated at z, that is, the vector (∂²_{z_1,z_1}Ψ(z), ∂²_{z_1,z_2}Ψ(z), ∂²_{z_2,z_2}Ψ(z)). The covariance matrix Σ of (∇Ψ(z), ∇²Ψ(z)) is evaluated in Appendix B.1 and has a block-diagonal form. From A we immediately obtain the probability density (2.4) of the 2-dimensional vector ∇Ψ(z) evaluated at (0, 0); in addition, since the first and the second order derivatives of Ψ are independent at every fixed point z ∈ R², the conditional expectation in the definition of K_1 is an unconditional one, taken over the centred jointly Gaussian vector ∇²Ψ(z) = (ζ_1, ζ_2, ζ_3) with covariance matrix C. To evaluate the resulting expression (2.5) we introduce a linear transformation diagonalising C and rewrite E[|ζ_1ζ_3 − ζ_2²|] as a conditional expectation (2.6); to evaluate the conditional expectation in (2.6) we follow the argument in the proof of [4, Proposition 1.1], i.e.
we note that the determinant |ζ_1ζ_3 − ζ_2²| can be written in terms of Z_1, Z_2 independent standard Gaussians and a χ²-distributed random variable X with the corresponding density. The statement follows by combining (2.2), (2.4), and (2.7), and evaluating the resulting Gaussian expectation explicitly.
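The value K_1 = 1/(2√3 π) can be checked by Monte Carlo. A sketch, assuming the covariance structure computed in Appendix B.1 (gradient covariance I/2, Hessian covariance with entries 3/8 and 1/8, as they follow from the expansion of J_0); the sample size is our choice:

```python
import numpy as np

rng = np.random.default_rng(0)
# Covariance of the vectorized Hessian (d11, d12, d22) of Psi at a point:
# E[d11^2] = E[d22^2] = 3/8, E[d11 d22] = E[d12^2] = 1/8, odd moments vanish.
C = np.array([[3/8, 0, 1/8],
              [0, 1/8, 0],
              [1/8, 0, 3/8]])
n = 10**6
H = rng.multivariate_normal(np.zeros(3), C, size=n)
det_H = H[:, 0] * H[:, 2] - H[:, 1]**2   # det of the 2x2 Hessian
E_absdet = np.abs(det_H).mean()          # should approach 1/(2*sqrt(3))

# Grad and Hessian are independent at a point, and Cov(grad) = I/2, so the
# density of the gradient at the origin is 1/(2*pi*sqrt(det(I/2))) = 1/pi.
K1 = E_absdet / np.pi
print(K1, 1 / (2 * np.sqrt(3) * np.pi))  # both ≈ 0.0919
```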

Second factorial moment
3.1. On the Kac-Rice formula for computing the second factorial moment of the number of critical points. We will find an explicit expression for the 2-point correlation function K_2 : B(ρ) × B(ρ) → R, defined as the conditional Gaussian expectation

K_2(z, w) = φ_{(∇Ψ(z),∇Ψ(w))}(0, 0, 0, 0) · E[ |det H_Ψ(z)| · |det H_Ψ(w)| | ∇Ψ(z) = ∇Ψ(w) = 0 ],

in terms of the covariance function ψ and its derivatives. Finding such an expression involves studying the centred Gaussian vector

(3.1)  (∇Ψ(z), ∇Ψ(w), ∇²Ψ(z), ∇²Ψ(w))

with covariance matrix Σ(z, w), z, w ∈ B(ρ). It is known [2, Theorem 6.9] that, if for all z ≠ w the Gaussian distribution of (∇Ψ(z), ∇Ψ(w)) is non-degenerate, then the second factorial moment of the number of critical points in B(ρ) can be expressed as

(3.2)  E[N_ρ^c (N_ρ^c − 1)] = ∫_{B(ρ)×B(ρ)} K_2(z, w) dz dw.

We note that K_2 is everywhere nonnegative.

3.2.
Proof of Theorem 1.2. In order to study the asymptotic behaviour of the second factorial moment of the number of critical points in B(ρ) as the radius ρ of the disc goes to zero, we need to study the centred Gaussian random vector (3.1). Its covariance matrix Σ = Σ(z, w) has the block form

Σ(z, w) = ( A(z, w)  B(z, w) ; B^t(z, w)  C(z, w) ),

where A = A(z, w) is the covariance matrix of the gradients (∇Ψ(z), ∇Ψ(w)), C = C(z, w) is the covariance matrix of the second order derivatives (∇²Ψ(z), ∇²Ψ(w)), and B = B(z, w) is the cross-covariance matrix of the first and second order derivatives. The function Ψ is isotropic; hence the critical point process is also invariant with respect to translations and rotations. This means that its 2-point function K_2(z, w) depends on |z − w| only (this is not true for the covariance matrix Σ); by the standard abuse of notation we write

(3.3)  K_2(z, w) = K_2(|z − w|).
Our aim is to study the asymptotic behaviour of the 2-point correlation function K_2 in the vicinity of r = 0.
For every strictly positive r, the matrix ∆(r) = C − B^t A^{−1} B (the covariance matrix of the Hessians conditioned on the vanishing of the gradients, appearing in (3.4)) is symmetric, hence we may diagonalise it with an orthogonal matrix P(r):

(3.5)  ∆(r) = P⁻¹(r)Λ(r)P(r) = P^t(r)Λ(r)P(r),

where the matrix Λ(r) is diagonal, with the eigenvalues λ_i(r), i = 1, ..., 6, on the diagonal, and P(r) is the orthogonal matrix whose rows are the normalized eigenvectors of ∆(r). The analytic expressions for the eigenvalues and eigenvectors of ∆(r), r > 0, are computed in Lemma A.1 and Lemma A.3 respectively. In Lemma A.2 and Lemma A.4 we compute their Taylor expansions around r = 0. We prove these lemmas with the aid of Mathematica, since the calculations are technically demanding. We stress that all the computations performed with Mathematica are symbolic.
Equation (3.5) implies that we can write the Gaussian density of ζ in diagonal form; this suggests introducing a new variable ξ = Λ^{−1/2}(r)P(r)ζ. Clearly, we can express ζ in terms of ξ as

(3.7)  ζ = P^t(r)Λ^{1/2}(r)ξ.

Using (3.7), we can write the components ζ_i as

ζ_i = Σ_j q_{ij}(r) λ_j^{1/2}(r) ξ_j,

where the q_{ij}(r) are the elements of Q(r) = P⁻¹(r) = P^t(r); the columns of Q form an orthonormal basis of eigenvectors of ∆(r). With this change of variables we can rewrite the two quadratic forms ζ_1ζ_3 − ζ_2² and ζ_4ζ_6 − ζ_5² in (3.4) in terms of ξ. Summing it all up, the 2-point correlation function K_2 in (3.4), written in ξ coordinates, becomes the Gaussian integral (3.8).

To obtain the asymptotic behaviour of the integral in (3.8) around r = 0, we Taylor expand around the origin the entries q_{ij} of the matrix Q and the eigenvalues λ_j. Such Taylor expansions, up to O(r⁴), are given by equations (A.5) and (A.3). Combining these expansions, and noting that the first two factors in the integrand are polynomials of degree 4 in ξ, we obtain the expansion (3.9). In the remaining Gaussian integral the variables separate, and it is a product of standard one-dimensional integrals; each of them is equal to √(2π), so the entire integral is (2π)³. The matrix A has a simple block structure, and it is easy to compute its determinant; the explicit computation is in Appendix B. Combining these asymptotics with (3.9), we finally obtain the behaviour of K_2(r) as r → 0 and, in view of (3.2), the asymptotics of the second factorial moment as ρ → 0.

To prove the second part of Theorem 1.2 we need to evaluate the two-point correlation function K_2 modified for the respective types of critical points. The modified function K_2 has the same expression (3.4), with the integration over a proper subset of R⁶, i.e. over the set of ζ for which the corresponding critical points are of the prescribed types.
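The diagonalisation and whitening step ξ = Λ^{−1/2}Pζ can be illustrated numerically. A sketch using numpy's `eigh` and a stand-in symmetric positive-definite matrix in place of ∆(r) (the true entries, given in Appendix A, depend on r):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for Delta(r): an arbitrary symmetric positive-definite 6x6 matrix.
M = rng.standard_normal((6, 6))
Delta = M @ M.T + 6 * np.eye(6)

# Diagonalise: Delta = P^t Lambda P with orthogonal P (rows = eigenvectors).
lam, Q = np.linalg.eigh(Delta)       # columns of Q are the eigenvectors
P = Q.T

# Sample zeta ~ N(0, Delta) and whiten: xi = Lambda^{-1/2} P zeta.
zeta = rng.multivariate_normal(np.zeros(6), Delta, size=200_000)
xi = (zeta @ Q) / np.sqrt(lam)       # row-wise version of the transform
emp_cov = np.cov(xi.T)
print(np.round(emp_cov, 2))          # ≈ 6x6 identity matrix
```

In the new coordinates the density of ξ is standard Gaussian, which is what turns (3.4) into the explicit Gaussian integral (3.8).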
To be more precise, let us define the two Hessians at the points z and w (already conditioned on z and w being critical points). In terms of the ζ_i these Hessians are

H_1 = ( ζ_1  ζ_2 ; ζ_2  ζ_3 ),    H_2 = ( ζ_4  ζ_5 ; ζ_5  ζ_6 ),

with characteristic polynomials λ² + b_iλ + c_i, where b_i = −Tr H_i and c_i = det H_i. The particular type of a critical point depends on the eigenvalues of its Hessian: both are negative at a local maximum, both are positive at a local minimum, and they are of different signs at a saddle. We may reformulate these conditions in terms of the coefficients b_i and c_i: a critical point with Hessian H_i is a minimum if c_i > 0 and b_i < 0, a maximum if c_i > 0 and b_i > 0, and a saddle if c_i < 0 (we may ignore the probability-zero cases when one of the eigenvalues vanishes).
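The classification by the signs of b = −Tr H and c = det H is easy to encode; a minimal sketch (the function name is ours):

```python
import numpy as np

def classify(H):
    """Classify a nondegenerate critical point from its 2x2 symmetric Hessian
    using b = -tr(H) and c = det(H)."""
    b, c = -np.trace(H), np.linalg.det(H)
    if c < 0:
        return "saddle"                       # eigenvalues of different signs
    return "minimum" if b < 0 else "maximum"  # c > 0: same-sign eigenvalues

print(classify(np.array([[2.0, 0.0], [0.0, 1.0]])))    # minimum
print(classify(np.array([[-2.0, 0.0], [0.0, -1.0]])))  # maximum
print(classify(np.array([[2.0, 0.0], [0.0, -1.0]])))   # saddle
```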
As before, we rewrite the ζ_i in terms of the ξ_i. This gives the coefficients of the characteristic polynomials as functions of ξ and r. Expanding in powers of r, we get (3.10). We observe the following: all of the coefficients b_{i,j} are linear functions of ξ, and all of the coefficients c_{i,j} are quadratic forms in ξ. Since all the expressions we deal with are homogeneous functions of various degrees, it is natural to work in spherical coordinates. We introduce s_i = ξ_i/|ξ| and rescale b_i by |ξ| and c_i by |ξ|². Abusing notation, we denote the rescaled coefficients b_i, b_{i,j}, c_i, and c_{i,j}, which are now functions of s_i instead of ξ_i, by the same letters; there is no confusion, since from now on all expressions will be in terms of |ξ| ∈ (0, ∞) and s = (s_1, ..., s_6) ∈ S⁵. With this notation, the formula (3.8) for K_2 becomes (3.11), where ds is the spherical volume element on the unit sphere S⁵ and we have evaluated the standard Gaussian radial integral

∫_0^∞ |ξ|⁹ e^{−|ξ|²/2} d|ξ| = 2⁷ · 3.
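The radial integral quoted above (the ninth moment arises because the integrand in (3.8) is a degree-4 polynomial times the 6-dimensional Gaussian, leaving |ξ|^{4+5} after passing to spherical coordinates) can be checked directly:

```python
import numpy as np
from scipy.integrate import quad

# int_0^inf t^9 exp(-t^2/2) dt = 2^4 * Gamma(5) = 16 * 24 = 384 = 2^7 * 3
val, err = quad(lambda t: t**9 * np.exp(-t**2 / 2), 0, np.inf)
print(val)  # 384.0
```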
Minimum-minimum two point function.
The two-point correlation function K_2^{min,min}(r) corresponding to the local minima is given by (3.11), except that we replace the domain of integration S⁵ by

S^{min,min} = {s ∈ S⁵ : c_1 > 0, c_2 > 0, b_1 < 0, b_2 < 0},

the set of s for which both Hessians correspond to local minima. If |s_4s_6| > Cr for a sufficiently large C, then c_1 and c_2 are of opposite signs (for the rest of this section we use C to denote absolute constants). This implies that S^{min,min} is a subset of {s : |s_4s_6| < Cr} for some sufficiently big C. It is easy to see that on this set |c_i| = O(r²), and the required bound on K_2^{min,min} follows. The other estimate of (1.6) follows from symmetry considerations.

Minimum-maximum two point function.
In a similar way, we have to estimate the integral over S^{min,max}, the set where one point is a minimum and the other is a maximum. This set is given by the conditions that both c_i are positive and that b_1 and b_2 are of different signs. First, the same argument as above forces |s_4s_6| < Cr for some large constant C. If |s_6| > Cr² for a large constant C, then the leading terms in the formulas (3.10) for the b_i dominate, and b_1 and b_2 are of the same sign, contradicting our assumption. Hence |s_6| < Cr², and under this assumption the leading terms in both b_i cancel up to terms of order O(r³). Again, since the b_i should be of different signs, the term of order O(r³) is forced to dominate; this yields, for some big constant C, a condition which is stronger than the previous condition |s_6| < Cr².
Substituting the estimate |s_6| < Cr² into the formulas (3.10) for c_1 and c_2, we see that they are both equal to the same leading expression, which yields the required bound on K_2^{min,max}.

Saddle-saddle and extremum-extremum two point functions.
For two extrema or two saddle points both c_i are forced to be of the same sign. The same argument as for the minimum-minimum case yields the corresponding bounds.

Extremum-saddle two point function.

Finally, we notice that N^c = N^e + N^{saddle}, and hence

N^c(N^c − 1) = N^e(N^e − 1) + N^{saddle}(N^{saddle} − 1) + 2N^eN^{saddle}.

Combining this formula with the previous estimates, we obtain the asymptotics (1.10) for the extremum-saddle term. This completes the proof of Theorem 1.2.
Appendix A. Eigenvalues and eigenvectors of ∆(r), r > 0

We introduce the notation

∆(r) = ( ∆_1(r)  ∆_2(r) ; ∆_2(r)  ∆_1(r) ),

where ∆_1 and ∆_2 are 3 × 3 symmetric matrices whose elements are expressed in terms of the quantities a_i, i = 1, ..., 8, defined in (A.1). We now compute the eigenvalues and eigenvectors of the matrix ∆(r), r > 0. To state them we introduce auxiliary quantities such as A_1^+(r) = a_1(r) + a_5(r) + ···, which are explicit algebraic expressions in the a_i(r). The elements of the eigenvectors v_i are likewise explicit algebraic expressions in terms of the a_i defined by (A.1). Normalizing the vectors and using the expansions (B.5) of the a_i, we obtain the expansion for the matrix Q stated below.

Lemma A.4. The orthogonal matrix Q(r) of normalized eigenvectors of ∆(r) admits the expansion (A.5).

Appendix B. Expansions of covariance matrices

B.1. Covariance matrix of (∇Ψ(z), ∇²Ψ(z)). In this section we compute the covariance matrix Σ of the 5-dimensional centred Gaussian vector which combines the gradient and the elements of the Hessian evaluated at z. By the translation invariance of Ψ, Σ does not depend on the point z ∈ R². It is convenient to write the covariance matrix in blocks

Σ = ( A  B ; B^t  C ).

It is a standard fact that the covariances of the derivatives of a Gaussian field are given by derivatives of the covariance kernel.
The computations of A, B and C are quite lengthy, but they do not require sophisticated arguments beyond repeated differentiation of Bessel functions. For example, to compute A we first have to compute expressions of the form ∂²/(∂z_i ∂w_j) J_0(|z − w|), where z = (z_1, z_2) and w = (w_1, w_2) are two points in R². The elements of A are then obtained by passing to the limit w → z.
To give an example of such a computation, we give the details for the entry (A)_{1,1}. We first use the chain rule to express ∂²/(∂z_1 ∂w_1) J_0(|z − w|) in terms of the derivatives of J_0. The Bessel function of order zero is defined by the power series

J_0(x) = Σ_{m=0}^∞ (−1)^m (x/2)^{2m} / (m!)² = 1 − x²/4 + x⁴/64 − ···.
From this expansion we can immediately get the expansions for J_0' and J_0'' and show that

lim_{w→z} ∂²/(∂z_1 ∂w_1) J_0(|z − w|) = 1/2.
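This limit is easy to verify numerically. Taking z and w on the horizontal axis, J_0(|z − w|) depends only on x = z_1 − w_1 and the mixed derivative equals −J_0''(x); a central second difference at 0 (step size our choice) recovers 1/2:

```python
import numpy as np
from scipy.special import j0

# d^2/dz1 dw1 J0(|z1 - w1|) = -J0''(z1 - w1); estimate -J0''(0) by a
# central second difference (J0 is even, so this is well defined at 0).
h = 1e-3
mixed = -(j0(h) - 2 * j0(0.0) + j0(-h)) / h**2
print(mixed)  # ≈ 0.5
```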
In the same way we compute the other entries of A and obtain

A = (1/2) I_2 = ( 1/2  0 ; 0  1/2 ).

Since the first and second order derivatives of any stationary field are independent at every fixed point z ∈ R², we immediately have B = 0. With analogous calculations, but using higher order derivatives of J_0, we compute the entries of C and find that

C = ( 3/8  0  1/8 ; 0  1/8  0 ; 1/8  0  3/8 ).

B.2. Covariance matrix of (∇Ψ(z), ∇Ψ(w), ∇²Ψ(z), ∇²Ψ(w)). We compute the covariance matrix Σ(z, w) of the 10-dimensional Gaussian random vector which combines the gradients and the elements of the Hessians evaluated at z and w, namely (∇Ψ(z), ∇Ψ(w), ∇²Ψ(z), ∇²Ψ(w)), only for the case z = (0, 0) and w = (0, r). As explained in Section 3.2, this is sufficient in order to evaluate K_2(r) for all relevant r, thanks to the isotropy of Ψ. It is convenient to write the matrix Σ(z, w) in block form, i.e.

Σ(z, w) = Σ(r) = ( A(z, w)  B(z, w) ; B^t(z, w)  C(z, w) ).
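The fourth-order covariances of the Hessian entries can be read off the expansion J_0(r) = 1 − r²/4 + r⁴/64 − ···: for instance E[(∂²_{11}Ψ)²] = ∂⁴/∂x⁴ J_0 = 3/8 and E[∂²_{11}Ψ ∂²_{22}Ψ] = ∂⁴/(∂x²∂y²) J_0 = 1/8 at the origin. A finite-difference sanity check (the stencils and step size are our choices):

```python
import numpy as np
from scipy.special import j0

h = 0.05
g = lambda x, y: j0(np.hypot(x, y))  # the covariance kernel psi(z - w)

# Pure fourth derivative d^4/dx^4 at 0 via the 5-point stencil (1,-4,6,-4,1):
d4x = (g(-2*h, 0) - 4*g(-h, 0) + 6*g(0, 0) - 4*g(h, 0) + g(2*h, 0)) / h**4

# Mixed derivative d^4/(dx^2 dy^2) at 0: second difference in x composed
# with second difference in y over a 3x3 stencil.
d2x2y = sum(wx * wy * g(ix * h, iy * h)
            for ix, wx in [(-1, 1), (0, -2), (1, 1)]
            for iy, wy in [(-1, 1), (0, -2), (1, 1)]) / h**4

print(d4x, d2x2y)  # ≈ 0.375 and 0.125
```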
Since we already have explicit formulas for the elements of A, B, and C, we can obtain the following explicit formulas and expansions for the a_i that define ∆: