A novel approach to the computation of one-loop three- and four-point functions. I -- The real mass case

This article is the first of a series of three presenting an alternative method to compute one-loop scalar integrals. This novel method enjoys a couple of interesting features compared with the previously adopted method, which closely follows 't Hooft and Veltman. It directly proceeds in terms of the quantities driving algebraic reduction methods. It applies to the three-point functions and, in a similar way, to the four-point functions. It also extends to complex masses without much complication. Lastly, it extends to kinematics more general than those of physical, e.g. collider, processes relevant at one loop. This last feature may be useful when considering the application of this method beyond one loop, using generalised one-loop integrals as building blocks.


Introduction
Automated evaluations of multi-leg loop processes demand a fast and numerically stable evaluation of Feynman integrals. In particular, the calculation of two-loop three- and four-point functions in the general complex mass case remains challenging. Getting a reliable result using multidimensional numerical integration and sector decomposition [1][2][3][4][5] has a high computing cost. The derivation of a fully analytic result remains beyond reach so far in the general mass case. In between, approaches based on Mellin-Barnes techniques [6][7][8][9][10] allow one to perform part of the integrals analytically; yet, as far as we understand, the number of integrals left over for numerical quadrature depends on the topologies considered and can remain rather costly. An alternative approach performing some or many of the Feynman parameter integrations analytically in a systematic way, so as to reduce the number of integrations to be performed numerically, would therefore be useful.
Such a working program was initiated in [11] for the calculation of massive two-loop N-point functions using analytically computed one-loop building blocks. This approach is based on the implementation of two-loop scalar N-point functions in n dimensions, (2)I_N^n, as double integrals of the form (2), where the W(ρ, ξ) are some weighting functions whereas the (1)I_{N′}^{n′}(ρ, ξ) are "generalised one-loop type" N′-point Feynman-type integrals 1 . The latter are "generalised" in the sense that the integration domain spanned by the Feynman parameters is no longer the usual simplex {0 ≤ z_j ≤ 1, j = 1, ..., N′; Σ_{j=1}^{N′} z_j = 1} at work for the one-loop N′-point function, but another domain, e.g. a hypercube or a cylinder with triangular basis, which depends on the topology of the two-loop N-point function considered. The generalisation also concerns the underlying kinematics, which, besides external momenta, depends on two extra Feynman parameters ρ and ξ. The parameter space spanned by this kinematics is larger than the one spanned in one-loop N′-particle processes at colliders: for example, Gram determinants may be all positive, whereas this never happens in the physical region for one-loop N′-particle collider processes. Both these generalisations may be addressed by tuning case-by-case adaptations of the methods well established in standard one-loop calculations [12] and using careful analytic continuations [13] for all cases considered, whose implementation can nevertheless be tricky. Alternatively, in the present article and in two related companion papers, we propose to develop a novel approach addressing both these generalisations in a systematic way.
More generally, for both the three- and four-point functions, the approach presented will consider integrals of the form 2 (1.1), where n̄ = 1, 2 or 3, D = X^T · G · X − 2 V^T · X − C, with an n̄ × n̄ Gram matrix G, a column n̄-vector V, and the x_j, j = 1, ..., n̄, the components of a column n̄-vector X spanning the simplex Σ = {0 ≤ x_j ≤ 1, j = 1, ..., n̄ | Σ_{j=1}^{n̄} x_j ≤ 1}. The method will make extensive use of the Stokes-type identity 3 (1.2) proven in appendix A, where ∆_n̄ = V^T · G^{-1} · V + C and ∇ stands for the gradient w.r.t. X.
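The explicit form of identity (1.2) is derived in appendix A. For n̄ = 1 it can be rederived in a few lines: writing D(x) = g x² − 2 v x − c and ∆_1 = v²/g + c, one checks that d/dx[(x − v/g) D^{−α}] = (1 − 2α) D^{−α} − 2α ∆_1 D^{−(α+1)}, hence 2α ∆_1 ∫₀¹ dx D^{−(α+1)} = (1 − 2α) ∫₀¹ dx D^{−α} − [(x − v/g) D^{−α}]₀¹. A minimal numerical sketch of this one-dimensional instance, with illustrative values of g, v, c (not taken from the paper) and the boundary-only choice α = 1/2:

```python
# Hypothetical illustrative values (not from the paper): a positive-definite
# quadratic D(x) = g x^2 - 2 v x - c on [0, 1], with Delta_1 = v^2/g + c.
g, v, c = 1.0, 0.3, -2.0          # D(x) = x^2 - 0.6 x + 2 > 0 on [0, 1]
x0 = v / g                        # stationary point of D
delta1 = v ** 2 / g + c           # here -1.91, non-zero as required

def D(x):
    return g * x * x - 2.0 * v * x - c

def integrate(f, a, b, n=20000):
    # simple composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1)) \
        + 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

alpha = 0.5                       # the value trivialising the bulk term for nbar = 1
lhs = 2.0 * alpha * delta1 * integrate(lambda x: D(x) ** (-(alpha + 1.0)), 0.0, 1.0)
bulk = (1.0 - 2.0 * alpha) * integrate(lambda x: D(x) ** (-alpha), 0.0, 1.0)
boundary = (1.0 - x0) * D(1.0) ** (-alpha) - (0.0 - x0) * D(0.0) ** (-alpha)
rhs = bulk - boundary
print(lhs, rhs)                   # the two sides agree
```

At α = 1/2 the bulk term carries the factor 1 − 2α = 0, so the integral is given by the boundary term alone; this is the mechanism exploited throughout the paper.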
The present article and two companion papers [14,15] aim at presenting the method advocated to compute the building blocks (1)I_{N′}^{n′} by applying it to the calculation of the standard one-loop three- and four-point functions as a "proof of concept". It has been designed to apply straightforwardly so as to, on the one hand, trivialise the Feynman parameter integrations as boundary terms in the integrals defining the above building blocks (1)I_{N′}^{n′}(ρ, ξ) in a systematic way, i.e. regardless of the shape of the integration domain of the Feynman parameters; and, on the other hand, to obtain all necessary analytic continuations in a systematic way as well. The primary aim of its comparison with the well-established methods on this well-studied case is not so much to readily provide an alternative competing first-line method to compute standard one-loop N-point functions, but rather to use this comparison as a test bench, the well-established approach serving as a benchmark in the matter of efficiency, which provides guidance to improve and optimise the novel method before its application to the two-loop ingredients for which it has been designed. We aim at showing its ability to circumvent the subtleties of the various analytic continuations in the kinematical variables in a systematic way. We also aim at controlling the proliferation of dilogarithms in the closed-form expressions, which would otherwise hamper its use in further two-loop calculations. The computation of the building blocks (1)I_{N′}^{n′}(ρ, ξ) themselves using this method will be presented in a future article.
Let us recall that the motivation behind this work is to study two-loop massive three- and four-point functions in a scalar theory. The case where some internal masses vanish may lead to soft/collinear divergent functions, for which n and n′ have to be taken away from 4. The computation of the "generalised one-loop function" in this case (restricted to the case where the phase space volume of the Feynman parameters is a simplex) is presented in a companion paper [15]. Nevertheless, even if no internal mass vanishes, in a general scalar theory with three- and four-leg vertices some two-loop three- and four-point functions diverge in the UV region. In this case, the space-time dimension has to be taken slightly below 4 to regularise the Feynman integrals. It can be shown by power counting that, in this theory, the two-loop UV-divergent three- and four-point diagrams have four propagators, which implies that the associated "generalised one-loop functions" are two-point functions [11]. To compute the latter analytically, after expanding in n′ − 4 around 0, one has to integrate over one Feynman parameter a logarithm, raised at most to the power 2, whose argument is a second order polynomial in this Feynman parameter. This integration can be carried out without any difficulty. In more complicated field theories, for example gauge theories, the UV divergences can come from tensorial integrals. Although in principle the method presented can be used there too, this case is postponed to future work. Since in this article we focus on fully massive three- and four-point one-loop functions, which are related to UV-finite scalar two-loop functions, we set n and n′ to 4 in the rest of this article.
As already said, the work presented in the present article does not quite provide an alternative competing first-line method to compute standard one-loop N-point functions; nevertheless it also provides, we think, a few interesting byproducts which are not manifest in the existing results. Most of the general results for three- and four-point scalar functions valid in the complex mass case and expressed in terms of dilogarithms [12,16,17] hold in the physical domain, except those given in ref. [13], where the authors presented very compact results for the massive four-point scalar function whose validity has been extended by analytic continuation to kinematical ranges accessible beyond one loop. Let us mention another general result for the scalar box in arbitrary dimensions [18], expressed in terms of generalised hypergeometric functions, but this result is hardly usable for practical computations. In a previous article [19] closely following 't Hooft and Veltman [12], one-dimensional integral representations free of numerical instabilities caused by negative powers of Gram determinants were obtained for three-point functions in 4, 6 and 8 dimensions in the general complex mass case, and they have been used in the Golem95 library [20,21]. The various analytic handlings used in [19] following [12] (the successive cuttings and pastings of integrals and the corresponding changes of variables involved) were rather intricate; they are even more so for the (yet unpublished) case of the four-point functions in 4 dimensions or more. Furthermore, in [12,16] and [17], the connection between the analytic results and the algebraic quantities det (S) (the determinant of the kinematic matrix associated with a given Feynman diagram), det (G) (the associated Gram determinant), and the algebraic reduction coefficients b_i involved in algebraic reduction methods such as Golem [22] (as well as similar quantities corresponding to associated pinched diagrams), is badly blurred.
Tracing back these algebraic ingredients in the coefficients and in the arguments of logarithms and dilogarithms of the final expressions then requires cumbersome extra work. Our alternative approach circumvents at least some of the above difficulties. In this respect the method presented enjoys a couple of interesting features compared with the method originally adopted in [19]. It directly proceeds in terms of the algebraic quantities det (S), det (G), b_i, etc. The derivation of the one-loop integral representations happens to be more systematic and applies to the three- and four-point function cases in a similar way. It also extends to complex masses without much complication. It also applies to kinematical configurations beyond those relevant for collider processes at the one-loop order. The method exploits an integral identity which, as a byproduct, also allows one to compute in closed form, in a (relatively) simple way, angular integrals of eikonal terms describing real (massless) emission off massive emitters, while keeping the origin of the kinematical arguments involved in the final result more transparent.
At first glance, a price to pay with the novel method seems to be an inherent increase in the number of dilogarithms involved compared to the 't Hooft-Veltman results (a doubling in the three-point case and worse in the four-point case) when the one-dimensional integral representations are further computed in closed form. However, this increase can be counteracted in a simple way, at least in the three-point case. The four-point case still needs more work to resolve this issue.
A full account of this approach involves extensive technical material. In particular, the treatment of the four-point function involves a multistep process, whereas the upgrade from the real mass case to the general complex mass case implies a patchwork of cases. Both deserve some elaboration. Gathering everything at once in a single article might seem indigestible; we therefore chose to split the presentation into a triptych for ease of reading. In part I, forming the present article, we revisit the one-loop scalar three-point function I_3^4 with real internal masses to present the method, and apply it to the four-point function I_4^4 with internal masses all real. The extension to general complex masses, especially for I_4^4, will constitute part II, presented in a companion article. The presence of some zero internal masses causes the appearance of infrared soft and/or collinear singularities and therefore requires specific treatment, although the general line of thought sticks to the one of the fully massive case. These specific cases with zero masses will thus be addressed separately in part III, forming yet another companion article.
In this article we start by considering the three-point function I_3^4 with real internal masses as a warm-up in sec. 2. We successively present two variants of the method. The simplest variant, which we call the "direct way", is presented in subsubsec. 2.2.1. It is well suited for the three-point function; unfortunately it does not lend itself to a handy extension to the case of the four-point function. In subsubsec. 2.2.2 we therefore tailor a more sophisticated alternative called the "indirect way". The latter comes across as rather artificial in the three-point case, but it has the virtue of extending in a straightforward, albeit somewhat foaming, way to the four-point case. We conclude this section by commenting on the apparent doubling of dilogarithms and the way to counteract it. In sec. 3 we consider the four-point function with internal masses all real; we follow the "indirect way", which comprises one stage more than in the three-point case because of the larger number of Feynman parameters. Its implementation in four steps is presented in subsecs. 3.1 to 3.4. We then discuss our results with respect to those of ref. [12], as well as the proliferation of dilogarithms, in subsec. 3.5. Lastly, we conclude. Various appendices gather a number of utilities: tools, proofs of steps, etc. We removed them from the main text to facilitate its reading, but we consider them useful to the reader. Accordingly, in appendix A we present a Stokes-type identity which is the master identity for the derivation of our results. Appendix B explains how to tune the power of the denominators to apply the master identity of appendix A. Appendix C provides a detailed discussion of how to solve the equation Σ_{j=1}^N S_ij b_j = 1, ∀ i = 1, ..., N, even in the case of peculiar kinematical configurations for which det (G), or det (S) and det (G), vanish. Appendix D gives the derivation of two integrals useful for the computation of three-point and four-point integrals.
And appendix E shows how to compute the last integration in closed form in terms of dilogarithms. Lastly, appendix F contains a discussion about the prescription of the imaginary part of det (S).

2 Warm-up: the scalar three-point function I_3^4

Each internal line with momentum q_i stands for the propagator of a particle of mass m_i. We define the kinematic matrix S, which encodes all the information on the kinematics associated with this diagram, by: The squares of differences of two internal momenta can be written in terms of the internal masses m_i and the external invariants p_i^2, so that S reads: Here Z stands for a column 3-vector whose components are the z_i, S is the 3 × 3 kinematic matrix associated with the diagram of fig. 1, and the superscript "T" stands for matrix transpose. Let us single out the subscript value a (a ∈ S_3 = {1, 2, 3}) and write z_a as 1 − Σ_{i≠a} z_i. We find: where the 2 × 2 Gram matrix G^(a) and the column 2-vector V^(a) are defined by We label b and c the two elements of S_3 \ {a}, with b < c. We write the polynomial (2.4) in terms of the 2 × 2 matrix G^(a), the column 2-vector V^(a) and the scalar C^(a) as: so that I_3^4 can be written:

Step 1
Let us substitute identity (1.2) into eqs. (2.7) or (1.1). Were the power α + 1 in the l.h.s. of eq. (1.2) such that n̄ − 2α = 0, only the boundary term, i.e. the second term in eq. (1.2), would remain, thus making one integration in I trivial. In the case of the N = 3-point function, cf. eq. (2.7), n̄ = N − 1 is equal to 2. Imposing n̄ − 2α = 0 thus forces α + 1 = 2 in eq. (1.2). However, in eq. (2.7), 1/D appears raised to the power 1, not 2. The idea is thus to adjust the power of the denominator by introducing an appropriate integral representation of the form (2.8) (ξ^ν being some suitable power of ξ) so that the power α′ of the effective denominator in this representation matches the request n̄ − 2α′ = 0. The auxiliary identity (2.8) is properly made explicit and derived in its general form in appendix B. In the case of the three-point function in four dimensions I_3^4, the relevant form of identity (2.8) is simply the familiar integral representation: 1/D = ∫_0^{+∞} dξ / (D + ξ)^2 (2.9). The substitution of identity (2.9) into eq. (2.7) provides a representation of I_3^4 where (D^(a) + ξ)^2 now replaces D^(a) in the integrand: Identity (1.2) is then applied to the integrand of (2.10), considered as a function of the two variables x_b, x_c to be integrated first keeping ξ fixed, yielding: The use of the Stokes identity (1.2) in the integral representation (2.9) induces an apparent pole at ξ = ∆_2 in eq. (2.11). Insofar as ∆_2 ≠ 0 this apparent pole is no issue, as can be justified directly as follows. It is manifestly no issue in the real mass case, for which the − i λ contour prescription avoids this apparent pole anyway. In the general complex mass case the − i λ contour prescription is overruled by the finite imaginary part of ∆_2, whose sign may however change in a continuous way with the kinematics: one might then worry about what happens whenever Im(∆_2) vanishes with a change of sign while Re(∆_2) > 0.
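Identity (2.9) itself is elementary and can be sanity-checked numerically (D below is an arbitrary illustrative positive constant, not tied to any kinematics):

```python
# Numerical check of the representation 1/D = ∫_0^∞ dξ / (D + ξ)^2,
# used to raise the power of the denominator from 1 to 2.
D = 1.7   # illustrative positive value

def integrand(xi):
    return 1.0 / (D + xi) ** 2

# composite midpoint rule on [0, L], plus the exact tail ∫_L^∞ = 1/(D + L)
L, n = 200.0, 400000
h = L / n
integral = h * sum(integrand((k + 0.5) * h) for k in range(n)) + 1.0 / (D + L)
print(integral, 1.0 / D)   # the two values agree
```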
In this respect we shall however note the distributional identity (2.12). This distribution, whether by direct calculation of the l.h.s. or by inspection of the r.h.s., proves to vanish identically outside its singular support, contained in the region where D^(a)(x_b, x_c) + ∆_2 = 0. On the other hand, Im(D^(a)(x_b, x_c)) < 0 on the integration simplex Σ_bc, where it is a convex linear combination of the imaginary parts of the masses squared. Thus, when Im(∆_2) vanishes, Im(D^(a)(x_b, x_c) + ∆_2 − i λ) < 0 on Σ_bc, hence the residue of the apparent pole ξ = ∆_2 in eq. (2.11) vanishes identically in x_b, x_c on Σ_bc.
The cases where ∆_2 vanishes require a separate examination. This implies that det (S) vanishes. However, the condition det (S) = 0 does not correspond to a kinematical singularity of the initial Feynman integral if the corresponding eigenvector of S points outside the quadrant {z_j ≥ 0}. In this case D^(a)(x_b, x_c) never vanishes on the simplex Σ_bc and the above argument based on eq. (2.12) still applies. The case corresponding to the presence of infrared singularities in conjunction with some internal masses is addressed separately in the third article of the series. Lastly, if the condition det (S) = 0 corresponds to a threshold singularity, the original Feynman integral is indeed singular and ξ = 0 is an end-point singularity of the integral representation (2.11).
For each term of the sum in eq. (2.11) the integration performed first is on the variable x_j on which the derivative acts, and we get: After the change of variables x_b = x and x_c = 1 − x, the first and third terms in the integrand of the r.h.s. of eq. (2.13) recombine and we get eq. (2.14). The quantities ∆_2 and (G^(a))^{-1} · V^(a) in eq. (2.14) are expressed simply 5 in terms of det (S), det (G) and the coefficients b_j such that Σ_{j=1}^3 S_ij b_j = det (S) e_i with e_i = 1 for i = 1, 2, 3 (cf. eqs. (C.11) and (C.31) of appendix C): The denominators D^(a)(0, x) and D^(a)(x, 0) straightforwardly read: and for D^(a)(x, 1 − x) a simple calculation yields: The expressions in these equations coincide with those involved in the two-point functions obtained by pinching the triangle of fig. 1. In the three-point case at hand, the above pinched Gram "matrices" and "vectors" have actually one component only, thus the multiple superscript notation "{a}(c)", etc. may seem a clumsy sophistication. Yet it keeps track of the line and column that have been singled out, and it encodes how quantities with the same pinching but distinct singled-out lines and columns are related. For example: Likewise we have: with corresponding relations among the G^{{b}(j)}, V^{{b}(j)}, C^{{b}(j)} for j ∈ S_3 \ {b}, and among the G^{{c}(k)}, V^{{c}(k)}, C^{{c}(k)} for k ∈ S_3 \ {c}, respectively. The three-point integral (2.14) may then be written as a weighted sum over the b_i, cf. eq. (2.22) below. At this stage, one may argue that we have progressed by next to nothing, as we performed one integration trivially yet at the price of introducing an extra integral, so that two integrals still remain to be performed. However, representation (2.22) amounts to a handier form, as we will see next.
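The defining relation Σ_{j=1}^3 S_ij b_j = det (S) e_i has the closed-form solution b = adj(S) · e, i.e. b_i is the sum of the entries of row i of the adjugate of S. A small exact-arithmetic sketch (the symmetric matrix below is a generic illustrative stand-in, not actual kinematics):

```python
from fractions import Fraction

# Illustrative symmetric 3x3 "kinematic" matrix (entries are placeholders).
S = [[Fraction(x) for x in row] for row in
     [[-2, 1, 3], [1, -1, 2], [3, 2, -4]]]

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([r[:j] + r[j + 1:] for r in M[1:]]) for j in range(len(M)))

def adjugate(M):
    n = len(M)
    cof = [[(-1) ** (i + j) *
            det([r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # transpose of cofactors

dS = det(S)
b = [sum(row) for row in adjugate(S)]   # b = adj(S) . e with e = (1, 1, 1)
lhs = [sum(S[i][j] * b[j] for j in range(3)) for i in range(3)]
print(lhs, dS)                           # each entry of lhs equals det(S)
```

Since S · adj(S) = det(S) · 1, the check holds identically; the point of the sketch is only to make the algebraic meaning of the b_i concrete.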

Step 2
We can proceed further in two different ways. The first way, henceforth called "direct", has the virtue of leading in a very few simple steps to the result which we obtained using the method à la 't Hooft and Veltman, cf. ref. [19]. Unfortunately, in the case of the four-point function we did not succeed in proceeding as simply. We have therefore formulated an alternative to the direct way, henceforth called "indirect". It is somewhat academic to follow the latter for the three-point function, all the more so as it is needlessly more cumbersome than the direct way in this case. Yet we do the exercise as a warm-up before tackling the more involved case of the four-point function. As will be seen, the result that we obtain at first for the three-point function in the "indirect way" is not quite the same formula as the one obtained via the "direct way". In particular, the "indirect way" would lead to a doubling of dilogarithmic terms compared to the "direct way" if the last integral were explicitly performed in closed form. A comparison between the two may help understand how to recombine terms from the "indirect way" so as to counteract this doubling. The experience gained on this example may guide us in reversing a similar unwanted proliferation of dilogarithms in the case of the four-point function, where only the indirect way is available. Let us now present the two ways in this subsection.

Direct way
We integrate the l.h.s. of eq. (2.22) directly over the variable ξ first, keeping x fixed. This requires us to perform the following type of integral: which, using eq. (D.4), gives 6 : Lastly we substitute eq. (2.24) into eq. (2.22). To make the connection with the notations used in [19], we identify ∆_2 = 1/B and b_i/det (G) = b̄_i/B (cf. also appendix C) and write: The above result thus coincides with the result obtained in ref. [19], after symmetrisation over the parameter α, using the method à la 't Hooft and Veltman. The present derivation is somewhat faster, though. The derivation of eq. (2.25) holds true also in the complex mass case. In the real mass case, the difference of logarithms in eq. (2.25) can be replaced by the logarithm of the ratio.

Indirect way
We perform the integration over x in eq. (2.22) by writing the integrand as a derivative using identity (1.2) anew, this time for "n̄ = 1". In this case identity (2.9) is not the relevant one to provide an integral representation of 1/(D^{{i}(i′)} + ξ) with the appropriate power in the denominator so as to apply identity (1.2). Indeed, in order to keep only the boundary term in identity (1.2) we shall choose α = 1/2, whereas the power of the denominator in eq. (2.22) is 1, not 3/2. We thus have to customise an alternative representation of the type (2.8) providing shifts of the powers of denominators by 1/2 instead of 1. A generalised representation including this type is derived in appendix B. It happens to be also the one suited to the four-point function case, as will be seen below. It takes the following form (see eq. (B.1)): where B(y, z) = Γ(y) Γ(z)/Γ(y + z) and Γ(y) are the Euler Beta and Gamma functions [31]. The parameters µ and ν are chosen such that µ − 1/ν equals the power of (D^{{i}(i′)} + ξ) in the integrand of eq. (2.22), i.e. 1. The representation of 1/(D^{{i}(i′)} + ξ) thus obtained is substituted into eq. (2.22). This provides a representation of I_3^4 with factors 1/(D^{{i}(i′)} + ξ + ρ^ν)^µ. Identity (1.2) is then applied to this new integrand considered as a function of x seen as a single integration variable (i.e. n̄ = 1), with ρ (and ξ) seen as fixed. In order that the first term of identity (1.2) vanish, µ − 1 = α shall be chosen equal to 1/2, thus µ = 3/2 and ν = 2. We substitute this integral representation into eq. (2.22) and perform the integration over x explicitly. We get: In eq. (2.27), one recognises familiar algebraic quantities associated with the three possible pinchings of the triangle diagram (cf. eqs. (C.11) and (C.31)): Let us introduce 8 From eq. (2.19) we also have: We can write eq. (2.27) as the following weighted sum over the coefficients b_i and b_j^{{i}}: We first perform the ρ integration in eq. (2.33) using appendix D.
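In a normalisation consistent with the substitution ρ^ν = D t (our reconstruction, since eq. (B.1) is not reproduced here), the representation reads 1/D^{µ−1/ν} = [ν / B(1/ν, µ − 1/ν)] ∫₀^∞ dρ (D + ρ^ν)^{−µ}. For the values µ = 3/2, ν = 2 used here it collapses to ∫₀^∞ dρ (D + ρ²)^{−3/2} = 1/D, which is easy to check numerically (D is an illustrative positive constant):

```python
import math

# Check ∫_0^∞ dρ (D + ρ^ν)^(-µ) = (1/ν) B(1/ν, µ - 1/ν) D^(-(µ - 1/ν))
# for µ = 3/2, ν = 2, where the r.h.s. reduces to 1/D since B(1/2, 1) = 2.
# D is an illustrative positive value; the normalisation is our reconstruction.
D, mu, nu = 2.3, 1.5, 2.0

def beta(y, z):
    return math.gamma(y) * math.gamma(z) / math.gamma(y + z)

# composite midpoint rule on [0, L]; the neglected tail is of order 1/(2 L^2)
L, n = 400.0, 800000
h = L / n
integral = h * sum((D + ((k + 0.5) * h) ** nu) ** (-mu) for k in range(n))
closed = (1.0 / nu) * beta(1.0 / nu, mu - 1.0 / nu) * D ** (-(mu - 1.0 / nu))
print(integral, closed, 1.0 / D)   # all three agree up to the truncated tail
```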
In the real mass case at hand, Im(−∆_2) = 0. We then perform the ξ integration, using eq. (D.4), to get: In the real mass case, the difference of logarithms in eq. (2.35) can be rewritten as the logarithm of a ratio.
The quantities ∆_2, ∆_1^{{i}} and D_ij are expressed in terms of the various determinants and b coefficients: To derive eqs. (2.36) and (2.37), the following identities are used: They are particular cases of the so-called Jacobi identity for determinants [32], of which various cases of interest for the issues discussed here were spelled out in appendix A.2 of [19].
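In its two-row/two-column (Desnanot-Jacobi) form, the Jacobi identity states det (M) det (M_{ij,ij}) = det (M_{i,i}) det (M_{j,j}) − det (M_{i,j}) det (M_{j,i}), where M_{r,c} denotes M with the listed rows r and columns c deleted. A quick exact-arithmetic check on a generic integer 4 × 4 matrix (illustrative entries, unrelated to any kinematic matrix):

```python
import random

def det(M):
    # Laplace expansion along the first row; fine for tiny matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minor(M, rows, cols):
    # delete the listed rows and columns
    return [[M[i][j] for j in range(len(M)) if j not in cols]
            for i in range(len(M)) if i not in rows]

random.seed(1)
M = [[random.randint(-5, 5) for _ in range(4)] for _ in range(4)]

i, j = 0, 2   # any pair of rows/columns with i < j works
lhs = det(M) * det(minor(M, {i, j}, {i, j}))
rhs = det(minor(M, {i}, {i})) * det(minor(M, {j}, {j})) \
      - det(minor(M, {i}, {j})) * det(minor(M, {j}, {i}))
print(lhs, rhs)   # Desnanot-Jacobi: the two sides coincide
```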
Using eqs. (2.36)-(2.39) we obtain: Eq. (2.42) can be recast in the form (2.25) obtained via the direct way. To achieve this goal, we first make the change of variable s = z b_j^{{i}}, so that I_3^4 reads: In the real mass case, the b_j^{{i}} are real: the integration contours of the two integrals corresponding to the two values of j ∈ S_3 \ {i} in eq. (2.43) both run along the real axis. The poles in the integrand of eq. (2.43) are fake, as their residues vanish by construction, and the straight contours of integration in eq. (2.43) do not cross the cuts 9 of the logarithms. The two real-contour integrals corresponding to the two values of j ∈ S_3 \ {i} can thus be joined end-to-end into a single one in a straightforward way. To refer to the two elements of S_3 \ {i} in a definite way, let us introduce k ≡ 1 + ((i + 1) modulo 3) and l ≡ 1 + (i modulo 3). Accordingly, I_3^4 can be recast into: We then make the following change of variable: The numerator of the argument of the logarithm in eq. (2.44) becomes: Thanks to the identity (2.40), we recognise: We can further identify: so that the complete identification reads: Furthermore, thanks to the identity (2.41), we recast the denominator [s^2 det (G) + b_i^2] in the integrand of eq. (2.44) as: Recalling that B = det (G)/det (S) and b̄_i = b_i/det (S), we finally get: which is nothing but eq. (2.25).

Taming the proliferation of dilogarithms
A comment is in order here regarding the number of dilogarithms generated by the computation of eqs. (2.25) and (2.42) in closed form: the denominators appearing in their integrands are second degree in x, whereas the denominators in the original calculation of 't Hooft and Veltman were first degree in x only. This seemingly amounts to a twofold proliferation of dilogarithms (24 instead of 12) using eq. (2.25), and even a fourfold proliferation (48 instead of 12) using eq. (2.42), w.r.t. the original calculation of 't Hooft and Veltman. The situation is actually not so bad.
We previously explained how pairs of terms in eq. (2.42) can be recombined so as to recover eq. (2.25). Thus we are left only with the relative twofold proliferation seemingly occurring in eq. (2.25) w.r.t. [12]. Let us show that this apparent doubling is actually fake. Let us rewrite eq. (2.25) as 10 : in which the roots of the denominator in eq. (2.44) appear, and where the index k is the element of S_3 \ {i, l}. Eq. (2.48) involves 24 dilogarithms, i.e. twice as many as the result of ref. [12]. To reduce their number, let us first compute the following quantity: Combining the two terms in the square bracket into a single denominator and using eq. (2.47) yields: We now make the change of variable t = D^{{i}(l)}(x). The integrals over [0, 1] in eq. (2.50) are traded for contour integrals in the complex plane 11 . K takes the form: Note that D^{{i}(l)}(0) and D^{{i}(l)}(1) are internal masses, and from the correspondence (2.31) in subsubsec. 2.2.2 we see that the three integrals in eq. (2.51) combine into an integral along a closed contour (a triangle). Note also that the pole in F(t) is fake, its residue being zero by construction, thus we do not have to worry about the position of the pole with respect to the triangle. In addition, in the case of complex masses, this triangle lies entirely below the real axis in the complex t plane: in no case does it cross the real axis, consequently there is no worry with the discontinuity cuts of the various logarithms. Thus K = 0. To summarise, defining the combinations I^±_i, "K = 0" reads: and these two mutually opposite sums of three terms happen to be subtracted one from another to yield I_3^4 in eq. (2.25), so that the three-point function is given by: which involves 4 dilogarithms in each of the three terms I^+_1, I^+_2 and I^+_3, hence 12 distinct dilogarithms only, not 24 as hastily thought. In an intermediate step, the authors of [12] made use of a change of variable involving either of the two roots α_± of some second degree polynomial.
Both changes of variable were usable and lead, respectively, to one combination of three I^±_i and to its complement with respect to eq. (2.53) 12 . The two combinations, albeit seemingly distinct analytically, were guaranteed to take equal values in the approach of [12], since the three-point function does not depend on the choice of the change of variable made. Instead, our approach does not rely on any such choice triggering a breakdown of symmetry between two options. It thus yields a result which preserves this symmetry, being half the sum of the two expressions corresponding to α_+ and to α_− in [12], yet at the expense of doubling the number of terms. The use of identity (2.53) to counteract the doubling is somehow a counterpart of the choice of α_+ vs. α_− in [12].
Anticipating the absence of a direct way in the four-point function case, one may wonder whether the fourfold proliferation in eq. (2.42) compared to ref. [12] can be counteracted directly, instead of recovering eq. (2.25) first and then proceeding as explained just above. In this respect it shall be stressed that the actual number of dilogarithms generated by eq. (2.42) is half the naive count, because the integrand is even in x. Indeed, each of the 6 terms in eq. (2.42), of the form: where both P_j(x) are even second degree trinomials in x, can be rewritten: Let us denote ±x_2 the two mutually opposite roots of P_2(x). Partial fraction decomposition of 1/P_2(x) leads to an integral of the form: which yields only 4 dilogarithms instead of 8: eq. (2.42) thus yields 6 × 4 = 24 dilogarithms "only". This fortunate feature of parity in the integration variable and its consequence will also be encountered in the case of the four-point function. One may wonder whether there is a more direct way to reduce the number of dilogarithms, bypassing the step that reshuffles eq. (2.42) into eq. (2.25), which would instead exploit the parity of the integrand in eq. (2.42) combined with some other trick. For the three-point function, an independent derivation of identity (2.53), alternative to the one above, might be worked out directly from eq. (2.42) computed in closed form, using a version of the five-dilogarithm Hill identity [33] supplemented by a couple of cancellations thereby made manifest [34].
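Recombinations such as eq. (2.53) ultimately rest on functional identities of the dilogarithm, of which the five-term Hill identity is an instance. As a lightweight numerical illustration of this kind of bookkeeping, we check two simpler, standard facts (Landen's identity and the special value Li_2(1/2)) rather than the Hill identity itself:

```python
import math

def li2(x, terms=4000):
    # dilogarithm via its defining series, valid for |x| <= 1
    return sum(x ** k / k ** 2 for k in range(1, terms))

# Special value: Li2(1/2) = pi^2/12 - (ln 2)^2 / 2
lhs1 = li2(0.5)
rhs1 = math.pi ** 2 / 12.0 - math.log(2.0) ** 2 / 2.0

# Landen's identity: Li2(x) + Li2(x/(x-1)) = -(1/2) ln^2(1-x), for x < 1
x = 0.4                 # chosen so that |x/(x-1)| = 2/3 < 1 and the series converges
lhs2 = li2(x) + li2(x / (x - 1.0))
rhs2 = -0.5 * math.log(1.0 - x) ** 2
print(lhs1 - rhs1, lhs2 - rhs2)   # both differences are tiny
```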
3 Leg up: the scalar four-point function I_4^4

The usual integral representation of I_4^4 in terms of Feynman parameters is given by: where Z is now a column 4-vector whose components are the z_i. In a way similar to sec. 2, we arbitrarily single out the subscript value a (a ∈ S_4 = {1, 2, 3, 4}) and write z_a as 1 − Σ_{j≠a} z_j. We find:

Fig. 2: The box picturing the one-loop four-point function.
where the 3 × 3 Gram matrix G^(a) and the column 3-vector V^(a) are defined by We label b, c and d the three elements of S_4 \ {a}, with b < c < d. We write the polynomial (3.2) in terms of G^(a), V^(a) and the scalar C^(a) as: so that I_4^4 can be written: Again, the dependence on G^(a), V^(a) and C^(a) will arise through quantities independent of the actual choice of a. We will follow a strategy similar to the one used for the three-point function in section 2, mutatis mutandis.

Step 1
The idea is again to adjust the power of the denominator in the l.h.s. of eq. (1.2) in such a way that only the boundary term in eq. (1.2) remains. In the four-point function case at hand, cf. eq. (3.5), n̄ = N − 1 is equal to 3. Imposing n̄ − 2α = 0 implies α = 3/2, hence a power α + 1 = 5/2 in the l.h.s. of eq. (1.2). In eq. (3.5), however, D is raised to the power 2, not 5/2. Identity (2.26) is used with µ = 5/2 and ν = 2, chosen such that µ − 1/ν = 2 and 2(µ − 1) = 3, to customise a denominator raised to a power shifted from 2 to 5/2. The representation of 1/D^2 thus obtained is substituted into eq. (3.5). Identity (1.2) is then applied to the new integrand seen as a function of the three variables z_b, z_c, z_d. Integration will be performed on the latter first, keeping ξ fixed. This yields: Identity (1.2) recasts eq. (3.6) into: For each of the three terms j = b, c, d in eq. (3.7) this makes the first integration over the corresponding z_j trivial, and we get: By an appropriate relabelling of the variables, all three terms T_j in eq. (3.8) can be gathered into one integral: The number of terms in eq. (3.10) can be reduced. In the r.h.s. of eq. (3.10), let us consider the first term of the second line of the curly bracket: and perform the following change of variables: The new variables y_1 and y_2 still span the simplex Σ_12. Eq. (3.11) becomes: In a similar way, using t_1 = x_2 and t_2 = 1 − x_1 − x_2, we recast the first term of the third line of the curly bracket in the r.h.s. of eq. (3.10) into: We then recombine T_c and T_d with the first term of the first line of the curly bracket in the r.h.s. of eq. (3.10), and I_4^4 reads: The quantities ∆_3 and ((G^(a))^{-1} · V^(a))_j involved in eq. (3.14) are expressed in terms of det (S), det (G) and the b_j = b̄_j det (S) (see eqs.
(C.11) and (C.31) of appendix C): The four new polynomials of x 1 , x 2 appearing in the denominators coincide with those involved in the integral representations of the triangle diagrams obtained by all possible pinchings of the box diagram of fig. 2. Namely, D (a) (0, x 1 , x 2 ) = D {b}(a) (x 1 , x 2 ) comes out when pinching propagator b in the box, which corresponds to suppressing line and column b in the kinematic matrix S in which line and column a are singled out: Likewise for c, d, a: whereas a simple calculation yields: I 4 4 thus reads: where, for definiteness, the i ′ assignment for each i is the one read off the four eqs. (3.16) above: i ′ = a for i = b, c, d and i ′ = b for i = a. As in the three-point function case, this i ′ assignment can actually be changed relying on identities such as D {d}(a) (x 1 , combined with corresponding appropriate changes of variables that leave the integration simplex unchanged.

Step 2
We proceed further as we did for the three-point function following the "indirect way", iterating the procedure to integrate over one more x i explicitly. Let us define the quantities J i by: Here again we do not apply identity (1.2) directly because the power of the denominator is not the appropriate one. We now deal with n = 2 variables of integration; α shall thus be equal to 1 in order to keep the boundary term only, whereas the exponent of the denominator in eq. (3.18) happens to be 3/2, not α + 1 = 2. To get the appropriate power, we again make use of eq. (2.26) with µ = 2, ν = 2, and J i is represented by: We use identity (1.2) in which "D" is interpreted as (D {i}(i ′ ) + ξ 2 + ρ 2 ). We note k, l ∈ S 4 \ {i, i ′ } with k < l. We get: In each term "j" the integration performed over x j first is trivialised as a boundary term, and we get for J i : Making the change of variables x k = x and x l = 1 − x in the first and third terms inside the square bracket of eq. (3.21) and gathering the terms into the same integral, we get: We recognise: We recall that the coefficients b satisfy: For any pair {i, j}, k can thus be traded for l in table (3.24), modulo the change of variable x ′ = 1 − x which leaves unchanged the corresponding term in eq. (3.22). I 4 4 is rewritten as a sum of twelve terms:

Step 3
We iterate the procedure once more, adjusting the power of the denominator of eq. (3.26) using identity (2.26) with µ = 3/2, ν = 2, so as to transform the integrand into an x derivative using identity (1.2) for n = 1, α = 1/2. This trivialises the integration over x, albeit at the price of yet another integration. Let us introduce With the help of eq. (2.26), K ij reads: Using identity (1.2), we get: The trivial integration over x reads: We recognise 13 : Furthermore, D {i,j}(k) (0) and D {i,j}(k) (1) are proportional to internal masses squared: By substitution of eq. (3.33) into eq. (3.26), I 4 4 reads: In summary, steps 1 to 3 traded three nested integrals over the initial Feynman parameters spanning the three-dimensional simplex for three nested integrals over the positive real half-line. The procedure led to the natural appearance of familiar ingredients of the algebraic reduction: this has been the main benefit so far. We also note that the r.h.s. of eq. (3.34) is manifestly independent of the successive choices made to single out lines and columns in the kinematic matrix S and its subsequent pinchings. We now have to compute the triple integral on the first octant.

Step 4
Let us consider L 4 4 (∆ 3 , ∆ , D ijk ). These quantities being all real, all imaginary parts come from the − iλ prescription, and the situation is the simplest possible. In contrast, the extension to the general complex mass case leads to a branching of cases which, albeit systematic, is more profuse than what happened for the three-point function. We therefore postpone the extension to the general complex mass case to a separate article. We split step 4 itself into a succession of substeps that will make the further complex mass extension easier. , D ijk ) given by: − i λ and B = ξ 2 + ρ 2 + D ijk − i λ to transform the r.h.s. of (3.36) into an integral of only one factor: b. Next we nest M 1 (ξ 2 + ρ 2 ) in the ρ integration, which reads: The ρ integration is performed first, using partial fraction decomposition and eq. (2.26). We get: c. We then substitute M 2 (ξ 2 ) given by eq. (3.39) into the remaining ξ integration, providing: Normalisation factors greatly simplify in L 4 4 (∆ 3 , ∆ Keeping z fixed, the nested ξ integral in the last two lines of eq. (3.41) can be split into two integrals, each of which is of the same type as the one in eq. (3.36). Each integral can thus be recast in a form similar to eq. (3.37). For convenience we introduce: The recasting reads: , D ijk ) takes the form: f. We trade y for x = y 2 : We perform a partial fraction decomposition w.r.t. x in the integrand of eq. (3.45): h. Lastly, the x integration is readily performed and we finally get: Using the following identities (see appendix A.2 on Jacobi identities for determinants in ref. [19]), the coefficients P ijk , R ij , Q i and T defined in eqs.
which is useful for practical numerical implementations.
Let us note that the apparent poles in the integrand of the r.h.s. of eq. (3.47) are fake, as their residues vanish 14 . Furthermore, in the real mass case at hand, (u 2 P ijk + R ij + Q i + T ), (u 2 (P ijk + R ij + Q i ) + T ) and (u 2 Q i + T ) are all real when u spans [0, 1]: each logarithm of ratios in eq. (3.47) can be safely split into a difference of two logarithms, the arguments of which keep an (infinitesimal) imaginary part of constant (negative) sign as u spans [0, 1]. The remaining integration can be performed in closed form in terms of dilogarithms. This computation is provided in appendix E.
Let us first recall a few features of the latter. The method of ref. [12] linearises the dependence of the integrand on some integration variables to facilitate two integrations. For this purpose two parameters α and β are introduced and successively adjusted as roots of some second degree equations, which generates a fourfold arbitrariness in the procedure. All choices of α and β are equivalent: they lead to results which look formally the same and provide the same numerical result. Yet the analytic forms, when α and β are made explicit in terms of the primary parameters, depend on the choice made. In contrast, our iterative decomposition, closely related to algebraic reduction and pinching operations, preserves a kind of symmetry - a "democracy" among all possible single pinchings and double pinchings. This symmetry is explicitly broken in ref. [12] whenever any choice for α and β is made. Therefore, the comparison of our result shall be carried out with the double symmetrisation of ref. [12] w.r.t. both α and β, not with ref. [12] per se. This double symmetrisation increases the number of terms w.r.t. ref. [12]. The price to pay for our method compared to ref. [12] is thus that some extra work shall be carried out to counteract the corresponding proliferation. This already occurred for the three-point function, and we saw that the issue was easily overcome in that case. The situation for the four-point function is more involved and requires some elaboration; we thus carry out the comparison in some more detail.
The calculation of ref. [12] splits into three contributions depending on the choice of α - let us call them "α-sectors". In each α-sector, the subsequent choice of β (which depends both on the sector considered and explicitly on α) leads again to three contributions - or "β-subsectors". This provides I 4 4 as a sum of one-dimensional integrals. In each of the β-subsectors the integrand is given by a logarithm of some rational fraction, divided by some second degree polynomial. The rational fractions in the logarithms are of the form (L − iλ)/(Q − iλ) in which L and Q are first and second degree polynomials respectively. After partial fraction decomposition of these denominators, the values of the logarithms at each pole are subtracted so that the poles are made manifestly fake in each term separately. Whereas the Q are independent of the choices of α and β, the L are independent of the choice of β but do depend explicitly on α. The locations of the fake poles in each α-sector depend on β explicitly but not on α, yet there are relations between different poles from different α-sectors. Each one of the three α-sectors in ref. [12] leads to three L and three Q; however, this does not lead to nine distinct L and Q overall. Among the nine Q, three of them appear twice (among the last two lines in eq. (6.26) of ref. [12]) whereas the three others (in the second line in eq. (6.26)) appear only once. Each of the Q happens to correspond to a double-pinching pattern of the box diagram, and indeed there are only six such distinct double pinchings. There is no connection between the splitting into sectors and double pinchings, though, and the correspondence between the polynomials Q and the double pinchings is identified a posteriori. The L happen to coincide whenever the Q correspondingly involved in the rational fractions respectively coincide, yet the L look "atypical" and have no simple interpretation e.g.
in terms of pinching or otherwise, at least to us.
The terms ln(u 2 P ijk + (R ij + Q i + T − iλ)) in eq. (3.47) in our approach coincide with the ln(Q) in [12]. This can be seen by first absorbing b − det(G {i,j} ) y. However, the pole terms weighting these identical logarithms do not match in any clear way between eq. (3.47) in our approach and [12]. Furthermore, the two other logarithms ln(u 2 Q i + T − iλ) and ln(u 2 (P ijk + R ij + Q i ) + T − iλ) in our approach are atypical, and their contributions do not match those of the atypical ln(L) of ref. [12] in any clear way either.
As for taming the proliferation of dilogarithms in eq. (3.47) in our approach, the use of the five-dilogarithm Hill identity, combined with the parity in u of the involved integrals, allows the number of dilogarithms to be reduced by a factor of two, from 288 to 144, still somewhat more than in ref. [12]: some extra work would still be required to reduce this number further. Note also that an advantage of the method used here is that the roots of the denominators and of the arguments of the logarithms are expressed in terms of the determinants of the Gram matrix and the S matrix, as well as those of the corresponding pinched matrices. From this, some relations between these different roots can be deduced. These relations could be used to reduce the number of dilogarithms in a systematic way using modern tools such as symbols. This work is postponed to a future publication [34].
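The five-dilogarithm Hill identity invoked above can be checked numerically; the sketch below implements Li 2 by its defining series (adequate for the small real arguments used here, an implementation choice of ours) and verifies the identity at a sample point:

```python
import math

def li2(z, terms=200):
    # Dilogarithm via its defining series Li2(z) = sum_{k>=1} z^k / k^2,
    # accurate for |z| <= 1/2 as used below
    return sum(z ** k / k ** 2 for k in range(1, terms + 1))

def hill_rhs(x, y):
    # r.h.s. of the five-dilogarithm Hill identity for Li2(x*y),
    # valid here for 0 < x, y < 1
    u = x * (1 - y) / (1 - x * y)
    v = y * (1 - x) / (1 - x * y)
    return (li2(x) + li2(y) - li2(u) - li2(v)
            - math.log((1 - x) / (1 - x * y)) * math.log((1 - y) / (1 - x * y)))

# Five dilogarithms on the r.h.s. collapse into the single one on the l.h.s.
x, y = 0.5, 0.5
assert abs(li2(x * y) - hill_rhs(x, y)) < 1e-12
```

It is precisely this collapse of five dilogarithms into one, applied repeatedly, that drives the reduction from 288 to 144 terms quoted above.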

Summary and outlook
In this article we have presented a novel approach to the computation of one-loop three- and four-point functions. The method naturally proceeds in terms of the algebraic kinematical invariants involved in reduction algorithms and applies to kinematics more general than the one relevant for one-loop collider processes; it thereby offers a potential application to the calculation of processes beyond one loop using one-loop (generalised) N-point functions as building blocks. In addition, limiting behaviours in a number of specific regimes with det (S) → 0 or det (G) → 0, or both simultaneously, are presented. These are restricted to cases that do not lead to Landau singularities and that are relevant for NLO kinematics. The treatment of the leftover cases is postponed to subsequent work if relevant. For the sake of pedagogy, the method was presented here for "ordinary" three- and four-point functions in four dimensions in the real mass case. It can be extended in several respects, each of which will be tackled in the two companion papers briefly advertised in what follows.
In the first companion paper we will extend the presented framework to the general complex mass case. Whereas the extension will branch into a profusion of cases, the general philosophy of the approach extends straightforwardly, and the profusion of cases can be tamed and reunified by means of one-dimensional contour integrals in the complex plane encoding analyticity properties. This companion paper will also provide a formula to express Im(det (S)) in terms of widths.
In the second companion paper we will extend the presented framework to the case where some vanishing internal masses cause infrared soft and/or collinear divergences. The method extends in a straightforward way, once a few intermediate steps and tools are appropriately adapted.
The proliferation of dilogarithms in the closed-form expression of the four-point function computed with the present method requires some extra work to be better understood and counteracted. This issue will be addressed in a future article.
This method can of course also be applied to compute N-point one-loop functions with any N ≥ 5, but this is pointless because in these cases the first step boils down to recovering well-known formulae that reduce the N-point one-loop function to one-loop functions with fewer points [25-27, 35].
The ultimate goal is to provide the generalised one-loop building blocks entering as integrands in the computation of two-loop three- and four-point functions by means of an extra numerical double integration.

In memoriam
Various ideas and techniques used in this work were initiated by Prof. Shimizu after a visit to LAPTh. He explained to us his ideas about the numerical computation of scalar two-loop three- and four-point functions, shared with us his notes, partly in English and partly in Japanese, and encouraged us to push this project forward. J.Ph. G. would like to thank Shimizu-sensei for giving him a taste of Japanese culture and for his kindness.

A A Stokes-type identity
In this appendix, we derive the master identity (1.2) that enables us to perform one integration trivially. Let us consider the column n-vector and the polynomial D of n variables defined by: where G is a symmetric n × n matrix and V a column n-vector. The subscript T stands for the transpose. Throughout this appendix we assume the matrix G to be invertible. Let ∇ stand for: so that ∇ f is the gradient of the scalar field f whereas ∇ T · F is the divergence of the n-vector field F . The gradient of D is given by: The square bracket in the r.h.s. of eq. (A.2) may be written:

B How to tune powers of denominators
The following identity is used, with real µ, ν such that µ > 1/ν > 0, to tune powers in denominators: where B(x, y) is the Euler Beta function defined by The integral in the l.h.s. of eq. (B.1) is convergent for µ > 1/ν > 0. Identity (B.1) is easily established for D real by writing then making the change of variable ξ = v 1/ν and performing the integration over v first. For D complex, identity (B.1) still holds provided ν µ > 1. Indeed, we may equivalently When R → + ∞, the first integral of eq. (B.3) leads to the sought-after one, the second integral is O(R 1−ν µ ) → 0 provided ν > 1/µ > 0, and the path of the third integral may be parametrised by ξ = ρ D 1/ν , 0 ≤ ρ ≤ R |D| −1/ν , so as to get: The integral in the r.h.s. of eq. (B.4) is precisely the one just computed above for D = 1, hence: In this appendix, we show how to express the b coefficients in terms of G (a) and V (a) . We assume that the dimension of the S matrix is N and we single out its last line and column to build the associated Gram matrix G (N ) . We introduce the b i [22] which are solutions of the equation: The S matrix can be related to the block matrix S (N ) whose blocks are proportional to the Gram matrix G (N ) , to the vector V (N ) and to the scalar S N N defined in secs. 2 and 3: The subtraction of column and line N leading from S to S (N ) corresponds to the following matrix product: The quantity is in turn expanded w.r.t. its last column V (N ) : where Cof G (N ) is the matrix of cofactors of G (N ) . Substituting (C.8) into (C.7) we get: or, in other words: This is to be compared with eq. (A.4), recalling that the constant term C of the polynomial D is precisely S N N (cf. eq. (2.5) or eq. (3.3)), so we conclude that: Eq. (C.1) can be reformulated in terms of S (N ) . Inverting eq. (C.5), we get: where β (N ) is the (N − 1)-vector: We will discuss the solutions of eq.
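Identity (B.1) lends itself to a direct numerical check; the sketch below compares a brute-force quadrature of the l.h.s. against the Beta-function r.h.s. for the two tunings (µ = 5/2, ν = 2) and (µ = 2, ν = 2) used in the main text (the cut-off and step size are arbitrary choices of ours):

```python
import math

def beta(x, y):
    # Euler Beta function B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

def lhs(mu, nu, d, cutoff=200.0, n=400000):
    # Trapezoidal estimate of the l.h.s. of (B.1),
    # int_0^infinity dxi (xi^nu + D)^(-mu), truncated at `cutoff`
    h = cutoff / n
    s = 0.5 * ((0.0 ** nu + d) ** -mu + (cutoff ** nu + d) ** -mu)
    s += sum(((i * h) ** nu + d) ** -mu for i in range(1, n))
    return s * h

def rhs(mu, nu, d):
    # r.h.s. of identity (B.1): B(mu - 1/nu, 1/nu) / nu * D^(1/nu - mu)
    return beta(mu - 1.0 / nu, 1.0 / nu) / nu * d ** (1.0 / nu - mu)

# mu = 5/2, nu = 2: the tuning used in step 1 of the four-point function
assert abs(lhs(2.5, 2.0, 1.3) - rhs(2.5, 2.0, 1.3)) < 1e-6
# mu = 2, nu = 2: the tuning used in step 2
assert abs(lhs(2.0, 2.0, 0.7) - rhs(2.0, 2.0, 0.7)) < 1e-6
```

For µ = 5/2, ν = 2 and D = 1 the r.h.s. evaluates to B(2, 1/2)/2 = 2/3, a convenient spot check.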
(C.12) in the following cases: 1) the determinants of the matrices S and G (N ) do not vanish; 2) the determinant of the matrix S is different from 0 but the determinant of the Gram matrix G (N ) vanishes; and 3) both determinants of the S and G (N ) matrices vanish.
At this level, the solution of eq. (C.12) is fully determined. Nevertheless, it is interesting to continue and compute the remaining parameter Q. For that, let us substitute (C.23) and (C.25) into eq. (C.19) which becomes: i.e.
Although eqs. (C.11) and (C.31) provide the main results of this appendix, we now discuss the peculiar cases where some determinants vanish, limiting ourselves to cases that do not lead to Landau singularities and whose associated kinematics is relevant to colliders. 15 Instead of subtracting column and line N while solving eq. (C.12), we could have subtracted any other column and line a ≠ N . Eq. (C.31) would then be modified accordingly. The column vector b defined by (C.31) seems to depend on the choice of the line and column singled out, but actually it does not: when det S ≠ 0, b fulfils S · b = (det S) e, whose solution is manifestly a-independent.
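The a-independence stated above can be illustrated numerically: when det S ≠ 0, Cramer's rule gives the solution of S · b = (det S) e as b i = det(S with column i replaced by e), manifestly without singling out any line or column. A toy 3 × 3 check (matrix entries invented):

```python
# Toy symmetric 3x3 matrix standing in for the kinematic matrix S
S = [[2.0, 1.0, 0.5],
     [1.0, 3.0, 1.5],
     [0.5, 1.5, 4.0]]
e = [1.0, 1.0, 1.0]

def det3(m):
    # Laplace expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def replace_col(m, j, c):
    # Copy of m with column j replaced by the vector c
    return [[c[i] if k == j else m[i][k] for k in range(3)] for i in range(3)]

# Cramer's rule for S . b = (det S) e: b_i = det(S with column i -> e)
b = [det3(replace_col(S, j, e)) for j in range(3)]
detS = det3(S)

for i in range(3):
    residual = sum(S[i][j] * b[j] for j in range(3)) - detS * e[i]
    assert abs(residual) < 1e-9
```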
C.2 Case det (S) ≠ 0 and det (G) = 0 We now assume det (G) = 0, therefore we can no longer proceed as in sec. C.1 to solve eqs. (C.19) and (C.20). We parametrise where ν is determined by solving eq. (C.22) which, using (C.32), reads: We notice that V (N ) T · u 1 ≠ 0, since otherwise eq. (C.22) cannot be fulfilled 17 . At this level, we hold all the information to solve eq. (C.12). Nevertheless, let us go on and also determine the parameter Q.
For that, we substitute the expression of W into eq. (C.19), which becomes: We look for the pseudo inverse, more precisely the Moore-Penrose pseudo inverse (MPPI) 18 [36] of G (N ) , i.e. H such that: In order for (C.35) to admit solutions, the following compatibility condition has to hold: For N = 3, 4, dim Ker G (N ) > 1 leads to degenerate kinematics, cf. the discussion at the end of subsec. C.3. 17 Notice also that since Cof[G (N ) ] is a real symmetric matrix, Cof[G (N ) ] = γ 1 u 1 · u T 1 and then det (S) = (−1) N −1 γ 1 (V (N ) T · u 1 ) 2 , which is by assumption different from zero, so V (N ) T · u 1 ≠ 0.
18 Any other pseudo inverse could also be used, yet the MPPI provides extra properties - namely the MPPI H is symmetric and commutes with G (N ) , and G (N ) · H is symmetric and idempotent, i.e. a projector - convenient for further use.
and Q is of the form where n is a so far arbitrary (complex) (N − 1)-vector. Eq. (C.40) is a matrix counterpart of the usual unknown (N − 1)-vector spanning Ker G (N ) in the singular vectorial case (it is "the general solution of the homogeneous equation G (N ) · X = 0"). We shall further require Q to be symmetric (as ( S (N ) ) −1 has to be so!). We have: This vanishes if and only if where the scalar ξ is fixed by substituting into eq. (C.21). We get: Collecting the results of eqs. (C.40), (C.42) and (C.43), the parameter Q is given by: which completes the extraction of ( S (N ) ) −1 . Using eqs. (C.44), we can check that eq. (C.20) is indeed satisfied.
To sum up, putting together the results of eqs. (C.32)-(C.34) and (C.44), the solution of eq. (C.12) is given by: We seek to express the MPPI Σ of S (N ) in terms of V (N ) and the MPPI H of G (N ) . We parametrise Σ as we thus have: Let us warn the reader that we will not determine the MPPI Σ in general. Since we are interested in solving the equation S · b = e, we restrict ourselves here to the case when this equation admits solutions, i.e. when the compatibility condition is fulfilled. The compatibility condition for (C.12) in terms of the MPPI is [1 N − S (N ) · Σ] · e = 0. This condition explicitly reads: Furthermore, the Moore-Penrose extra conditions require S (N ) · Σ to be a projector: it shall be symmetric, implying together with the idempotency requirement: Together with the MP conditions (C.51)-(C.53), the compatibility conditions (C.49)-(C.50) make the extraction of Σ easier.
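The Moore-Penrose conditions, together with the extra properties quoted for the MPPI H (symmetry, commutation with G (N ) , and G (N ) · H being a projector), can be made concrete on a rank-one toy Gram matrix (numbers invented; for a real symmetric matrix the MPPI simply inverts the non-zero eigenvalues on their eigenspaces):

```python
def mat_mul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

# Rank-one symmetric toy matrix: eigenvalue 2 on (1,1)/sqrt(2), 0 on (1,-1)/sqrt(2)
G = [[1.0, 1.0], [1.0, 1.0]]
# Its Moore-Penrose pseudo inverse inverts the non-zero eigenvalue only
H = [[0.25, 0.25], [0.25, 0.25]]

GH = mat_mul(G, H)
# Moore-Penrose conditions: G H G = G and H G H = H
assert close(mat_mul(GH, G), G)
assert close(mat_mul(mat_mul(H, G), H), H)
# Extra properties quoted in the text: H symmetric, [G, H] = 0,
# and G.H idempotent, i.e. a projector
assert close(mat_mul(H, G), GH)
assert close(mat_mul(GH, GH), GH)
```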
A solution such that U = 0 would require W to belong to Ker G (N ) in order to fulfil (C.49) and not to be orthogonal to V (N ) in order to fulfil (C.50). Let us assume Ker G (N ) to be one-dimensional and spanned by a normalised vector u 1 ; then (V (N ) T · u 1 ) 2 ≠ 0, which would be inconsistent with det (S) = 0 and det (G) = 0 simultaneously. Consequently: • either U = 0 and Rank G (N ) has to be ≤ N − 3, thus Cof[G (N ) ] = 0 identically, which corresponds to degenerate kinematics; • or Rank G (N ) = N − 2, in which case U cannot be 0.
Let us ignore the first alternative and focus on the second one. Indeed, the former leads to degenerate kinematics for the case N = 3, 4 not relevant for colliders. Nevertheless, as explained in the introduction, since a two-loop scalar N-point function can be written as a sum of double integrals whose integrands are some "generalised" one-loop Feynman integrals, the kinematics of the latter depends on two parameters. So, one may wonder if such degenerate kinematics is met when these parameters run over the unit square. If relevant, this issue will be tackled later on.
In order for (C.49) to admit solutions, the following compatibility condition has to be fulfilled: H being the MPPI of G (N ) . Eq. (C.49) is then solved in the form: So far k remains arbitrary. Notwithstanding, (V (N ) T · k) = 0; substituting (C.55) into (C.50) thus yields: Let us define Ω as the factor of U in eq. (C.78), namely: Since S (N ) does admit one unique MPPI Σ, whose U has to obey (C.56), the combination Ω involved in (C.56) is necessarily non-zero, and (C.56) fixes U = Ω −1 .
Let us now determine Q fulfilling (C.51)-(C.53). Using (C.49)-(C.51), the l.h.s. of (C.53) may be expanded as Thus condition (C.53) may be replaced by: Inspired by the cases where S (N ) is invertible, let us seek Q of the form with the (N − 1)-column vector y to be determined. We get: where H is the pseudo inverse of G (N ) , (G (N ) · H) is idempotent (cf. eqs. (C.36) and (C.37)) and "(C.60)+(C.59)" simplifies: and (C.58) reads: At this point we recall that V (N ) fulfils the compatibility condition (C.54) and we make the following Ansatz: with ζ to be determined. Let us note that actually, since Q depends quadratically on y, only ζ 2 , not ζ per se, is involved in the expression of Q. The l.h.s. of eq. (C.62), with the help of eq. (C.54), becomes: We shall now enforce the symmetry property (C.52): where the last equality is obtained using the compatibility condition (C.54). Since the MPPI H of G (N ) is symmetric and commutes with G (N ) , the r.h.s. of (C.66) is symmetric if and only if k = 0 - which now completely fixes W .
Last, we shall check whether (C.51) is fulfilled.
We have to distinguish between the two cases.
where n is an arbitrary N-vector. In terms of the block components of the Σ matrix, these solutions become: where n is the (N − 1)-vector formed by the first N − 1 components of n. The fact that det (G) has to vanish can also be seen as a direct consequence of the compatibility condition required for eq. (C.1), equivalent to eq. (C.12): with Σ the MPPI of S. If S is real, as can be shown e.g. by diagonalisation of S (and Σ), this compatibility condition is equivalent to e ⊥ Ker S (where "⊥" is understood w.r.t. the Euclidean scalar product u · v = N i=1 u i v i ): this means that all u ∈ Ker S fulfil the condition e T · u = 0. This condition has been shown in appendix E of ref. [19] to imply the existence of an eigenvector 19 of G (N ) with vanishing eigenvalue, hence det (G) = 0. This can be extended to a complex S, as shown in what follows. Albeit not real, S is symmetric and thus admits a Takagi decomposition [37][38][39][40][41][42][43] of the form S = F · S D · F T , where S D = diag(s j , j = 1, · · · , N) is a real non-negative diagonal matrix and F is an N × N unitary matrix. Let us denote by f k the column vectors of F , so that the Takagi decomposition of S reads: where J is the set of index values corresponding to the non-zero elements of S D . Let us also define the set of index values K corresponding to vanishing elements of S D . The f j , j ∈ K, are mutually orthogonal, normalised eigenvectors of S associated with the eigenvalue zero, i.e. they form an orthonormal basis of Ker S. A Takagi decomposition of the MPPI Σ of S is provided by: is the (orthogonal) projector onto Ker S and the consistency condition [1 N − S · Σ] · e = 0 again means e ⊥ Ker S (where "⊥" now refers to the Hermitian scalar product). Whether S is real or complex, "e ⊥ Ker S" implies that "det S = 0" does not correspond to any kinematical singularity.
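The interplay between the Takagi decomposition and the MPPI can be illustrated on a rank-one 2 × 2 toy example (numbers invented), assuming the MPPI is assembled from the complex conjugates of the Takagi vectors, here Σ = (1/s 1 ) f 1 * f 1 *T for a rank-one S, a form consistent with S = F · S D · F T :

```python
def mmul(a, b):
    # 2x2 complex matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mclose(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

s = 2 ** -0.5
f1 = [s, s * 1j]   # first column of the unitary Takagi matrix F (toy choice)
s1 = 2.0           # the single non-zero Takagi value: S has rank one

# Takagi form S = s1 * f1 . f1^T : a complex *symmetric* (not Hermitian) matrix
S = [[s1 * f1[i] * f1[j] for j in range(2)] for i in range(2)]
assert mclose(S, [[1, 1j], [1j, -1]])

# MPPI built from the conjugated Takagi vector: Sigma = (1/s1) conj(f1).conj(f1)^T
g1 = [z.conjugate() for z in f1]
Sigma = [[g1[i] * g1[j] / s1 for j in range(2)] for i in range(2)]

# Moore-Penrose conditions: S Sigma S = S, Sigma S Sigma = Sigma,
# and S.Sigma equal to its own conjugate transpose (Hermitian projector)
assert mclose(mmul(mmul(S, Sigma), S), S)
assert mclose(mmul(mmul(Sigma, S), Sigma), Sigma)
P = mmul(S, Sigma)
assert mclose(P, [[P[j][i].conjugate() for j in range(2)] for i in range(2)])
```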
Indeed such a singularity is characterised by a point satisfying the Landau conditions [44,45], which may occur either on the boundary of the domain of integration (end-point singularity, as for an infrared or collinear singularity), or inside the domain of integration where a pinching of the integration hypercontour occurs (pinching singularity, as e.g. for a threshold singularity). To such a point corresponds an eigenvector x ∈ Ker S belonging to the first hyperquadrant defined by { N j=1 x j > 0, x i ≥ 0 for all i = 1, · · · , N}. In contrast, in the present case, "e ⊥ Ker S" prevents any element of Ker S from belonging to this hyperquadrant. Let us notice that the converse is not true: there are situations where e may not be "⊥ Ker S" yet no vector of Ker S belongs to the first hyperquadrant either. If det (S) = 0 but the condition [1 N − S · Σ] · e = 0 is not fulfilled, eq. (C.12) has no solution. This situation occurs in cases of practical relevance such as the one leading to Landau singularities. In deriving the representation of I 4 N using the Stokes-type identity, the coefficients b, defined when det (G) ≠ 0 by where g ′ j γ ′ j = det (G) for all j = 1, · · · , N − 1. The label "j = 1" denotes the (unique) eigendirection corresponding to the eigenvalue that makes det (G) vanish. The "prime notation" stands for the generic case det (G) ≠ 0, whereas unprimed quantities are the respective values when det (G) vanishes, computed in subsec. C.2. We introduce We can rewrite det (S) It is convenient to introduce The expression B = (−1) N −1 det (G)/ det (S) also takes the form: The r.h.s. of eq. (C.79) vanishes continuously with g ′ 1 while keeping (u ′ T 1 · V (N ) ) ≠ 0. When det (G) ≠ 0 the expression of W ′ = B (G (N ) ) −1 · V (N ) , rewritten has a smooth limit when g ′ 1 → 0 while (u ′ T 1 · V (N ) ) is kept ≠ 0, i.e. det (G) → 0 while det (S) ≠ 0: Let us now focus on Q. With G (N ) invertible we found (cf. eq.
(C.27)) As just seen, B V (N ) T · (G (N ) ) −1 → (u T 1 · V (N ) ) −1 u T 1 , but in each term giving Q ′ there is an extra (G (N ) ) −1 which becomes wild along the u 1 direction, thus Q ′ written in this form is an indeterminate ∞ − ∞. We rewrite Q ′ splitting the various contributions along u ′ 1 vs. the u ′ j≥2 : Using eqs. (C.74) and (C.75), eq. (C.82) becomes: which can be recast into Using (C.77) we rewrite the first bracket in eq. (C.84) as in which the indeterminate ∞ − ∞ has been cancelled, leaving a net contribution with a finite limit when g ′ 1 is sent to 0. Finally, using (C.77), Q ′ takes the following form: In the limit g ′ 1 → 0 with (u ′ T 1 · V (N ) ) kept ≠ 0, the last term of (C.86) → 0 and we recover the form found in eq. (C.44), q.e.d.!
This study has yielded interesting byproducts: eq. (C.76) relates, in some sense, the matrix Cof[G (N ) ] (which provides the inverse of G (N ) as long as det (G) ≠ 0) and the MPPI H of G (N ) when det (G) = 0, leading to eqs. (C.77) and (C.79). Eq. (C.86) will also help understand what makes the case "det (G) = 0, det (S) ≠ 0" vs. the combined limit "det (G) = 0, det (S) = 0 simultaneously" depart from each other, as it interpolates between these two limiting cases in some sense.
Let us compare Σ obtained in subsec. C.3 with ( S (N ) ) −1 extracted in subsec. C.1. If one formally imposes the compatibility condition (u ′ T 1 · V (N ) ) = 0 first while keeping g ′ 1 ≠ 0, then sends g ′ 1 → 0, the ingredients W and Q building Σ are simply obtained from those building ( S (N ) ) −1 by replacing (G (N ) ) −1 by H ′ ; U is obtained likewise by reconsidering eq. (C.79): the dependence on g ′ 1 that causes the indeterminate 0/0 in B cancels, and the r.h.s. of (C.79) is just Ω ′ −1 → Ω −1 . With the prior formal imposition of the compatibility condition (C.54), the limit g ′ 1 → 0 is the combined limit det (S) → 0, det (G) → 0 keeping their ratio Ω fixed, cf. (C.79). This remark helps us understand how the case addressed in this subsec. and the one covered in the previous subsec. of this appendix depart from each other. The latter instead was the limit g ′ 1 → 0 while (u ′ T 1 · V (N ) ) was kept ≠ 0, so that det (S) was kept non-zero. Keeping (u ′ T 1 · V (N ) ) ≠ 0 and sending g ′ 1 → 0 (as done in the previous subsec.) then sending (u ′ T 1 · V (N ) ) → 0, vs. imposing (u ′ T 1 · V (N ) ) = 0 first then sending g ′ 1 → 0 (as done in the present subsec.), faces a mismatch between the orderings of limits. This is best seen considering the various scalar factors that weight the tensor structures decomposing Q ′ in eq. (C.86). Let us consider for instance the last term in eq. (C.86): last term of (C.86) = g ′ Taking g ′ 1 → 0 while keeping (u T 1 · V (N ) ) ≠ 0 makes this term drop from the expression of Q ′ . In contrast, taking first (u ′ T 1 · V (N ) ) = 0 makes g ′ 1 cancel from the scalar factor, which then has the finite limit Ω −1 when g ′ 1 → 0, hence the contribution ∝ H · V (N ) · V (N ) T · H/Ω to Q in the "combined limit" case. Conversely, let us consider the third term in eq.
(C.86) given by: When taking g ′ 1 → 0 while keeping (u T 1 · V (N ) ) ≠ 0, this term yields the finite contribution In contrast, taking first (u ′ T 1 · V (N ) ) = 0 makes this term drop from the expression of Q ′ in the "combined limit" case. In what follows, A and B are assumed dimensionless and complex-valued; the signs of their real parts are unknown, and the signs of their imaginary parts may or may not be the same.

D.1 First kind
The computation of I 4 3 involves the computation in closed form of the following kind of integral: After partial fraction decomposition the r.h.s. of eq. (D.1) becomes: The r.h.s. of eq. (D.1) converges at infinity although each term separately grows logarithmically. Let us introduce a cut-off Λ and write eq. (D.2) as: The ξ integration yields: regardless of the signs of Im(A) vs. Im(B).
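The mechanism (individually log-divergent terms whose difference is finite, computed via a cut-off Λ) can be illustrated on a representative integrand of this kind; the form below is an assumption for illustration, not necessarily the exact integrand of eq. (D.1):

```python
import cmath

A = 2.0 - 0.3j
B = 0.5 + 0.4j   # invented complex parameters with unequal imaginary-part signs

def difference_up_to(cutoff, n=200000):
    # Trapezoidal estimate of int_0^cutoff dxi [ xi/(xi^2+A) - xi/(xi^2+B) ]:
    # each term alone grows like log(cutoff), their difference converges.
    h = cutoff / n
    f = lambda x: x / (x * x + A) - x / (x * x + B)
    s = 0.5 * (f(0.0) + f(cutoff))
    s += sum(f(i * h) for i in range(1, n))
    return s * h

# Closed form with cut-off Lambda:
# (1/2)[log(Lambda^2+A) - log(A) - log(Lambda^2+B) + log(B)] -> (1/2)[log(B) - log(A)],
# finite as Lambda -> infinity, regardless of the signs of Im(A) vs. Im(B)
closed = 0.5 * (cmath.log(B) - cmath.log(A))
assert abs(difference_up_to(500.0) - closed) < 1e-4
```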

D.2 Second kind
The integral arises in some intermediate steps of the computation of three- and four-point functions, for real masses. When no internal masses vanish, it arises for ν = 2. The integral need not be computed in closed form; it shall instead be recast in an alternative, handier form cleared of any radical.
In the case of real masses, Im(A) and Im(B) are always of the same sign. Whenever this is the case, the use of the celebrated Feynman "trick" is justified and leads to: which is readily rewritten as: Then the ξ integration is performed first, using eq. (B.1) of appendix B. Performing the change of variable z = √ x in the result obtained yields:
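The Feynman "trick" used here, 1/(AB) = ∫ 0 1 dz [z A + (1 − z) B] −2 , is valid when Im(A) and Im(B) have the same sign, so that the combined denominator never crosses zero on the integration path; a numerical check (parameter values invented):

```python
A = 1.5 - 0.2j
B = 0.3 - 0.7j   # both imaginary parts negative, as for real masses with -i*lambda

def feynman_combined(a, b, n=100000):
    # Trapezoidal estimate of int_0^1 dz [ z*a + (1-z)*b ]^(-2)
    h = 1.0 / n
    f = lambda z: (z * a + (1 - z) * b) ** -2
    s = 0.5 * (f(0.0) + f(1.0))
    s += sum(f(i * h) for i in range(1, n))
    return s * h

# Feynman "trick": 1/(A B) = int_0^1 dz / (z A + (1-z) B)^2,
# justified since Im of the denominator stays between Im(A) and Im(B), both < 0
assert abs(feynman_combined(A, B) - 1.0 / (A * B)) < 1e-6
```

When Im(A) and Im(B) have opposite signs, the combined denominator may vanish for some z, which is precisely why the trick must not be applied blindly in the complex mass case.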

E Basic integrals in terms of dilogarithms and logarithms: K-type integrals
The computation of the various N-point functions in closed form can be reduced to the calculation of integrals of a few simple types. The K-type is of the form where u 2 0 = −B/A. In the real mass case, A is real and the non-vanishing imaginary part of the complex quantity B is infinitesimal. The integration contour runs along the positive real axis; more precisely, this case involves the segment [0, 1].
In the real mass case, u 2 0 may originally appear in calculations either as a real parameter, or slightly shifted away from the real axis by an infinitesimal contour prescription Im(u 2 0 ) = i sλ with s = ±. In the calculation of three-point functions presented in this article, K-type integrals naturally arise with real parameters u 2 0 , always in conjunction with a "subtracted term" equal to ln(A u 2 0 + B), so that the poles at u = ±u 0 are actually fake. Alternatively, the calculation of four-point functions involves K-type integrals with parameters u 2 0 slightly shifted away from the real axis, which makes the integrals well-defined even with a vanishing "subtracted term". In the latter case however, framing the calculation with a vanishing "subtracted term" may result in an annoying mismatch between the infinitesimal imaginary part of B, shifting the integration contour from the cut of ln(A u 2 + B) in the numerator, and the contour prescription for the pole 1/(u 2 − u 2 0 ). This mismatch may lead to possible apparent ambiguities in the signs of infinitesimal imaginary parts of some arguments of Li 2 functions, such as Li 2 [(u 0 ± 1)/(u 0 −ū)] with ū = −B/A, when the signs of the infinitesimal Im(u 0 ) and Im(ū) happen to be the same while Re[(u 0 ± 1)/(u 0 −ū)] > 1. Sordid fiddling would then be required to resolve these ambiguities. This annoyance can be avoided by framing the calculation so as to involve K-type integrals with a "subtracted term" equal to ln(A Re(u 2 0 ) + B) instead of 0, which effectively allows one to ignore the infinitesimal imaginary part in u 2 0 , as will be shown below. We hereby compute the above type of integral. This appendix often makes use of the identity ln(z) = ln(−z) + i π S(z) , S(z) ≡ sign(Im(z)) (E.1) Let us parametrise u 2 0 by Re(u 2 0 ) = u 2 R 0 and Im(u 2 0 ) = sλ with s = ±, and note u 0 ≡ u 2 0 .
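Identity (E.1) holds for the principal branch of the logarithm whenever Im(z) ≠ 0, and is easily verified numerically:

```python
import cmath
import math

def S(z):
    # Sign of the imaginary part, as in eq. (E.1); assumes Im(z) != 0
    return 1.0 if z.imag > 0 else -1.0

# Identity (E.1): ln(z) = ln(-z) + i*pi*S(z) for the principal branch, Im(z) != 0
for z in (1 + 1j, 1 - 1j, -2 + 0.5j, -2 - 0.5j, 0.3 + 2j):
    lhs = cmath.log(z)
    rhs = cmath.log(-z) + 1j * math.pi * S(z)
    assert abs(lhs - rhs) < 1e-12
```

The sign S(z) keeps track of which side of the cut of the principal logarithm the point −z lands on; this is the bookkeeping that the η function of eq. (E.6) generalises to products.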
It is convenient to introduce $\widetilde{u}_0$, defined as $\widetilde{u}_0 = \sqrt{u_{R0}^2}$ if $u_{R0}^2 > 0$. The subtracted term can in its turn be split as
\[
\ln(A\,u_{R0}^2 + B) = \ln(A - i\,\lambda\,S_u) + \ln(\widetilde{u}_0 - \bar{u}) + \ln(\widetilde{u}_0 + \bar{u}) + \eta(\widetilde{u}_0 - \bar{u},\, \widetilde{u}_0 + \bar{u}) \qquad \mathrm{(E.5)}
\]
where the function $\eta(z_1, z_2)$ is given by:
\[
\eta(z_1, z_2) = \ln(z_1 z_2) - \ln(z_1) - \ln(z_2) =
\begin{cases}
-2\,i\,\pi & \text{if } \mathrm{Im}(z_1) \ge 0,\ \mathrm{Im}(z_2) \ge 0 \text{ and } \mathrm{Im}(z_1 z_2) < 0 \\
+2\,i\,\pi & \text{if } \mathrm{Im}(z_1) < 0,\ \mathrm{Im}(z_2) < 0 \text{ and } \mathrm{Im}(z_1 z_2) \ge 0 \\
-2\,i\,\pi & \text{if } \mathrm{Im}(z_1) = 0,\ \mathrm{Im}(z_2) = 0,\ \mathrm{Re}(z_1) < 0 \text{ and } \mathrm{Re}(z_2) < 0 \\
0 & \text{otherwise}
\end{cases} \qquad \mathrm{(E.6)}
\]
For convenience, we further define the integrals $\widetilde{R}$ and $\widetilde{R}'$ of eqs. (E.8) and (E.10). A comment is in order here. One shall be cautious that in eqs. (E.8) and (E.10) the denominators of the integrands (pole terms) involve $u_0$, whereas $\widetilde{u}_0$ is involved in the numerators (logarithmic subtracted terms). Notwithstanding: i) when $\mathrm{Re}(u_0^2) < 0$, the pole in eqs. (E.8) and (E.10) lies well away from the integration contour, its contour prescription is irrelevant and $u_0$ can be harmlessly traded for $\widetilde{u}_0$; ii) when $\mathrm{Re}(u_0^2) > 0$, $\widetilde{u}_0$ is real and $u_0 = \widetilde{u}_0 + i\,s\,\lambda$, in which case the contour prescription of the pole also proves ineffective. Indeed, let us rewrite the r.h.s. of eq. (E.10) explicitly as
\[
\widetilde{R}'(u_0, \bar{u}, \widetilde{u}_0) = \int_0^1 du\, \frac{\ln(u - \bar{u}) - \ln(\widetilde{u}_0 - \bar{u})}{u - u_0}\,:
\]
the numerator vanishes $\propto (u - \widetilde{u}_0)$ at $u = \widetilde{u}_0$, so that the pole at $u = u_0$, whether it lies on the integration contour or not, is actually fake. Only the prescription coming from the infinitesimal $\mathrm{Im}(\bar{u})$ in $\ln(u - \bar{u})$ matters; it determines on which side of the cut the integration contour runs. Thus $\widetilde{R}'(u_0, \bar{u}, \widetilde{u}_0)$ is equivalently given by:
\[
\widetilde{R}'(u_0, \bar{u}, \widetilde{u}_0) = \int_0^1 du\, \frac{\ln(u - \bar{u}) - \ln(\widetilde{u}_0 - \bar{u})}{u - \widetilde{u}_0} \qquad \mathrm{(E.11)}
\]
This is so because the subtracted term involves $\widetilde{u}_0$, not $u_0$. If instead the subtracted term were $\ln(u_0 - \bar{u})$, a potential conflict might occur between the signs of $\mathrm{Im}(u_0)$ and $\mathrm{Im}(\bar{u})$. This might result in a non-vanishing residue $[\ln(\widetilde{u}_0 - \bar{u}) - \ln(u_0 - \bar{u})]$ whenever $\widetilde{u}_0 - \mathrm{Re}(\bar{u}) < 0$ and $\mathrm{sign}(\mathrm{Im}(u_0 - \bar{u})) = \mathrm{sign}(\mathrm{Im}(\bar{u}))$.
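The η function defined above can be implemented directly and cross-checked against its defining combination of principal-branch logarithms. The sketch below is an illustrative aside (sample points are chosen off the real axis, so the third, boundary case of the definition is not exercised):

```python
import cmath
import math

def eta(z1, z2):
    """eta(z1, z2) = ln(z1*z2) - ln(z1) - ln(z2), cf. eq. (E.6)."""
    im1, im2, im12 = z1.imag, z2.imag, (z1 * z2).imag
    if im1 >= 0 and im2 >= 0 and im12 < 0:
        return -2j * math.pi
    if im1 < 0 and im2 < 0 and im12 >= 0:
        return 2j * math.pi
    if im1 == 0 and im2 == 0 and z1.real < 0 and z2.real < 0:
        return -2j * math.pi
    return 0.0

# Cross-check against the principal branch of cmath.log on the unit circle:
samples = [cmath.exp(1j * t) for t in (-2.8, -1.0, 0.3, 1.4, 2.9)]
for z1 in samples:
    for z2 in samples:
        direct = cmath.log(z1 * z2) - cmath.log(z1) - cmath.log(z2)
        assert abs(direct - eta(z1, z2)) < 1e-12
print("eta check passed")
```

The non-trivial values $\mp 2 i \pi$ are hit whenever the phase of the product wraps around the branch cut of the principal logarithm.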
No such subtlety is faced in the complex mass case, treated in the second article of this series, for which poles and cuts are generically located well away from the integration contour.
In conclusion, eq. (E.10) can be safely traded for eq. (E.11) in all cases. The integral $R'(y, z)$ can be computed following the main lines of the computation in appendix B of ref. [12], more generally for two complex numbers $y$ and $z$ ($z$ can be real or complex, whereas $\mathrm{Im}(y)$ shall be kept non-vanishing), as this will also be relevant in the complex mass case. We first make the change of variable $x = u - y$:
\[
R'(y, z) = \int_0^1 du\, \frac{\ln(u - z)}{u - y} = \int_{-y}^{1-y} dx\, \frac{\ln(x + y - z)}{x}
\]
The quantity $[\widetilde{R}(u_0, -\bar{u}, \widetilde{u}_0) - \widetilde{R}(-u_0, \bar{u}, -\widetilde{u}_0)]$ can be computed along the same lines, and we end up with:
\[
\widetilde{R}(u_0, \bar{u}, \widetilde{u}_0) + \widetilde{R}(u_0, -\bar{u}, \widetilde{u}_0) = \widetilde{R}(-u_0, \bar{u}, -\widetilde{u}_0) + \widetilde{R}(-u_0, -\bar{u}, -\widetilde{u}_0) + F(\widetilde{u}_0, \bar{u}) + i\,\pi\,\left[S(\widetilde{u}_0 - \bar{u}) + S(\widetilde{u}_0 + \bar{u})\right] \ln\!\left(\frac{\widetilde{u}_0}{1 + \widetilde{u}_0}\right) \qquad \mathrm{(E.16)}
\]
where the function $F(\widetilde{u}_0, \bar{u})$ involves, in particular, η functions. Putting everything together in eq. (E.7), things can be further rearranged using eq. (E.1), noting that, for any two complex numbers $a$ and $b$, $\ln(a\,b) = \ln(a) + \ln(b) + \eta(a, b)$. In particular, when $\widetilde{u}_0$ is real, all the η functions appearing explicitly or through $F(\widetilde{u}_0, \bar{u})$ in eq. (E.19) vanish.
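The statement that the contour prescription of the pole becomes ineffective once the subtracted term involves the real $\widetilde{u}_0$ can be illustrated numerically. The sketch below (an illustrative aside with arbitrarily chosen values of $\bar{u}$ and $\widetilde{u}_0$, not taken from the paper) integrates $[\ln(u - \bar{u}) - \ln(\widetilde{u}_0 - \bar{u})]/(u - u_0)$ over $[0,1]$ for both signs of $\mathrm{Im}(u_0) = \pm\delta$ and finds the same result:

```python
import cmath

# Illustrative parameters (not from the paper):
ubar = 0.3 - 0.02j   # cut position, slightly below the real axis
ut0 = 0.6            # real tilde-u_0, inside [0, 1]
delta = 1e-9         # small shift of the pole u_0 = ut0 + i*s*delta

def integral(s, n=20000):
    """Midpoint-rule evaluation of
       int_0^1 du [ln(u - ubar) - ln(ut0 - ubar)] / (u - (ut0 + i*s*delta))."""
    h = 1.0 / n
    sub = cmath.log(ut0 - ubar)
    total = 0.0 + 0.0j
    for k in range(n):
        u = (k + 0.5) * h
        total += (cmath.log(u - ubar) - sub) / (u - (ut0 + 1j * s * delta))
    return total * h

i_plus, i_minus = integral(+1), integral(-1)
# The numerator vanishes at u = ut0, so the pole is fake and the two
# prescriptions agree up to terms of order delta:
print(abs(i_plus - i_minus))
```

Only the side of the cut of $\ln(u - \bar{u})$, fixed here by $\mathrm{Im}(\bar{u}) < 0$, affects the value of the integral.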
F Prescription for the imaginary part of det(S): real masses, i.e. infinitesimal imaginary part
In this appendix, we discuss the sign of the imaginary part of $\det(S)$ in the real mass case. The second determinant in the r.h.s. of eq. (F.5) is the value of the characteristic polynomial of the matrix $-K$, with $K = S^{-1} \cdot E$, evaluated at $x = 1/(i\lambda)$. It follows that the prescription for the sign of "the imaginary part of $\det(S)$" is "$+\lambda\, \mathrm{sign}\!\left((-1)^{N-1} \det(G)\right)$" (as long as $\det(G) \neq 0$). Let us also notice that the proof holds whether $\lambda$ is infinitesimal or finite. We remark that the prescription coming naturally for all $\Delta_n$ in secs. 2 and 3, namely a $+i\lambda$, after applying the Stokes-type identity, is the one obtained by inserting the prescription given by eq. (F.4) (supplemented by eq. (F.10)) into eq. (C.11).
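The factorization step invoked above is generic linear algebra: for any invertible $S$, $\det(S + i\lambda E) = \det(S)\,(i\lambda)^N \det\!\big(x\,\mathbb{1} + K\big)\big|_{x = 1/(i\lambda)}$ with $K = S^{-1} E$, i.e. the second factor is the characteristic polynomial of $-K$ evaluated at $x = 1/(i\lambda)$. As an illustrative aside, this can be checked numerically; the $3\times 3$ symmetric matrix $S$ and the all-ones matrix $E$ below are arbitrary sample choices (assumptions for illustration, not the paper's kinematic matrices):

```python
# Numerical check of det(S + i*lam*E) = det(S) * (i*lam)**N * det(x*I + K)
# at x = 1/(i*lam), with K = S^{-1} E.  Sample matrices only.
N = 3
S = [[2.0, 0.3, -0.5],
     [0.3, 1.5, 0.4],
     [-0.5, 0.4, 3.0]]
E = [[1.0] * N for _ in range(N)]   # assumed all-ones matrix, for illustration
lam = 0.37                          # finite lambda: the identity is exact

def det3(m):
    """Determinant of a 3x3 matrix (works for complex entries)."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def inv3(m):
    """Inverse of a 3x3 matrix via (transposed) cyclic cofactors."""
    d = det3(m)
    return [[(m[(i+1) % 3][(j+1) % 3] * m[(i+2) % 3][(j+2) % 3]
            - m[(i+1) % 3][(j+2) % 3] * m[(i+2) % 3][(j+1) % 3]) / d
             for i in range(3)] for j in range(3)]

c = 1j * lam
lhs = det3([[S[i][j] + c * E[i][j] for j in range(N)] for i in range(N)])
K = matmul(inv3(S), E)
x = 1.0 / c
rhs = det3(S) * c**N * det3([[x * (i == j) + K[i][j] for j in range(N)]
                             for i in range(N)])
print(abs(lhs - rhs))   # agreement up to rounding
```

The identity follows from $\det(S + cE) = \det(S)\det(\mathbb{1} + c\,S^{-1}E)$ and $c^N \det\big(c^{-1}\mathbb{1} + K\big) = \det(\mathbb{1} + cK)$, consistent with the remark that the proof holds for finite $\lambda$ as well.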