A convenient approach to characterizing model uncertainty with application to early dark energy solutions of the Hubble tension

Despite increasingly precise observations and sophisticated theoretical models, the discrepancy between measurements of H0 from the cosmic microwave background or from Baryon Acoustic Oscillations combined with Big-Bang Nucleosynthesis versus those from local distance ladder probes -- commonly known as the $H_0$ tension -- continues to perplex the scientific community. To address this tension, Early Dark Energy (EDE) models have been proposed as alternatives to $\Lambda$CDM, as they can change the observed sound horizon and the inferred Hubble constant from measurements based on this. In this paper, we investigate the use of Bayesian Model Averaging (BMA) to evaluate EDE as a solution to the H0 tension. BMA consists of assigning a prior to the model and deriving a posterior as for any other unknown parameter in a Bayesian analysis. BMA can be computationally challenging in that one must approximate the joint posterior of both model and parameters. Here we present a computational strategy for BMA that exploits existing MCMC software and combines model-specific posteriors post-hoc. In application to a comprehensive analysis of cosmological datasets, we quantify the impact of EDE on the H0 discrepancy. We find an EDE model probability of $\sim$90% whenever we include the H0 measurement from Type Ia Supernovae in the analysis, whereas the other data show a strong preference for the standard cosmological model. We finally present constraints on common parameters marginalized over both cosmological models. For reasonable priors on models with and without EDE, the H0 tension is reduced by at least 20%.


INTRODUCTION
The standard cosmological model, Λ-CDM, has been remarkably successful in explaining various cosmological phenomena. However, the persisting discrepancy in the determination of the Hubble constant, H0, has challenged the robustness of this model. H0, the present-day expansion rate of the Universe, plays a pivotal role in our understanding of cosmic evolution, the age of the Universe, and the formation of large-scale structures. It is typically inferred using one of two approaches: local distance ladder measurements, or constraints as part of a cosmological model from more general observations such as those of the Cosmic Microwave Background (CMB) or the combination of Baryon Acoustic Oscillation (BAO) and Big-Bang Nucleosynthesis (BBN) observations.
Leveraging the CMB data, the Planck satellite mission reported a value of H0 close to 68 km s^-1 Mpc^-1 for Λ-CDM cosmologies (Planck Collaboration et al. 2020b). This is supported by standard ruler observations of BAO from the Sloan Digital Sky Survey (SDSS) combined with BBN, which favour similar values for the same models (Alam et al. 2021). For the CMB, the absolute present-day measurement results from the photon temperature and the black-body assumption, giving the physical photon density. Density ratios shaping the CMB power spectrum then give the matter and baryon densities, with the angular scales of the projected peak positions finally leading to a constraint on H0 through the angular diameter distance. For BAO+BBN, the physics is similar: BBN observations of the baryon-to-photon ratio, together with the present-day photon density from the CMB temperature, fix the present-day size of the BAO ruler. The projected scale of the observed BAO then constrains H0 in a similar way to the CMB constraints. In contrast, observations from local distance ladder studies, such as the Hubble Space Telescope (HST) observations of Cepheid variable stars, favour a higher value of H0 ∼ 74 km s^-1 Mpc^-1 (Riess et al. 2019). These observations can be considered a more direct route to measuring the present-day expansion rate, using distance estimates for individual objects together with their redshifts. The divergence between these measurements, referred to as the H0 tension, has persisted for a number of years now and, in some cases, has intensified as more precise data become available.
One proposal to resolve the H0 tension is the introduction of Early Dark Energy (EDE) into the cosmological model. EDE posits the existence of a scalar field in the early Universe, leading to an enhancement of the expansion rate during the epoch of matter-radiation equality (Karwal & Kamionkowski 2016; Poulin et al. 2019). This additional early acceleration can alleviate the H0 tension by modifying the early-time cosmic expansion and the position of the acoustic feature in the radiation and matter power spectra. The EDE model has gained popularity due to its capacity to reconcile the Planck and HST measurements, but the extent to which the data support this model is unclear.
To address the challenges of model selection and parameter estimation within the context of the H0 tension and EDE, we employ Bayesian Model Averaging (BMA) as a powerful statistical tool (Parkinson & Liddle 2013; Trotta 2008). By treating the model itself as another unknown in a Bayesian framework, BMA simultaneously accounts for uncertainty in parameter estimates as well as in model choice. BMA offers a principled approach to statistically weighting the evidence from various models, enabling a comprehensive analysis of the viability of EDE as a potential solution to the H0 tension.
Standard methods for making Bayesian inferences from cosmological data consider only a single model and then use a method such as Markov-Chain Monte-Carlo (MCMC) to sample from the posterior (Hastings 1970; Tierney 1994) conditional on that model. Accounting for model uncertainty via BMA, however, requires the joint posterior for both models and parameters; this requires accounting for both the likelihood and the model prior, which is not readily available in model-conditional samples. We outline a method not previously applied in cosmology that collapses these model-conditional samples post hoc, thus avoiding the need for specialized software for BMA.
We use this method to apply Bayesian Model Averaging to scrutinize the implications of the EDE model for resolving the H0 tension. We carefully consider a range of observational datasets, including CMB measurements, Type Ia Supernovae and Baryon Acoustic Oscillations, to constrain the relevant cosmological parameters. By employing BMA, we assess the credibility of the EDE model as a viable explanation for the H0 tension, while simultaneously exploring the possible range of cosmological scenarios.
The organization of this paper is as follows: in Section 2 we present the theoretical framework of the EDE model and its underlying physical motivations. Section 3 details the Bayesian Model Averaging methodology employed in this analysis. In Section 4, we present our results and discuss the implications of the EDE model for the resolution of the H0 tension. In Section 5 we conclude with a discussion of our findings and potential avenues for further exploration.

EARLY DARK ENERGY
The EDE model introduces an additional dark energy component, characterized by a fractional energy density parameter ΩEDE and an associated equation-of-state (EOS) parameter wEDE, which is significant during the epoch of recombination. The presence of EDE modifies the cosmic expansion rate, leading to a different evolution of the scale factor a(t) compared to the standard Λ-CDM model. During the epoch of recombination, the Friedmann equation for the Hubble parameter H(t) can be written as

H^2(a) = H_0^2 [ Ω_m a^-3 + Ω_r a^-4 + Ω_Λ + Ω_EDE(a) ],    (1)

where ΩX ≡ ρX/ρcrit is the density parameter for the X component (namely matter, radiation, Λ and EDE). We assume that Λ is constant and that the Universe is set up to have a flat geometry, so that Ωk = 0 and the curvature term does not appear in Eq. 1.
In this paper we focus on the theoretical EDE framework introduced by Smith et al. (2020), based on a quintessence scalar field ϕ with an oscillating potential

V_n(ϕ) = m^2 f^2 [1 − cos(ϕ/f)]^n.    (2)

The case n = 1 represents the well-established axion potential. We will restrict our analysis to the case n = 3, for the reasons given in Hill et al. (2020). The dynamics of the scalar field are governed by the Klein-Gordon equation

ϕ̈ + 3H ϕ̇ + V_{n,ϕ} = 0,    (3)

where H is the Hubble parameter, the dots represent derivatives with respect to cosmic time, and V_{n,ϕ} = dV_n/dϕ. Heuristically speaking, in order to effectively mitigate the H0 tension, the EDE contribution is concentrated in a short period after radiation-matter equality and before recombination, when the scalar field dominates the energy density of the Universe and drives the growth of the scale factor before rapidly decaying. We refer the interested reader to Turner (1983), Marsh & Ferreira (2010), Poulin et al. (2019), and references therein, for the details. It is useful to define a re-normalized field variable Θ = ϕ/f, so that Eq. 3 reads

Θ̈ + 3H Θ̇ + m^2 n [1 − cos Θ]^(n−1) sin Θ = 0.    (4)

The evolution of the scalar field is completely determined by the set of three parameters {m, f, Θi}, where Θi represents the initial field value in units of f. From an observational point of view, we can characterize the background evolution by means of the maximum fraction of the total energy density in the field, fEDE, and the redshift corresponding to that maximum, zc, as heuristically illustrated in Fig. 1 for an axion-like EDE field (Poulin et al. 2018); for a given combination of {m, f} we can always find a value of {fEDE, zc}, so that the model's free parameter set consists of either {m, f, Θi} or {fEDE, zc, Θi}.
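As an illustration of these dynamics, the renormalized Klein-Gordon equation can be integrated numerically in a toy background. The sketch below is not part of our analysis pipeline: the radiation-era background, the units, and the parameter values are all illustrative assumptions. It shows the field frozen while H exceeds its effective mass, then rolling and oscillating with an amplitude damped by Hubble friction.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy integration of Theta'' + 3 H Theta' + m^2 n (1 - cos Theta)^(n-1) sin Theta = 0
# in a radiation-dominated background H(t) = 1/(2t), in units where m = 1.
# Parameter values are illustrative, not fits to data.
n, m, theta_i = 3, 1.0, 2.0  # hypothetical potential index, mass, initial field value

def rhs(t, y):
    theta, dtheta = y
    H = 1.0 / (2.0 * t)  # radiation era: a ~ t^(1/2)
    dV = m**2 * n * (1.0 - np.cos(theta))**(n - 1) * np.sin(theta)
    return [dtheta, -3.0 * H * dtheta - dV]

# Start deep in the era when H >> m, so the field is frozen at theta_i.
sol = solve_ivp(rhs, (1e-3, 1e3), [theta_i, 0.0], rtol=1e-8, atol=1e-10)
theta_late = sol.y[0, -1]
# Once H drops below the effective mass, the field rolls and oscillates,
# its energy density redshifting away faster than the background's.
```

In a full treatment the background H itself responds to the field's energy density; here it is fixed, which suffices to exhibit the freeze-thaw-decay behaviour described above.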

INTRODUCTION TO BAYESIAN MODEL AVERAGING
In many statistical analyses, we are faced with the task of selecting the most appropriate model from a set of competing models. Traditional methods first fit a set of candidate models and then compare their fit according to some criterion such as the Akaike Information Criterion (Akaike 1974, AIC); finally, inferences are drawn with respect to the "best"-fitting model. A limitation of this approach is that inferences are typically drawn as if the model was chosen a priori; that is, it does not account for uncertainty in the choice of model. By contrast, in a Bayesian framework, model uncertainty can be handled by treating the model like any other unknown, and computing the posterior probability of each candidate model given the observed data and chosen prior. This allows for a principled approach to comparing models and quantifying their relative plausibility based on the data and prior knowledge.
In addition to making direct inferences about the probabilities of different models, this approach allows one to formally account for model uncertainty when making inferences about cosmological parameters via Bayesian model averaging, or BMA. Intuitively, BMA computes a weighted average over the conditional posteriors for the cosmological parameters under each candidate model. This marginalization occurs over the model posterior, ensuring that models with higher probabilities contribute more to the final inference.
Let M = 1, 2, ..., K be a discrete random variable indicating a choice among K different models. For convenience, we will refer to M = j as Mj. The posterior model probability for model Mi, denoted P(Mi|d), is computed as

P(Mi|d) = P(d|Mi) P(Mi) / Σ_{j=1}^{K} P(d|Mj) P(Mj),    (5)

where P(Mi) is the prior probability of model Mi, P(d|Mi) is the marginal likelihood of the data d given model Mi, and the denominator serves as a normalization factor to ensure that the probabilities sum to one. We assume that the prior probabilities are separable between model selection and model parameters.
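Given marginal likelihoods for each candidate model, this normalization is a one-line computation. The sketch below uses made-up log-evidences for two models and works in log space to avoid numerical underflow:

```python
import numpy as np

# Minimal sketch of Eq. 5: posterior model probabilities from (hypothetical)
# marginal likelihoods and a chosen model prior. Values are illustrative only.
log_evidence = np.array([-1650.2, -1648.9])  # log P(d|M_i) for LCDM, EDE (made up)
prior = np.array([0.5, 0.5])                 # flat model prior

log_post = log_evidence + np.log(prior)      # unnormalized log posterior
log_post -= log_post.max()                   # stabilize the exponentials
post = np.exp(log_post)
post /= post.sum()                           # normalize so probabilities sum to one
```

Subtracting the maximum before exponentiating matters in practice: cosmological log-evidences are large negative numbers, and exponentiating them directly would underflow to zero.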
The advantages of BMA are: (i) Accounting for Model Uncertainty: BMA provides a comprehensive approach to handling model uncertainty, acknowledging that multiple models may be plausible given the available data (Hoeting et al. 1999).
(ii) Robustness: BMA is less sensitive to the specific choice of a single "best" model, reducing the risk of overfitting or underfitting the data (Hoeting et al. 1999).
(iii) Improved Predictive Performance: By averaging predictions from multiple models, BMA often leads to more accurate predictions compared to relying on a single model (Madigan & Raftery 1994).

Fast-MPC and application to a two model problem
Without loss of generality, consider two candidate models, M1 and M2, parametrized by θ1 and θ2 respectively, and let θ be the union of θ1 and θ2. A full Bayesian analysis would characterize the joint posterior of Mi and θi for i = 1, 2, from which we aim to: (i) obtain the posterior probability of each candidate model, P(Mi|d); and (ii) obtain the marginal posteriors of the target parameters, P(θ|d). The former allows us to assess the extent of the evidence in favour of either candidate model; the latter allows our inferences about cosmological parameters to formally account for uncertainty in the choice of model.
Following Hastie & Green (2012), characterizing the full joint posterior (for model and parameters) can be done in one of two ways: across-model approaches and within-model approaches. The most well-known method in the former category is reversible jump MCMC (Green 1995, RJ-MCMC), a variant of the Metropolis-Hastings algorithm (known as Metropolis-Hastings-Green) that allows for between-model steps in which the dimension of the parameter vector changes to accommodate models of different sizes. This is a general approach that allows one to explore a large space of models, but it requires somewhat bespoke software implementation (see, for example, Karnesis et al. 2023), and achieving convergence can be challenging in practice.
Since samples of P(θ|d, M1) and P(θ|d, M2) are readily available from standard analyses conditional on a single model, it only remains to obtain the two model posteriors P(Mi|d) defined in Eq. 5. This decomposition frames the model-averaged posterior P(θ|d) as a weighted average of the model-conditional posteriors P(θ|d, Mi),

P(θ|d) = Σ_i P(θ|d, Mi) P(Mi|d).    (7)

Note, however, that it is not sufficient to weight by the model priors P(Mi); rather, the weights correspond to the model posteriors P(Mi|d): the data themselves inform how much weight is assigned to either model. The model posteriors P(Mi|d) in Eq. 5 consist of both the model priors, P(Mi), which are specified by the analyst, and the marginal likelihood

P(d|Mi) = ∫ P(d|Mi, θi) P(θi|Mi) dθi,   for i = 1, 2.

We obtain an estimate P̂(d|Mi) of P(d|Mi) based on the model-conditional MCMC chain, as described in Sect. 7 of Newton & Raftery (1994):

P̂(d|Mi) = [ Σ_{t=1}^{Ni} ω(θi^(t)) P(d|Mi, θi^(t)) ] / [ Σ_{t=1}^{Ni} ω(θi^(t)) ],    (9)

where θi^(t) is the t-th sample in the i-th model-conditional MCMC chain of length Ni, and ω(θi) = P(θi|Mi)/ϕ(θi). This is effectively importance sampling to perform a Monte-Carlo integration over the prior P(θi|Mi) with an importance sampling function ϕ(θi), which is set to the model-conditional posterior P(θi|Mi, d) so that we can use the readily available model-conditional chains.
To see this, note that by Bayes' theorem the importance function in Eq. 9 can be written as

ϕ(θi) = P(θi|Mi, d) = P(d|Mi, θi) P(θi|Mi) / P(d|Mi),    (10)

where P(d|Mi, θi) is the likelihood conditional on model Mi.
From this we have ω(θi) = P(d|Mi)/P(d|Mi, θi). Substituting this into Eq. 9, we see that the estimator is simply the harmonic mean of the likelihood, averaged over the model-conditional chain:

P̂(d|Mi) = [ (1/Ni) Σ_{t=1}^{Ni} 1 / P(d|Mi, θi^(t)) ]^{-1}.    (11)

To compute this, one must have access to (i) the samples θi^(t) from the model-conditional MCMC, as well as (ii) the full likelihoods evaluated at each of those samples, P(d|Mi, θi^(t)); both are easily accessible using a cosmological MCMC code such as Cobaya (Torrado & Lewis 2021; Lewis & Bridle 2002; Lewis 2013). It is shown in Newton & Raftery (1994) that this estimator converges almost surely to the true P(d|Mi). From this and the model priors we can estimate P(Mi|d) via Eq. 5.
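In practice the harmonic mean is evaluated from the stored log-likelihoods of each chain. A numerically stable sketch is shown below; the synthetic log-likelihood values are stand-ins for real chain output, not data from our analysis:

```python
import numpy as np
from scipy.special import logsumexp

def log_harmonic_mean_evidence(loglike):
    """Harmonic-mean estimate (Newton & Raftery 1994) of log P(d|M_i)
    from the log-likelihood values of a model-conditional MCMC chain:
    log HM = log N - logsumexp(-loglike)."""
    loglike = np.asarray(loglike)
    return np.log(len(loglike)) - logsumexp(-loglike)

# Toy usage with synthetic log-likelihoods (hypothetical, not real chain output):
rng = np.random.default_rng(0)
loglike = -0.5 * rng.chisquare(df=3, size=10_000)
logZ = log_harmonic_mean_evidence(loglike)
```

Working with `logsumexp` keeps the computation finite even when individual likelihoods would underflow; the estimator's well-known sensitivity to the smallest likelihood values in the chain is one reason convergence of this quantity deserves the separate diagnostic discussed in Sect. 4.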
Alternatively, reversible-jump MCMC (RJ-MCMC) would instead produce a single MCMC chain spanning multiple models (an example implementation is Eryn, described in Karnesis et al. 2023; https://github.com/mikekatz04/Eryn.git). The resulting chain could easily be used to determine model probabilities as for any other parameter, and would not require us to compute a direct estimate of P(d|Mi). In practice, however, convergence is a challenge using RJ-MCMC, and Fast-MPC allows for a faster and more convenient computation when (1) the number of candidate models is small and (2) an MCMC sample for each model is available (be it from off-the-shelf software or from publicly available data releases). We compare the two approaches in Appendix A, validating that they give consistent answers in a toy example.

RESULTS
For our analysis, we adopted the publicly available MCMC sampler for cosmology Cobaya (Torrado & Lewis 2021; Lewis & Bridle 2002; Lewis 2013); we exploited the theory code CAMB (Lewis et al. 2000; Howlett et al. 2012) and its Early Quintessence model implementation (Smith et al. 2020). The BMA was performed through Fast-MPC with our publicly available code; it consists of a library of functions for reading in MCMC samples and computing all the quantities necessary to estimate P(Mi|d).
• SNIa: 1048 luminosity distance measurements from the Pantheon SNIa light curves catalogue (Scolnic et al. 2018) and Cepheid measurements in the Large Magellanic Cloud from Riess et al. (2019).
We considered the following dataset combinations: CMB, CMB+Lensing+BAO+SN, BAO+BBN and BAO+BBN+SN. The Planck-free cases include information from BBN in the form of a Gaussian prior of Ωb h^2 = 0.02235 ± 0.00037, under the assumption of Neff = 3.046. This prior is derived using the empirically estimated deuterium reaction cross-section described in Adelberger et al. (2011), with abundance D/H = (2.527 ± 0.030) × 10^-5 from high-resolution spectroscopic measurements of seven quasar absorption systems (Cooke et al. 2018).
As introduced in Sect. 3, BMA allows us to compute the posterior probability of our model, given the data and the cosmological parameters, in a Bayesian way, and therefore relies on a prior assumption on the model probability, as shown in Eq. 5. We show in Fig. 2 the computed values of P(M|d) as a function of the prior probability assigned to the Λ-CDM model (or, equivalently, 1 − P(MEDE)), after removing a conservative burn-in fraction of 0.5. The solid lines are the Λ-CDM model probabilities and the dot-dashed lines are the corresponding EDE model probabilities for the different considered datasets. We identified two interesting prior configurations, represented by the grey dotted and dashed lines on the plot:
• the flat prior, corresponding to a 50-50 split probability between Λ-CDM and EDE, reflecting an agnostic assumption about which model should be favoured. This is the default prior assumption throughout the paper;
• the Occam's prior, expressing an a-priori preference for the Λ-CDM model, as it comes with fewer parameters. Note that some observations (such as supernova samples) are cumulative, or share the same sample variance (such as the CMB), and hence it is not good practice to form a prior based on data that are also used in the likelihood. We therefore do not fold into this prior any previous match between observations and the Λ-CDM model; we include results using this prior as a counterpoint to the flat prior, allowing us to test the sensitivity to the prior. We choose a 90% prior probability in favour of the Λ-CDM cosmological model. As well as explicitly putting in a preference for a simpler model, as we do here, we note that BMA also generally favours models with fewer parameters, because they have a smaller volume over which the likelihood is marginalized; the idea behind Occam's razor thus does not arise only in the prior. For this reason, this prior choice results in strong advocacy for the Λ-CDM model, and its interpretation should be made carefully. The Occam's prior represents an extreme case supporting the Λ-CDM model, and we report results under this assumption alongside the standard flat prior for completeness.
We report the model probabilities under both prior assumptions in Table 1. The Λ-CDM model is favoured under both priors for the BAO+BBN and SN datasets when considered alone, with Λ-CDM probabilities of 79% and 83% respectively when we adopt a flat model prior; Λ-CDM is favoured above 90% under the Occam's prior. The CMB-only model probability closely matches the prior probabilities, with 51% and 91% Λ-CDM model probabilities for the flat prior and Occam's prior respectively; a straightforward explanation is that the data are not informative about the model choice, and all the information is coming from the prior. On the other hand, we find a strong EDE preference whenever we merge the datasets into CMB+Lensing+BAO+SN and BAO+BBN+SN, with more than 99% EDE model probability under a flat model prior, and, respectively, 96% and 98% under the Occam's prior. It is not a surprise that the joint datasets BAO+BBN+SN and CMB+Lensing+BAO+SN show a strong preference for EDE; that was precisely the reason EDE was introduced in the first place. These numbers should not be interpreted as one model being right or wrong in an absolute sense, but rather as the probability of one or the other being favoured by the data. That is, our averaged constraints are still conditioned on one of the two models (ΛCDM or EDE) being correct.
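The prior dependence displayed in Fig. 2 follows directly from Eq. 5. As a sanity check, the sketch below evaluates the EDE posterior probability for a hypothetical Bayes factor under the two prior configurations used in the paper; the Bayes factor value is made up purely for illustration:

```python
import numpy as np

# Sketch of the prior sensitivity in Fig. 2: for a fixed Bayes factor
# B = P(d|EDE)/P(d|LCDM), the EDE posterior probability as a function
# of the prior placed on LCDM. B below is illustrative, not a measured value.
def p_ede(B, p_lcdm):
    p = 1.0 - p_lcdm                 # prior on EDE
    return B * p / (B * p + p_lcdm)  # Eq. 5 for two models

B = 5.0                   # hypothetical evidence ratio in favour of EDE
flat = p_ede(B, 0.5)      # flat prior: 50-50 split
occam = p_ede(B, 0.9)     # Occam's prior: 90% on LCDM
```

Even a Bayes factor of 5 in favour of EDE is substantially diluted by the Occam's prior, which is the qualitative behaviour seen when moving between the two columns of Table 1.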
A further goal of our study is to incorporate the model uncertainty into the final cosmological parameter estimates. If the credible intervals for a particular parameter differ substantially between models, then we expect the credible interval to widen when averaged over models. BMA thus has the general ability to reduce tensions by providing wider credible intervals that reflect the model uncertainty; this is why marginalizing over models tends to mitigate the tension on cosmological parameters, especially on H0, as we find in the following section.

Model-Averaged Posteriors
The knowledge of P(M|d) allows us to compute the cosmological parameters' posterior distributions marginalized over the model uncertainty, as described in Eq. 7. For our analysis, we considered the following combination of the Λ-CDM base parameters, with flat prior ranges unless otherwise specified: today's value of the Hubble parameter H0, the baryon density parameter Ωb h^2, the cold dark matter density parameter Ωc h^2, the scalar perturbation amplitude log(10^10 As), the scalar perturbation spectral index ns, and the optical depth of reionization τ. We report in Table 2 the constraints for the six Λ-CDM base parameters after marginalizing over the model uncertainty with a flat model prior (top row for each parameter) and the Occam's model prior (bottom row). We report only the parameters common to the Λ-CDM and EDE models, for those are the only ones influenced by the model uncertainty. For the reader's reference, we provide the EDE-specific model parameter constraints in Table 3 as evaluated in our analysis.
Fig. 3 shows the full posterior probability distributions, after the model uncertainty marginalization, for both the flat and the Occam's model priors. We shall make a few observations about the impact of the BMA on the cosmological parameters' posteriors. Firstly, the error bars increase with respect to the model-conditional parameter posterior error bars; this is entirely due to the additional model uncertainty considered in the analysis. The impact is more relevant with a flat model prior, especially for those datasets with a lower preference for one specific model, namely CMB and BAO+BBN. Because the EDE model was introduced to solve the Hubble tension, any preference for it using both SN and high-redshift data cannot be interpreted at the same level as if this were a model suggested before the tension became apparent. Ideally, we would therefore like to see a preference for the EDE model from one of the other datasets, rather than the combination. The results in Fig. 3 show that this is not the case: the high-redshift data, when analysed on their own, do not show a preference for either model. The effect of the Occam's prior is mostly visible in those very datasets, but the effect is to shrink the parameters' uncertainties towards those of the model-conditional parameter posteriors. Conversely, we notice the opposite behaviour for CMB+Lensing+BAO+SN and BAO+BBN+SN, where the model posterior expresses a strong preference for the EDE model with a flat model prior, whereas the Occam's prior increases the Λ-CDM model probability and therefore broadens the cosmological parameters' constraints. In the upper-left tile, and throughout the first column/row of the plot, we included the Planck 2018 68% and 95% confidence levels on H0: from Planck Collaboration et al. (2020b) we report the TT+EE+TE+lowE+Lensing+BAO constraint of H0 = 67.66 ± 0.42, and similarly for SNIa (Riess et al. 2019) we report the value H0 = 74.03 ± 1.42. Both these constraints assume a flat Λ-CDM model, and interestingly, the H0 tension is mitigated when accounting for the additional model uncertainty for most of the dataset combinations, from ≈ 1σ in the Planck-only case to ≲ 1σ in the BAO+BBN case, with a flat model prior. This does not come as a surprise; in fact, we would expect such posterior incompatibility to be reduced even further by accounting for more models, which may include new and different physics or different data modelling.
In the context of the H0 tension, we present in Fig. 4 a direct comparison of H0 from the CMB and SNIa datasets independently; this is the original comparison leading to the notorious H0 tension. The dashed lines refer to the constraints obtained assuming (conditioning on, in the probabilistic context) the Λ-CDM model. Solid lines show the constraints after marginalizing over the model uncertainty, taking both Λ-CDM and EDE into account. The H0 posterior distribution from the CMB broadens, and the parameter estimate changes from 67.29 ± 0.61 in the Λ-CDM model to 68.17 (+0.77/−1.4) after the marginalization, with a relative increase in the standard deviation of ∼ 92%, relieving the H0 tension by ≈ 1σ. The H0 constraint from SNIa, on the other hand, shows no difference in the final posterior between the conditional and the model-marginal case. This is due to the very high Λ-CDM model probability of the SN data, which is mostly attributed to the insensitivity of SNIa measurements to the EDE parameters. Another way to understand this is that the model selection favours Λ-CDM by virtue of the smaller number of model parameters necessary to describe the SNIa data.
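The model-averaged H0 posterior of Fig. 4 can be emulated by mixture sampling from Eq. 7: draw from each model-conditional posterior with probability P(Mi|d). The sketch below uses Gaussian stand-ins for the two conditional chains; the means, widths, and weights are illustrative assumptions, not our measured values:

```python
import numpy as np

# Sketch of Eq. 7 for H0: draw from the model-averaged posterior by mixing
# model-conditional samples with weights P(M_i|d). Gaussians stand in for
# real model-conditional chains; all numbers are illustrative.
rng = np.random.default_rng(1)
post_prob = np.array([0.5, 0.5])                # hypothetical model posteriors
h0_lcdm = rng.normal(67.3, 0.6, size=100_000)   # stand-in LCDM-conditional chain
h0_ede = rng.normal(69.5, 1.5, size=100_000)    # stand-in EDE-conditional chain

pick = rng.random(h0_lcdm.size) < post_prob[1]  # choose a model per draw
h0_bma = np.where(pick, h0_ede, h0_lcdm)        # model-averaged draws
# The mixture is wider than either conditional posterior: the spread between
# the two conditional means contributes to the model-averaged variance.
```

This also makes explicit why the averaged posterior can be multimodal or skewed when the conditional means differ, as reflected in the asymmetric CMB interval quoted above.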

Assessing Convergence
With any MCMC strategy, it is important to assess convergence in order to ensure one is sampling from the appropriate target distribution (the posterior). In the RJ-MCMC approach, where the model itself is sampled just like any other parameter, one must also consider convergence of the model choice. By contrast, in the within-model strategy, the model is not sampled as part of the MCMC; rather, one assesses convergence of the cosmological parameters separately for each model-conditional MCMC according to the Gelman-Rubin criterion on R − 1 (Gelman & Rubin 1992). Once these chains have converged, however, a natural question is whether one has achieved a stable estimate of the model posterior.
Consider an analogous Gelman-Rubin statistic for the collapsed chains,

R = (σ^2_WC + σ^2_BW) / σ^2_WC,    (12)

where σ^2_BW and σ^2_WC refer to the between-chain variance and the within-chain variance of the model posterior probability P(Mi|d), respectively. The latter should be understood, in our context, as the collapsed within-chain variance obtained if we stack the i-th chains from each model-conditional MCMC run in order to compute the model posterior probability. Moreover, there is a further complication in evaluating the within-chain variance of the model probability: because the model-posterior estimate is based on the harmonic mean, we estimate the within-chain variance using a second-order Taylor expansion of P(Mi|d), described in Appendix B.
The bottom panel of Fig. 5 shows R − 1 for the considered datasets as a function of the fraction of the overall number of samples. A good rule of thumb is to consider the chains converged if R − 1 < 0.1 (or equivalently R < 1.1; Gelman & Rubin 1992). Our analysis shows that the CMB and BAO+BBN chains achieved the required convergence already with 10% of the samples under the flat-prior assumption, and CMB+Lensing+BAO+SN with ∼ 30%, whereas BAO+BBN+SN requires longer chains, ∼ 50%, to reach R − 1 ∼ 0.1. The Occam's prior assumption, on the other hand, shows a more robust convergence; this is easily understood, since we assign the Λ-CDM model a 90% prior probability, and therefore most of the information is already in the model-conditional chains, for which convergence has already been assessed and achieved. Here σ^2_BW follows the usual between-chain definition, σ^2_BW = (1/(J − 1)) Σ_i (x̄_i − ⟨x̄⟩)^2, where x̄_i is the parameter's average for the i-th chain, J is the number of chains, and ⟨x̄⟩ is the average over all the chains. The top panel of Fig. 5 shows the individual variance contributions to the computation of R, namely the between-chain variance (solid lines) and the collapsed within-chain variance (dashed lines), the latter referring to collapsed pairs of model-conditional chains, as functions of the fraction of the total chain length, normalized to the value at the full chain length. This illustration is helpful for understanding the behaviour of the variance of the model posterior distribution which, as already mentioned in this section, does not necessarily converge along with the within-model parameter posteriors. In fact, we notice that it becomes stable for an overall chain-length fraction of ≳ 0.5-0.6, for which the within-model parameters are still not converged.
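For concreteness, a toy version of this diagnostic might look as follows. It assumes the simple ratio form R = (σ²_BW + σ²_WC)/σ²_WC, and both the per-chain model probabilities and the within-chain variance below are entirely synthetic; in the real analysis the within-chain variance comes from the Taylor expansion of Appendix B:

```python
import numpy as np

# Toy convergence check on the model posterior probability: compare the
# scatter of per-chain estimates of P(M1|d) with a within-chain variance.
# All numbers here are synthetic stand-ins, not values from our analysis.
p_chain = np.array([0.89, 0.91, 0.90, 0.92])  # hypothetical per-chain P(M1|d)

J = len(p_chain)
sigma2_bw = np.sum((p_chain - p_chain.mean())**2) / (J - 1)  # between-chain variance
sigma2_wc = 2.5e-4           # hypothetical collapsed within-chain variance
R = (sigma2_bw + sigma2_wc) / sigma2_wc
converged = (R - 1.0) < 0.1  # rule-of-thumb threshold
```

With this synthetic scatter the between-chain variance is comparable to the within-chain variance, so R − 1 far exceeds 0.1 and the model-probability estimate would not yet be declared stable, mirroring the behaviour of the BAO+BBN+SN chains at small chain fractions.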

CONCLUSIONS
We have presented an efficient BMA algorithm tailored for cosmological analyses, called Fast-MPC. We have used it to investigate the pressing issue of the Hubble constant (H0) tension within the framework of modern cosmology. Our work demonstrates the power of BMA to decide between models in a clear and easy-to-understand way, while providing a robust and accurate characterization of the Universe's fundamental properties through parameter constraints that marginalize over multiple models.
Through the application of Fast-MPC to a diverse set of cosmological datasets, including CMB, BAO, and Type Ia supernova observations, we have considered the EDE solution to the Hubble tension in the Bayesian framework. Any Bayesian comparison between models must include a prior, as with any other parameter, and BMA makes these priors explicit. By adopting a Bayesian perspective that incorporates model uncertainty, we have performed a more general cosmological parameter estimation, ultimately yielding more general constraints on cosmological parameters that allow EDE as a possible, but not the only, solution. We found that we cannot distinguish between the two models using CMB data alone, and therefore the additional model uncertainty leads to a ∼ 92% larger credible interval for H0 when a flat model prior is applied. On the other hand, the BAO+BBN and SNIa datasets show a preference for the Λ-CDM model, with model-uncertainty contributions to the overall H0 posterior width of ∼ 155% for the former, and none for the latter, under a flat model prior. Similarly, we found that the combined datasets CMB+Lensing+BAO+SN and BAO+BBN+SN show a ≳ 90% EDE model probability for both the flat and Occam's priors, with an uncertainty increase on H0 of ∼ 210% for CMB+Lensing+BAO+SN and ∼ 84% for BAO+BBN+SN.
Our findings underscore the broader potential of BMA as a valuable tool for cosmological investigations. This work highlights the necessity of considering a diverse range of cosmological models and the importance of quantifying the associated uncertainties. Though the potential for BMA in cosmology has been acknowledged elsewhere (see Parkinson & Liddle 2013), uptake has been slow, possibly due to the computational challenges of between-model sampling strategies like RJ-MCMC. By contrast, the simple computational approach presented here, which involves only post-hoc computation on standard model-conditional MCMC chains, has the potential to make BMA accessible in a wide array of problems, including large-scale cosmological analyses.
In an era of precision cosmology, where ever-more precise measurements are continually pushing the boundaries of our understanding, the Fast-MPC algorithm offers a robust and adaptable approach for model selection, accommodating the complexity of modern cosmological datasets. As we continue to refine our understanding of the Universe, BMA stands ready to contribute to more comprehensive and insightful cosmological investigations in the future.

Figure 5.
In the bottom panel, R − 1 for the considered datasets as a function of the fraction of the overall number of samples. The solid lines refer to the flat model prior, the dotted lines to the Occam's prior; R − 1 = 0.1 is marked with a black dashed line. In the top panel, the between-chain variance (solid lines) and the within-chain variance (dashed lines) as functions of the overall number of samples, normalized by the value computed for the full chain length, σ^2_f. The black dashed line marks a unit value of the variance ratio.

Figure 3 .
Figure 3. Comparison of the six Λ-CDM parameters marginalized over the model uncertainty with a flat model prior (bottom triangle, solid lines) and the Occam's prior (upper triangle, dot-dashed lines) from CMB (grey), CMB+Lensing+BAO+SN (green), BAO+BBN (red), and BAO+BBN+SN (blue); the parameters shown are the Hubble parameter today H0, the baryon density parameter Ωb h^2, the cold dark matter density parameter Ωc h^2, the scalar perturbation amplitude log(10^10 As), the scalar perturbation spectral index ns, and the optical depth of reionization τ. For BAO+BBN and BAO+BBN+SN, the last two parameters are not constrained by the data, and are therefore not shown in the plot. We include as vertical (horizontal) bands the constraints (68% and 95% CL) on H0 from Planck 2018 (Planck Collaboration et al. 2020b; grey) and from SNIa (Riess et al. 2019; orange), both under the Λ-CDM model assumption.

Figure 4 .
Figure 4. Posterior distribution of H 0 from CMB (blue) and SNIa (green) under the Λ-CDM model assumption (dashed line) and after marginalizing over the model uncertainty between Λ-CDM and EDE (solid line), assuming a flat model prior.
EDE in [10^3, 10^4] and fEDE in [0.6, 1.4], with a darker-to-brighter colour palette for increasing values of zEDE. The redshift evolution of the EDE component is computed for n = 3 in order to match the assumption in the main text.

Table 1 .
Model posteriors as [P(ΛCDM|d), P(EDE|d)] for two model prior assumptions: (1) the flat prior, assigning a 50% prior to each model, and (2) the Occam's prior, with a 90% prior on Λ-CDM versus a 10% prior on EDE.

Table 2 .
Constraints on cosmological parameters marginalized over the model uncertainty with the flat model prior assumption (top row) and the Occam's prior (bottom row).We report 68% credible intervals for each parameter.

Table 3 .
Constraints on the EDE model parameters from the different datasets taken into consideration in our analysis.