Clustering blood donors via mixtures of product partition models with covariates

Motivated by the problem of accurately predicting gap times between successive blood donations, we present a general class of Bayesian nonparametric models for clustering. These models allow for the prediction of new recurrences while accommodating covariate information that describes the personal characteristics of the sample individuals. We introduce a prior for the random partition of the sample individuals which encourages two individuals to be co-clustered if they have similar covariate values. Our prior generalizes product partition models with covariates (PPMx) in the literature, which are defined in terms of cohesion and similarity functions. We assume cohesion functions that yield mixtures of PPMx models, while our similarity functions represent the compactness of a cluster. We show that including covariate information in the prior specification improves the posterior predictive performance and, in the blood donation application, helps interpret the estimated clusters in terms of covariates.


Introduction
Blood is an important resource in global health care. To get an idea of its relevance, note that the demand for blood ten years ago was about 10 million units per year in the US and 2.1 million units in Italy (World Health Organization, 2012), and it is constantly growing. In almost all Western countries blood is collected from donors who give their blood voluntarily and freely. Blood cannot be artificially produced, and its short shelf life limits the time interval between withdrawal and utilization. Therefore, efficient blood stocks and supply chains are of paramount importance for its availability. In modern health care systems, blood is supplied by what is called the Blood Donation Supply Chain (BDSC), which accounts for provisioning an adequate amount of blood units to meet the demand of transfusion centers and hospitals. Blood collection is the first echelon of the BDSC, and it has a relevant impact on the entire system in terms of blood unit flow. Some of the problems concerning blood collection from donors, such as demand optimization and the scheduling of blood collection centers, have not been thoroughly investigated so far (Baş Güre et al., 2018). In particular, a key issue lies in the uncertainty associated with the arrival of donors at the collection centers. Thus, predicting donations and their temporal distribution is crucial to better feed and control the entire BDSC.
This work has been motivated by applied and methodological goals. The applied purpose is the accurate prediction of donation times for the donors enrolled in a blood collection center. Such prediction is useful for estimating the blood supply of different blood groups and Rh factors over time and for managing resources. This problem has been addressed by predicting the number of donors that will arrive on a given date (Bosnes et al., 2005), by estimating ARMA models for total daily blood demand (Fortsch and Khapalova, 2016), or by analysing blood donor return behaviour using frequentist survival analysis methods (James and Matthews, 1996). In particular, our data have been collected at the Milano department of the Associazione Volontari Italiani del Sangue, referred to simply as AVIS in the following, which serves a large hospital located in the same city (Niguarda hospital). The dataset includes donors who did not exit the recurrent donation process in the time window we consider (6.5 years) and, for this reason, we refer to them as loyal donors in the rest of the paper. Moreover, AVIS Milano aims at finding out whether there exist homogeneous clusters of donors, characterized by specific patterns of recurrent donation times and similar donor characteristics (given by personal and registry information). Such clustering is relevant for understanding how donation patterns may vary with specific characteristics of the donors, but it also helps to improve the predictions of future donation times.
From the methodological viewpoint, we propose suitable Bayesian models for clustering donors using recurrent event data. The models allow prediction of new recurrences, accommodating covariate information that describes the personal characteristics of the sample individuals. At the same time, we use covariate information on the individuals in the prior distribution of the random partition of the sample individuals themselves. The Bayesian framework naturally handles model-based clustering by assuming that the random parameter of the model includes the partition of the sample subjects (Hartigan, 1990; Quintana and Iglesias, 2003). We introduce a prior encouraging two subjects to co-cluster a priori if their corresponding covariate values are similar. To sum up, the methodological goal of this paper is to develop a model for recurrent event data with a flexible non-exchangeable prior for the random partition depending on covariates.
Covariate-dependent priors in a Bayesian nonparametric context are relatively new. The seminal work in this area is MacEachern (1999). However, the reference papers for clustering with covariates are Müller and Quintana (2010) and Müller et al. (2011). In these works, the prior on the random partition is given via cohesion and similarity functions. The cohesion function c typically depends only on the cluster size, while the similarity g is a non-negative function that formalizes the similarity among the covariates in the cluster. The covariate-dependent prior is given through a product partition approach as

π(ρ_n = {A_1, …, A_{k_n}} | x_1, …, x_n) ∝ ∏_{j=1}^{k_n} c(A_j) g(x*_j),   (1)

where ρ_n denotes the partition of the n sample subjects (cf. Section 2) and x*_j = {x_i, i ∈ A_j} denotes the collection of covariates corresponding to items belonging to the j-th cluster. In Müller and Quintana (2010) and Müller et al. (2011), the cohesion function is derived from the Dirichlet process and the similarity g is the marginal distribution in an auxiliary probability model, even if the x_i, i = 1, …, n, are not assumed random. For similar approaches, possibly including variable selection or spatial dependence, see Park and Dunson (2010), Quintana et al. (2015), Barcella et al. (2016), Page and Quintana (2016), Page and Quintana (2018) and Page et al. (2022). Alternative models with dependent priors for random partitions are in Dahl et al. (2017), Dahl (2008), Blei and Frazier (2011) and Bianchini et al. (2020).
Our covariate-dependent prior generalizes (1) in two directions: i) to mitigate the rich-get-richer property, we depart from the cohesion function of the Dirichlet process and assume a cohesion function c generated by a more general class of random probability measures, namely the normalized completely random measures (Regazzini et al., 2003); ii) we consider similarity functions g which are not marginal densities; borrowing the idea from data-driven clustering approaches, we introduce g's measuring the compactness of the covariates in each cluster. In this paper we use compactness to denote a measure of proximity of the covariate vectors in a cluster, i.e., a cluster is compact when the total distance between the covariates in the cluster and the associated centroid is small. The resulting model turns out to be a mixture of PPMx models as in (1), allowing the construction of a general MCMC sampler which does not depend on the specific choice of similarity.
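To make the interplay of cohesion and similarity concrete, the following sketch (illustrative code, not from the paper) evaluates the unnormalized PPMx prior mass of a partition as in (1), using the Dirichlet-process cohesion c(A_j) = κ(n_j − 1)! and a toy similarity g = exp(−total distance to the cluster mean) for one-dimensional covariates; both choices are assumptions of this example.

```python
from math import gamma, exp

def ppmx_score(partition, X, kappa=1.0):
    """Unnormalized PPMx prior mass as in (1): prod_j c(A_j) * g(x*_j).

    Illustrative choices: Dirichlet-process cohesion c(A_j) = kappa*(n_j-1)!
    and toy similarity g(x*_j) = exp(-sum_i |x_i - centroid|)."""
    score = 1.0
    for A in partition:
        xs = [X[i] for i in A]
        centroid = sum(xs) / len(xs)
        g = exp(-sum(abs(x - centroid) for x in xs))
        score *= kappa * gamma(len(A)) * g   # gamma(n) = (n-1)!
    return score

# covariates: items 0 and 1 are close, item 2 is far away
X = [0.0, 0.1, 5.0]
tight = ppmx_score([[0, 1], [2]], X)   # co-clusters the similar items
mixed = ppmx_score([[0, 2], [1]], X)   # co-clusters dissimilar items
```

As intended by a prior of this form, the partition grouping the two similar covariate values receives the larger unnormalized mass (`tight > mixed`).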
We first describe the model for a unidimensional regression setting and then consider a more general model for the AVIS data, using a longitudinal approach for the sequence of gap times between recurrent events (blood donations). In the latter case, since the gap times (on the log scale) are skewed, we assume a skew-normal distribution for the response (Azzalini, 2005; Arellano-Valle and Azzalini, 2006). We consider three different similarity functions in the simulated examples and in the motivating application, discussing how their analytical properties might influence posterior inference. Note that, since the normalizing constants of some of the full-conditionals of our MCMC are not known analytically, we cannot assume that the hyperparameters of the cohesion or of the similarity functions are random. However, we discuss how to set these hyperparameters.
The design of an MCMC sampler for the computation of posterior inference is among the contributions of our paper; the sampler also accommodates the longitudinal nature of the responses and the skew-normal sampling model of the blood donation application.
We mention here that our prior π(ρ_n) has the attractive property of encouraging individuals with equal or similar covariates to be co-clustered. Nevertheless, this prior does not have the marginal invariance property, that is, the prior of the random partition for n individuals cannot be obtained as the marginal of the prior of the random partition for n + 1 individuals.
The paper is structured as follows. Section 2 introduces mixtures of product partition models derived by normalized completely random measures. In Section 3 we propose our covariate-dependent model, while Section 4 reports a few details on posterior calculation. Section 5 discusses alternative similarity functions. We discuss posterior inference for the AVIS data in Section 6. Finally, Section 7 concludes the paper with a discussion.

A Bayesian nonparametric framework for clustering
We consider a Bayesian model in which the latent partition of the data is a random variable, distributed according to a prior. Let {Y_1, …, Y_n} be a set of observations, where Y_i takes values in R^k for some integer k, endowed with the associated Borel σ-field; we will not refer explicitly to σ-fields in this paper unless necessary. The typical assumption on the conditional distribution of the data is

L(Y_1, …, Y_n | ρ_n, θ*) = ∏_{j=1}^{k_n} ∏_{i ∈ A_j} f(y_i; θ*_j),   (2)

where ρ_n := {A_1, …, A_{k_n}} is a partition of the data label set [n] = {1, …, n}, i.e., a collection of k_n non-empty disjoint subsets whose union equals [n], and {f(·; θ), θ ∈ Θ} is a family of densities on the sample space indexed by a parameter θ ∈ Θ, where Θ is a (measurable) subset of R^p for some integer p. Observe that here k_n denotes the number of clusters in the partition ρ_n. It is apparent from (2) that, conditionally on ρ_n, data are independent between different clusters and independent and identically distributed (i.i.d.) within each cluster. The model specification is completed by assigning a prior distribution to (ρ_n, θ*). In this context, the prior for ρ_n is given by

π(ρ_n = {A_1, …, A_{k_n}}) = p(n_1, …, n_{k_n})

for some function p(·), where n_j = #A_j denotes the cardinality of the j-th cluster in ρ_n and, given ρ_n, the parameters {θ*_1, …, θ*_{k_n}} in (2) are i.i.d. from some fixed distribution P_0, diffuse on Θ.
The random partition ρ_n is exchangeable if its distribution is invariant under the action of all permutations of {1, …, n}. In particular, Pitman (1995) proves that exchangeability holds if and only if p(·) is a symmetric function of its arguments. Furthermore, if p(1) = 1 and

p(n_1, …, n_k) = ∑_{j=1}^{k} p(n_1, …, n_j + 1, …, n_k) + p(n_1, …, n_k, 1),   (3)

then the function p(·) is called an exchangeable partition probability function (eppf). Formula (3) is known as the consistency of the eppf, or marginal invariance, i.e., the probability distribution for partitions of {1, …, n} is the same as the distribution obtained by marginalizing out item n + 1 from the probability distribution for partitions of {1, …, n, n + 1}.
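The addition rule (3) can be checked numerically on a concrete eppf. The sketch below (illustrative code) uses the Dirichlet-process (Ewens) eppf p(n_1, …, n_k) = κ^k ∏_j (n_j − 1)! / (κ)_n, with (κ)_n the rising factorial, and verifies that the mass of a partition of [n] equals the total mass of its extensions to [n + 1].

```python
from math import gamma

def ewens_eppf(counts, kappa=1.0):
    # Dirichlet-process (Ewens) eppf: kappa^k * prod_j (n_j - 1)! / (kappa)_n,
    # where (kappa)_n = kappa*(kappa+1)*...*(kappa+n-1) is the rising factorial
    n = sum(counts)
    rising = 1.0
    for i in range(n):
        rising *= kappa + i
    num = kappa ** len(counts)
    for nj in counts:
        num *= gamma(nj)          # gamma(nj) = (nj - 1)!
    return num / rising

def extended_mass(counts, kappa=1.0):
    # total mass of the extensions of a partition of [n] to [n+1]: item n+1
    # either joins an existing cluster j (n_j -> n_j + 1) or opens a
    # singleton, exactly the right-hand side of (3)
    total = ewens_eppf(list(counts) + [1], kappa)
    for j in range(len(counts)):
        c = list(counts)
        c[j] += 1
        total += ewens_eppf(c, kappa)
    return total
```

For any cluster-size vector the two quantities agree, and p(1) = 1, so the Ewens family is indeed an eppf.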
The practical specification of an exchangeable prior π for ρ_n is not a simple task, since ρ_n varies in a finite, but complex, space. However, by Pitman (1996), for each diffuse distribution P_0 and any eppf p(·) there exists a discrete random probability measure (r.p.m.) P on Θ, P ∼ Π(·; p, P_0), such that model (2) under the specified prior π(ρ_n) is equivalent to the hierarchical model in which, conditionally on P, the θ_i's are i.i.d. from P, where P_0 represents the expectation of P. In this case, the corresponding eppf p(n_1, …, n_{k_n}) is given in formula (30) in Pitman (1996). Among the possible choices, an interesting class of discrete random probability measures is the class of normalized completely random measures (Regazzini et al., 2003). This family of processes can be defined as

P = ∑_{i≥1} (J_i / T) δ_{τ_i},   (4)

where {J_i}_{i≥1} are the points of a Poisson process on R^+ with intensity κρ(s)ds, with ∫_0^{+∞} min{1, s} ρ(s) ds < +∞, ∫_0^{+∞} ρ(s) ds = +∞, κ > 0, and T := ∑_{i≥1} J_i. The random variables τ_i in (4) are i.i.d. from P_0 and independent of the J_i's.
The corresponding eppf assumes the following form:

p(n_1, …, n_{k_n}) = ∫_0^{+∞} D(u, n) ∏_{j=1}^{k_n} c(u, n_j) du,   (5)

where

D(u, n) = (u^{n−1} / Γ(n)) exp{−κ ∫_0^{+∞} (1 − e^{−us}) ρ(s) ds},   c(u, n_j) = κ ∫_0^{+∞} s^{n_j} e^{−us} ρ(s) ds.   (6)

See Pitman (2003) for more details on the expressions in (6).
In the next section, we define a new prior for the random partition ρ_n generalizing (5). We relax the exchangeability condition by incorporating covariates in the prior specification, so that subjects with equal or similar covariates are a priori more likely to co-cluster than others. In this way, we extend product partition models with covariates (PPMx, Müller et al., 2011).

Bayesian covariate driven clustering
In a regression context, let y_i be the response variable and let x_i ∈ R^m be the covariate vector of the i-th observation. However, our modelling setup also works in the more general case when the response is multidimensional, as we assume in Section 6. We denote by y*_j (or x*_j) the set of all responses y_i (or covariates x_i) in cluster A_j, with y*_j = {y_i, i ∈ A_j} (equivalently x*_j = {x_i, i ∈ A_j}). As in (2), we assume that data are independent across groups, conditionally on covariates and the cluster-specific parameters, and that they are distributed according to a regression sampling model f(·; x, θ). The cluster-specific parameters are assumed i.i.d. from a base distribution P_0. The prior on the partition depends on covariates through a similarity function.
We then assume

L(Y_1, …, Y_n | ρ_n, θ*, x_1, …, x_n) = ∏_{j=1}^{k_n} ∏_{i ∈ A_j} f(y_i; x_i, θ*_j),   (7)

θ*_1, …, θ*_{k_n} | ρ_n  iid∼  P_0,   (8)

π(ρ_n = {A_1, …, A_{k_n}} | x_1, …, x_n) ∝ ∫_0^{+∞} D(u, n) ∏_{j=1}^{k_n} c(u, n_j) g(x*_j) du,   (9)

where n_j is the size of cluster A_j, g(x*_j) is the similarity function evaluated on cluster A_j, and P_0 is a probability measure diffuse on Θ. Here D(u, n) and c(u, n_j) are those defined in (6).
The likelihood specification in (7) may be any model, from simple regression models as in Section C of the Appendix to more complex models for gap times of recurrent events, as in the case study of Section 6. The prior (9) is a perturbation of prior (5) in Section 2: when g ≡ 1, i.e., when there is no extra information from covariates, the prior mass of each cluster depends only on its size through c(u, n_j), and the prior in (9) coincides with (5); when g is a proper function of x*_j, the prior mass associated with the j-th cluster increases with g(x*_j). We remark that a distribution as in (9) is well defined only up to a normalization constant that depends on the covariates and the function g. By assuming that g takes values in (0, 1], the sum of the right-hand side of (9) over all partitions is bounded above by the corresponding sum for (5), which equals one, and hence π(ρ_n | x_1, …, x_n) in (9) is well defined.
As a key point, we observe that equation (9) is a mixture, with respect to u, of product partition models with covariates. In particular, conditionally on u, the cohesion function is given by c(u, n_j), and the mixing variable u has marginal density proportional to D(u, n) ∑_{ρ_n} ∏_{j=1}^{k_n} c(u, n_j) g(x*_j). These comments justify the name PPMx-mixt for the prior (9). Note that the covariates entering the similarity function do not necessarily need to be the same as those in the regression part of the likelihood; they can be selected specifically for each application.
For the sake of concreteness, the presentation focuses on cohesion functions arising from a specific normalized completely random measure, the normalized generalized gamma process, denoted by NGG(κ, σ, P_0). Such a choice recovers as particular cases several models commonly used in the Bayesian nonparametric literature, such as the Dirichlet process (Ferguson, 1973), the normalized inverse-Gaussian process (Lijoi et al., 2005) and the normalized σ-stable process (Pitman, 2003). When an NGG process is used directly as a mixing measure in a mixture, it induces a prior on the number of groups which is more disperse than the one from the Dirichlet process, allowing for a more flexible model specification. The intensity of the NGG process is given by

κρ(s) ds = (κ / Γ(1 − σ)) s^{−1−σ} e^{−s} ds,  κ > 0, σ ∈ [0, 1).   (10)

Parameter σ has a strong impact on the clustering. In particular, the larger it is, the more disperse is the distribution of the number of clusters. This feature mitigates the annoying rich-get-richer effect, typical of the Dirichlet process, leading to more size-balanced clusters. For more details on the behavior of σ in NGG processes see, for instance, Lijoi et al. (2007), Argiento et al. (2010) and Argiento et al. (2015).
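Under the NGG intensity, the cohesion c(u, n_j) of (6) is available in closed form, c(u, n_j) = κ Γ(n_j − σ)/Γ(1 − σ) (1 + u)^{σ−n_j}, since the integral defining it is a Gamma integral. The sketch below (a hypothetical helper, not the paper's code) computes it and checks that the predictive ratio c(u, n_j + 1)/c(u, n_j) equals (n_j − σ)/(1 + u), i.e., is proportional to n_j − σ, as used later in Section 5.

```python
from math import gamma

def ngg_cohesion(u, nj, kappa=0.5, sigma=0.1):
    # closed form of c(u, n_j) in (6) under the NGG intensity:
    # kappa * Gamma(n_j - sigma) / Gamma(1 - sigma) * (1 + u)^(sigma - n_j)
    return kappa * gamma(nj - sigma) / gamma(1 - sigma) * (1 + u) ** (sigma - nj)

u, nj, sigma = 1.3, 4, 0.1
ratio = ngg_cohesion(u, nj + 1, sigma=sigma) / ngg_cohesion(u, nj, sigma=sigma)
```

The ratio only involves n_j, σ and u, which is what makes the Gibbs allocation step of Section 4 cheap to evaluate.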

Posterior calculations
In this section we sketch the Gibbs sampler, of Pólya-urn type, for model (7)-(9). The joint law of the data and all the parameters, including the auxiliary variable u, is given by

π(y, θ*, ρ_n, u | x) ∝ D(u, n) ∏_{j=1}^{k_n} [ c(u, n_j) g(x*_j) P_0(dθ*_j) ∏_{i ∈ A_j} f(y_i; x_i, θ*_j) ].

The corresponding algorithm extends the augmented marginal Gibbs sampler for mixture models based on normalized completely random measures (Lijoi and Prünster, 2010; Favaro and Teh, 2013). We repeatedly sample from the full-conditionals below; see Section C of the Appendix for full details.
• The full-conditional of the mixing parameter u, given ρ_n, does not include terms depending on the covariates. In particular, we have

π(u | ρ_n) ∝ D(u, n) ∏_{j=1}^{k_n} c(u, n_j).

• For each j = 1, …, k_n, we independently sample from

π(θ*_j | ρ_n, y) ∝ P_0(θ*_j) ∏_{i ∈ A_j} f(y_i; x_i, θ*_j).   (12)

If f and P_0 are conjugate, this step is straightforward. If not, we resort to a different sampling strategy such as, e.g., the algorithm in Section D of the Appendix.
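For the NGG process, the full-conditional of u reduces to π(u | ρ_n) ∝ u^{n−1}(1 + u)^{k_n σ − n} exp{−(κ/σ)((1 + u)^σ − 1)}, which is non-standard; one simple option, sketched below with our own function names and step size, is a random-walk Metropolis move on v = log u.

```python
import math, random

def log_fc_u(v, n, kn, kappa=0.5, sigma=0.1):
    # log full-conditional of u under the NGG process (up to a constant),
    # written in v = log(u) so the random walk is unconstrained:
    # pi(u | rho_n) ∝ u^(n-1) (1+u)^(kn*sigma - n) exp(-kappa/sigma ((1+u)^sigma - 1))
    u = math.exp(v)
    return (n * v                      # (n-1)*log(u) plus the log-Jacobian v
            + (kn * sigma - n) * math.log1p(u)
            - kappa / sigma * ((1 + u) ** sigma - 1))

def mh_step_u(v, n, kn, step=0.5, rng=random):
    # one Metropolis step: propose v' ~ N(v, step^2), accept with the usual ratio
    prop = v + rng.gauss(0.0, step)
    if math.log(rng.random()) < log_fc_u(prop, n, kn) - log_fc_u(v, n, kn):
        return prop
    return v

random.seed(1)
v = 0.0
for _ in range(200):
    v = mh_step_u(v, n=50, kn=5)
u_draw = math.exp(v)
```

Any other sampler targeting the same one-dimensional density (e.g., slice sampling) would do; the point is that this update never touches the covariates.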
• We update the random partition ρ_n using a Gibbs sampling step in which the cluster assignment of the items is updated one at a time. We denote by ρ^{(−i)}_{n−1} the partition of n − 1 items obtained by removing the i-th item, by k^{(−i)}_{n−1} the number of clusters in this partition, and by {i ∈ A_j} the event that the i-th item is assigned to cluster j, for j = 1, …, k^{(−i)}_{n−1} + 1. The value j = k^{(−i)}_{n−1} + 1 is included to consider the case where the item forms a new cluster. We have

P(i ∈ A_j | rest) ∝ m(y_i | y*_j) · [c(u, n_j + 1) g(x*_j ∪ {x_i})] / [c(u, n_j) g(x*_j)],  j = 1, …, k^{(−i)}_{n−1},   (13)

where m(y) is the marginal density of the parametric Bayesian model defined in (12). This density is available analytically when f and P_0 are conjugate. If not, we need to modify this step; see Section D of the Appendix. In the case j = k^{(−i)}_{n−1} + 1, (13) gives the probability of a new cluster as proportional to m(y_i) c(u, 1).
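The allocation step only needs, for each candidate cluster, a marginal density of y_i and the cohesion and similarity ratios. A minimal sketch (all helper names and the toy marginal densities are our own, assuming the NGG cohesion ratio (n_j − σ)/(1 + u) and c(u, 1) = κ(1 + u)^{σ−1}):

```python
import math

def alloc_weights(yi, xi, clusters, u, g, m_cond, m_new, kappa=0.5, sigma=0.1):
    """Unnormalized allocation probabilities in the spirit of (13).

    clusters: list of (y_star_j, x_star_j) pairs; g: similarity on a list of
    covariates; m_cond(yi, y_star): marginal density of yi given the cluster's
    responses; m_new(yi): marginal density of yi for an empty cluster."""
    w = []
    for y_star, x_star in clusters:
        nj = len(x_star)
        cohesion_ratio = (nj - sigma) / (1 + u)     # c(u, nj+1) / c(u, nj), NGG
        sim_ratio = g(x_star + [xi]) / g(x_star)    # g(x*_j ∪ {x_i}) / g(x*_j)
        w.append(m_cond(yi, y_star) * cohesion_ratio * sim_ratio)
    w.append(m_new(yi) * kappa * (1 + u) ** (sigma - 1))  # new cluster: c(u, 1)
    return w

# toy ingredients (illustrative only)
def g_toy(xs):
    c = sum(xs) / len(xs)
    return math.exp(-sum(abs(x - c) for x in xs))

m_cond = lambda yi, ys: math.exp(-0.5 * (yi - sum(ys) / len(ys)) ** 2) / math.sqrt(2 * math.pi)
m_new = lambda yi: math.exp(-0.25 * yi ** 2) / math.sqrt(4 * math.pi)

w = alloc_weights(0.2, 1.0, [([0.1, 0.3], [0.9, 1.1]), ([5.0], [8.0])],
                  u=1.0, g=g_toy, m_cond=m_cond, m_new=m_new)
probs = [wi / sum(w) for wi in w]
```

With y_i = 0.2 close to the first cluster's responses, the first allocation probability dominates the second, as one would expect.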
The lack of marginal invariance of the prior for the random partition prevents us from computing posterior predictive distributions for new individuals as the integral of the sampling model with respect to the posterior distribution. However, we handle this calculation by considering the responses of the new individuals as missing data and including the associated new covariates in the set of all covariate values. For an alternative approach, based on an importance sampling re-weighting step, see Müller et al. (2011).

The choice of the similarity function
Recall that in this paper a cluster is compact when the total distance between the covariates in the cluster and the associated centroid is small. We consider similarity functions g, with 0 < g ≤ 1 (see Section 3), that quantify the compactness of a cluster through the covariates. To this end, we write

g(x*_j) := g(D_{A_j}),  D_{A_j} := ∑_{i ∈ A_j} d(x_i, c_{A_j}),   (14)

where d is a distance between vectors and c_{A_j} is the centroid of the set of covariates in cluster j, here taken to be the Fréchet mean. We assume that g is a decreasing (i.e., non-increasing) function of D_{A_j}, so that the smaller D_{A_j} is (and hence the more compact cluster A_j is), the larger the value g(x*_j). We let x_i = (x^c_i, x^b_i), where x^c_i are the available continuous covariates and x^b_i the available binary covariates. We define the function d(·, ·) in (14) as

d(x_i, x_j) = (m_c / m) d_c(x^c_i, x^c_j) + (m_b / m) d_b(x^b_i, x^b_j),   (15)

where m_c, m_b and m denote the number of continuous covariates, the number of binary covariates and the total number of covariates, respectively, while d_c denotes the Mahalanobis distance and d_b the normalized Hamming distance.
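A possible implementation of the combined distance is sketched below; the convex weighting of the two components by the proportions of continuous and binary covariates is an assumption of this sketch (the text only names the two component distances), and all function names are ours.

```python
import math

def mahalanobis(a, b, cov_inv):
    # sqrt((a-b)' * cov_inv * (a-b)) for the continuous covariates
    diff = [ai - bi for ai, bi in zip(a, b)]
    q = sum(diff[r] * sum(cov_inv[r][c] * diff[c] for c in range(len(diff)))
            for r in range(len(diff)))
    return math.sqrt(q)

def hamming_norm(a, b):
    # fraction of binary covariates on which the two donors disagree
    return sum(ai != bi for ai, bi in zip(a, b)) / len(a)

def d_mixed(xc_i, xb_i, xc_j, xb_j, cov_inv):
    # convex combination of the two distances, weighted by the proportions
    # of continuous (m_c) and binary (m_b) covariates (our assumption)
    mc, mb = len(xc_i), len(xb_i)
    m = mc + mb
    return (mc / m) * mahalanobis(xc_i, xc_j, cov_inv) + (mb / m) * hamming_norm(xb_i, xb_j)

eye2 = [[1.0, 0.0], [0.0, 1.0]]   # identity: Mahalanobis reduces to Euclidean
d_same = d_mixed([1.0, 2.0], [0, 1, 1], [1.0, 2.0], [0, 1, 1], eye2)
d_far = d_mixed([0.0, 0.0], [0, 1, 1], [3.0, 4.0], [0, 1, 0], eye2)
```

In practice cov_inv would be the inverse sample covariance of the continuous covariates, so that covariates on different scales contribute comparably.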
We propose a list of similarity functions that have proved to work reasonably well in practice: (i) g_A(t) = e^{−αt}; (ii) g_B(t) = (1 + t)^{−α}; (iii) g_C(t) = e^{−αt log(1+t)}. Here t = λD_{A_j}. The hyperparameter λ is responsible for rescaling the range of values of D_{A_j} where we evaluate the similarity function. It is the analogue of the temperature parameter defined in Dahl et al. (2017), and it tempers how the covariates impact the prior. Similarly, the power parameter α drives the influence of the covariates on the prior of the random partition, by stretching or compressing the function over its support. Typical values for α are 1/2, 1, 2. Figure 1 shows the graphs of the three similarities as functions of t ≥ 0. Similarity functions g_A and g_B are intuitive, i.e., their behaviour for t → +∞ is exponential and polynomial, respectively. As far as g_C is concerned, we have proposed the expression e^{−t log(1+t)} so that, for large t, it contrasts the asymptotic behavior of the Gamma function in the cohesion (10) induced by the NGG. Note that D_{A_j ∪ {i}} ≥ D_{A_j}, where {i} is a singleton; see Section A of the Appendix, equation (21). This implies that the function g penalizes clusters that are at the same time large and non-compact. This is exactly the feature we would like to guarantee in order to mitigate the rich-get-richer property of the cohesion function associated with the Dirichlet process.
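The three similarities and the qualitative behaviour of their ratios can be sketched as follows; the exact placement of α (as a power of the base similarity) is our reading of the text and should be treated as an assumption.

```python
import math

def g_A(t, alpha=1.0):
    # exponential tail; the ratio g_A(t + eps) / g_A(t) is constant in t
    return math.exp(-alpha * t)

def g_B(t, alpha=1.0):
    # polynomial tail; the ratio g_B(t + eps) / g_B(t) increases in t
    return (1.0 + t) ** (-alpha)

def g_C(t, alpha=1.0):
    # super-exponential tail e^{-t log(1+t)}, contrasting the Gamma growth
    # of the NGG cohesion; the ratio decreases in t
    return math.exp(-alpha * t * math.log1p(t))

eps = 0.5
ratio = lambda g, t: g(t + eps) / g(t)
```

Evaluating `ratio` at increasing t reproduces the behaviour discussed below and shown in Figure 1: constant for g_A, increasing for g_B, decreasing (the desirable case) for g_C.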
We propose a heuristic strategy to fix λ: given the available data, we estimate the increment of D_{A_j} when a new observation x_i is added, across all possible values of the size of A_j. For instance, for any cluster size n_j from 2 to n, we uniformly choose a cluster A_j of size n_j and add a point i (not in A_j), obtaining a Monte Carlo estimate of the increment D_{A_j ∪ {i}} − D_{A_j}. We average over the cluster sizes n_j, obtaining an estimate ε̂. Then we choose λ such that λε̂ = ε*, for small values of ε*, e.g., ε* = 10^{−1}, 10^{−2}, 5 × 10^{−3}. Note that the choice of ε*, and consequently of λ, calibrates the influence of the similarity function on the posterior estimated clusters, which might otherwise be over-driven by the covariate values. For a thorough discussion of the calibration of similarities in PPMx models, see Page and Quintana (2018).
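The heuristic can be sketched as follows for one-dimensional covariates; the function names and the toy centroid distance (mean and absolute deviations) are illustrative choices of this sketch, not the paper's.

```python
import random

def D_total(A, X):
    # total distance of cluster A's covariates to their centroid (here the mean)
    c = sum(X[i] for i in A) / len(A)
    return sum(abs(X[i] - c) for i in A)

def calibrate_lambda(X, eps_star=1e-2, n_rep=100, rng=random):
    # Monte Carlo estimate of the average increment D_{A ∪ {i}} - D_A over
    # uniformly chosen clusters of each size n_j = 2, ..., n-1; then lambda
    # is chosen so that lambda * eps_hat = eps_star
    n = len(X)
    incs = []
    for nj in range(2, n):
        for _ in range(n_rep):
            idx = rng.sample(range(n), nj + 1)
            A, i = idx[:-1], idx[-1]
            incs.append(D_total(A + [i], X) - D_total(A, X))
    eps_hat = sum(incs) / len(incs)
    return eps_star / eps_hat

random.seed(0)
X = [random.gauss(0.0, 1.0) for _ in range(20)]
lam = calibrate_lambda(X)
```

Smaller values of eps_star yield smaller λ, i.e., a flatter similarity and a weaker influence of the covariates on the prior, mirroring the calibration discussion above.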
The specification of a PPMx-mixt prior consists in choosing a cohesion function c(u; n_j) ≥ 0, for n_j ∈ {1, …, n}, and a non-negative similarity function g. The former quantifies the probability mass of a generic cluster A_j, j = 1, …, k_n, through its cardinality, conditionally on u > 0 and regardless of which subjects belong to A_j, while the latter formalizes the similarity of the covariates. The similarity function affects posterior inference through the predictive (13). That formula contains two factors: the first depends only on the responses y_i through the marginal density of the parametric Bayesian model defined in (12), while the second depends on the cohesion and the similarity, i.e.,

[c(u, n_j + 1) g(x*_j ∪ {x_i})] / [c(u, n_j) g(x*_j)].
Observe that the ratio between the cohesions c(u, n_j + 1) and c(u, n_j) is proportional to n_j − σ, and constitutes the usual predictive weight in NGG mixture models. Hence, we focus on the ratio between the similarity values. For λ fixed as described above, let t = λD_{A_j} and t + ε = λD_{A_j ∪ {i}}, so that ε represents the increment of the rescaled center-based distance when x_i is assigned to cluster A_j. It is then interesting to study g(t + ε)/g(t), for t > 0 and any fixed ε > 0. This ratio is smaller than or equal to 1, since g is non-increasing. It is advantageous to have a ratio that assumes small values when t is large, in order to discourage non-compact clusters. The function g_C is the only one, among the three similarities proposed here, to fulfill this requirement, as shown in Figure 1 in Section B of the Appendix. From the same figure, it is apparent that the ratio is constant for g_A and increasing for g_B. It is also interesting to study the ratio g(t + ε)/g(t) as a function of ε > 0, for any fixed value of t > 0; of course, the ratio is non-increasing in ε. Hence, when we add observation i to cluster A_j, two scenarios can occur: 1. the new observation is similar to the others belonging to A_j, so ε is small and the ratio is close to one, yielding a weak penalization of the weight of the cluster A_j ∪ {i}; 2. the new observation strongly differs from the elements of A_j, so ε is large and the ratio becomes small; in this case the model strongly penalizes the weight of the cluster A_j ∪ {i}.
A simulation study comparing the effect of the three similarity functions on the posterior distribution is shown in Section E of the Appendix. Moreover, a comparison with alternative models on benchmark data is given in Section F of the Appendix.

Blood donation data application
Our data concern new donors of whole blood who donated between January 1st, 2010 and May 15th, 2016 in the main building of AVIS Milano. By a new donor we mean a blood donor who donated for the first time after January 1st, 2010. Data are recurrent donation times, with extra information summarized in a set of covariates collected by AVIS physicians. Donors include only loyal individuals, i.e., donors for whom a new donation is expected within a finite amount of time with probability one. The resulting dataset contains 11 505 donations, made by 2 912 donors; the number of recurrent donations for each donor, including the first one, is between 2 and 21, so that the number of gap times between recurrent donations varies from 1 to 20.
The statistical focus is the clustering of donors according to their trajectories of gap times. Figure 2 reports the histogram of gap times (on the log scale) for men and women. The skewness of these histograms can be explained by the Italian law, according to which the maximum number of whole blood donations is 4 per year for men and 2 per year for women, with a minimum of 90 days between one donation and the next. Note that the minimum for men is around 4.47 on the log scale (e^{4.47} ≈ 87 days), while the median gap time for men is 121 days. For women, the distribution has a median of approximately 5.24 on the log scale: this means 189 days, which corresponds to about 6 months. Observe that donors may donate before the minimum imposed by law, under good health conditions of the donor and with the physician's consent. Figure 3 reports the mean and median trajectories of gap times for each recurrence j = 1, …, 20. The average values decrease as j increases: the larger the number of donations, the more loyal and regular the donor. Note that donors enter the study randomly over the whole time window. The number of donors for each j = 1, …, 20 is decreasing: there are 2 912 donors with at least the first gap time, but only two with 20 gap times.
Among the different covariates available, we selected those known to be associated with the gap times, according to a preliminary study (see Gianoli, 2016):
- Gender: indicator of gender, 1 if woman, 0 if man;
- Blood group: 4-level categorical variable, equal to 0, A, B, or AB;
- RH: rhesus factor, 1 if positive, 0 if negative;
- Smoke: indicator of smoking habit, i.e., 1 if the donor regularly smokes, 0 otherwise;
- Age: age in years at the first donation (at the entrance into the study);
- BMI: body mass index (at the entrance into the study).
Covariates such as weight, height and smoking habit are not directly controlled by AVIS physicians, but are reported by the donors themselves, so they can be inaccurate. As far as the age (in years) at the first donation is concerned, the sample minimum is 18 and the maximum is 68; the empirical quantiles of order 25%, 50% and 75% are 27, 35 and 44, while the empirical mean and standard deviation are 33.83 and 10.27. The corresponding sample statistics for the BMI values at the first donation are 21.56, 23.93 and 25.70 (sample quartiles) and 23.93 and 3.37 (sample mean and standard deviation).

A framework for recurrent events
Let n be the number of individuals (donors) and T_{i,t} the time of the t-th donation of donor i. We assume that 0 := T_{i,0} corresponds to the time of the first donation for each i, and that individual i is observed over the time interval [0, τ_i], where τ_i denotes the censoring time of the i-th observation. If m_i events are observed at times 0 < T_{i,1} < ⋯ < T_{i,m_i} ≤ τ_i, let W_{i,t} = T_{i,t} − T_{i,t−1}, t = 1, …, m_i, denote the waiting times (gap times) between the events of subject i, and let W_{i,m_i+1} = T_{i,m_i+1} − T_{i,m_i}, with T_{i,m_i+1} > τ_i, denote the (m_i + 1)-th gap time of the i-th donor, censored at time τ_i. We assume that the study has been administratively censored, i.e., censoring and observations are independent. Further, our approach considers the times of all first donations as known. We aim at modelling the waiting times W_{i,t}, t = 1, …, m_i, for i = 1, …, n, incorporating exogenous information, in the form of covariates, in the prior distribution of the latent partition.
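In code, the observed quantities for one donor reduce to the m_i gaps and a censoring lower bound for the (m_i + 1)-th gap (a minimal sketch; the function name is ours):

```python
def gap_times(event_times, tau):
    """event_times: [0 = T_0, T_1, ..., T_m] with T_m <= tau.

    Returns the m observed gap times W_t = T_t - T_{t-1} and the lower
    bound tau - T_m for the censored (m+1)-th gap time."""
    obs = [t1 - t0 for t0, t1 in zip(event_times, event_times[1:])]
    return obs, tau - event_times[-1]

# a donor with three donations after entry, censored at day 400
obs, cens_bound = gap_times([0, 90, 210, 330], 400)
```

The censored gap is only known to exceed the returned bound, which is the piece of information entering the likelihood evaluation described below.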
Let Y_{i,t} = log(W_{i,t}) for all i and t, and let Y_i = (Y_{i,1}, …, Y_{i,m_i+1}). We assume that the gap times are conditionally independent within clusters, as in (2), but, differently from Section 3, each Y_i has dimension m_i + 1. Furthermore, the evaluation of the sampling distribution includes the information on the censoring of the (m_i + 1)-th gap time for each i = 1, …, n. For each t = 1, …, m_i + 1 and each i in cluster A_j, j = 1, …, k_n, we assume that

Y_{i,t} | η_{i,t}, β, θ* ∼ N( x_i^T β_0 + x_{it}^T β_t + α_{s_i} + ψ_{s_i} η_{i,t}, σ²_{s_i} ),   (16)

where the η_{i,t} are latent variables from the standard half-normal distribution and s_i denotes the cluster allocation of individual i. Note that (16) corresponds to assuming that Y_{i,t} has a skew-normal distribution. This is motivated by the strong asymmetry of the distribution of the logarithm of the gap times (see Figure 2). Skew-normal mixture models have been successfully employed in various contexts; in the Bayesian framework see, for instance, Bayes and Branco (2007), Frühwirth-Schnatter and Pyne (2010), Arellano-Valle et al. (2007) and Canale et al. (2016). For a definition and properties of the skew-normal distribution, see Azzalini (2005) and Arellano-Valle and Azzalini (2006).
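The half-normal augmentation in (16) corresponds to the stochastic representation Y = loc + ψ|Z| + σε, with Z and ε independent standard normals. The sketch below (illustrative code, with made-up cluster parameters) checks by simulation the moment formulas quoted in the next paragraph.

```python
import math, random

def rsn(loc, psi, sigma2, rng=random):
    # skew-normal draw via the half-normal representation behind (16):
    # Y = loc + psi*|Z| + sqrt(sigma2)*eps, Z and eps standard normal
    return loc + psi * abs(rng.gauss(0.0, 1.0)) + math.sqrt(sigma2) * rng.gauss(0.0, 1.0)

random.seed(7)
alpha_j, psi_j, sigma2_j = 4.8, 0.6, 0.09   # illustrative cluster parameters
draws = [rsn(alpha_j, psi_j, sigma2_j) for _ in range(200_000)]
mean = sum(draws) / len(draws)
var = sum((y - mean) ** 2 for y in draws) / len(draws)
# mean ≈ alpha_j + psi_j*sqrt(2/pi); var ≈ sigma2_j + psi_j^2*(1 - 2/pi)
```

A positive ψ skews the distribution to the right, which is the direction of asymmetry seen in the log gap-time histograms of Figure 2.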
We use the parameterization of Frühwirth-Schnatter and Pyne (2010), which involves a skewness parameter in addition to location and scale parameters. Note that, in (16), the conditional distributions of the gap times on the log scale in cluster A_j share the group-specific parameter θ*_j = (α_j, ψ_j, σ²_j), where α_j is the random intercept, ψ_j/σ_j is the skewness parameter and σ_j is a scale parameter. From (16), the expectation of Y_{i,t}, in addition to the two linear terms, is α_j + ψ_j √(2/π), while its variance is σ²_j + ψ²_j (1 − 2/π). As far as the linear predictor is concerned, we distinguish the regression parameters corresponding to fixed-time covariates (β_0) from the parameters referring to time-varying covariates (β_t); here x_i includes p_1 fixed-time covariates and x_it denotes p_2 time-varying covariates. No intercept is included in the linear predictor, to avoid identification issues with the cluster-specific random intercept α_j. The prior we assume is the following:

(ρ_n | x_1, …, x_n) ∼ PPMx-mixt,   (17)
θ*_1, …, θ*_{k_n} | ρ_n  iid∼  P_0,   (18)
β_0 ∼ N_{p_1}(0, Σ_0),   (19)
β_t | ξ²_1, …, ξ²_{p_2} ∼ N_{p_2}(0, diag(ξ²_1, …, ξ²_{p_2})) independently for t = 1, …, J, with ξ²_l  iid∼  IG(ν_0, τ_0).   (20)

Note that the number k_n of cluster-specific parameters is determined by ρ_n, and it is random. The notation IG(·; a, b) denotes the inverse-gamma density with mean b/(a − 1). Here diag(ξ²_1, …, ξ²_{p_2}) is a diagonal matrix with entries ξ²_1, …, ξ²_{p_2}. In our specific case, p_2 = 1 and the distributions of β_1, …, β_J collapse to a univariate Gaussian distribution, where J = max_i (m_i + 1) is the maximum number of gap times. The notation PPMx-mixt refers to the prior described in Section 3. We assume that the cohesion function c(u, n_j) and the intensity ρ(s)ds are those corresponding to the NGG process. Note that the choice of P_0 yields conjugacy of the associated full-conditionals (see Frühwirth-Schnatter and Pyne, 2010).
The same covariates may enter both the linear predictor and the prior of the random partition. In this application, after covariate selection via LPML (log pseudo marginal likelihood) evaluation, we include Gender, Blood group, RH, Smoke and BMI (at the first donation) in the linear term, so that p_1 = 7 when counting dummy variables. The only time-varying covariate included in the linear term is the age at the t-th donation. Only static covariates enter the prior of the random partition: Gender, Blood group, RH, Smoke, Age at the first donation and BMI at the first donation.

Posterior inference
To perform posterior inference for model (16)–(20), we modify the Gibbs sampler in Section C of the Appendix to take into account the likelihood for recurrent events; see Section D of the Appendix for details. We fix hyperparameters as follows: the covariance matrix Σ_0 is set equal to diag(1, ..., 1) and (ν_0, τ_0) = (2, 1), so that ξ_1^2 (recall that p_2 = 1) has prior mean equal to 1 and infinite prior variance; we assume α_0 = ψ_0 = 0 and κ = 0.5 (see (10)). Since n_j − σ is the unnormalized weight with which a new item is assigned to cluster A_j, σ in (10) is a key hyperparameter; hence, we consider three different values of σ and report the associated posterior estimates in Table 2 for sensitivity analysis. The distance d(x_i, x_j) entering the definition of the similarity functions is defined in (15). Every run of the Gibbs sampler produced a final sample of 10,000 iterations, after a burn-in of 5,000 iterations. In all simulations, convergence was checked using both visual inspection and standard diagnostics. Table 2 shows the number of clusters of the estimated partition and LPML (Christensen et al., 2010) values with similarity functions g_C and g ≡ 1 (no covariates in the prior), for different values of the temperature hyperparameter λ and the reinforcement parameter σ. By definition, the larger the LPML, the better the model fits the data. There is a clear effect of σ, λ and covariates (through g_C) on LPML: when λ or σ increase, LPML decreases, corresponding to a poorer fit of the model. Values of LPML for g_C are much larger than in the case g ≡ 1. For each of the hyperparameter values in the table, we have computed an estimate of the random partition of the sample donors by minimizing a posteriori the expectation of the variation of information (VI) loss function (see, for instance, Wade and Ghahramani, 2018). We report the number K_VI of estimated clusters in Table 2.
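LPML is conventionally computed from conditional predictive ordinates (CPO): CPO_i is the harmonic mean of the likelihood of observation i over the MCMC draws, and LPML = Σ_i log CPO_i. The sketch below (our own helper, not the paper's code) assumes an (M × n) matrix of per-iteration, per-observation log-likelihood evaluations and works on the log scale for numerical stability.

```python
import numpy as np
from scipy.special import logsumexp

def lpml(loglik):
    """LPML from an (M x n) matrix of log-likelihood values:
    CPO_i = harmonic mean over draws of the likelihood of y_i,
    LPML = sum_i log CPO_i, computed stably via logsumexp."""
    M = loglik.shape[0]
    # log CPO_i = log M - logsumexp_m(-loglik[m, i])
    log_cpo = np.log(M) - logsumexp(-loglik, axis=0)
    return float(log_cpo.sum())

# Toy illustration: with constant log-likelihoods, CPO_i equals the
# likelihood itself, so LPML reduces to the total log-likelihood.
ll = np.full((100, 5), -1.3)
print(lpml(ll))  # ≈ -6.5
```

Larger LPML values indicate a better fit, which is the criterion used for model comparison in Table 2.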
Since the number of visited partitions is quite large, following Wade and Ghahramani (2018) we consider, starting from the MCMC estimate of the posterior similarity matrix, all the partitions produced by a hierarchical clustering algorithm with complete linkage. Then, as the point estimate, we select the partition that achieves the minimum value of the posterior expected loss. K_VI is robust with respect to the effect of covariates in the prior and to changes in σ and λ, though, as expected, K_VI increases with σ. This is an aspect of the well-known trade-off between estimation of the number of clusters and posterior predictive checks, especially in the case of misspecified models (see, for instance, Beraha et al., 2022, Section 7): predictive performance generally improves with a large number of clusters.
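The search just described can be sketched as follows. This is our own compact illustration, not the paper's implementation: it assumes an (M × n) matrix of MCMC cluster-label draws, builds the posterior similarity matrix, cuts a complete-linkage dendrogram at every possible number of clusters, and returns the candidate minimizing the Monte Carlo estimate of the expected VI (feasible only for moderate n).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def vi(a, b):
    """Variation of information between two partitions (label vectors)."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    _, ai = np.unique(a, return_inverse=True)
    _, bi = np.unique(b, return_inverse=True)
    cont = np.zeros((ai.max() + 1, bi.max() + 1))
    np.add.at(cont, (ai, bi), 1.0)          # contingency table
    p = cont / n
    pa, pb = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))   # entropy
    mi = np.sum(p[p > 0] * np.log(p[p > 0] / np.outer(pa, pb)[p > 0]))
    return h(pa) + h(pb) - 2.0 * mi

def vi_point_estimate(draws):
    """draws: (M x n) matrix of MCMC cluster labels. Candidate partitions
    come from cutting a complete-linkage tree built on 1 - PSM; the one
    minimizing the average VI to the posterior draws is returned."""
    draws = np.asarray(draws)
    M, n = draws.shape
    psm = np.mean(draws[:, :, None] == draws[:, None, :], axis=0)
    Z = linkage(squareform(1.0 - psm, checks=False), method="complete")
    best, best_loss = None, np.inf
    for k in range(1, n + 1):
        cand = fcluster(Z, t=k, criterion="maxclust")
        loss = np.mean([vi(cand, d) for d in draws])
        if loss < best_loss:
            best, best_loss = cand, loss
    return best

truth = np.array([0, 0, 1, 1, 2, 2])
draws = np.tile(truth, (10, 1))   # degenerate chain: same partition every draw
print(vi_point_estimate(draws))   # recovers the three blocks (up to labels)
```

In the degenerate example the posterior similarity matrix is block diagonal, so the complete-linkage cut at k = 3 attains zero expected VI.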
The best model fit, in terms of LPML, is obtained when λ = 0.100 and σ = 0.150. The rest of the posterior inference reported below is computed for these values of the hyperparameters. Note that σ = 0.001 in Table 2 approximates the cohesion function yielded by the Dirichlet process as in Müller and Quintana (2010) and Müller et al. (2011) (though they use a different similarity).

Table 3: Posterior summaries of the fixed-time regression coefficients β_0 for the blood donation application.
Table 3 shows posterior means of the regression coefficients of the fixed-time covariates. All the fixed-time covariates included in the study are significantly different from zero; see the last column. The average log-gap time increases for donors with blood group 0, A and B with respect to the reference level AB. As expected, women exhibit longer gap times, in accordance with Italian law. Figure 4 shows the regression coefficients for the only time-dependent covariate included in the study (age of the donor). These parameters are significantly different from zero for all donation occasions. Further, as the donation occasion increases, the impact of age on the log-gap time decreases in magnitude, implying that loyal donors (i.e., donors who have donated many times) are less subject to age differences. Figure 5 shows the trajectories of the observed log-gap times grouped by the estimated clusters, as explained before. It is clear from the cluster sizes that the rich-get-richer property of the cohesion associated with the Dirichlet process is mitigated here. We do not observe substantial differences among the log-gap times in the estimated clusters, but Cluster 1 seems to group longer trajectories (see also the last column of Table 4).

Table 4: Empirical summaries of the covariates within each estimated cluster for the blood donation application.

Table 4 reports empirical summaries of the covariates (included in the prior) within each estimated cluster. We report empirical means for the continuous covariates and empirical frequencies for the binary or categorical covariates. The last column displays the empirical average and standard deviation of the number of recurrences (the m_i's) per cluster. Cluster 1 groups older donors, since the cluster mean is one standard deviation above the overall mean. These donors also have a slightly higher BMI. Cluster 2 contains donors whose age at the first donation is close to the overall mean (33.83 years), while Cluster 3 groups younger donors than Cluster 2, with a lower percentage of smokers. Clusters 4 and 5 group very young donors, and Cluster 5 is mostly made up of men.
To study whether the inclusion of g_C in the prior affects posterior predictive inference, we consider a cross-validation approach in which we subsample 50 different training subsets, each containing 90% of the donors. For each training subset we compute the associated posterior and use the remaining donors as a test set for prediction. We adopt the same model specification given in Section 6.2, with λ = 0.1 and σ = 0.15. Posterior predictive inference has been computed as explained at the end of Section 4. We also compare with the case g ≡ 1. For each training subsample, we ran the Gibbs sampler for 3,000 iterations, of which 2,000 were discarded as burn-in. The root mean squared errors (rMSE) between observed and predicted log-gap times are 1.825 and 2.242 for the PPMx-mixt with g_C and the model with g ≡ 1, respectively; i.e., the rMSE decreases by 18.6% when the covariates are included in the partition's distribution through the similarity function g_C.
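The comparison above reduces to two simple computations, sketched here for concreteness (our own helper names; the numeric values are the ones quoted in the text).

```python
import numpy as np

def rmse(y_obs, y_pred):
    """Root mean squared error between observed and predicted log-gap times."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_obs - y_pred) ** 2)))

# Relative improvement quoted in the text: rMSE 1.825 (PPMx-mixt with g_C)
# versus 2.242 (g == 1) is a reduction of about 18.6%.
reduction = (2.242 - 1.825) / 2.242
print(f"{100 * reduction:.1f}%")  # 18.6%
```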

Discussion
In this work we propose a regression model for gap times of recurrent events, whose parameterization includes the partition ρ_n of the blood donors. The prior we assume includes covariate information, encouraging two individuals to be co-clustered if they have similar covariate values. We have seen that including covariate information improves the posterior predictive performance and helps interpret the estimated clusters in terms of covariates. Thanks to the introduction of a latent variable u > 0, we are able to express the cohesion function in the prior, and hence the whole prior for the random partition of the sample, as a mixture of PPMx models. Our new prior on the random partition is given via cohesion and similarity functions, as specified by PPMx models (Müller et al., 2011).
Differently from Müller et al. (2011), we assume a wide class of non-increasing similarity functions of the cluster compactness, taking values in (0, 1]. We propose three examples of such non-increasing functions, emphasizing their properties and their effects on the posterior predictive distribution of the model. Cross-validated posterior predictive root mean squared errors for the AVIS dataset show that the inclusion of the similarity function g in the prior for the random partition yields a lower value than in the case with no covariates in the prior. For our motivating application, we have introduced a model for recurrent data whose gap times have skewed distributions. The model also includes cluster-specific random effects modelled via the PPMx-mixt. We estimate five clusters of homogeneous donors. This grouping helps in identifying characteristic and important features (covariates) of individuals, resulting in a more accurate prediction of the time of a future donation.
An interesting characteristic of our model is that, though it clusters donors' gap-time trajectories, when we aim at interpreting the estimated clusters in terms of the number of distinct gap-time trajectories in the responses, we should also consider that the prior we assume for the random partition of the sample subjects is covariate-dependent. In fact, some of the estimated clusters are similar when looking at the response trajectories but different when looking at the covariates. We consider this aspect an advantage of all models with a covariate-dependent prior for the random partition, including ours, since it allows for greater flexibility in clustering, rather than an inadequacy.
As far as the sensitivity of the model to the distance d is concerned, different choices can be considered and combined with the functions g_A, g_B and g_C that we used in the manuscript, but not all distances satisfy the monotonicity properties described in Section A of the Appendix. The similarity functions g that we propose must be calibrated via the parameter λ, and we discuss how to fix it. This is a key parameter that prevents the covariates from overpowering the likelihood in determining the clusters. For a thorough discussion on the calibration of similarity functions in PPMx models, see Page and Quintana (2018).
The main pitfall of our strategy is its computational cost. Future work may consider the use of approximate sampling strategies to overcome this limitation. Model flexibility can be enhanced by assuming some of the hyperparameters of the model, such as those in the cohesion and the similarity functions, to be random. However, this extension is certainly not trivial, due to the intractability of the normalization constant M_g(x_1, ..., x_n) in the prior distribution of the random partition.

Rastelli, R. and Friel, N. (2018). Optimal Bayesian estimators for latent variable cluster models. Stat. Comput., 28(6):1169–1186.
Regazzini, E., Lijoi, A., and Prünster, I. (2003).

A  D_{A_j} is decreasing with the size of A_j

Let A_j be an element of the partition of the sample labels. Since D_{A_j} = Σ_{i∈A_j} d(x_i, c_{A_j}) and the Fréchet mean of order one is defined as the minimizer c_{A_j} of Σ_{i∈A_j} d(x_i, c) over c, it is easy to check that the stated property holds.

B  Supplementary figure

Let t = D_{A_j} and t + ε = D_{A_j ∪ {i}}, where ε represents the increment of the average center-based distance when {x_i} is assigned to cluster A_j. Figure 6 shows the ratio g(t + ε)/g(t), for different values of ε and similarity functions g_A, g_B and g_C. See Section 5 in the manuscript.

C  Gibbs sampler for the Gaussian kernel with the linear regressor

In this section, we illustrate the Gibbs sampler Pólya urn scheme for model (7)–(9) in the context of linear regression, i.e. when f(y*_j | x*_j, θ*_j) = Π_{i∈A_j} φ(y_i; x_i β*_j, σ*_j^2). In this case the cluster-specific parameter is θ*_j = (β*_j, σ*_j^2), where j = 1, ..., k_{n−1} + 1, and x := {x_i}_{i=1}^n. Moreover, observe that, for any l = 1, ..., k_{n−1}, the prior on the partition can be written accordingly, while, since g(∅) = 1, the conditional probability of assigning item i to a new cluster follows analogously. The contribution of the likelihood in (22) can be written as m(y*_j), where m(∅) = 1 in the case of a new cluster. Therefore, (22) becomes proportional to

c(u, n_j + 1) g(x*_j ∪ {x_i}) / ( c(u, n_j) g(x*_j) ),  j = 1, ..., k,

and similarly for a new cluster. Each s_i is sequentially sampled according to this law. Observe that, because of conjugacy, m(y*_j) is available analytically and equals a Student's t density.
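The center-based compactness D_{A_j} from Section A can be sketched numerically. This is an illustration under an explicit simplifying assumption: the Fréchet mean of order one is approximated by the medoid, i.e. the minimizer of Σ_i d(x_i, c) restricted to the cluster points themselves; the function names are ours.

```python
import numpy as np

def frechet_mean_medoid(points, dist):
    """Approximate Frechet mean of order one: minimize sum_i d(x_i, c)
    over c restricted to the cluster points (the medoid) -- a
    simplification used here purely for illustration."""
    costs = [sum(dist(x, c) for x in points) for c in points]
    return points[int(np.argmin(costs))]

def compactness(points, dist):
    """Center-based compactness D_A = sum_{i in A} d(x_i, c_A)."""
    c = frechet_mean_medoid(points, dist)
    return sum(dist(x, c) for x in points)

euclid = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
cluster = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(compactness(cluster, euclid))  # 2.0 (the medoid is the origin)
```

Adding a far-away point to the cluster increases D_{A_j}, which is what the similarity functions g penalize.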

D Gibbs sampler for the blood donations application
In this section we describe a Gibbs sampler for the posterior of model (16)–(20). The state of the Markov chain is as follows.
The full-conditionals are outlined below: we provide the details of the computation only when the conditional posterior distribution is not straightforward.As before, L(• | −) indicates that we consider the law of the variable • given all remaining variables − (including the data).
[1] Update the latent variables η_{i,t}: conditionally on s_i = j, each η_{i,t} is independently sampled from its full-conditional, which turns out to be a truncated normal, independently for each t = 1, ..., m_i + 1 and each i = 1, ..., n such that s_i = j.
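Sampling from a normal distribution truncated to (0, +∞), as required for the latent η_{i,t}, can be done with scipy's `truncnorm`, whose bounds are expressed on the standardized scale. The mean and standard deviation below are arbitrary placeholders, not the paper's full-conditional formulas.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_trunc_normal(mu, sd, lower, upper, size, rng=None):
    """Draw from N(mu, sd^2) truncated to (lower, upper).
    scipy's truncnorm takes standardized bounds (lower-mu)/sd, (upper-mu)/sd."""
    a, b = (lower - mu) / sd, (upper - mu) / sd
    return truncnorm.rvs(a, b, loc=mu, scale=sd, size=size, random_state=rng)

# Illustration: a latent variable truncated to (0, +inf) with
# placeholder conditional mean and standard deviation.
rng = np.random.default_rng(1)
eta = sample_trunc_normal(mu=-0.5, sd=1.0, lower=0.0, upper=np.inf,
                          size=10_000, rng=rng)
print(eta.min() > 0.0)  # True
```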
[2] Update the censored values: the censored observations are independently sampled from their full-conditional. Here y_{i,m_i} is the last observed gap time (on the log scale) for each i, and the remaining quantity is the amount of time, on the log scale, between the censoring time and the time of the last observed event.
[5] Update the dispersion parameters ξ_1^2, ..., ξ_{p_2}^2: each parameter is independently sampled from its full-conditional.

[6] Update the cluster-specific parameters: the likelihood for data in cluster A_j used to build the joint full-conditional of (α_j, ψ_j, σ_j^2) is that of a regression model in which the regression parameters are α_j and ψ_j and the residual variance is σ_j^2; with prior P_0 as specified in (20), that is, the usual conjugate prior, the full-conditionals of these parameters are of standard form.
[7] Update the latent partition of the data: we adapt Algorithm 8 in Neal (2000) to handle the non-conjugacy between the kernel density and P_0 and to take into account the predictive structure of the PPMx-mixt prior. In the non-conjugate case, the full-conditionals of the cluster allocations depend on a vector of cluster-specific parameters. This vector must be augmented with R new auxiliary variables (α_r, ψ_r, σ_r^2), sampled from the prior P_0 and representing potential new clusters. To improve mixing, this augmentation step is implemented adopting the re-use strategy of Favaro and Teh (2013). In particular, the probability of assigning the i-th subject to cluster j, j = 1, 2, ..., k_n^{−i}, similarly to (23), involves the corresponding predictive weight, where ρ_{n−1}^{−i} has the same definition as in the previous section. Analogously, observing that g(∅) = 1, the unnormalized allocation weight is proportional to n_j − σ if n_j > 0, and to κ(u + 1)^σ for one of the R new clusters (n_j = 0).
[8] Update the latent parameter u: given the partition ρ_n, the auxiliary variable u and the data are independent, so that L(u | −) ∝ u^{n−1} e^{−Ψ(u)} Π_{j=1}^{k_n} c(u, n_j), with Ψ(u) = κ ∫_0^{+∞} (1 − e^{−us}) ρ(s) ds; in the case of the NGG process, this full-conditional simplifies. In particular, to sample from this distribution, we use a simple Metropolis–Hastings update with a Gaussian proposal kernel truncated to (0, +∞).
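A Metropolis–Hastings step with a Gaussian proposal truncated to (0, +∞) can be sketched as below. Because the truncation makes the proposal asymmetric, the acceptance ratio must include the proposal-density correction. The Gamma(3, 1) target is an arbitrary placeholder standing in for the full-conditional of u, and all names are our own.

```python
import numpy as np
from scipy.stats import truncnorm

def mh_step_u(u, log_target, step, rng):
    """One MH update for u > 0 with a Gaussian proposal truncated to
    (0, +inf); the asymmetric proposal requires the q-ratio correction."""
    a_cur = -u / step                     # standardized lower bound at 0
    prop = truncnorm.rvs(a_cur, np.inf, loc=u, scale=step, random_state=rng)
    a_prop = -prop / step
    log_q_fwd = truncnorm.logpdf(prop, a_cur, np.inf, loc=u, scale=step)
    log_q_bwd = truncnorm.logpdf(u, a_prop, np.inf, loc=prop, scale=step)
    log_alpha = log_target(prop) - log_target(u) + log_q_bwd - log_q_fwd
    return prop if np.log(rng.uniform()) < log_alpha else u

# Placeholder target: unnormalized Gamma(3, 1) log-density, mean 3.
rng = np.random.default_rng(2)
log_target = lambda u: 2.0 * np.log(u) - u
chain = [1.0]
for _ in range(5000):
    chain.append(mh_step_u(chain[-1], log_target, step=1.0, rng=rng))
u_samples = np.array(chain[1000:])
print(round(float(u_samples.mean()), 1))  # should be near 3.0
```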
We run the algorithm described in Section C of the Appendix to obtain 5,000 final iterations, after a burn-in of 10,000. We computed the estimated partition as the partition ρ_n minimizing the posterior expectation of the variation of information loss function with equal misclassification costs (Wade and Ghahramani, 2018; Rastelli and Friel, 2018). When classifying the datapoints according to this cluster estimate, we found that the misclassification rates are 1%, 2.5% and 8.5% for g_C, g_A (α = 1) and g ≡ 1, respectively, when λ = 0.5.
We have also computed posterior predictive distributions as explained at the end of Section 4. Figure 8 reports the posterior predictive distribution for a new individual with the same covariate vector as the first subject in the sample. It is clear from the figure that the prediction is more precise under g_A and g_C. In fact, when we do not include covariate information in the prior for the random partition, the posterior predictive density is not able to distinguish to which of the three groups the item belongs, and the three peaks have approximately the same height. In contrast, when g_A or g_C is included in the prior, the posterior predictive density exhibits one main peak, so that covariate information helps, in this case, in selecting the right group for the observation. This is also confirmed by the misclassification rates reported above.
We adopt, as the conditional distribution of data in cluster A_j (see (7)), f(y*_j | x*_j, θ*_j) = Π_{i∈A_j} φ(y_i; x_i β_j, σ_j^2), where φ(y_i; x_i β_j, σ_j^2) is the univariate Gaussian density with mean x_i β_j and variance σ_j^2. The linear term includes the intercept. We assume the prior as in Section 3, with similarity function g_C and distance d(x_1, x_2) as described in Section 5. The prior P_0 of the cluster-specific parameters is a normal-inverse-gamma distribution, with (β_j, σ_j^2) ∼ N_4(β_0, σ_j^2/κ_0 I_{4×4}) × IG(σ_j^2; a_0, b_0).
To replicate the tests in Müller et al. (2011) and Bianchini et al. (2020), a total of M = 100 datasets of size 200 were generated by randomly subsampling 200 out of the 1,000 available observations. We compute the root MSE (over the 100 datasets) for estimating E(Y | x_1, x_2, x_3) for each of the 12 covariate combinations (see details in Müller et al., 2011). We fix P_0 via an empirical Bayes approach, using the overall sample mean and variance of the responses. Table 5 displays the root MSE under our model when the hyperparameters in the cohesion function are fixed at σ = 0.2 and κ = 0.001; this corresponds to assuming that, without covariate effect (i.e. when g ≡ 1), the prior number of clusters has mean 3 and variance 5.8. The parameter λ in g_C has been fixed to 0.041, with ε* in Section 5 equal to 0.1; the table also reports the last two columns of Table 7 in Bianchini et al. (2020).

Table 5: Root MSE for estimating E(Y | x_1, x_2, x_3) for 12 combinations of covariates (x_1, x_2, x_3), with PPMx and DPP as competing models of reference; the last two columns are those in Table 7 of Bianchini et al. (2020).
The PPMx-mixt shows performance comparable to the competitors DPP and PPMx of Bianchini et al. (2020) and Müller et al. (2011), respectively. Note that our model exhibits more variability across the different combinations of covariates.

Figure 1 :
Figure 1: Left: plot of the similarity functions for α = 1 and λ = 1; right: plot of g C for different values of λ.

Figure 2 :
Figure 2: Histogram of the logarithm of the observed gap times grouped by gender; male donors on the left and female donors on the right. The red line denotes the minimum waiting time between two donations according to Italian law. The black continuous and dashed lines denote the empirical median and mean, respectively.

Figure 5 :
Figure 5: Recurrent gap times (on the log scale) by estimated cluster for the blood donation application. For each cluster we draw the sample mean (continuous line), the sample median (dashed line) and the 90% sample quantile band. The black continuous and dashed lines denote the overall mean and median, respectively, and the number n denotes the cluster size.

Figure 7 :
Figure 7: Simulated data. Left to right: histogram of the response variable y_i, scatterplots of x_{i1} and x_{i2} (continuous covariates) versus the response variable, and scatterplots of x_{i3} and x_{i4} (discrete covariates) versus the response variable. Different colours represent the three different groups from which the data have been generated.

Table 1 :
Empirical frequencies of the static covariates. Table 1 shows the empirical frequencies for the categorical covariates listed above.

Table 2 :
LPML and number of clusters in the estimated partition, obtained by minimizing the posterior expected VI, for the blood donation data, for different values of λ and σ and similarities g_C and g ≡ 1. Highlighted: the best model in terms of LPML.