Conformalized Survival Analysis

Existing survival analysis techniques heavily rely on strong modelling assumptions and are, therefore, prone to model misspecification errors. In this paper, we develop an inferential method based on ideas from conformal prediction, which can wrap around any survival prediction algorithm to produce calibrated, covariate-dependent lower predictive bounds on survival times. In the Type I right-censoring setting, when the censoring times are completely exogenous, the lower predictive bounds have guaranteed coverage in finite samples without any assumptions other than that of operating on independent and identically distributed data points. Under a more general conditionally independent censoring assumption, the bounds satisfy a doubly robust property which states the following: marginal coverage is approximately guaranteed if either the censoring mechanism or the conditional survival function is estimated well. Further, we demonstrate that the lower predictive bounds remain valid and informative for other types of censoring. The validity and efficiency of our procedure are demonstrated on synthetic data and real COVID-19 data from the UK Biobank.


Introduction
The COVID-19 pandemic has placed extraordinary demands on health systems (e.g., Ranney et al., 2020). In turn, these demands create an unavoidable need for medical resource allocation and, in response, several groups of researchers have communicated clinical ethics recommendations (e.g., Emanuel et al., 2020; Vergano et al., 2020). By and large, these recommendations require a reliable individual risk assessment for patients who test positive; see Table 2 of Emanuel et al. (2020). Clearly, one risk measure of interest might be the survival time, the time lapse between the confirmation of COVID-19 and an event such as death or reaching a critical state, should this ever occur.

Survival analysis
Survival times are not always observed due to censoring (Leung et al., 1997). A main goal of survival analysis is to infer the survival function, that is, the probability that a patient will survive beyond any specified time, from censored data. The Kaplan-Meier curve (Kaplan and Meier, 1958) produces such an inference when the population under study is a group of patients with certain characteristics. On the positive side, the Kaplan-Meier curve does not make any assumption on the distribution of survival times. On the negative side, it can only be applied to a handful of subpopulations because it requires sufficiently many events in each subgroup (Kalbfleisch and Prentice, 2011). More often than not, the scientist has available multiple categorical and continuous covariates, and it thus becomes of interest to understand heterogeneity by studying the conditional survival function, that is, the dependence of survival on the available factors. In the conditional setting, however, distribution-free inference for the conditional survival function becomes challenging. Standard approaches make parametric or nonparametric assumptions about the distribution of the covariates and that of the survival times conditional on covariate values. A well-known example is of course the celebrated Cox model, which posits a proportional hazards model in which an unspecified nonparametric baseline hazard is modified via a parametric model describing how the hazard varies in response to explanatory covariates (Cox, 1972; Breslow, 1975). Other popular models, such as accelerated failure time (AFT) models (Cox, 1972; Wei, 1992) and proportional odds models (Murphy et al., 1997; Harrell Jr, 2015), also combine nonparametric and parametric model specifications.
As medical technologies produce ever larger and more complex clinical datasets, we have witnessed a rapid development of machine learning methods adapted to high-dimensional and heterogeneous survival data (e.g., Verweij and Van Houwelingen, 1993;Faraggi and Simon, 1995;Tibshirani, 1997;Gui and Li, 2005;Hothorn et al., 2006;Zhang and Lu, 2007;Ishwaran et al., 2008;Witten and Tibshirani, 2010;Goeman, 2010;Simon et al., 2011;Katzman et al., 2016;Lao et al., 2017;Wang et al., 2019;Li and Bradic, 2020). An appealing feature of these methods is that they typically do not make modeling assumptions. To quote from Efron (2020): " Neither surface nor noise is required as input to randomForest, gbm, or their kin." The downside is that it is often challenging to quantify the uncertainty for these methods. To be sure, blind application of off-the-shelf uncertainty quantification tools, such as the bootstrap (Efron, 1979;Efron and Tibshirani, 1994), can yield unreliable results since their validity 1) rests on implicit modeling assumptions, and 2) holds only asymptotically (e.g., Lei and Candès, 2021;Ratkovic and Tingley, 2021).

Prediction intervals
For decision-making in sensitive and uncertain environments, such as the COVID-19 pandemic, it is preferable to produce prediction intervals for the uncensored survival time with guaranteed coverage rather than point predictions. In this regard, the use of (1 − α) prediction intervals is an effective way of summarizing what can be learned from the available data; wide intervals reveal a lack of knowledge and keep overconfidence at arm's length. Here and below, an interval is said to be a (1 − α) prediction interval if it has the property that it contains the true label, here, the survival time, at least 100(1 − α)% of the time (a formal definition is in Section 2). Prediction intervals have been widely studied in statistics (e.g., Wilks, 1941; Wald, 1943; Aitchison and Dunsmore, 1980; Stine, 1985; Geisser, 1993; Vovk et al., 2005; Krishnamoorthy and Mathew, 2009) and much research has been concerned with the construction of covariate-dependent intervals.
Of special interest is the subject of conformal inference, a generic procedure that can be used in conjunction with sophisticated machine learning prediction algorithms to produce prediction intervals with valid marginal coverage without making any distributional assumption whatsoever (e.g., Saunders et al., 1999;Vovk, 2002;Vovk et al., 2005;Lei and Wasserman, 2014;Tibshirani et al., 2019). While coverage is only guaranteed in a marginal sense, it has been theoretically proved and empirically observed that some conformal prediction methods can also achieve near conditional coverage-that is, coverage assuming a fixed value of the covariates-when some key parameters of the underlying conditional distribution can be estimated reasonably well (e.g., Sesia and Candès, 2020; Lei and Candès, 2021).

Our contribution
Standard conformal inference requires fully observed outcomes and is not directly applicable to samples with censored outcomes. In this paper, we extend conformal inference to handle right-censored outcomes in the setting of Type I censoring (e.g., Leung et al., 1997). This setting assumes that the censoring time is observed for every unit while the outcome is only observed for uncensored units. In particular, we generate a covariate-dependent lower prediction bound (LPB) on the uncensored survival time, which can be regarded as a one-sided (1 − α) prediction interval. As we just argued, the LPB is a conservative assessment of the survival time, which is particularly desirable for high-stakes decision-making. A low LPB value suggests either a high risk for the patient, or a high degree of uncertainty for similar patients due to data scarcity. Either way, the signal to a decision-maker is that the patient deserves some attention.
Under the completely independent censoring assumption defined below, which states that the censoring time is independent of both the outcome and covariates, our LPB provably yields a (1 − α) prediction interval. This property holds in finite samples without any assumption other than that of operating on i.i.d. samples. Under the more general conditionally independent censoring assumption introduced later, our LPB satisfies a doubly robust property which states the following: marginal coverage is approximately guaranteed if either the censoring mechanism or the conditional survival function is estimated well. In the latter case, the LPB even has approximately guaranteed conditional coverage.
Readers familiar with conformal inference would notice that the above guarantees can be achieved by simply applying conformal inference to the censored outcomes, i.e., by constructing an LPB on the censored outcome treated as the response. This unsophisticated approach is conservative. Instead, we will see how to provide tighter bounds and sharper inference by applying conformal inference on a subpopulation with large censoring times; that is, on which censored outcomes are closer to actual outcomes. To achieve this, we shall see how to carefully combine the selection of a subpopulation with ideas from weighted conformal inference (Tibshirani et al., 2019).
Lastly, while we focus on clinical examples, it will be clear from our exposition that our methods can be applied to other time-to-event outcomes in a variety of other disciplines, such as industrial life testing (Bain, 2017), sociology (Allison, 1984), and economics (Powell, 1986;Hong and Tamer, 2003;Sant'Anna, 2016).

Prediction intervals for survival times
2.1. Problem setup
Let $X_i$, $C_i$, $T_i$, $i = 1, \ldots, n$, be respectively the vector of covariates, the censoring time, and the survival time of the $i$-th unit/patient. Throughout the paper, we assume that $(X_i, C_i, T_i)$ are i.i.d. copies of the random vector $(X, C, T)$. We consider the Type I right-censoring setting, where the observables for the $i$-th unit include $X_i$, $C_i$, and the censored survival time $\tilde{T}_i$, defined as the minimum of the survival and censoring times:
$$\tilde{T}_i = T_i \wedge C_i = \min(T_i, C_i).$$
For instance, if $T_i$ measures the time lapse between the admission into the hospital and death, and $C_i$ measures the time lapse between the admission into the hospital and the day data analysis is conducted, then $\tilde{T}_i = T_i$ if the $i$-th patient died before the day of data analysis and $\tilde{T}_i = C_i$ if she survives beyond that day.
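To fix ideas, the observation scheme can be simulated in a few lines of R; the covariate, survival, and censoring distributions below are illustrative placeholders only, not the models used elsewhere in the paper.

```r
set.seed(1)
n <- 1000
X <- runif(n, 0, 4)                       # covariate (illustrative)
T_surv <- rexp(n, rate = 1 / (1 + X))     # survival time T, depends on X
C <- rexp(n, rate = 0.5)                  # censoring time C, observed for every unit
T_tilde <- pmin(T_surv, C)                # censored survival time: min(T, C)
event <- T_surv <= C                      # TRUE if the event is observed before censoring
dat <- data.frame(X = X, C = C, T_tilde = T_tilde)
head(dat)
```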
The censoring time $C$ partially masks information from the inferential target $T$. As discussed by Leung et al. (1997), it is necessary to impose constraints on the dependence structure between $T$ and $C$ to enable meaningful inference. In particular, we make the following conditionally independent censoring assumption (e.g., Kalbfleisch and Prentice, 2011):
Assumption 1 (conditionally independent censoring).
$$T \perp\!\!\!\perp C \mid X. \qquad (1)$$
This assumes away any unmeasured confounders affecting both the survival and censoring times; see immediately below for an example. In some cases, we also consider the completely independent censoring assumption, which is stronger in the sense that it implies the former:
Assumption 2 (completely independent censoring).
$$C \perp\!\!\!\perp (X, T). \qquad (2)$$
For instance, in a randomized clinical trial, the end-of-study censoring time $C$ is defined as the time lapse between the recruitment and the end of the study. For single-site trials, $C$ is often modelled as a draw from an exogenous stochastic process (e.g., Carter, 2004; Gajewski et al., 2008) and thus obeys (2). For multicenter trials, $C$ is often assumed to depend on the site location only (e.g., Carter et al., 2005; Anisimov and Fedorov, 2007; Barnard et al., 2010), and thus (1) holds as soon as the vector of covariates includes the site of the trial. For an observational study such as the COVID-19 example discussed later in Section 5, additional covariates would be included to make the conditionally independent censoring assumption plausible.
Although (1) is a strong assumption, it is a widely used starting point to study survival analysis methods (Kalbfleisch and Prentice, 2011). We leave the investigation of informative censoring (e.g., Lagakos, 1979;Wu and Carroll, 1988;Scharfstein and Robins, 2002) to future research. Additionally, whereas the setting of Type I censoring appears to be restrictive, we will show in Section 6.1 that an LPB in this setting can still be informative for other censoring types.

Naive lower prediction bounds
Our ultimate goal is to generate a covariate-dependent LPB as a conservative assessment of the uncensored survival time $T$. Denote by $\hat{L}(\cdot)$ a generic LPB estimated from the observed data $(X_i, C_i, \tilde{T}_i)_{i=1}^n$. We say an LPB is calibrated if it satisfies the following coverage criterion:
$$\mathbb{P}\big(T \ge \hat{L}(X)\big) \ge 1 - \alpha, \qquad (3)$$
where $\alpha$ is a pre-specified level (e.g., 0.1), and the probability is computed over both $\hat{L}(\cdot)$ and a future unit $(X, C, T)$ that is independent of $(X_i, C_i, T_i)_{i=1}^n$. Since $\tilde{T} \le T$, any calibrated LPB on the censored survival time $\tilde{T}$ is also a calibrated LPB on the uncensored survival time $T$. Consequently, a naive approach is to discard the censoring times $C_i$ and construct an LPB on $\tilde{T}$ directly. Since the samples $(X_i, \tilde{T}_i)$ are i.i.d., a distribution-free calibrated LPB on $\tilde{T}$ can be obtained via standard techniques from conformal inference (e.g., Vovk et al., 2005; Lei et al., 2018; Romano et al., 2019b). Our first result is somewhat negative: it states that all distribution-free calibrated LPBs on $T$ must in fact be LPBs on $\tilde{T}$.
Theorem 1. Take $X \in \mathbb{R}^p$ and $C \ge 0$, $T \ge 0$. Assume that $\hat{L}(\cdot)$ is a calibrated LPB on $T$ for all joint distributions of $(X, C, T)$ obeying the conditionally independent censoring assumption with $X$ being continuous and $(T, C)$ being continuous or discrete. Then, for any such distribution, $\mathbb{P}\big(\tilde{T} \ge \hat{L}(X)\big) \ge 1 - \alpha$.
Our proof can be extended to include the case where either $C$ or $T$ or both are mixtures of discrete and continuous distributions, but we do not consider such extensions here. An LPB constructed by taking $\tilde{T}$ as the response may be calibrated but also overly conservative because of the censoring mechanism. To see this, note that the oracle LPB on $\tilde{T}$ is, by definition, the $\alpha$-th conditional quantile of $\tilde{T} \mid X$, denoted by $\tilde{q}_\alpha(X)$. Similarly, let $q_\alpha(X)$ be the oracle LPB on $T$. Under the conditionally independent censoring assumption, $\tilde{q}_\alpha(x) \le q_\alpha(x)$ for every $x$. If the censoring times are small, the gap between $\tilde{q}_\alpha(x)$ and $q_\alpha(x)$ can be large. For illustration, assume that $X$, $C$, and $T$ are mutually independent, with $T, C \stackrel{\text{i.i.d.}}{\sim} \mathrm{Exp}(1)$; then $\tilde{T} = T \wedge C \sim \mathrm{Exp}(2)$, so that $\tilde{q}_\alpha(x) = q_\alpha(x)/2$. The gap widens as the censoring times become stochastically smaller, so a naive approach taking $\tilde{T}$ as the target of inference can be arbitrarily conservative.
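A quick numerical check of this gap, generalizing the illustration to $C \sim \mathrm{Exp}(\lambda)$ (the specific rate below is an illustrative choice):

```r
alpha <- 0.1
lambda <- 9                                 # heavy censoring (illustrative)
q_T      <- qexp(alpha, rate = 1)           # oracle LPB on T ~ Exp(1)
q_Ttilde <- qexp(alpha, rate = 1 + lambda)  # oracle LPB on T~ = min(T, C) ~ Exp(1 + lambda)
c(q_T = q_T, q_Ttilde = q_Ttilde, ratio = q_Ttilde / q_T)
# The naive target's quantile is 10 times smaller here and shrinks further as lambda grows
```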
In sum, Theorem 1 implies that any LPB on $T$ that is calibrated under the conditionally independent censoring assumption alone must be a calibrated LPB on $\tilde{T}$. This is why, to make progress and overcome the limitations of the naive approach, we shall need additional distributional assumptions.

Leveraging the censoring mechanism
We have just seen that the conservativeness of the naive approach is driven by small censoring times. A heuristic way to mitigate this issue is to discard units with small values of $C$. Consider a threshold $c_0$, and extract the subpopulation on which $C \ge c_0$. One immediate issue with this is that the selection induces a distributional shift between the subpopulation and the whole population, namely,
$$P_{(X, \tilde{T}) \mid C \ge c_0} \ne P_{(X, \tilde{T})}.$$
For instance, the patients with larger censoring times tend to be healthier than the remaining ones. To examine the distributional shift in detail, note that the joint distribution of $(X, \tilde{T})$ on the whole population is $P_X \times P_{\tilde{T} \mid X}$ while that on the subpopulation is $P_{(X, \tilde{T}) \mid C \ge c_0} = P_{X \mid C \ge c_0} \times P_{\tilde{T} \mid X, C \ge c_0}$.
Next, observe that $P_{\tilde{T} \mid X, C \ge c_0} \ne P_{\tilde{T} \mid X}$ even under the completely independent censoring assumption, because $(T, X) \perp\!\!\!\perp C$ does not imply $\tilde{T} \perp\!\!\!\perp C \mid X$ in general. For example, as in Section 2.2, if $X$, $C$, and $T$ are mutually independent and $T, C \stackrel{\text{i.i.d.}}{\sim} \mathrm{Exp}(1)$, then
$$\mathbb{P}(\tilde{T} \ge a, C \ge a) = \mathbb{P}(\tilde{T} \ge a) > \mathbb{P}(\tilde{T} \ge a)\,\mathbb{P}(C \ge a), \quad \text{for any } a > 0.$$
As a result, both the covariate distribution and the conditional distribution of $\tilde{T}$ given $X$ differ in the two populations. Now consider a secondary censored outcome $\tilde{T} \wedge c_0$, where $a \wedge b = \min\{a, b\}$. We have
$$P_{(X, \tilde{T} \wedge c_0) \mid C \ge c_0} = P_{X \mid C \ge c_0} \times P_{\tilde{T} \wedge c_0 \mid X, C \ge c_0} \stackrel{(a)}{=} P_{X \mid C \ge c_0} \times P_{T \wedge c_0 \mid X, C \ge c_0} \stackrel{(b)}{=} P_{X \mid C \ge c_0} \times P_{T \wedge c_0 \mid X}, \qquad (4)$$
where (a) uses the fact that $\tilde{T} \wedge c_0 = T \wedge c_0$ on the event $\{C \ge c_0\}$, and (b) follows from the conditionally independent censoring assumption. On the other hand, the joint distribution of $(X, T \wedge c_0)$ on the whole population is
$$P_X \times P_{T \wedge c_0 \mid X}. \qquad (5)$$
Contrasting (4) with (5), we observe that there is only a covariate shift between the subpopulation and the whole population.
The likelihood ratio between the two covariate distributions is
$$w(x) = \frac{\mathrm{d}P_X(x)}{\mathrm{d}P_{X \mid C \ge c_0}(x)} = \frac{\mathbb{P}(C \ge c_0)}{\mathbb{P}(C \ge c_0 \mid X = x)}. \qquad (6)$$
While there is a distributional shift between the selected units and the target population, the special form of the covariate shift allows us to adjust for the bias by carefully reweighting the samples. In particular, applying the one-sided version of weighted conformal inference (Tibshirani et al., 2019), discussed in the next section, gives a calibrated LPB on $T \wedge c_0$, and thus a calibrated LPB on $T$. With sufficiently many units with large values of $C$, we can choose a large threshold $c_0$ to reduce the loss of power caused by censoring. We emphasize that there is no contradiction with Theorem 1 because, as shown in Section 3, weighted conformal inference requires $\mathbb{P}(C \ge c_0 \mid X)$ to be (approximately) known. We refer to the denominator $\mathbb{P}(C \ge c_0 \mid X = x)$ in (6) as the censoring mechanism, denoted by $c(x; c_0)$; we write it as $c(x)$ for brevity when no confusion can arise. This is the conditional survival function of $C$ evaluated at $c_0$. Under Type I censoring, the $C_i$'s are fully observed while the $T_i$'s are only partially observed; thus, the conditional distribution of $C$ given $X$ is typically far easier to estimate than that of $T$ given $X$. Practically, the censoring mechanism is usually far better understood than the conditional survival function of $T$; for example, as mentioned in Section 2.1, in randomized clinical trials, $C$ often depends solely on the site location.
Under the completely independent censoring assumption, the covariate shift even disappears since $P_X = P_{X \mid C \ge c_0}$. In this case, we can apply a one-sided version of conformal inference to obtain a calibrated LPB on $T \wedge c_0$, and hence a calibrated LPB on $T$ (e.g., Vovk et al., 2005; Lei et al., 2018; Romano et al., 2019b). With infinite samples, as $c_0 \to \infty$, the method is tight in the sense that the censoring issue disappears. Again, this result does not contradict Theorem 1, which requires the LPB to be calibrated under the weaker condition (1). With finite samples, there is a tradeoff between the choice of the threshold $c_0$ and the size of the induced subpopulation.

Weighted conformal inference
Returning to (4) and (5), the goal is to construct an LPB $\hat{L}(\cdot)$ on $T \wedge c_0$ from the training samples $(X_i, C_i, \tilde{T}_i)_{i=1}^n$. Since $T \wedge c_0 \le T$, $\hat{L}(\cdot)$ is then a calibrated LPB on $T$. We consider $c_0$ to be a fixed threshold in Sections 3.1 and 3.2, and discuss a data-adaptive approach to choosing this threshold in Section 3.4.
To deal with covariate shifts, Tibshirani et al. (2019) introduced weighted conformal inference, which extends standard conformal inference (e.g., Vovk et al., 2005; Shafer and Vovk, 2008; Lei and Wasserman, 2014; Barber et al., 2019a,b; Sadinle et al., 2019; Romano et al., 2020; Cauchois et al., 2020). Imagine we have i.i.d. training samples $(X_i, Y_i)_{i=1}^n$ drawn from a distribution $P_X \times P_{Y \mid X}$ and wish to construct prediction intervals for test points drawn from the target distribution $Q_X \times P_{Y \mid X}$ (in standard conformal inference, $P_X = Q_X$). Assuming $w(x) = \mathrm{d}Q_X(x)/\mathrm{d}P_X(x)$ is known, weighted conformal inference produces prediction intervals $\hat{C}(\cdot)$ with the property
$$\mathbb{P}_{(X, Y) \sim Q_X \times P_{Y \mid X}}\big(Y \in \hat{C}(X)\big) \ge 1 - \alpha.$$
Above, the probability is computed over both the training set and the test point $(X, Y)$. In our case, the outcome is $T \wedge c_0$ and the covariate shift is $w(x) = \mathbb{P}(C \ge c_0)/c(x)$, as shown in (6).
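Operationally, the reweighting enters only through the $(1-\alpha)$-th quantile of a weighted empirical distribution of conformity scores with a point mass at $+\infty$ (the calibration step of Algorithm 1 below). A minimal base-R helper for that quantile, reused in the sketch after Algorithm 1; the function name and interface are ours, not part of any package:

```r
# (1 - alpha)-th quantile of sum_i p_i * delta_{V_i} + p_inf * delta_{+Inf},
# where the p_i's and p_inf are weights proportional to W_i and to w(x_test).
weighted_conformal_quantile <- function(V, W, w_test, alpha) {
  ord <- order(V)
  V <- V[ord]
  W <- W[ord]
  total <- sum(W) + w_test
  cum <- cumsum(W / total)              # cumulative mass on the sorted scores
  idx <- which(cum >= 1 - alpha)
  if (length(idx) == 0) return(Inf)     # the point mass at +Inf is needed to reach 1 - alpha
  V[min(idx)]
}
```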
In Algorithm 1, we sketch a version of weighted conformal inference based on data splitting, which is adapted to our setting and has low computational overhead. Operationally, it has three main steps: (a) split the data into a training fold and a calibration fold; (b) apply any prediction algorithm on the training fold to generate a conformity score indicating how atypical a value of the outcome is given observed covariate values (here, we generate conformity scores such that a large value indicates a lack of conformity to the training data); (c) calibrate the predicted outcome by the distribution of conformity scores on the calibration fold. In the calibration step of Algorithm 1, $\mathrm{Quantile}(1-\alpha; Q)$ is the $(1-\alpha)$-th quantile of a distribution $Q$, defined as
$$\mathrm{Quantile}(1-\alpha; Q) = \inf\{z: \mathbb{P}_{Z \sim Q}(Z \le z) \ge 1-\alpha\}.$$

Algorithm 1: conformalized survival analysis
Input: level $\alpha$; data $Z = (X_i, \tilde{T}_i, C_i)_{i \in I}$; testing point $x$; function $V(x, y; D)$ to compute the conformity score between $(x, y)$ and data $D$; function $\hat{w}(x; D)$ to fit the weight function at $x$ using $D$ as data; function $\mathcal{C}(D)$ to select the threshold $c_0$ using $D$ as data.
Procedure:
1. Split $Z$ into a training fold $Z_{\mathrm{tr}} = (X_i, \tilde{T}_i, C_i)_{i \in I_{\mathrm{tr}}}$ and a calibration fold $Z_{\mathrm{ca}} = (X_i, \tilde{T}_i, C_i)_{i \in I_{\mathrm{ca}}}$.
2. Select $c_0 = \mathcal{C}(Z_{\mathrm{tr}})$ and let $I_{\mathrm{ca}}' = \{i \in I_{\mathrm{ca}} : C_i \ge c_0\}$.
3. For each $i \in I_{\mathrm{ca}}'$, compute the conformity score $V_i = V(X_i, \tilde{T}_i \wedge c_0; Z_{\mathrm{tr}})$.
4. For each $i \in I_{\mathrm{ca}}'$, compute the weight $W_i = \hat{w}(X_i; Z_{\mathrm{tr}}) \in [0, \infty)$.
5. Compute the normalized weights
$$\hat{p}_i(x) = \frac{W_i}{\sum_{j \in I_{\mathrm{ca}}'} W_j + \hat{w}(x; Z_{\mathrm{tr}})}, \qquad \hat{p}_\infty(x) = \frac{\hat{w}(x; Z_{\mathrm{tr}})}{\sum_{j \in I_{\mathrm{ca}}'} W_j + \hat{w}(x; Z_{\mathrm{tr}})}.$$
6. Compute $\eta(x) = \mathrm{Quantile}\big(1-\alpha;\ \sum_{i \in I_{\mathrm{ca}}'} \hat{p}_i(x)\,\delta_{V_i} + \hat{p}_\infty(x)\,\delta_\infty\big)$.
Output: $\hat{L}(x) = \inf\{y : V(x, y; Z_{\mathrm{tr}}) \le \eta(x)\} \wedge c_0$.

A few comments regarding Algorithm 1 are in order. First, when the covariate shift $w(x)$ is unknown, it can be estimated using the training fold. Second, note that in step 4, if $\hat{w}(x; Z_{\mathrm{tr}}) = \infty$, then $\hat{p}_i(x) = 0$ for $i \in I_{\mathrm{ca}}'$ and $\hat{p}_\infty(x) = 1$; in this case, the output is $\hat{L}(x) = -\infty$. Third, the requirement that $W_i \in [0, \infty)$ is natural because $X_i \sim P_X$ and $w(X) \in [0, \infty)$ almost surely under $P_X$, even if $Q_X$ is not absolutely continuous with respect to $P_X$. Fourth, it is worth mentioning in passing that $\eta(x)$ is invariant to positive rescalings of $\hat{w}(x)$; thus, we can set $\hat{w}(x) = 1/\hat{c}(x)$ in our case, where $\hat{c}(x)$ is an estimate of $c(x)$. Finally, apart from fitting $V(\cdot, \cdot; Z_{\mathrm{tr}})$ and $\hat{w}(\cdot; Z_{\mathrm{tr}})$ once on the training fold, the additional computational cost of our algorithm comes from computing $|I_{\mathrm{ca}}'|$ conformity scores and finding the $(1-\alpha)$-th quantile. We provide a detailed analysis of time complexity in Section D.4 of the Appendix.
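To make the procedure concrete, here is a compact R sketch of Algorithm 1 with the CQR score, reusing the weighted-quantile helper above and the simulated data frame dat from Section 2.1. It is an illustration only: the linear quantile regression (via the quantreg package), the logistic-regression estimate of the censoring mechanism, the 50/50 split, and the wrapper conformal_surv_lpb are our illustrative choices, not the implementation in the cfsurvival package.

```r
library(quantreg)   # linear quantile regression, one possible choice for the CQR score

# Sketch of Algorithm 1 with the CQR score and a fixed threshold c0.
# `dat` has columns X, C, T_tilde; the censoring mechanism P(C >= c0 | X = x)
# is estimated on the training fold by logistic regression on the indicator C >= c0.
conformal_surv_lpb <- function(dat, x_test, alpha = 0.1, c0 = 1) {
  n <- nrow(dat)
  idx_tr <- sample(n, floor(n / 2))
  tr <- dat[idx_tr, ]
  ca <- dat[-idx_tr, ]

  # Step 2: keep calibration units with C >= c0
  ca <- ca[ca$C >= c0, ]

  # Fit q_hat_alpha(x; c0) for T~ ^ c0 on the training fold (CQR score)
  tr$Y <- pmin(tr$T_tilde, c0)
  qfit <- rq(Y ~ X, tau = alpha, data = tr)

  # Fit the censoring mechanism c(x) = P(C >= c0 | X = x) on the training fold
  tr$ind <- as.numeric(tr$C >= c0)
  cfit <- glm(ind ~ X, family = binomial, data = tr)
  chat <- function(x) predict(cfit, newdata = data.frame(X = x), type = "response")

  # Steps 3-4: conformity scores and weights on the selected calibration units
  V <- predict(qfit, newdata = ca) - pmin(ca$T_tilde, c0)   # q_hat - (T~ ^ c0)
  W <- 1 / chat(ca$X)                                       # weights proportional to 1 / c(x)

  # Steps 5-6 and output, for each test point
  sapply(x_test, function(x) {
    eta <- weighted_conformal_quantile(V, W, 1 / chat(x), alpha)
    q_test <- predict(qfit, newdata = data.frame(X = x))
    min(q_test - eta, c0)                                    # LPB = (q_hat(x) - eta) ^ c0
  })
}

lpb <- conformal_surv_lpb(dat, x_test = c(0.5, 2, 3.5), alpha = 0.1, c0 = 1)
```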
In the algorithm, the conformity score function $V(x, y; D)$ can be arbitrary and we discuss three popular choices from the literature:
• Conformalized mean regression (CMR) scores are defined via $V(x, y; Z_{\mathrm{tr}}) = \hat{m}(x) - y$, where $\hat{m}(\cdot)$ is an estimate of the conditional mean of $Y$ given $X$. The resulting LPB is then $(\hat{m}(x) - \eta(x)) \wedge c_0$. This is the one-sided version of the conformity score used in Vovk et al. (2005) and Lei and Wasserman (2014).
• Conformalized quantile regression (CQR) scores are defined via $V(x, y; Z_{\mathrm{tr}}) = \hat{q}_\alpha(x) - y$, where $\hat{q}_\alpha(\cdot)$ is an estimate of the conditional $\alpha$-th quantile of $Y$ given $X$. The resulting LPB is then $(\hat{q}_\alpha(x) - \eta(x)) \wedge c_0$. This score was proposed by Romano et al. (2019b); it is more adaptive than CMR and usually has better conditional coverage.
• Conformalized distribution regression (CDR) scores are defined via $V(x, y; Z_{\mathrm{tr}}) = \alpha - \hat{F}_{Y \mid X = x}(y)$, where $\hat{F}_{Y \mid X = x}(\cdot)$ is an estimate of the conditional distribution of $Y$ given $X = x$. The resulting LPB is then $\hat{F}_{Y \mid X = x}^{-1}(\alpha - \eta(x)) \wedge c_0$, or equivalently, the $(\alpha - \eta(x))$-th quantile of the estimated conditional distribution. This score was proposed by Chernozhukov et al. (2019). It is particularly suitable for our problem because most survival analysis methods estimate the whole conditional distribution.
Under the completely independent censoring assumption, P(C ≥ c 0 | X) = P(C ≥ c 0 ) almost surely. As a consequence, we can setŵ(x) = w(x) ≡ 1 and obtain a calibrated LPB without any distributional assumption.

Doubly robust lower prediction bounds
Under the more general conditionally independent censoring assumption, the censoring mechanism needs to be estimated. We can apply any distributional regression techniques such as the kernel method or the newly invented distribution boosting (Friedman, 2020) to estimate c(x) = P(C ≥ c 0 | X = x). For two-sided weighted split-CQR, Lei and Candès (2021) prove that the intervals satisfy a doubly robust property which states the following: the average coverage is guaranteed if either the covariate shift or the conditional quantiles are estimated well, and the conditional coverage is approximately controlled if the latter is true. In Section B in the Appendix, we present more general results, both non-asymptotic and asymptotic, that are applicable to a broad class of conformity scores proposed by Gupta et al. (2019), including the CMR-, CQR-and CDR-based scores.
In this section, we first present a version of the asymptotic result tailored to the CQR-LPB for simplicity.

Theorem 2. Let $N = |Z_{\mathrm{tr}}|$ and $n = |Z_{\mathrm{ca}}|$, let $c_0$ be any threshold independent of $Z_{\mathrm{ca}}$, and let $q_\alpha(x; c_0)$ denote the $\alpha$-th conditional quantile of $T \wedge c_0$ given $X = x$. Further, let $\hat{c}(x)$ and $\hat{q}_\alpha(x; c_0)$ be estimates of $c(x)$ and $q_\alpha(x; c_0)$, respectively, computed on $Z_{\mathrm{tr}}$, and let $\hat{L}(x)$ be the corresponding CQR-LPB. Assume that there exists $\delta > 0$ such that $\mathbb{E}[1/\hat{c}(X)^{1+\delta}] < \infty$ and $\mathbb{E}[1/c(X)^{1+\delta}] < \infty$. Suppose that either A1 or A2 (or both) holds, where A1 concerns the quality of the estimated censoring mechanism $\hat{c}(\cdot)$, and A2 requires (i) that there exist $b_2 > b_1 > 0$ and $r > 0$ such that, for any $x$ and $\varepsilon \in [0, r]$, the conditional distribution of $T \wedge c_0$ given $X = x$ is suitably regular in an $\varepsilon$-neighborhood of $q_\alpha(x; c_0)$ (see Remark 1), and (ii) that $\hat{q}_\alpha(\cdot; c_0)$ consistently estimates $q_\alpha(\cdot; c_0)$. Then
$$\liminf_{N, n \to \infty} \mathbb{P}\big(T \wedge c_0 \ge \hat{L}(X)\big) \ge 1 - \alpha.$$
Furthermore, under A2, for any $\varepsilon > 0$,
$$\lim_{N, n \to \infty} \mathbb{P}\Big(\mathbb{P}\big(T \wedge c_0 \ge \hat{L}(X) \mid X\big) \ge 1 - \alpha - \varepsilon\Big) = 1.$$

Remark 1. The condition A2 (i) holds if $T$ has a bounded and absolutely continuous density conditional on $X$ in a neighborhood of $q_\alpha(x)$.

Intuitively, if $\hat{c}(x) \approx c(x)$, then the procedure approximates the oracle version of weighted split-CQR with the true weights, and the LPBs should be approximately calibrated. On the other hand, if $\hat{q}_\alpha(x; c_0) \approx q_\alpha(x; c_0)$, then for each selected calibration unit $\mathbb{P}(V_i \le 0 \mid X_i) = \mathbb{P}\big(\tilde{T}_i \wedge c_0 \ge \hat{q}_\alpha(X_i; c_0) \mid X_i\big) \approx 1 - \alpha$; thus, the $(1-\alpha)$-th quantile of the $V_i$'s conditional on $Z_{\mathrm{tr}}$ is approximately 0. To keep on going, recall that $\eta(x)$ is the $(1-\alpha)$-th quantile of the random distribution $\sum_{i \in I_{\mathrm{ca}}'} \hat{p}_i(x)\,\delta_{V_i} + \hat{p}_\infty(x)\,\delta_\infty$, and set $G$ to be the cumulative distribution function of this random distribution. Then $G(0) \approx 1 - \alpha$, implying that $\eta(x) \approx 0$. Therefore, $\hat{L}(x) \approx \hat{q}_\alpha(x; c_0) \approx q_\alpha(x; c_0)$, which approximately achieves the desired conditional coverage.
With the same intuition, we can establish a similar result for the CDR-LPB with a slightly more complicated version of Assumption A2.
Theorem 3. Let $F(\cdot \mid x)$ denote the conditional distribution of $T \wedge c_0$ given $X = x$. With the same settings and assumptions as in Theorem 2, the same conclusions hold if A2 is replaced by the following conditions: (i) there exists $r > 0$ such that, for any $x$ and $\varepsilon \in [0, r]$, $F(\cdot \mid x)$ satisfies an analogous regularity condition near its $\alpha$-th quantile, and (ii) $F(\cdot \mid x)$ is estimated consistently.

The double robustness of weighted split conformal inference has some appeal; indeed, the researcher can leverage knowledge about both the conditional survival function and the censoring mechanism without any concern for which is more accurate. Suppose the Cox model is adequate in a randomized clinical trial; then it can be used to produce $\hat{q}_\alpha(x; c_0)$ in conjunction with the known censoring mechanism. If the model is indeed correctly specified, the LPB is conditionally calibrated, as are classical prediction intervals derived from the Cox model (Kalbfleisch and Prentice, 2011); if the model is misspecified, however, the LPB is still calibrated.
Remark 2. A special case is when the completely independent censoring assumption holds, yet the researcher is unaware of this and still applies the estimated $\hat{c}(\cdot)$ to obtain the prediction intervals. As implied by Theorems 2 and 3, if $\hat{c}(\cdot)$ is approximately a constant function, the prediction interval is approximately calibrated. Notably, even if $\hat{c}(\cdot)$ deviates from a constant, our prediction interval still achieves coverage as long as the estimated weights are non-decreasing in the conformity scores. We present this additional robustness result in Section D.3 of the Appendix.
As a concluding remark, the prediction interval can become numerically and statistically unstable in the presence of extreme weights since the proposed method depends on $c(x)$ (or the estimate $\hat{c}(x)$) through its inverse. The reader may have observed that $c(x)$ plays a role similar to that of the propensity score in causal inference; the reweighting step in Algorithm 1 is analogous to inverse propensity score weighting-type methods. Assumption A1 in Theorem 2 mimics the overlap condition (e.g., D'Amour et al., 2021) in the causal inference literature. That said, there is a crucial difference: in a typical causal setting, the overlap condition is an assumption about the unknown data generating process, which cannot be manipulated, whereas in our work Assumption A1 can always be satisfied by selecting a sufficiently low threshold $c_0$. We provide a detailed discussion in Section D.1 of the Appendix.

Adaptivity to high-quality modeling
We have seen that when the quantiles of survival times are well estimated, $\hat{L}(x) \approx q_\alpha(x; c_0)$, which is the oracle lower prediction bound for $T \wedge c_0$ had the true conditional survival function been known. This happens even though the procedure does not know whether the survival model is estimated well or not. This suggests that conformalized survival analysis has favorable adaptivity properties, as formalized below.
Theorem 4. (a) Under the settings and assumptions of Theorem 2, assume further that A2 (ii) holds and a modified version of A2 (i) holds: there exist $b_1 > 0$ and $r > 0$ such that, for any $x$ and $\varepsilon \in [0, r]$, Then, for any $\varepsilon > 0$, (b) Under the settings and assumptions of Theorem 3, assume further the condition (ii) and the modified version of condition (i): there exists $r > 0$ such that, for any $x$ and $\varepsilon \in [0, r]$, Then, for any $\varepsilon > 0$,

In theory, if $c_0$ is allowed to grow with $n$ and $C$ exceeds $c_0$ with sufficient probability, then $\hat{L}(x) \approx q_\alpha(x)$ (see Appendix C.3). In practice, it would however be wiser to tune $c_0$ in a data-adaptive fashion (discussed in the next subsection) than to prescribe a predetermined growing sequence.

Choice of threshold
The threshold c 0 induces an estimation-censoring tradeoff: a larger c 0 mitigates the censoring effect, closing the gap between the target outcome T and the operating outcome T ∧ c 0 , but reduces the sample size to estimate the censoring mechanism and the conditional survival function. It is thus important to pinpoint the optimal value of c 0 to maximize efficiency.
To avoid double-dipping, we choose $c_0$ on the training fold $Z_{\mathrm{tr}}$. In this way, $c_0$ is independent of the calibration fold $Z_{\mathrm{ca}}$ and we are not using the same data twice. In particular, Proposition 1 and Theorems 2 and 3 all apply. Concretely, we (1) set a grid of values for $c_0$, (2) randomly sample a holdout set from $Z_{\mathrm{tr}}$, (3) apply Algorithm 1 on the rest of $Z_{\mathrm{tr}}$ for each value of $c_0$ to generate LPBs for each unit in the holdout set, and (4) select the $c_0$ which maximizes the average LPB on the holdout set. One way to see all of this is to pretend that the training fold is the whole dataset and measure efficiency as the average realized LPB. In practice, we use 25% of the units in $Z_{\mathrm{tr}}$ as the holdout set. The procedure is convenient to implement, though it is by no means the most powerful approach.
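In code, the data-adaptive choice of $c_0$ described above might look as follows. This is a sketch reusing the conformal_surv_lpb wrapper from Section 3; the grid of candidate thresholds is an illustrative choice, the 25% holdout fraction follows the text, and select_c0 is our own naming.

```r
# Choose c0 on the training fold only: hold out 25% of Z_tr, run the conformal
# procedure on the rest for each candidate c0, and keep the c0 with the largest
# average LPB on the holdout set.
select_c0 <- function(dat_tr, c0_grid, alpha = 0.1) {
  n <- nrow(dat_tr)
  idx_hold <- sample(n, floor(0.25 * n))
  hold <- dat_tr[idx_hold, ]
  rest <- dat_tr[-idx_hold, ]
  avg_lpb <- sapply(c0_grid, function(c0) {
    mean(conformal_surv_lpb(rest, x_test = hold$X, alpha = alpha, c0 = c0))
  })
  c0_grid[which.max(avg_lpb)]
}

c0_hat <- select_c0(dat, c0_grid = quantile(dat$C, c(0.25, 0.5, 0.75)), alpha = 0.1)
```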
Under suitable conditions, we can choose $c_0$ using the calibration fold $Z_{\mathrm{ca}}$ and have the resulting LPBs still be (approximately) calibrated. To be specific, given a candidate set $\mathcal{C}$ for $c_0$, we simply maximize the average LPB on $Z_{\mathrm{ca}}$:
$$\hat{c}_0 = \arg\max_{c_0 \in \mathcal{C}} \frac{1}{|Z_{\mathrm{ca}}|} \sum_{i \in Z_{\mathrm{ca}}} \hat{L}_{c_0}(X_i),$$
where $\hat{L}_{c_0}(X)$ is given by conformalized survival analysis with the threshold $c_0$. In Section D.2 of the Appendix we derive results that hold uniformly over the $c_0$'s in $\mathcal{C}$, and prove coverage guarantees for $\hat{L}_{\hat{c}_0}(X)$ via a generalization of the techniques for unweighted conformal inference by Yang and Kuchibhotla (2021).

Simulation studies
In this section, we design simulation studies to evaluate the performance of our method. Specifically, we run four sets of experiments detailed in Table 1. In each experiment, we compare the CQR-and CDR-LPB with the following alternatives: • Cox model: we generate the LPB as the α-th quantile from an estimated Cox model. The method is implemented via the survival R-package (Therneau, 2020).
• Accelerated failure time (AFT) model: we generate the LPB as the α-th quantile from an estimated AFT model with Weibull noise. The method is implemented in the survival R package.
• Censored quantile regression forest (Li and Bradic, 2020): this is a variant of quantile random forest (Athey et al., 2019) designed to handle time-to-event outcomes. We reimplement the method based on the code provided at https://github.com/AlexanderYogurt/censored_ExtremelyRandomForest.
• Naive CQR: we apply split-CQR (Romano et al., 2019b) naively to (X i , T i ) n i=1 , where the quantiles are estimated by the quantreg R package.
For the CQR-LPB, the conditional quantiles are estimated via censored quantile regression forest or distribution boosting (Friedman, 2020); for the CDR-LPB, the conditional survival function is estimated via distribution boosting, which is implemented in the R package conTree (Friedman and Narasimhan, 2020).
In each experiment, we generate 200 independent datasets, each containing a training set of size n = 3000 and a test set of size n = 3000. For conformal methods, 50% of the training set is used for fitting the predictive model, and the remaining 50% is reserved for calibration. This splitting ratio between the training fold and the calibration fold differs slightly from the recommendation of Sesia and Candès (2020), who suggest using 75% of the data for training and 25% for calibration. We reserve more data for calibration to ensure there are still enough samples in the calibration set after the selection and to decrease the variability of the LPBs. We then evaluate the coverage of the LPBs as $\frac{1}{n_{\mathrm{test}}} \sum_{i=1}^{n_{\mathrm{test}}} \mathbb{1}\{T_i \ge \hat{L}(X_i)\}$. All the results in this section can be replicated with the code available at https://github.com/zhimeir/cfsurv_paper. In addition, the proposed CQR- and CDR-LPBs are implemented in the R package cfsurvival, available at https://github.com/zhimeir/cfsurvival.

Table 1. Parameters used in the simulation study. "Homosc." and "Heterosc." are short for homoscedastic and heteroscedastic; "Uvt." and "Mvt." are short for univariate and multivariate. U(a, b) denotes the uniform distribution supported on [a, b]; E(λ) denotes the exponential distribution with rate λ.

The covariate vector $X \in \mathbb{R}^p$ is generated from $P_X$. The survival time $T$ is generated from an AFT model with Gaussian noise, i.e., $\log T \mid X \sim N(\mu(X), \sigma^2(X))$.
We consider 2 × 2 settings with univariate or multivariate covariates plus homoscedastic or heteroscedastic errors. Here the term "homoscedastic" or "heteroscedastic" is applied to log T . The choice of the parameters in each setting is specified in Table 1.
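For concreteness, a data-generating process in the spirit of these experiments can be simulated as below; since Table 1 is not reproduced in full here, the particular $\mu(\cdot)$, $\sigma(\cdot)$, covariate distribution, and censoring law are placeholders rather than the exact settings of the paper.

```r
# Illustrative AFT data-generating process (univariate, heteroscedastic);
# mu(x), sigma(x), P_X, and the censoring law are placeholders, not Table 1's values.
gen_data <- function(n) {
  X <- runif(n, 0, 4)
  mu <- 2 + 0.5 * X
  sigma <- 0.5 + 0.25 * X
  T_surv <- exp(rnorm(n, mean = mu, sd = sigma))   # log T | X ~ N(mu(X), sigma^2(X))
  C <- rexp(n, rate = 0.05)
  data.frame(X = X, C = C, T_tilde = pmin(T_surv, C), T_surv = T_surv)
}
train <- gen_data(3000)
test  <- gen_data(3000)
```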
Finally, we apply all the methods with target coverage level $1 - \alpha = 90\%$. In each experiment, we estimate $c(x)$ by distribution boosting. Figure 1 presents the empirical coverage of the LPBs on uncensored survival times. Censored random forests, the Cox model, the AFT model, and the three quantile regression methods fail to achieve the target coverage in most cases. On the other hand, the naive CQR attains the desired coverage but at the price of being overly conservative. In contrast, both the CQR- and CDR-LPBs achieve near-exact marginal coverage, as predicted by our theory.

Fig. 1. Empirical 90% coverage of the uncensored survival time T. "CQR-cRF" is short for the CQR-LPB with censored quantile regression forest; "CQR-conTree" and "CDR-conTree" are short for the CQR- and CDR-LPBs with distribution boosting. The other abbreviations are the same as in Table 1.
Next, we investigate the conditional coverage and efficiency of these methods. In Figure 2(a), we plot the empirical conditional coverage as a function of the conditional variance of T on X.
In particular, we stratify the data into 10 groups based on equispaced percentiles of Var(T | X) and plot the average coverage within each stratum along with a 90% confidence band obtained via repeated sampling. Note that in either the homoscedastic or the heteroscedastic case, Var(T | X) is varying with X. Not surprisingly, the naive CQR is conditionally conservative. In the univariate case, both the CQR-and CDR-LPB approximately achieve desired conditional coverage; in the multivariate case, the conditional coverage is slightly uneven, though still concentrating around the target line. Figure 2(b) presents the ratio between the LPBs and the true α-th conditional quantile as a function of Var(T | X). This is a measure of efficiency since the true conditional quantile is the oracle LPB. Here, we observe that naive CQR-LPBs are close to zero, confirming that they are overly conservative, while the CQR-and CDR-LPBs are fairly close to the oracle LPB, implying that both methods are relatively efficient.
Fig. 2. Results from the experiments detailed in Table 1: (a) empirical 90% conditional coverage and (b) ratio between the LPB and the theoretical quantile as a function of Var(T | X). The blue curves correspond to the mean coverage in (a) and the median ratio in (b). The gray confidence bands correspond to the 95% and 5% quantiles of the estimates over repeated sampling. The abbreviations are the same as in Figure 1.
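A stratified evaluation of the kind summarized in Figure 2 can be sketched as follows, reusing the simulated train/test data and the conformal_surv_lpb wrapper from above; stratifying on deciles of $X$ is a stand-in for the strata defined by $\mathrm{Var}(T \mid X)$ in the paper.

```r
# Empirical coverage of the LPB within covariate strata (illustrative evaluation).
lpb_test <- conformal_surv_lpb(train, x_test = test$X, alpha = 0.1,
                               c0 = unname(quantile(train$C, 0.5)))
covered <- test$T_surv >= lpb_test
strata <- cut(test$X, breaks = quantile(test$X, probs = seq(0, 1, 0.1)),
              include.lowest = TRUE)
round(tapply(covered, strata, mean), 3)    # per-stratum coverage; overall: mean(covered)
```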

Application to UK Biobank COVID-19 data
We apply our method to the UK Biobank COVID-19 dataset to demonstrate robustness and practicability. UK Biobank (Bycroft et al., 2018) is a large-scale biomedical database and research resource, containing in-depth genetic and health information from half a million UK participants. In April 2020, UK Biobank started to release COVID-19 testing data, and has since continued to regularly provide updates. This gives researchers access to a cohort of COVID-19 patients, along with their date of confirmation, survival status, pre-existing conditions, and other demographic covariates. We include in our analysis all individuals in UK Biobank who received a positive COVID-19 test result before January 21st, 2021. This results in a dataset of size n = 14,861 with 484 events, defined as a COVID-related death. We extract eight covariate features, namely, age, gender, body mass index (BMI), waist size, cardiovascular disease status, diabetes status, hypothyroidism status, and respiratory disease status. As in Section 2, the censoring time is the time lapse between the date of a positive test and January 21st, 2021. The survival time is the time lapse between the date of a positive test and the event (which may have yet to occur).
We wish to harness this data to produce an LPB on the survival time of each COVID-19 patient. To apply the CQR-or CDR-LPB, we set the threshold c 0 to be 14 days. Since survival time assessment likely informs high-stakes decision-making, we set the target level to 99% for reliability.

Semi-synthetic examples
To demonstrate robustness, we start our analysis with two semi-synthetic examples so that the ground truth is known and calibration can be assessed (results on real outcomes are presented next). We keep the covariate matrix X from the UK Biobank COVID-19 data. In the first simulation study, we substitute the censoring time with a synthetic C. In the second, each survival time, observed or not, is substituted with a synthetic version. Details follow: • Synthetic C: we take the censored survival time $\tilde{T}$ as the uncensored survival time and generate a synthetic censoring time $C_{\mathrm{syn}} \sim E(0.001 \cdot \mathrm{age} + 0.01 \cdot \mathrm{gender})$.
In this setting, the observables are $(X, C_{\mathrm{syn}}, \tilde{T} \wedge C_{\mathrm{syn}})$, and we wish to construct LPBs on $\tilde{T}$.
• Synthetic T : we keep the real censoring time C, and generate a survival time T syn as: log T syn | X ∼ N (2 + 0.05 · age + 0.1 · gender, 1).
In this setting, the observables are $(X, C, T_{\mathrm{syn}} \wedge C)$, and we wish to construct LPBs on $T_{\mathrm{syn}}$; a code sketch of both constructions is given below. Figure 3 shows the histograms of the survival time, censoring time, and censored survival time from the two simulated datasets. We apply the CDR-LPB (with $c_0 = 14$) to both. For comparison, we also apply the AFT model and the naive CQR. To evaluate the LPBs, we randomly split the data into a training set with 75% of the data and a holdout set with the remaining 25%. Each method is applied to the training set, and the resulting LPBs are evaluated on the holdout set. We repeat the above procedure 100 times to create 100 pairs of training and holdout sets.
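The two semi-synthetic constructions can be written down directly; the sketch below assumes a hypothetical data frame ukb holding the covariates age and gender (coded 0/1) together with the real censoring time C and censored survival time T_tilde, and the rate/mean parameterizations follow the formulas in the text.

```r
# Semi-synthetic outcomes built on the real covariates (ukb is an assumed data frame).
# Synthetic C: treat T_tilde as the true survival time and draw a new censoring time.
ukb$C_syn      <- rexp(nrow(ukb), rate = 0.001 * ukb$age + 0.01 * ukb$gender)
ukb$T_obs_synC <- pmin(ukb$T_tilde, ukb$C_syn)

# Synthetic T: keep the real censoring time and draw a new survival time.
ukb$T_syn      <- exp(rnorm(nrow(ukb), mean = 2 + 0.05 * ukb$age + 0.1 * ukb$gender, sd = 1))
ukb$T_obs_synT <- pmin(ukb$T_syn, ukb$C)
```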
To visualize conditional calibration, we fit a Cox model on the data to generate a predicted risk score for each unit and stratify all units into 10 subgroups defined by deciles of the predicted risk. The results for synthetic C and T are plotted in Figures 4 and 5, respectively. As in the simulation studies from Section 4, we see that the naive CQR is overly conservative. Notably, although the AFT-LPB is well calibrated in the synthetic-C setting, this method is overly conservative in the synthetic-T setting, even though the model is correctly specified. In contrast, the CDR-LPB is calibrated in both examples. From the middle panels of Figures 4 and 5, we also observe that the CDR-LPB is approximately conditionally calibrated. Finally, the right panels show that CDR-LPB nearly preserves the rank of the predicted risk given by the Cox model. The flat portion of the LPB towards the left end corresponds to the threshold, implying that at least 99% of people with predicted risk scores lower than 0.5 can survive beyond 14 days.

Real data analysis
We now turn attention to actual COVID-19 responses. Again, we randomly split the data into a training set including 75% of the data and a holdout set including the remaining 25%. Then we run the CDR on the training set and validate the LPBs on the holdout set. The issue is that the actual survival time is only partially observed, and thus the coverage of a given LPB cannot be assessed accurately (this is precisely why we generated semi-synthetic responses in the previous section). Nevertheless, we note that
$$\beta_{\mathrm{lo}} := \mathbb{P}\big(\tilde{T} \ge \hat{L}(X)\big) \;\le\; \mathbb{P}\big(T \ge \hat{L}(X)\big) \;\le\; \mathbb{P}\big(\tilde{T} \ge \hat{L}(X)\big) + \mathbb{P}\big(\tilde{T} < \hat{L}(X),\ \tilde{T} < T\big) =: \beta_{\mathrm{hi}},$$
where both $\beta_{\mathrm{lo}}$ and $\beta_{\mathrm{hi}}$ are estimable from the data. This says that we can assess the marginal coverage of the LPBs by evaluating a lower and an upper bound on the coverage. Of course, this extends to conditional coverage.
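Both bounds are straightforward to estimate on a holdout set; a small helper (ours, for illustration) is:

```r
# Empirical lower and upper bounds on P(T >= LPB) when T is only partially observed.
# A unit with T_tilde < LPB can only possibly be covered if it is censored
# (T_tilde = C < T), which yields the upper bound.
coverage_bounds <- function(T_tilde, C, lpb) {
  censored <- T_tilde >= C                 # T_tilde = C, so the true T is unobserved
  beta_lo <- mean(T_tilde >= lpb)
  beta_hi <- mean(T_tilde >= lpb | censored)
  c(beta_lo = beta_lo, beta_hi = beta_hi)
}
```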
To assess stability, we evaluate our method on 100 independent sample splits. Figure 6 presents the empirical lower and upper bounds on the marginal coverage and on the conditional coverage as functions of the predicted risk (as in the semi-synthetic examples), together with their variability across the 100 sample splits; the target coverage level is 99%, the blue curves correspond to the mean coverage, and the gray confidence bands correspond to the 5% and 95% quantiles of the estimates across the 100 sample splits. The left panel shows that the upper bound is very close to the lower bound, and both concentrate around the target level. Thus we can be assured that the CDR-LPB is well calibrated. Similarly, the other panels show that the CDR-LPB is approximately conditionally calibrated. We conclude this section by showing in Figure 7 the LPBs as functions of the percentiles of the predicted risk, age, and BMI, respectively.

Discussion and extensions
6.1. Beyond Type-I censoring
In practice, censoring can be driven by multiple factors. As discussed in Leung et al. (1997), the two most common types of right censoring in a clinical study are the end-of-study censoring caused by the trial termination and the loss-to-follow-up censoring caused by unexpected attrition; see also Korn (1986) and Schemper and Smith (1996) for an account of the two types of censoring. Let $C_{\mathrm{end}}$ denote the former and $C_{\mathrm{loss}}$ the latter. By definition, $C_{\mathrm{end}}$ is observable for every patient, as long as the entry times are accurately recorded. When the event is not death (e.g., the patient's returning visit), $C_{\mathrm{loss}}$ is observable if all patients are tracked until the end of the study. However, when the event is death, $C_{\mathrm{loss}}$ can only be observed for surviving patients. This is because, for dead patients, it is impossible to know when they would have been lost to follow-up had they survived.
In survival analysis without loss-to-follow-up censoring, or time-to-event analysis with non-death events, the setting of Type I censoring considered in this paper is plausible. However, both the end-of-study and loss-to-follow-up censoring are involved in many applications (Leung et al., 1997). In these cases, the effective censoring time $C$ is the minimum of $C_{\mathrm{end}}$ and $C_{\mathrm{loss}}$, and is only observable for surviving patients, namely the patients with $T > C$. This situation prevents us from directly applying Algorithm 1 because the subpopulation with $C \ge c_0$ is not fully observed. If we instead use the subpopulation whose $C$ is 1) observed and 2) larger than or equal to a threshold $c_0$, then the joint distribution of $(X, T)$ becomes $P_{X \mid C \ge c_0, T > C} \times P_{T \mid X, C \ge c_0, T > C}$. The extra conditioning event $T > C$ induces a shift of the conditional distribution, since $P_{T \mid X, C \ge c_0, T > C} \ne P_{T \mid X, C \ge c_0}$ in general, rendering the weighted split conformal inference invalid.
Our method can nevertheless be adapted to yield meaningful inference under an additional assumption:
$$(T, C_{\mathrm{loss}}) \perp\!\!\!\perp C_{\mathrm{end}} \mid X. \qquad (7)$$
Unlike Korn (1986) and Schemper and Smith (1996), (7) does not impose any restrictions on the dependence between $T$ and $C_{\mathrm{loss}}$, which is harder to conceptualize. The assumption (7) tends to be plausible, especially when the total length of follow-up is short, since the randomness of the end-of-study censoring time often comes from the entry time of a patient, which is arguably exogenous to the survival time and attrition, at least when conditioning on a few demographic variables. There are certain cases where (7) could be violated. For example, if new treatments become available during the course of a study, subjects who enter later are different from those who enter earlier as they could have been given the alternative treatments, but were not. Let $T' = T \wedge C_{\mathrm{loss}}$ be the survival time censored merely by the loss to follow-up. Then the censored survival time $\tilde{T} = T \wedge C = T' \wedge C_{\mathrm{end}}$, and (7) implies that $T' \perp\!\!\!\perp C_{\mathrm{end}} \mid X$, an analogue of the conditionally independent censoring assumption (1). Since $C_{\mathrm{end}}$ is observed for every patient, Algorithm 1 can be applied to produce an LPB $\hat{L}(\cdot)$ such that
$$\mathbb{P}\big(T' \ge \hat{L}(X)\big) \ge 1 - \alpha \;\Longrightarrow\; \mathbb{P}\big(T \ge \hat{L}(X)\big) \ge 1 - \alpha.$$
In Section D.5 of the Appendix, we provide an additional simulation illustrating the result of our method in this setting. An observation in conjunction with this line of reasoning is that, unlike most survival analysis techniques, our method distinguishes two sources of censoring and takes advantage of the censoring mechanism itself. It can be regarded as a building block to remove the adverse effect of C end . It remains an interesting question whether the censoring issue induced by C loss can be resolved or alleviated in this context.

Sharper coverage criteria
It is more desirable to achieve a stronger conditional coverage criterion:
$$\mathbb{P}\big(T \ge \hat{L}(X) \mid X = x\big) \ge 1 - \alpha \quad \text{for (almost) all } x, \qquad (8)$$
which states that $\hat{L}(X)$ is a conditionally calibrated LPB. Clearly, (8) implies valid marginal coverage. Theorems 2 and 3 show that the CQR- and CDR-LPBs are approximately conditionally calibrated if the conditional quantiles are estimated well. However, without distributional assumptions, we can show that (8) can only be achieved by trivial LPBs.
Theorem 5. Assume that $X \in \mathbb{R}^p$ and $C \ge 0$, $T \ge 0$. Let $P_{(X, C)}$ be any given distribution of $(X, C)$. If $\hat{L}(\cdot)$ satisfies (8) uniformly for all joint distributions of $(X, C, T)$ with $(X, C) \sim P_{(X, C)}$, then for all such distributions, $\mathbb{P}\big(\hat{L}(x) \le 0\big) \ge 1 - \alpha$ (where the probability is over the training data) at almost all points $x$ aside from the atoms of $P_X$.
Theorem 5 implies that no nontrivial LPB exists even if the distribution of (X, C) is known. Put another way, it is impossible to achieve desired conditional coverage while being agnostic to the conditional survival function. This impossibility result is inspired by previous works on uncensored outcomes and two-sided intervals (Vovk, 2012; Barber et al., 2019a).
Mondrian conformal inference allows the subgroups to also depend on the outcome; see Vovk et al. (2005), which refers to the rule of forming subgroups as a "taxonomy." Besides, the subgroups can also be overlapping; see Barber et al. (2019a). Following their techniques, we can extend Mondrian conformal inference to our case by modifying the calibration term η(x) (in Algorithm 1): Suppose X 1 and X 2 correspond to male and female subpopulations. Then η(x) is a function of both the testing point x and the gender. That said, estimation of censoring mechanisms and conditional survival functions can still depend on the whole training fold Z tr as joint training may be more powerful than separate training on each subpopulation (Romano et al., 2019a). When the censoring mechanism is known, we can prove that By the conditionally independent censoring assumption, the target distribution in the localized criterion (10) for a given k can be rewritten as The covariate shift between the observed and target distributions is .
This justifies the calibration term (9) in the weighted Mondrian conformal inference. Since the weighted Mondrian conformal inference is a special case of Algorithm 1, it also enjoys the double robustness property, implied by Theorem B.4 in Section B in the Appendix.

Survival counterfactual prediction
The proposed method in this paper is designed for a single cohort. In practice, patients are often exposed to multiple conditions, and the goal is to predict the counterfactual survival times had the cohort been exposed to a different condition. For example, a clinical study typically involves a treatment group and a control group. For a new patient, it is of interest to predict her survival time had she been assigned the treatment. For uncensored outcomes, Lei and Candès (2021) proposed a method based on weighted conformal inference for counterfactual prediction under the potential outcome framework (Neyman, 1923/1990; Rubin, 1974). We can extend their strategy to handle censored outcomes and apply it to survival counterfactual prediction.
Suppose each patient has a pair of potential survival times (T (1), T (0)), where T (1) (resp. T (0)) denotes the survival time had the patient been assigned into the treatment (resp. control) group. Our goal is to construct a calibrated LPB on T (1), given i.i.d. observations with W i denoting the treatment assignment and Without further assumptions on the correlation structures between T (1) and T (0), it is natural to conduct inference based on the observed treated group since the control group contains no information about T (1). The joint distribution of (X, T (1) ∧ c 0 ) on this group becomes Under the assumption that (T (1), T (0)) |= (W, C) | X, the conditional distribution of T (1) ∧ c 0 matches the target: The assumption is a combination of the strong ignorability assumption (Rubin, 1978), a widely accepted starting point in causal inference, and the conditionally independent censoring assumption. The density ratio of the two covariate distributions can be characterized by .
In many applications, it is plausible to further assume that $C \perp\!\!\!\perp W \mid X$. In this case,
$$\mathbb{P}\big(C \ge c_0,\ W = 1 \mid X = x\big) = \mathbb{P}\big(C \ge c_0 \mid X = x\big)\,\mathbb{P}\big(W = 1 \mid X = x\big),$$
where the first term is the censoring mechanism and the second term is the propensity score (Rosenbaum and Rubin, 1983). Therefore, we can obtain calibrated LPBs on counterfactual survival times if both the censoring mechanism and the propensity score are known, an assumption that is often plausible for randomized clinical trials. Furthermore, the procedure has a doubly robust guarantee of coverage similar to Theorems 2 and 3.
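Under this factorization, the weight attached to a treated unit is simply the product of the two inverse probabilities. A sketch, assuming estimates chat(x) of the censoring mechanism and ehat(x) of the propensity score are available (both function names are ours):

```r
# Covariate-shift weight for the treated subgroup in the counterfactual extension:
# w(x) proportional to 1 / { P(C >= c0 | X = x) * P(W = 1 | X = x) }.
counterfactual_weight <- function(x, chat, ehat) {
  1 / (chat(x) * ehat(x))   # chat: censoring mechanism, ehat: propensity score
}
```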
A.1. Proof of Theorem 1
As a result, if we treat $T \perp\!\!\!\perp C \mid X$ as a null hypothesis, $\phi(Z_1, \ldots, Z_{n+1})$ is an $\alpha$-level test. Note that $X$ is continuous, and $(T, C)$ are continuous or discrete. By Theorem 2 and Remark 4 of Shah et al. (2020), for any joint distribution $Q$ of $Z$ with the same continuity conditions on $(X, C, T)$, Clearly, $X$ is absolutely continuous with respect to the Lebesgue measure and $T$, $C$ are absolutely continuous with respect to the Lebesgue measure or the counting measure. By (A.1) and the definition of $\phi$, we have $\mathbb{P}\big(T_{n+1} \ge \hat{L}_n(Z_1, \ldots, Z_n; X_{n+1})\big) \ge 1 - \alpha$.
The proof is then completed by replacing (T n+1 , X n+1 ) with (T, X).

A.2. Proof of Theorem 5
We prove the theorem by modifying the proof of Proposition 4 from Vovk (2012). To avoid confusion, we expandL(x) intoL(Z 1 , . . . , Z n ; x) where Z i = (X i , C i , T i ). Fix any distribution P with (X, C) ∼ P (X,C) and C ≥ 0, T ≥ 0 almost surely. Suppose there exists a set V of P X -non-atom x such that P X (V) > 0, and for any x ∈ V, ∼ P (L(Z 1 , . . . , Z n , x) > 0) > α. Since V only includes non-atom x's, there exists t 0 > 0 and δ > 0 such that We can further shrink V so that Fix any t 1 ∈ (0, t 0 ). Define a new probability distribution Q on (X, C, T ) with (X, C) ∼ P (X,C) and the regular conditional probability where δ t1 defines the point mass on t 1 . Let d TV denote the total-variation distance. Then, Using the tensorization inequality for the total-variation distance (see e.g., Tsybakov (2008), Section 2.4) and (A.3), we obtain that d TV (P n , Q n ) ≤ 2 − 2(1 − d TV (P, Q)) n ≤ δ/2.
Together with (A.2), this implies that Let Z = (X, C, T ) be an independent draw from Q. The above inequality can be reformulated as P By definition of Q, T = t 1 < t 0 almost surely conditional on X ∈ V. Thus, ∼ Q (T <L(X), X ∈ V) > (α + δ/2)Q X (V).
On the other hand, since Q is a distribution with the same marginal distribution of (X, C) and T ≥ 0 almost surely, for any x, Marginalizing over x ∈ V, it implies that This contradicts (A.4) since Q X (V) = P X (V) > 0. The theorem is proved by contradiction.

B. Double robustness of weighted conformal inference
Throughout this section, we will focus on the generic weighted conformal inference sketched in Algorithm 2 below.
Algorithm 2: generic weighted split conformal inference Input: level α; data Z = (X i , Y i ) i∈I ; testing point x; function V (x, y; D) to compute the conformity score between (x, y) and data D; functionŵ(x; D) to fit the weight function at x using D as data.
Procedure:
1. Split $Z$ into a training fold $Z_{\mathrm{tr}} = (X_i, Y_i)_{i \in I_{\mathrm{tr}}}$ and a calibration fold $Z_{\mathrm{ca}} = (X_i, Y_i)_{i \in I_{\mathrm{ca}}}$.
Output: $\hat{L}(x) = \inf\{y : V(x, y; Z_{\mathrm{tr}}) \le \eta(x)\}$

B.1. Conformity score via nested sets
Gupta et al. (2019) introduced a broad class of conformity scores characterized by nested sets. Suppose we have a totally ordered index set $S$ (e.g., $\mathbb{R}$) and a sequence of nested sets $\{F_s(x; D) : s \in S\}$ in the sense that $F_{s_1}(x; D) \subset F_{s_2}(x; D)$ for any $s_1 \le s_2 \in S$. Define a score as the index of the minimal set that includes $y$, i.e.
Without loss of generality, we assume throughout that F inf S (x) is the empty set and F sup S (x) is the full domain of Y . The CMR-, CQR-, and CDR-based scores are instances of this: We refer the readers to Table 1

B.2. Nonasymptotic theory for weighted conformal inference
In this section, we establish nonasymptotic bounds for the coverage which would imply the asymptotic results (e.g., Theorem 2 and Theorem 3), formally proved in Section C. The first result is identical to Theorem A.1 of Lei and Candès (2021).
∼ (X, Y ) ∼ P X × P Y |X and Q X be another distribution on the domain of X. Set N = |Z tr | and n = |Z ca |. Further, letŵ(x) =ŵ(x; Z tr ) be an estimate of w(x) = (dQ X /dP X )(x), andĈ(x) be the conformal interval resulting from Algorithm 2 with an arbitrary conformity score (not necessarily the ones defined in Section B.1). As- The second result generalizes Theorem A.2 of Lei and Candès (2021).
The next theorem shows that the conformal prediction interval is approximately the oracle interval when the outcome model is well estimated.

B.3. Proof of Theorem B.2
We first state a technical lemma whose proof is deferred to Section B.5.
Lemma B.1. Under Assumption C1, there exists a constant A > 0 that only depends on δ and M , such that where δ = min{δ, 1}. In particular, A depends on M polynomially.
Throughout the proof, we treat Z tr , henceŵ(·) andF s (·; Z tr ), as fixed. We shall prove the Z tr -conditional versions of (B.1) and (B.2) (with P(·) and E[·] replaced by P(· | Z tr ) and E[· | Z tr ]). The results then follow by the law of iterated expectations and the Hölder's inequality.
By definition, w(X) is almost surely finite under P X . Assumption C1 implies that w(X) is almost surely finite under Q X andŵ(X) is almost surely finite under P X . As a result, for any measurable function f , In addition, the assumption E X∼P X [ŵ(X) | Z tr ] < ∞ implies that P X∼P X (ŵ(X) < ∞) = 1. By (B.3), Thus,ŵ(X) is almost surely finite under Q X . Let ε < r/3 and (X,Ỹ ) denote a generic random vector drawn from Q X × P Y |X , which is independent of the data. Then (1) is due to the definition of ∆(x), and step (2) follows from Assumption C2 (i). Next, we derive an upper bound on P(η(X) < s 0 − ε |X). Let G denote the cumulative distribution function of the random distribution n i=1p i (X)δ V i +p ∞ (X)δ ∞ . Again, G implicitly depends on N , n andX. Then η(X) < s 0 − ε implies G(s 0 − ε) ≥ 1 − α, and thus, For any t > 0, the triangle inequality implies that To bound the first term, we note that For any t > 0, Let γ n be any fixed sequence with γ n = O(1). Taking expectation over D \ {X}, we obtain that where (1) uses the fact thatX is independent of (ŵ(X i )) n i=1 . Note that this bound holds uniformly withX. Throughout the rest of the proof, we write a 1,n a 2,n if there exists a constant B that only depends on r, b 1 , b 2 , δ, M, k, and, in particular, on M polynomially such that a 1,n ≤ Ba 2,n for all n.
Together with (B.4), it implies that almost surely. Assume B ≥ 3b 2 without loss of generality. Then For any β ∈ (0, 1), let (B.10) and by Markov's inequality and (B.3), Furthermore, Assumption C2 (ii) implies that ε n ≤ r/2 when N ≥ N (r) and n ≥ n(r) for some constants N (r), n(r) that only depend on r. Replacing B by 3B, we obtain that, for N ≥ N (r), n ≥ n(r), We can further enlarge B so that Bε n ≥ 1 − α when N < N (r) or n < n(r), in which case (B.2) trivially holds.

B.4. Proof of Theorem B.3
Let G denote the cumulative distribution function of the random distribution Σ_{i=1}^n p̂_i(X̃) δ_{V_i} + p̂_∞(X̃) δ_∞, and let G* denote the expectation of G conditional on D = {(X_i)_{i=1}^n, X̃}. Clearly, for any ε > 0, [...]. Then, for any t > 0, the triangle inequality implies that [...]. Throughout the rest of the proof, we set t = b_1 ε/16.

Recall that [...]. Thus, on the event {η(X̃) ≤ s_0 + ε} ∩ {∆(X̃) ≤ ε}, which has probability at least 1 − β, [...]. The proof is then completed by noting that [...].

B.5. Proof of Lemma B.1

The result is almost the same as a step in the proof of Theorem A.1 in Lei and Candès (2021). We repeat the proof below for the sake of completeness.
We start with the following two Rosenthal-type inequalities for sums of independent random variables with finite (1 + δ)-th moments.
B.6. Asymptotic theory for weighted conformal inference

As a direct consequence of Theorem B.1, Theorem B.2, and Theorem B.3, we can derive the following asymptotic results.
Theorem B.4. Let the data points (X_i, Y_i) be i.i.d. draws from (X, Y) ∼ P_X × P_{Y|X}, and let Q_X be another distribution on the domain of X. Set N = |Z_tr| and n = |Z_ca|. Further, let {F_s(x; D) : s ∈ S} be any sequence of nested sets, let ŵ(x) = ŵ(x; Z_tr) be an estimate of w(x) = (dQ_X/dP_X)(x), and let Ĉ(x) be the resulting conformal interval from Algorithm 2. Assume that E[ŵ(X) | Z_tr] = 1 and E[w(X)] = 1. Assume that either B1 or B2 (or both) holds: [...]. Furthermore, under B2, for any ε > 0, lim_{N,n→∞} [...].

Theorem B.5. In the setting of Theorem B.4, assume further that Assumptions C'1 and C'2 in Theorem B.3 hold. Then, for any ε > 0, [...].

C.1. Proof of Theorem 2
Let ξ = min{r, ε/b_2}. Since L̂(x) ≤ c_0 for any x, [...], where the first case follows from the definition of q_α(x) and the second case from condition A2 (i). Thus, whenever q_α(x) ≥ c_0 − ξ, [...]. As a result, [...]. Since ε is arbitrary, it remains to prove lim_{N,n→∞} [...]. The results to be proved are equivalent to Theorem 2 with r replaced by ξ and the additional assumption that q_α(X) < c_0 − r almost surely under P_X and Q_X. The latter holds because w(X) < ∞ almost surely under P_X. Throughout the rest of the proof, we will assume (C.1).

C.2. Proof of Theorem 3
As in the proof of Theorem 2, we first show that it suffices to prove the result when q_{α+r}(x; c_0) < c_0 almost surely. Let ξ = min{r, ε}. Since L̂(x) ≤ c_0 for any x, [...]. Then condition (i) implies [...]. As a result, [...]. Since ε is arbitrary, it remains to prove lim_{N,n→∞} [...]. The results to be proved are equivalent to Theorem 3 with r replaced by ξ and the additional assumption that q_{α+r}(X) < c_0 almost surely under P_X and Q_X. The latter holds because w(X) < ∞ almost surely under P_X. Throughout the rest of the proof, we will assume (C.2).
Using the same argument as in the proof of Theorem 2, it suffices to show that Assumption A2 from Theorem 3 implies Assumption C2. Let [...]. Clearly, Assumption A2 (i) implies Assumption C2 (i) with s_0 = 0.

C.3. Adaptivity of conformalized survival analysis: asymptotic results
Following the steps of the proofs of Theorem 2 and Theorem 3, we can show that ∆_s(X) ≤ E(X). By Theorem B.3 with k = ℓ = 1, for CQR, [...], and for CDR, lim [...]. Theorem 4 is then proved.

Next, we discuss the more involved case where c_0 grows with n. Intuitively, as c_0 approaches the upper endpoint of the domain of T, both the CQR and CDR lower predictive bounds should approach the oracle α-th conditional quantile of T. Nevertheless, we cannot invoke Theorem B.4 directly, because 1/P(C ≥ c_0 | X) would diverge as c_0 grows. However, the nonasymptotic result provides sufficient conditions under which L̂(X) converges to q_α(X) with high probability under Q_X. We say a sequence a_n = O(polyLog(n)) if and only if there exists γ > 0 such that a_n = O((log n)^γ).

[...] (C.4)
(a) For CQR, assume further that (i) there exist b_1, r > 0 such that, for any ε ∈ [0, r], [...]. Then, for any ε > 0, lim [...].

(b) For CDR, assume further that (i) there exists r > 0 such that, for any ε ∈ [0, r], [...] and some estimate q̂_s(x) of the s-th conditional quantile of T; [...] (iii) for some γ, ω > 0, [...]. Then, for any ε > 0, lim [...].

Remark 3. Condition (C.3) implies that the essential supremum of C must be at least as large as the essential supremum of q_α(X). This is clearly a necessary condition, since otherwise q_α(x) would be strictly larger than L̂(x) for a nonnegligible fraction of units.
Since z → z ∧ c_0 is a contraction and F(· | x) is non-decreasing, it is easy to show, for both CQR and CDR, that [...], where E(x) is defined in Theorem 2 for CQR and in Theorem 3 for CDR. Using the same arguments as in the proofs of Theorem 2 and Theorem 3, we can show that ∆(x) ≤ E(x).

For any s > 0, [...]. For any s > 0, [...]. Thus, we have shown that all assumptions of Theorem B.3 are satisfied. Let β = 1/log n. Since B_3 depends on M polynomially, as well as on the other constants, (C.5) implies that [...]. By Theorem B.3, [...]. Since lim_{n→∞} c_{0,n} ≥ q_α(X) almost surely under Q_X, for any s ≥ 0, [...], and P_{X∼Q_X}(lim_{n→∞} q_{α−s}(X) ∧ c_{0,n} ≥ q_{α−s}(X)) = 1.
The proof is then completed by plugging in O s (x) for CQR and CDR, respectively.

C.4. When is CMR-LPB doubly robust?
For CMR, a natural oracle nested set is given by [...]. However, Assumption B2 (b) (i) with ε = 0 requires the existence of s_0 such that [...]. This implies that, for some s_0, [...]. The above equality does not hold in general. One exception is the additive case with homoscedastic errors: [...]. In this case, we can derive the double robustness of the CMR-LPB based on Theorem B.4.
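To make the homoscedastic exception explicit, here is a hedged sketch in our own notation (the conditional mean is written m(x); the displays below are our reconstruction, not reproduced from the paper). For the CMR score one may take the nested family
\[
\mathcal{F}_s(x) = [\, m(x) - s,\ \infty ),
\]
so the requirement that some s_0 satisfies F_{s_0}(x) = [q_α(x), ∞) for all x amounts to m(x) − q_α(x) ≡ s_0. Under the assumed additive model
\[
T = m(X) + \epsilon, \qquad \epsilon \perp X,
\]
we have q_α(T | X = x) = m(x) + q_α(ε), so the requirement holds with s_0 = −q_α(ε); with heteroscedastic errors, m(x) − q_α(x) generally varies with x and no such s_0 exists.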

D.1. Additional details for the selection of c 0
We consider two cases.
• When there is prior information on the censoring mechanism: for example, suppose the researcher knows that there exists c̄ such that P(C ≥ c̄ | X) ≥ η almost surely, for some η > 0. With any consistent estimator c̃(·) of c(·), we can again consider the truncated estimator ĉ(x) = c̃(x) ∨ (η/2), and for any ε > 0, [...]. The above implies that Assumption A1 is satisfied.
• When there is no available prior information on such a c̄, researchers can instead use the data to determine c̄. Suppose the covariate X takes values in a finite set {x_1, x_2, . . . , x_m}, where P(X = x_l) > 0 for every l ∈ [m]. Let ∆ = min_{l∈[m]} P(X = x_l). Further divide Z_tr into two disjoint folds Z¹_tr and Z²_tr; the upper bound on c_0 can then be obtained via [...]. Note that c̄ ≥ 0 for any η < 0.5. We then claim that the event A := {P(C ≥ c̄ | X = x_l, Z¹_tr) ≥ η, for all l ∈ [m]} occurs with probability at least 1 − m exp(−η² |Z¹_tr| ∆) − m exp(−∆² |Z¹_tr| / 2). To prove this claim, we assume for simplicity that the censoring time has a continuous distribution; the result extends to the general case with a similar but slightly more complicated argument. Fix η > 0. For any l ∈ [m], define [...]. By the continuity of C, we have P(C ≥ c*_l | X = x_l) = η for all l ∈ [m]. Next, [...] ≤ m · E[exp(−2η² · |Z^{1,l}_tr|)], where |Z^{1,l}_tr| = Σ_{i∈Z¹_tr} 1{X_i = x_l} and step (1) is due to Hoeffding's inequality. Finally, we have E[exp(−2η² · |Z^{1,l}_tr|)] = E[exp(−2η² · |Z^{1,l}_tr|) · 1{|Z^{1,l}_tr| ≥ [...]}] + [...], where the last line applies Hoeffding's inequality again. This completes the proof of the lower bound on P(A). Now, for any c_0 ≤ c̄ and any consistent estimator c̃(·), the truncated estimator ĉ(x) = c̃(x) ∨ (η/2) satisfies, for any ε > 0, [...]. Above, step (a) holds on the event A, and step (b) is due to the lower bound on P(A) proved previously. The above quantity goes to zero as N, n → ∞. Hence we arrive at a condition slightly weaker than Assumption A1, which implies a slightly weaker coverage guarantee: for any ε > 0, [...].

D.2. Selecting c_0 based on the calibration set

Throughout this section, we only consider the case where ŵ(·) = w(·) is known. Without loss of generality, assume I_ca = {1, . . . , n}. Let ĉ_0 be any choice of c_0 that potentially depends on both Z_tr and Z_ca. In particular, we choose ĉ_0 by maximizing the average lower prediction bound on Z_ca over a candidate set C ⊂ R_+, i.e., [...] (D.1), where L̂_{c_0}(X_i) denotes the lower prediction bound with threshold c_0. We shall prove that L̂_{ĉ_0}(X) is approximately valid under regularity conditions on C and the conformity score.
Theorem D.1. If |C| < ∞ and E[w(X_i)^r] < ∞ for some r > 2, then, as n → ∞, [...].

Theorem D.2. Assume C = R_+ and E[w(X_i)^r] < ∞ for some r > 4. If there exists a positive integer M such that, for any (x_1, t̃_1) and (x_2, t̃_2), both {c_0 ∈ R_+ : V(x_1, t̃_1; c_0) > V(x_2, t̃_2; c_0)} and {c_0 ∈ R_+ : V(x_1, t̃_1; c_0) < V(x_2, t̃_2; c_0)} are unions of at most M intervals, then, as n → ∞, [...].

[...] is an estimate of the (1 − α)-th quantile of T using survival estimation techniques, then [...]. Clearly, V(x, t̃; c_0) is a piecewise linear function whose first and third pieces are the constants 0 and q̂_α(x) − t̃, respectively, connected by a linear piece with slope sign(q̂_α(x) − t̃). Thus, the condition of Theorem D.2 holds with M = 1.
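For a finite candidate set, the selection rule behind (D.1) can be sketched in a few lines of Python. The interface is ours: lpb_fn(c0, x) stands for whatever routine returns the lower predictive bound L̂_{c0}(x) for threshold c0, and the grid of candidates is assumed given.

```python
import numpy as np

def select_c0(candidates, lpb_fn, X_cal):
    """Choose c0 from a finite candidate set by maximizing the average lower
    predictive bound over the calibration covariates, in the spirit of (D.1)."""
    avg_lpb = [np.mean([lpb_fn(c0, x) for x in X_cal]) for c0 in candidates]
    return candidates[int(np.argmax(avg_lpb))]
```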

D.2.1. Proof of Theorems D.1 and D.2
We start with the following lemma that bounds the weighted empirical process associated with the weighted conformal inference procedure indexed by c 0 and v.
Lemma D.1. Let V(x, y; c_0) be any conformity score that depends on c_0 and Z_tr. Further let [...], where N(A(C)) denotes the shattering number of A(C), i.e., [...]. The proof is lengthy and deferred to Section D.2.2. Next, we prove a lemma relating the empirical process bound to the coverage.
Lemma D.2. Let ĉ_0 be selected from C based solely on Z_tr and Z_ca (e.g., via (D.1)). If [...], then [...].

Proof. Let η_{c_0}(x) denote the cutoff for the conformity score defined in Algorithm 1 that corresponds to the threshold c_0, and let η_{c_0} = Quantile(1 − α; [...]).
Proof (Theorem D.1). [...] The result is then implied by the first bound in Lemma D.1 and Lemma D.2.
Proof (Theorem D.2). For any subset S ⊂ {1, . . . , n}, let e_S denote the binary vector whose k-th entry equals 1 if and only if k ∈ S. Given (x_i, t̃_i)_{i=1}^n, let C_{0,jk} denote the set of all boundary points of {c_0 ∈ R_+ : V(x_j, t̃_j; c_0) > V(x_k, t̃_k; c_0)}, and let C_0 denote the union of all the C_{0,jk}'s, allowing the same value to appear multiple times. Then |C_0| ≤ 2M n² = O(n²).
On the other hand, sup_{c_0∈C, v∈R} E[w(X) I(V(X, T̃; c_0) ≤ v)] [...]. By the triangle inequality, [...].

D.3. Coverage results under complete independent censoring
Suppose now that the weights W_i = 1/ĉ(X_i) are non-decreasing in the conformity scores V_i = V(X_i, T̃_i; c_0), in the sense that [...] for any i, j ∈ I_ca ∪ {n + 1}. The predictive interval given by our algorithm is [...].

D.4. Time complexity analysis
We consider the following cases separately:

• The censoring mechanism is known and the threshold c_0 is determined a priori: the time complexity of the conformalized method can be decomposed into TC(conformalized survival analysis) = TC(fitting the quantile of T) + negligible cost to compute the conformity scores and the weighted empirical quantile.
• The censoring mechanism is unknown and the threshold c 0 is determined a priori: the time complexity can be decomposed into TC(conformalized survival analysis) = TC(fitting the quantile of T ) + TC(fitting P(C ≥ c 0 | X)) + negligible cost to compute the conformity scores and weighted empirical quantile.
D.5. An illustrative simulation in the presence of additional censoring (Section 6.1)

To evaluate the validity and efficiency of our method under the more general setting discussed in Section 6.1, we set up our simulation as follows: the covariate X ∼ U(0, 4) and the survival time T satisfies log T | X ∼ N(µ(X), σ²(X)), where µ(x) = 2 + 0.37·√x and σ(x) = 1 + x/5. There are two censoring times: the end-of-study censoring time C_end ∼ E(0.4), and the loss-to-follow-up censoring time C_loss, which is generated from the following model: log C_loss | X ∼ N(2 + 0.05 log T + 0.09·(X − 2)(X − 3)(X − 4), 1).
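The data-generating process above can be sketched as follows in Python. This is our own illustrative reconstruction: in particular, reading E(0.4) as an exponential distribution with rate 0.4 is an assumption, as are the function and variable names.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_data(n):
    """Synthetic data for the additional-censoring experiment in Section D.5.
    Parameterizations not pinned down by the text (e.g., rate vs. mean for
    E(0.4)) are assumptions of this sketch."""
    X = rng.uniform(0, 4, n)
    mu = 2 + 0.37 * np.sqrt(X)
    sigma = 1 + X / 5
    T = np.exp(rng.normal(mu, sigma))               # log T | X ~ N(mu(X), sigma^2(X))
    C_end = rng.exponential(scale=1 / 0.4, size=n)  # end-of-study censoring, assumed rate 0.4
    mu_loss = 2 + 0.05 * np.log(T) + 0.09 * (X - 2) * (X - 3) * (X - 4)
    C_loss = np.exp(rng.normal(mu_loss, 1))         # depends on T even given X
    observed_time = np.minimum(T, np.minimum(C_end, C_loss))
    return X, observed_time, C_end                  # observable data (X, T ∧ C_end ∧ C_loss, C_end)
```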
Clearly, C_loss is not independent of T even conditional on X, but Assumption 7 is satisfied in this example. We then apply our method to the observable data (X, T ∧ C_end ∧ C_loss, C_end) with a target level of 90%; the implementation details are the same as in Section 4. Figure 8 plots the empirical coverage of T and T ∧ C_loss for three variants of our proposed method and for the naive CQR method. The naive CQR method is very conservative, showing almost 100% coverage for both T and T ∧ C_loss. The three variants of the proposed method are all less conservative than the naive CQR since they are able to remove the censoring from C_end: both CDR-conTree and CQR-cRF achieve exact coverage for T ∧ C_loss, while the coverage for T is higher than the target level, the conservativeness coming from the censoring by C_loss. Figure 9 further plots the empirical conditional coverage of T and T ∧ C_loss as functions of Var(T | X). The three variants of our method are less conservative than the naive CQR; CDR-conTree and CQR-cRF achieve approximate conditional coverage. Figure 10 plots the ratio of the LPB to the theoretical conditional quantile for the three variants of our proposed method and the naive CQR method, where we again see that the naive CQR provides non-informative lower bounds.