Abstract

Motivation

Combining the results of different experiments to exhibit complex patterns or to improve statistical power is a typical aim of data integration. The starting point of the statistical analysis often comes as a set of P-values resulting from previous analyses, which need to be combined flexibly to explore complex hypotheses, while guaranteeing a low proportion of false discoveries.

Results

We introduce the generic concept of composed hypothesis, which corresponds to an arbitrary complex combination of simple hypotheses. We rephrase the problem of testing a composed hypothesis as a classification task and show that finding items for which the composed null hypothesis is rejected boils down to fitting a mixture model and classifying the items according to their posterior probabilities. We show that inference can be efficiently performed and provide a thresholding rule to control the type I error rate. The performance and the usefulness of the approach are illustrated in simulations and on two different applications. The method is scalable, does not require any parameter tuning, and provides valuable biological insight on the considered application cases.

Availability and implementation

The QCH methodology is available in the qch package hosted on CRAN. Additionally, R codes to reproduce the Einkorn example are available on the personal webpage of the first author: https://www6.inrae.fr/mia-paris/Equipes/Membres/Tristan-Mary-Huard.

Supplementary information

Supplementary data are available at Bioinformatics online.

1 Introduction

1.1 Combining analyses

Since the beginning of the omics era, it has been common practice to compare and intersect lists of P-values, where all lists describe the same items (say genes) whose differential activity was tested and compared in different conditions or using different omics technologies (Irizarry et al., 2005; Natarajan et al., 2012). One may e.g. consider (i) genes whose expression has been measured in Q different types of tissues (and compared to a reference), (ii) genes whose expression has been measured in the same tissue at Q different timepoints (compared to baseline), or (iii) genes whose activity has been investigated through Q omics, such as expression, methylation, and copy number. The goal of the post-analysis is then to identify items that have been perturbed either in all conditions or in a predefined subset of them. Finding such genes by integrating information from multiple data sources may be a first step towards understanding the underlying processes at stake in the response to a treatment, or towards inferring the underlying regulatory network (Das et al., 2019; Xiong et al., 2012).

1.2 List crossing

One simple way to cross sets of P-values is to apply a multiple testing procedure separately to each list, identify the subsets of items for which the H0 hypothesis is rejected, then intersect these subsets. The graphical counterpart of this strategy is the popular Venn diagram that can be found in numerous articles (see e.g. Conway et al., 2017 and references therein). Alternatively, a lot of attention has been dedicated to the consensus ranking problem, where one aims at combining multiple ranked lists into a single one that is expected to be more reliable; see Li et al. (2019) for an extensive review. However, neither intersecting nor consensus ranking relies on an explicit biological hypothesis. Moreover, apart from rare exceptions such as Natarajan et al. (2012), in most cases the final set of identified items comes with no statistical guarantee regarding false positive control.

1.3 Composed hypotheses

The primary objective of the present article is to provide a statistical framework suitable for answering complex queries, hereafter called composed hypotheses, such as 'which genes are expressed in a (pre-defined) subset of conditions?'. This corresponds to a complex query because it cannot be answered through standard testing procedures. A classical example of a composed hypothesis is the so-called Intersection-Union Test (IUT) (Berger and Hsu, 1996), where one aims at finding the items for which all Q tested hypotheses should be rejected, e.g. genes declared differentially expressed in all treatment comparisons.

Testing composed hypotheses on a large set of items requires (i) the identification of a proper test statistic to rank the items based on their P-value profiles and (ii) a thresholding rule that provides guarantees about type I error rate control. Although different methods have been proposed to build the test statistic and to control the type I error rate in the specific context of the IUT problem (Deng et al., 2008; Tuke et al., 2008; Van Deun et al., 2009), no generic framework has emerged to easily handle any arbitrary composed hypothesis so far.

1.4 Contribution

We propose to formulate the composed hypothesis testing problem as an inference task in a mixture model, where the observations correspond to items and the components correspond to classes characterized by a combination of H0 and H1 hypotheses. The strategy we propose is efficient in many ways. First, it comes with a natural way to rank the items and to control the type I error rate, through the posterior probabilities and their local False Discovery Rate (FDR) interpretation. Second, we show that, under mild conditions on the conditional distributions, inference can be efficiently performed on a large collection of items (up to several million in our simulations) within seconds, making the method amenable to applications such as meta-analysis of genome-wide association studies. The method consists of three independent steps: first, fit a non-parametric mixture model on each marginal distribution; then, estimate the proportions of the joint mixture model; finally, query the composed hypothesis. Importantly, the first two fitting steps are performed only once and can then be used to query any number of composed hypotheses without additional computational burden. Lastly, using both simulated and real genomic data, we illustrate the performance of the strategy (in terms of FDR control and detection power) and its richness in terms of applications. In particular, for the classical IUT problem, the detection power is shown to improve with the number of P-value sets, reaching almost 100% in some cases where classical list crossing would be almost inefficient.

2 Model

2.1 Composed hypothesis

We consider the test of a composed hypothesis relying on Q different tests. More specifically, we denote by $H_0^q$ (respectively $H_1^q$) the null (respectively alternative) hypothesis corresponding to test q ($1 \le q \le Q$), and consider the set $\mathcal{C} := \{0,1\}^Q$ of all possible combinations of null and alternative hypotheses among the Q. We call configuration any element $c := (c_1, \dots, c_Q) \in \mathcal{C}$; there exist $|\mathcal{C}| = 2^Q$ such configurations. For a given configuration c, we define the joint hypothesis $\mathcal{H}_c$ as

$$\mathcal{H}_c : \bigcap_{q=1}^{Q} H_{c_q}^q \text{ holds.}$$

Considering Q = 3 tests and the configuration $c = (0,1,1)$, $\mathcal{H}_c$ states that $H_0^1$ holds, but neither $H_0^2$ nor $H_0^3$ does.

Now, considering two complementary subsets $\mathcal{C}_0$ and $\mathcal{C}_1$ of $\mathcal{C}$ (i.e. $\mathcal{C}_0 \cap \mathcal{C}_1 = \emptyset$ and $\mathcal{C}_0 \cup \mathcal{C}_1 = \mathcal{C}$), we define the composed null and alternative hypotheses $\mathcal{H}_0$ and $\mathcal{H}_1$ as

$$\mathcal{H}_0 : \bigcup_{c \in \mathcal{C}_0} \mathcal{H}_c, \qquad \mathcal{H}_1 : \bigcup_{c \in \mathcal{C}_1} \mathcal{H}_c.$$

As an example, in the case where Q = 3, the Intersection Union Test corresponds to the case where $\mathcal{C}_1$ reduces to the configuration $c_{\max} = (1,1,1)$ and $\mathcal{C}_0$ to the union of all others: $\mathcal{C}_0 = \mathcal{C} \setminus \{c_{\max}\}$. Alternatively, if one aims at detecting items such that the hypothesis $\mathcal{H}_0 = \{$fewer than two $H_1$ hypotheses hold among the three$\}$ is rejected, then one can define $\mathcal{C}_1 = \{(1,1,0), (1,0,1), (0,1,1), (1,1,1)\}$.

In the sequel, we consider an experiment involving n items (e.g. genes or markers), for which one wants to test $\mathcal{H}_0$ versus $\mathcal{H}_1$. We will denote by $\mathcal{H}_{0i}$ the composed null hypothesis for item i ($1 \le i \le n$), and similarly for $H_{0i}^q$, $\mathcal{H}_{ic}$ and so on.
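To fix ideas, the configurations and the sets $\mathcal{C}_0$, $\mathcal{C}_1$ are easy to enumerate programmatically. The following R sketch (ours, not the qch package API) builds $\mathcal{C}$ for Q = 3 and the two example sets $\mathcal{C}_1$ above:

    # Minimal R sketch: enumerate the 2^Q configurations and define C1
    # for the two composed hypotheses discussed above.
    Q <- 3
    configs <- as.matrix(expand.grid(rep(list(0:1), Q)))  # one row per c in C

    # IUT: C1 reduces to the all-one configuration c_max = (1,1,1)
    C1.iut <- which(rowSums(configs) == Q)

    # 'At least two H1 among the three': C1 = {(1,1,0),(1,0,1),(0,1,1),(1,1,1)}
    C1.atleast2 <- which(rowSums(configs) >= 2)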

2.2 Joint mixture model

Assume that Q tests have been performed on each of the n items, and denote by $P_{iq}$ the P-value obtained for test q on item i. We write $P_i := (P_{i1}, \dots, P_{iQ})$ for the P-value profile of item i. Let us further define $Z_{iq}$, the binary variable being 0 if $H_{0i}^q$ holds and 1 if $H_{1i}^q$ holds. A vector $Z_i := (Z_{i1}, \dots, Z_{iQ}) \in \mathcal{C}$ is hence associated with each item. Assuming that the items are independent, each P-value profile arises from a mixture model with $2^Q$ components, defined as follows:

  • the vectors $\{Z_i\}_{1 \le i \le n}$ are drawn independently within $\mathcal{C}$ with probabilities $w_c = \Pr\{Z_i = c\}$,

  • the P-value profiles $\{P_i\}_{1 \le i \le n}$ are drawn independently conditionally on the $Z_i$'s, with distribution $\psi_c$: $(P_i \mid Z_i = c) \sim \psi_c$.

So the vectors $P_i$ are all independent and arise from the same multivariate mixture distribution

$$P_i \sim \sum_{c \in \mathcal{C}} w_c\, \psi_c. \tag{1}$$

2.3 A classification point of view

In this framework, the $\mathcal{H}_0$ items correspond to items for which $Z_i \in \mathcal{C}_0$, whereas the $\mathcal{H}_1$ items are the ones for which $Z_i \in \mathcal{C}_1$. Model (1) can be rewritten as

$$P_i \sim W_0\, \psi_0 + W_1\, \psi_1,$$

where $W_0 := \sum_{c \in \mathcal{C}_0} w_c$ and $\psi_0 := W_0^{-1} \sum_{c \in \mathcal{C}_0} w_c \psi_c$, and respectively for $W_1$ and $\psi_1$. Hence, following Efron et al. (2001), we may turn the question of significance into a classification problem and focus on the evaluation of the conditional probability

$$\tau_i := \Pr\{Z_i \in \mathcal{C}_1 \mid P_i\} = \frac{W_1 \psi_1(P_i)}{W_0 \psi_0(P_i) + W_1 \psi_1(P_i)}, \tag{2}$$

whose complement $1 - \tau_i$ is the quantity known as the local FDR (Aubert et al., 2004; Efron et al., 2001; Guedj et al., 2009; Strimmer, 2008).

2.4 About intersection-union

In the particular case of the IUT, a major difference between the classical approach and the one presented here is that the natural criterion to select the items for which $\mathcal{H}_1$ is likely to hold is the posterior probability $\tau_i$, not the maximal P-value $P_i^{\max} = \max_q P_{iq}$. This of course changes the ranking of the items in terms of significance (see Section 3.4). As will be shown in Sections 4 and 5, this modified ranking impacts the power of the overall procedure. As mentioned earlier, the formalism we adopt enables one to consider composed hypotheses more complex than the IUT, and the same beneficial ranking strategy applies whatever the tested composed hypothesis.

2.5 Modeling the ψc

The mixture model (1) involves $2^Q$ multivariate distributions $\psi_c$ that need to be estimated, which may become quite cumbersome when Q gets large. To achieve this task, we will assume that all these distributions have the following product form:

$$\psi_c(P_i) = \prod_{q=1}^{Q} f_{c_q}^q(P_{iq}), \tag{3}$$

so that only the 2Q univariate distributions $f_0^1, \dots, f_0^Q, f_1^1, \dots, f_1^Q$ need to be estimated. We emphasize that this product form does not imply that the P-values $P_{iq}$ are independent from one test to another, because no restriction is imposed on the proportions $w_c$ that control the joint distribution of $(Z_{i1}, \dots, Z_{iQ})$. Equation (3) only means that the Q P-values are independent conditionally on the configuration associated with item i; they are not supposed to be marginally independent (see Supplementary Material for an illustration).
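As a quick numerical illustration, the following R sketch (with arbitrary weights of our choosing, Q = 2) draws conditionally independent P-values whose marginal correlation is induced solely by the weights $w_c$:

    # Conditionally independent P-values can still be marginally dependent
    # when the weights w_c couple Z_i1 and Z_i2 (illustrative values only).
    set.seed(1)
    n <- 1e5
    w <- c("00" = 0.70, "10" = 0.05, "01" = 0.05, "11" = 0.20)
    z <- sample(names(w), n, replace = TRUE, prob = w)
    z1 <- as.numeric(substr(z, 1, 1)); z2 <- as.numeric(substr(z, 2, 2))
    # One-sided tests: uniform P-values under H0, stochastically small under H1
    p1 <- 1 - pnorm(rnorm(n, mean = 2 * z1))
    p2 <- 1 - pnorm(rnorm(n, mean = 2 * z2))
    cor(p1, p2)  # clearly positive, since Pr{Z = (1,1)} = 0.2 > 0.25 * 0.25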

Still, the procedure presented here relies on the conditional independence assumption, and its application to cases where the assumption does not hold may lead to spurious results. In particular, the FDR may not be satisfactorily controlled when strong conditional correlations exist between sets of P-values (see Supplementary Table S6). As an example, combining very similar test statistics computed on the same data would result in such conditional correlations; applying the proposed procedure here would constitute an improper use of the methodology.

3 Inference

We named our approach the QCH (Query of Composed Hypotheses) procedure. It can be summarized into three steps:

  1. Fit a mixture model on each set of (probit-transformed) P-values $\{P_{iq}\}_{1 \le i \le n}$ to get an estimate of each alternative distribution $f_1^q$;

  2. Estimate the proportion $w_c$ of each configuration c using an EM algorithm and deduce the estimates of the conditional probabilities of interest $\tau_i$;

  3. Rank the items according to the $\hat\tau_i$ and compute an estimate of the false discovery rate to control for multiple testing.

3.1 Marginal distributions

The marginal distribution of the P-values $P_{iq}$ associated with the q-th test can be deduced from Model (1) combined with (3). One has

$$P_{iq} \sim \pi_0^q\, f_0^q + (1 - \pi_0^q)\, f_1^q, \tag{4}$$

where $f_0^q$ is the distribution of $P_{iq}$ conditional on $Z_{iq} = 0$ and $f_1^q$ its distribution conditional on $Z_{iq} = 1$. The proportion $\pi_0^q = \sum_{c : c_q = 0} w_c$ is a function of the proportions $w_c$. Now, because each $P_{iq}$ is a P-value, its null distribution (i.e. its distribution conditional on $Z_{iq} = 0$) is uniform over (0, 1). Because this holds for each test, we have $f_0^q(P_{iq}) \equiv 1$ for all i's and q's.

3.2 Fitting marginal mixture models

The mixture model (4) has received a lot of attention in the past because of its very specific form, one of its components being completely determined (and uniform). This specificity entails a series of nice properties. For example, Storey (2002) introduced a very simple yet consistent estimate of the null proportion $\pi_0^q$, namely

$$\hat\pi_0^q = \frac{\#\{i : P_{iq} > \lambda\}}{n(1 - \lambda)},$$

which amounts to assuming that the alternative distribution $f_1^q$ has no (or negligible) mass above $\lambda$. The value of $\lambda$ can be tuned in an adaptive way, as proposed by Storey et al. (2004). In the present paper, following Guedj et al. (2009), Robin et al. (2007), and Storey (2002), we use $\lambda = 0.5$, which turns out to give good results in practice. Given such an estimate, Robin et al. (2007) showed that the alternative density can be estimated in a non-parametric way. To this aim, they resort to the negative probit transform $X_{iq} = -\Phi^{-1}(P_{iq})$ (where $\Phi$ stands for the standard Gaussian cumulative distribution function and $\phi$ for its density) to better focus on the distribution tails (see also Efron, 2008; McLachlan et al., 2005, 2006; Robin et al., 2007). Model (4) thus becomes

$$X_{iq} \sim \pi_0^q\, \phi + (1 - \pi_0^q)\, g_1^q,$$

where $g_1^q$ denotes the density of $X_{iq}$ under $H_1^q$.
Since the null density is known to be $\phi$ and $\pi_0^q$ can be estimated beforehand, the only unknown quantity is $g_1^q$, which can be inferred using the following kernel estimator:

$$\hat g_1^q(x) = \frac{1}{h \sum_i \hat\tau_{iq}} \sum_i \hat\tau_{iq}\, K\!\left(\frac{x - X_{iq}}{h}\right), \tag{5}$$

where K is a kernel function with bandwidth h, and where

$$\hat\tau_{iq} = \frac{(1 - \hat\pi_0^q)\, \hat g_1^q(X_{iq})}{\hat\pi_0^q\, \phi(X_{iq}) + (1 - \hat\pi_0^q)\, \hat g_1^q(X_{iq})}. \tag{6}$$

Robin et al. (2007) showed that there exists a unique set of $\{\hat\tau_{iq}\}$ satisfying both (5) and (6), which can be found using a fixed-point algorithm.

In practice, one needs to choose both the kernel function and its bandwidth h. In this article, we used a Gaussian kernel function whose bandwidth can be tuned adaptively from the data using cross-validation (Chacón and Duong, 2018). Both the Gaussian kernel density estimation and the cross-validation tuning are implemented in the kde function of R package ks (Duong, 2007).
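To make the fixed-point iteration between (5) and (6) concrete, here is a minimal R sketch of the marginal fit. It assumes a fixed Gaussian bandwidth h instead of the cross-validated choice of the ks package, and its kernel evaluation is quadratic in n, so it is an illustration rather than the production implementation:

    # Sketch of step 1: Storey's pi0 estimate, negative probit transform,
    # then the fixed-point iteration between (5) and (6).
    fit.marginal <- function(p, lambda = 0.5, h = 0.3, n.iter = 50) {
      n <- length(p)
      pi0 <- min(1, mean(p > lambda) / (1 - lambda))  # Storey (2002) estimate
      x <- -qnorm(p)                                  # negative probit transform
      tau <- rep(1 - pi0, n)                          # initial posteriors
      for (iter in 1:n.iter) {
        # (5): weighted Gaussian kernel estimate of the alternative density g1
        g1 <- sapply(x, function(xi) sum(tau * dnorm((xi - x) / h)) / (h * sum(tau)))
        # (6): update the posterior probability of H1 for each item
        tau <- (1 - pi0) * g1 / (pi0 * dnorm(x) + (1 - pi0) * g1)
      }
      list(pi0 = pi0, x = x, g1 = g1, tau = tau)
    }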

3.3 Configuration proportions

Similarly to Model (4), Model (1) can be translated into a mixture model for the probit-transformed profiles $X_i = (X_{i1}, \dots, X_{iQ})$, with the same proportions $w_c$ but replacing each $f_0^q$ with $\phi$ and each $f_1^q$ with $g_1^q$, namely

$$X_i \sim \sum_{c \in \mathcal{C}} w_c\, \gamma_c, \qquad \text{where } \gamma_c(X_i) = \prod_{q=1}^{Q} \phi(X_{iq})^{1 - c_q}\, g_1^q(X_{iq})^{c_q}.$$

The estimates $\hat g_1^q$ introduced in the previous section directly provide one with estimates $\hat\gamma_c$ that can be plugged into the mixture, so that the only quantities left to infer are the weights $w_c$. This inference can be efficiently performed using a standard EM algorithm, thanks to closed-form expressions for the quantities to compute at each step:

$$\text{E-step:}\quad \hat\eta_{ic} = \frac{\hat w_c\, \hat\gamma_c(X_i)}{\sum_{c' \in \mathcal{C}} \hat w_{c'}\, \hat\gamma_{c'}(X_i)}, \qquad \text{M-step:}\quad \hat w_c = \frac{1}{n} \sum_{i=1}^n \hat\eta_{ic}.$$

Observe that the log-likelihood of the corresponding mixture model is concave with respect to the weights $w_c$ (being a sum of logarithms of linear functions of the weights), so the EM algorithm converges to a global maximum.
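A minimal R sketch of this EM, under the stated assumptions (the matrices phi.mat and g1.mat, holding the values $\phi(X_{iq})$ and $\hat g_1^q(X_{iq})$, are hypothetical names of ours):

    # Sketch of step 2: EM on the configuration weights w_c, the densities
    # gamma_c being fixed by the marginal fits (phi.mat, g1.mat: n x Q).
    em.weights <- function(phi.mat, g1.mat, configs, n.iter = 100) {
      n <- nrow(phi.mat); K <- nrow(configs)
      # gamma[i, k] = prod_q phi(X_iq)^(1 - c_q) * g1q(X_iq)^(c_q)
      gamma <- sapply(1:K, function(k) {
        ck <- configs[k, ]
        apply(phi.mat^rep(1 - ck, each = n) * g1.mat^rep(ck, each = n), 1, prod)
      })
      w <- rep(1 / K, K)
      for (iter in 1:n.iter) {
        eta <- sweep(gamma, 2, w, `*`)   # E-step: eta[i, k] propto w_k * gamma[i, k]
        eta <- eta / rowSums(eta)
        w <- colMeans(eta)               # M-step: w_k = mean over items of eta[i, k]
      }
      list(w = w, posterior = eta)
    }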

3.4 Ranking and error control

As an important by-product, the algorithm provides one with estimates of the conditional probabilities (2) as

$$\hat\tau_i = \sum_{c \in \mathcal{C}_1} \hat\eta_{ic},$$

according to which one can rank the items $1 \le i \le n$ so that $\hat\tau_{(1)} \ge \hat\tau_{(2)} \ge \dots \ge \hat\tau_{(n)}$. Following McLachlan et al. (2005, 2006) and Robin et al. (2007), we use the conditional probabilities $\hat\tau_i$ to estimate the false discovery rate when thresholding at a given level t:

$$\widehat{\mathrm{FDR}}(t) = \frac{\sum_i (1 - \hat\tau_i)\, \mathbb{1}\{\hat\tau_i \ge t\}}{\#\{i : \hat\tau_i \ge t\}}.$$

Consequently, the threshold t can be calibrated to control the type I error rate at a nominal level $\alpha$ by setting

$$\hat t = \min\{t : \widehat{\mathrm{FDR}}(t) \le \alpha\};$$

see McLachlan et al. (2005) for technical details. Note that the $\hat\tau_i$'s are used twice: to rank the items and to estimate the FDR. Each of these two usages is investigated in the next section.
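Step 3 then reduces to a few lines of R, reusing the posterior matrix returned by the EM sketch above (qch.select is our name, not a qch package function):

    # Sketch of step 3: sum the configuration posteriors over C1, rank the
    # items, and calibrate the threshold so the estimated FDR stays below alpha.
    qch.select <- function(posterior, C1, alpha = 0.05) {
      tau <- rowSums(posterior[, C1, drop = FALSE])  # tau_i = Pr{Z_i in C1 | data}
      o <- order(tau, decreasing = TRUE)
      fdr <- cumsum(1 - tau[o]) / seq_along(o)       # estimated FDR along the ranking
      keep <- which(fdr <= alpha)
      if (length(keep) == 0) return(integer(0))
      sort(o[seq_len(max(keep))])                    # indices of items declared H1
    }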

We emphasize that QCH already has two attractive features. First, because inference steps 1 and 2 do not require the knowledge of the composed hypothesis to be tested, once the model is fitted one can query any number of composed hypotheses without any additional computational burden. Second, the procedure does not require any model selection step for tuning the number of components of mixture model (1), as this number equals the number of configurations $|\mathcal{C}| = 2^Q$, which is directly deduced from the number of test sets Q.

4 Simulations

In this section, the performance of the QCH method is evaluated and compared with that of two procedures previously considered for the IUT:

  • The Pmax procedure consists in considering, for each item i, its associated maximum P-value $P_i^{\max} = \max(P_{i1}, \dots, P_{iQ})$ as both the test statistic, for ranking, and the associated P-value, for false positive (FP) control. Once $P_i^{\max}$ is computed, a multiple testing procedure is applied; here we applied the Benjamini–Hochberg (BH, Benjamini and Hochberg, 1995) procedure for FDR control. This procedure corresponds to the IUT procedure described in Zhong et al. (2019).

  • The FDR set intersection procedure (called hereafter IntersectFDR) consists in applying an FDR control procedure (namely, the BH procedure) to each of the Q P-value sets. The lists of items for which $H_0^q$ has been rejected are then intersected. This corresponds to the 'list crossing' strategy presented in the Introduction (see e.g. Conway et al., 2017). Minimal R sketches of both baselines are given after this list.
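    # Minimal sketches of the two baselines as described above (assumed
    # implementations of ours); P is the n x Q matrix of P-values.
    pmax.procedure <- function(P, alpha = 0.05) {
      p.max <- apply(P, 1, max)   # max P-value used as both statistic and P-value
      which(p.adjust(p.max, method = "BH") <= alpha)
    }
    intersect.fdr <- function(P, alpha = 0.05) {
      rejected <- apply(P, 2, function(p) p.adjust(p, method = "BH") <= alpha)
      which(rowSums(rejected) == ncol(P))  # items kept in all Q BH-corrected lists
    }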

In the IUT setting, the set $\mathcal{C}_1$ of the corresponding QCH procedure reduces to the configuration $c_{\max}$ that consists only of 1's. All the analyses presented in this section were performed with a type I error rate nominal level fixed at α = 5%.

We first consider a scenario where Q sets of n P-values are generated as follows. First, the proportion of H0 hypotheses in each set is drawn according to a Dirichlet distribution D(8, 2). The proportion of each configuration c is then obtained by multiplying the proportions of the corresponding H0 and H1 hypotheses. Once the configuration proportions are obtained, the one corresponding to the full H1 (i.e. all hypotheses of the configuration are H1) is re-weighted upward to ensure a minimal representation of this configuration (see the simulation code provided as Supplementary Material). Note that imposing a minimal proportion for the full H1 configuration also yields non-independent P-value series, as the probability of being under any configuration is no longer equal to the product of the corresponding marginal probabilities. The configuration memberships $Z_i$ are then independently drawn according to Model (1). The test statistics $T_{iq} \sim \mathcal{N}(\mu_{iq}, 1)$ are drawn independently, where $\mu_{iq} = 0$ if H0 holds for item i in condition q, and $\mu_{iq} = 2$ otherwise. Lastly, these test statistics are translated into P-values. Several values of n ($10^4$, $10^5$, $10^6$) and Q (2, 4, 8) are considered, and for a given parameter setting (n, Q), 100 P-value matrices were generated. This scenario is called 'Equal Power' hereafter, since the deviation $\mu_{iq}$ under H1 is the same for all P-value sets.
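A simplified R sketch of this generator (our simplification: the re-weighting of the full-H1 configuration is omitted, so the Q series are independent here):

    # Simplified sketch of the 'Equal Power' generator.
    simulate.equal.power <- function(n, Q, mu = 2) {
      pi1 <- rbeta(Q, 2, 8)                          # H1 proportion per set (H0 ~ Beta(8, 2))
      Z <- sapply(pi1, function(p) rbinom(n, 1, p))  # n x Q configuration matrix
      stat <- matrix(rnorm(n * Q, mean = mu * Z), n, Q)  # test statistics N(mu_iq, 1)
      list(P = 1 - pnorm(stat), Z = Z)               # one-sided P-values
    }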

Figure 1 displays the ROC curves of the Pmax and QCH procedures, computed on a single simulation with $n = 10^5$ and Q = 2 or Q = 8. The x-axis (respectively the y-axis) corresponds to the false positive (FP) rate (respectively the true positive, TP, rate), computed for all possible cutoffs on either $P_i^{\max}$ or the posterior probabilities $\hat\tau_i$. When Q = 2, the two procedures are roughly equivalent in terms of ranking, with a slight advantage for QCH. When Q = 8, QCH significantly outperforms Pmax and reaches an AUC close to 1. This behavior is even more pronounced for larger values of n (not shown) and shows that using $P^{\max}$ as a test statistic is only relevant for small values of Q (typically 2 or 3). The ranking based on the QCH posterior probabilities always performs better than the ranking based on Pmax, and the higher Q, the higher the gap in terms of AUC. Also note that one cannot easily derive a ROC curve for IntersectFDR comparable to those of Pmax and QCH, since IntersectFDR is based on list crossing rather than on selecting items with an explicit test statistic.

Fig. 1. Left: ROC curves for the Pmax (blue) and QCH (red) procedures, when Q = 2. The x-axis displays the FP rate and the y-axis the TP rate. Right: same graph with Q = 8.

Table 1 provides additional insight into the comparison between methods. First, one can observe that all procedures control the type I error rate, i.e. the actual error rate is lower than the nominal level whatever the simulation setting. Importantly, whatever Q, the power of Pmax is always close to 0 whereas the power of QCH is not. Since there is little difference between Pmax and QCH in terms of ranking when Q = 2, this means that the multiple testing procedure for Pmax is conservative, i.e. the maximum P-value should not be used as the P-value of the test. One can also observe that the power of the two baseline methods decreases as Q increases, whereas QCH displays the opposite behavior.

Table 1. Performance of the three procedures for the 'Equal Power' scenario

n    | Q | Pmax FDR | Pmax Power | IntersectFDR FDR | IntersectFDR Power | QCH FDR       | QCH Power
10^4 | 2 | 0 (0)    | 0 (0.001)  | 0.059 (0.089)    | 0.034 (0.025)      | 0.054 (0.1)   | 0.031 (0.022)
10^4 | 4 | 0 (0)    | 0 (0)      | 0 (0)            | 0.001 (0.002)      | 0.024 (0.038) | 0.095 (0.086)
10^4 | 8 | 0 (0)    | 0 (0)      | 0 (0)            | 0 (0)              | 0.004 (0.007) | 0.363 (0.134)
10^5 | 2 | 0 (0)    | 0 (0)      | 0.059 (0.031)    | 0.031 (0.023)      | 0.044 (0.025) | 0.027 (0.018)
10^5 | 4 | 0 (0)    | 0 (0)      | 0.012 (0.049)    | 0.001 (0.001)      | 0.032 (0.022) | 0.09 (0.082)
10^5 | 8 | 0 (0)    | 0 (0)      | 0 (0)            | 0 (0)              | 0.004 (0.002) | 0.364 (0.112)
10^6 | 2 | 0 (0)    | 0 (0)      | 0.058 (0.022)    | 0.032 (0.023)      | 0.047 (0.008) | 0.027 (0.02)
10^6 | 4 | 0 (0)    | 0 (0)      | 0.009 (0.018)    | 0.001 (0.001)      | 0.03 (0.007)  | 0.094 (0.08)
10^6 | 8 | 0 (0)    | 0 (0)      | 0 (0)            | 0 (0)              | 0.003 (0.001) | 0.349 (0.105)

Note. For each procedure, the FDR and power (averaged over 100 runs) are displayed for different settings. Numbers in brackets correspond to standard errors.

In a second scenario, called 'Linear Power', P-value sets were generated in the same way as in the previous case, except that the deviation parameter is $\mu_{iq} = 0$ if H0 holds for item i in condition q, and $\mu_{iq} = \mu_q$ otherwise, where $\mu_q = q + 1$ for $q = 1, \dots, Q$; i.e. the statistical power associated with set q increases with q. The performance of the three procedures is displayed in Table 2. One can observe that the Pmax and IntersectFDR procedures derive little benefit from the fact that the distinction between H0 and H1 is easy for some P-value sets. On the contrary, QCH successfully exploits this information, consistently achieving a power higher than 60% for $Q \ge 4$, where Pmax is at 0 and IntersectFDR is lower than 15%.

Table 2. Performance of the three procedures for the 'Linear Power' scenario

n    | Q | Pmax FDR      | Pmax Power    | IntersectFDR FDR | IntersectFDR Power | QCH FDR       | QCH Power
10^4 | 2 | 0 (0)         | 0.001 (0.003) | 0.059 (0.039)    | 0.134 (0.063)      | 0.05 (0.044)  | 0.138 (0.069)
10^4 | 4 | 0 (0)         | 0 (0)         | 0.01 (0.02)      | 0.127 (0.06)       | 0.04 (0.016)  | 0.713 (0.179)
10^4 | 8 | 0 (0)         | 0 (0.001)     | 0 (0)            | 0.133 (0.065)      | 0.028 (0.015) | 0.995 (0.021)
10^5 | 2 | 0 (0)         | 0 (0)         | 0.056 (0.025)    | 0.127 (0.056)      | 0.048 (0.009) | 0.135 (0.052)
10^5 | 4 | 0 (0)         | 0 (0)         | 0.009 (0.01)     | 0.124 (0.049)      | 0.037 (0.008) | 0.696 (0.163)
10^5 | 8 | 0 (0)         | 0 (0)         | 0 (0.001)        | 0.136 (0.054)      | 0.024 (0.007) | 1 (0.001)
10^6 | 2 | 0.001 (0.013) | 0 (0.002)     | 0.062 (0.019)    | 0.137 (0.059)      | 0.05 (0.003)  | 0.137 (0.058)
10^6 | 4 | 0 (0)         | 0 (0)         | 0.009 (0.011)    | 0.126 (0.058)      | 0.038 (0.007) | 0.705 (0.172)
10^6 | 8 | 0 (0)         | 0 (0)         | 0 (0)            | 0.129 (0.053)      | 0.023 (0.003) | 1 (0)

Note. For each procedure, the FDR and power (averaged over 100 runs) are displayed for different settings. Numbers in brackets correspond to standard errors.

Lastly, we considered the composed alternative hypothesis

$$\mathcal{H}_1 = \{\text{at least } Q - 1 \text{ alternative hypotheses hold among the } Q\}.$$

In the case where Q = 2, the set $\mathcal{C}_1$ of the corresponding QCH procedure is $\{(0,1), (1,0), (1,1)\}$. The two procedures Pmax and IntersectFDR can be empirically modified to handle such a composed hypothesis as follows:

  • rather than computing Pmax as the maximum P-value, one can take the second-largest P-value,

  • rather than intersecting all Q FDR lists, one can consider all the combinations of Q − 1 intersected lists among Q.

Note that these adaptations are feasible for some composed hypotheses, but considering a general composed hypothesis usually leads to a complete redefinition of the testing procedure, which can become quite cumbersome, whereas no additional fitting is required for QCH. Indeed, for each new composed hypothesis to be tested, only the posterior probabilities corresponding to the new hypothesis have to be computed. This comes at no additional computational cost, since one only needs to sum the configuration posterior probabilities corresponding to the new hypothesis, which does not require refitting Model (1). The results for the 'Linear Power' scenario are provided in Table 3. In terms of performance, the detection problem becomes easier for all procedures (since the proportion of H1 items is now much larger). Still, QCH consistently achieves the best performance and significantly outperforms its competitors as soon as $Q \ge 4$, being close to a 100% detection rate when Q = 8, while efficiently controlling the FDR at its nominal level. Additional results on alternative simulation settings can be found in Supplementary Material.
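Concretely, with the sketches above, querying this new hypothesis amounts to redefining $\mathcal{C}_1$ and summing the already-computed posteriors:

    # Querying 'at least Q-1 H1' reuses the fitted posteriors; only C1 changes
    # ('configs', 'posterior' and qch.select() are from the sketches above).
    C1.new <- which(rowSums(configs) >= ncol(configs) - 1)
    selected <- qch.select(posterior, C1.new, alpha = 0.05)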

Table 3. Performance of the three procedures for the 'Linear Power' scenario and the 'at least Q − 1 H1' composed hypothesis

n    | Q | Pmax FDR      | Pmax Power    | IntersectFDR FDR | IntersectFDR Power | QCH FDR       | QCH Power
10^4 | 2 | 0.06 (0.012)  | 0.624 (0.111) | 0.031 (0.006)    | 0.519 (0.123)      | 0.05 (0.004)  | 0.619 (0.126)
10^4 | 4 | 0.005 (0.004) | 0.262 (0.029) | 0.022 (0.015)    | 0.411 (0.048)      | 0.049 (0.007) | 0.733 (0.093)
10^4 | 8 | 0 (0)         | 0.201 (0.018) | 0 (0.001)        | 0.353 (0.037)      | 0.036 (0.007) | 1 (0)
10^5 | 2 | 0.063 (0.013) | 0.608 (0.116) | 0.032 (0.007)    | 0.502 (0.127)      | 0.05 (0.002)  | 0.598 (0.133)
10^5 | 4 | 0.005 (0.004) | 0.265 (0.011) | 0.021 (0.013)    | 0.412 (0.04)       | 0.047 (0.003) | 0.727 (0.092)
10^5 | 8 | 0 (0)         | 0.199 (0.005) | 0 (0)            | 0.365 (0.038)      | 0.029 (0.004) | 1 (0.001)
10^6 | 2 | 0.062 (0.012) | 0.613 (0.11)  | 0.032 (0.006)    | 0.504 (0.122)      | 0.05 (0.001)  | 0.601 (0.128)
10^6 | 4 | 0.005 (0.003) | 0.264 (0.009) | 0.022 (0.013)    | 0.413 (0.035)      | 0.047 (0.002) | 0.724 (0.088)
10^6 | 8 | 0 (0)         | 0.199 (0.002) | 0 (0)            | 0.356 (0.032)      | 0.029 (0.004) | 1 (0.001)

Note. For each procedure, the FDR and power (averaged over 100 runs) are displayed for different settings. Numbers in brackets correspond to standard errors.

5 Illustrations

5.1 TSC dataset

In our first application, we consider the Tuberous Sclerosis Complex (TSC) dataset obtained from the Database of Genotypes and Phenotypes website (dbGaP accession code phs001357.v1.p1). The dataset consists of 34 TSC and seven control tissue samples, for which gene expression and/or methylation were quantified. Differential analysis was performed separately on the two types of omics data, leading to the characterization of 7222 genes for expression and 273,215 CpG sites for methylation. To combine the corresponding P-value sets, we considered pairs between a gene and a CpG site, the pairs being constituted according to one of the following two strategies:

  • pair a gene with the methylation sites directly located at the gene position; this strategy resulted in 128,879 pairs (some CpG sites are not directly linked to any gene) and is called 'Direct' (for direct vicinity) hereafter,

  • pair a gene with any methylation site that is physically close to that gene due to chromatin interactions; this strategy, called 'HiC' hereafter in reference to the chromosome conformation capture technique employed, resulted in 6,532,368 pairs.

Depending on the strategy, the same gene could be paired with several methylation sites and vice versa. The HiC strategy requires additional chromatin interaction information, which was obtained from HiC data available on Gene Expression Omnibus (GEO accession code GSM455133).

The purpose of the data integration analysis is then to identify pairs that exhibit a significant differential effect on both expression and methylation. In terms of composed hypothesis, this boils down to performing an IUT: the configurations corresponding to $\mathcal{H}_0$ are $\mathcal{C}_0 = \{00, 01, 10\}$, and to $\mathcal{H}_1$, $\mathcal{C}_1 = \{11\}$. In addition to QCH, we evaluated the Pmax procedure associated with a Benjamini–Hochberg correction.

The results are summarized in Table 4. Using the Direct strategy combined with the QCH procedure, 4030 genes were classified as H1. As in the simulation studies, the QCH procedure detected more pairs and genes than Pmax. Applying the HiC strategy resulted in a markedly higher number of tested pairs but a lower number of identified genes. Interestingly, the list of H1 genes detected with the HiC strategy contains many candidates that were not detected using the Direct strategy. Among these candidates found with the HiC strategy only are TSC1 and TSC2, whose mutations are well known to be associated with the TSC disease. The QCH procedure also identified three additional genes (again with the HiC strategy only) that were not detected using Pmax, and whose combinations with methylation sites in the TSC1 gene were significant. These genes are LRP1B, PEX14, and CHD7. LRP1B is associated with renal angiomyolipoma (AML), and AML is observed in 75% of patients with TSC (Wang et al., 2020). PEX14 is known to interact with PEX5 for importing proteins into the peroxisomes (Neuhaus et al., 2014). Mutations in CHD7 have been found to increase the risk of idiopathic autism (O'Roak et al., 2012), which could suggest a link between monogenic disorders like TSC and idiopathic autism due to complex inheritance (Gamsiz et al., 2015). It has indeed been reported that TSC patients may develop abnormal neurons and cellular enlargement, making it difficult to differentiate between TSC and autistic symptoms in the setting of severe intellectual disability (Takei and Nawa, 2014). In conclusion, the QCH procedure combined with the HiC information detected genes that have previously been reported to have a functional association with the disease in different studies. These genes may reveal more insight into the genetic architecture of TSC.

Table 4. Number of H1 genes identified through different pairing strategies and procedures (Pmax or QCH)

Strategy | Total Pmax | Total QCH | Extra Pmax | Extra QCH
Direct   | 3624       | 4030      | 4          | 410
HiC      | 3501       | 3875      | 0          | 374

5.2 Einkorn dataset

We consider the Einkorn study of Bonnot et al. (2020), in which the grain transcriptome was investigated under four nutritional treatments at different timepoints during grain development (see the original reference for a complete description of the experimental design). For each combination of a treatment and a timepoint, three einkorn grains were harvested and gene expression was quantified through RNA-seq. The results of the differential analyses are publicly available at forgemia.inra.fr/GNet/einkorn_grain_transcriptome/-/tree/master/. Here we focus on the comparison between the NmSm (control) treatment and the NpSm (nitrate-enriched) treatment, compared at Q = 4 different timepoints (T300, T400, T500, and T600). The differential analysis involved n = 12,327 transcripts and was performed at each timepoint. We extracted the results, which were summarized into an n × Q P-value matrix.

In this context, we consider the composed null hypothesis

$$\mathcal{H}_0 = \{\text{the transcript is not differentially expressed at two consecutive timepoints}\},$$

which corresponds to the following composed alternative subset:

$$\mathcal{C}_1 = \{c \in \mathcal{C} : c_q = c_{q+1} = 1 \text{ for some } q \in \{1, 2, 3\}\}.$$
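With the configuration matrix from the earlier sketches (here Q = 4), this set is obtained in one line:

    # C1: configurations with H1 at (at least) two consecutive timepoints
    Q <- 4
    configs <- as.matrix(expand.grid(rep(list(0:1), Q)))
    C1.consec <- which(rowSums(configs[, -Q] * configs[, -1]) > 0)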

Only the results of QCH are reported, since there is no clear way to adapt the Pmax or IntersectFDR procedures to the present setting. The results are displayed in Figure 2 and Table 5. A total of 249 transcripts were declared significant for the composed alternative hypothesis at an FDR nominal level of 5%. These transcripts were further assigned to a configuration according to their maximal posterior, resulting in the four-class classification presented in Table 5. For each configuration, the table displays the median −log10(P-value) at the different timepoints over the transcripts belonging to the class. One can first observe that several configurations, such as 1100 or 1110, are missing, i.e. no transcripts are differentially expressed at early times but not later. Alternatively, configurations 0110 and 0011 are well represented, with 12 and 30 transcripts, respectively. These classes can be clearly observed in Figure 2, which displays the (−log10-transformed) P-value profiles of the 249 transcripts over time, classified according to their configuration (color bar on the left side). One can observe the time-shifted effect of the transcripts belonging to these two classes. Identifying these transcripts provides valuable insight into the kinetics of the response to nitrate enrichment, which could be incorporated in, or confirmed by, a subsequent network reconstruction.

Fig. 2. P-value profiles of the H1 transcripts over time, after −log10 transformation. Colors on the left side correspond to the configurations (black = 0110, red = 0011, green = 0111, and blue = 1111).

Table 5. Description of the four H1 configuration classes identified from the NmSm–NpSm comparison

Config | NbTrspt | T300       | T400       | T500      | T600
0110   | 12      | 0.2225031  | 3.8242946  | 3.296769  | 0.3776435
0011   | 30      | 0.2200245  | 0.1256215  | 4.479422  | 4.4642771
0111   | 204     | 0.2982042  | 2.4155116  | 2.851227  | 3.1877051
1111   | 3       | 11.9843371 | 14.9638330 | 15.251248 | 17.2187644

Note. For each class (in row) and each timepoint (in column), the median −log10(P-value) is reported, along with the number of detected transcripts (column NbTrspt).

6 Discussion

6.1 Summary

We introduced a versatile approach to assess composed hypotheses, that is, any set of configurations of null and alternative hypotheses among Q tested hypotheses. The approach relies on a multivariate mixture model that is fitted efficiently, so that very large omics datasets can be handled. The classification point of view we adopted yields a natural ranking of the items based on their associated posterior probabilities. These probabilities can also be used as local FDR estimates to compute (and control) the FDR. Note that in the present work we considered FDR control, but one could consider directly controlling the local FDR (Efron, 2008) or, alternatively, controlling more refined error rates, such as the multiple FDR developed in the context of multi-class mixture models (Mary-Huard et al., 2013).

The Simulation section illustrated the gap between Pmax and QCH in terms of power. The poor performance of the Pmax procedure is due to the use of $P^{\max}$ as both a test statistic (i.e. for ranking the items) and a P-value (i.e. the P-value associated with $P^{\max}$ is assumed to be $P^{\max}$ itself). Although this corresponds to what has been applied in practice (Zhong et al., 2019), $P^{\max}$ cannot be considered as a P-value, since it is not uniformly distributed under the null (composed) hypothesis $\mathcal{H}_0$. Consequently, a direct application of multiple testing correction procedures to $P^{\max}$ automatically leads to a conservative testing procedure. Although it may be feasible to improve on the current practice, note that (i) finding the null distribution of $P^{\max}$ and (ii) extending the Pmax procedure to the more general question of testing any composed hypothesis are two difficult tasks. The QCH methodology presented here solves these two problems efficiently, in terms of power, FDR control, and computational cost.

From a more general perspective, we emphasize that, as for the whole theory of statistical testing, the proposed methodology is valid (i.e. the error rate is controlled) only when the composed hypotheses $\mathcal{H}_0$ and $\mathcal{H}_1$ are specified a priori. More precisely, valid conclusions can be drawn if and only if the set of null configurations $\mathcal{C}_0$ (and, consequently, $\mathcal{C}_1$) is fixed beforehand, that is, before having any look at the data. The flexibility of the QCH procedure should not be used as a tool to artificially magnify statistical significance.

6.2 Future works

The proposed methodology also provides information about the joint distribution of the latent variables $Z_{iq}$, that is, the probability that, say, $H_1$ holds for tests 1 and 3 but not for test 2. This distribution encodes the dependency structure between the tests. This information should be carefully investigated further, as it provides insights into the way items (e.g. genes) respond to each combination of tested treatments.

Although the proposed model allows for dependency between the P-values through the distribution of the latent variables $Z_i$, conditional independence (i.e. independence within each configuration) is assumed to alleviate the computational burden. This assumption could be relaxed in various ways. A parametric form, such as a (probit-transformed) multivariate Gaussian with a constrained variance matrix, could be considered for each joint distribution $\psi_c$. Alternatively, a non-parametric form could be preserved, using copulas to encode the conditional dependency structure.

Acknowledgements

SD and IM acknowledge the submitters of the TSC data to the dbGaP repository (https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs001357.v1.p1).

Funding

This work was supported by the Indo-French Center for Applied Mathematics (IFCAM) and the 'Investissement d'Avenir' project (Amaizing, ANR-10-BTBR-0001), with partial support from the Department of Biotechnology, Government of India, through SyMeC.

Conflict of Interest: none declared.

References

Aubert, J. et al. (2004) Determination of the differentially expressed genes in microarray experiments using local FDR. BMC Bioinformatics, 5, 125.

Benjamini, Y. and Hochberg, Y. (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. B Methodol., 57, 289–300.

Berger, R.L. and Hsu, J.C. (1996) Bioequivalence trials, intersection-union tests and equivalence confidence sets. Stat. Sci., 11, 283–319.

Bonnot, T. et al. (2020) Omics data reveal putative regulators of einkorn grain protein composition under sulphur deficiency. Plant Physiol., 183, 501–516.

Chacón, J.E. and Duong, T. (2018) Multivariate Kernel Smoothing and Its Applications. CRC Press, London.

Conway, J.R. et al. (2017) UpSetR: an R package for the visualization of intersecting sets and their properties. Bioinformatics, 33, 2938–2940.

Das, S. et al. (2019) A powerful method to integrate genotype and gene expression data for dissecting the genetic architecture of a disease. Genomics, 111, 1387–1394.

Deng, X. et al. (2008) Improving the power for detecting overlapping genes from multiple DNA microarray-derived gene lists. BMC Bioinformatics, 9, S14.

Duong, T. (2007) ks: kernel density estimation and kernel discriminant analysis for multivariate data in R. J. Stat. Softw., 21, 1–16.

Efron, B. (2008) Microarrays, empirical Bayes and the two-groups model. Stat. Sci., 23, 1–22.

Efron, B. et al. (2001) Empirical Bayes analysis of a microarray experiment. J. Am. Stat. Assoc., 96, 1151–1160.

Gamsiz, E.D. et al. (2015) Discovery of rare mutations in autism: elucidating neurodevelopmental mechanisms. Neurotherapeutics, 12, 553–571.

Guedj, M. et al. (2009) Kerfdr: a semi-parametric kernel-based approach to local false discovery rate estimation. BMC Bioinformatics, 10, 84.

Irizarry, R.A. et al. (2005) Multiple-laboratory comparison of microarray platforms. Nat. Methods, 2, 345–350.

Li, X. et al. (2019) A comparative study of rank aggregation methods for partial and top ranked lists in genomic applications. Brief. Bioinformatics, 20, 178–189.

Mary-Huard, T. et al. (2013) Error rate control for classification rules in multi-class mixture models. J. Soc. Franç. Stat.

McLachlan, G.J. et al. (2006) A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays. Bioinformatics, 22, 1608–1615.

McLachlan, G.J. et al. (2005) Analyzing Microarray Gene Expression Data. John Wiley & Sons, Sydney.

Natarajan, L. et al. (2012) Exact statistical tests for the intersection of independent lists of genes. Ann. Appl. Stat., 6, 521–541.

Neuhaus, A. et al. (2014) A novel PEX14 protein-interacting site of human PEX5 is critical for matrix protein import into peroxisomes. J. Biol. Chem., 289, 437–448.

O'Roak, B.J. et al. (2012) Sporadic autism exomes reveal a highly interconnected protein network of de novo mutations. Nature, 485, 246–250.

Robin, S. et al. (2007) A semi-parametric approach for mixture models: application to local false discovery rate estimation. Comput. Stat. Data Anal., 51, 5483–5493.

Storey, J.D. (2002) A direct approach to false discovery rates. J. R. Stat. Soc. B, 64, 479–498.

Storey, J.D. et al. (2004) Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: a unified approach. J. R. Stat. Soc. B Stat. Methodol., 66, 187–205.

Strimmer, K. (2008) fdrtool: a versatile R package for estimating local and tail area-based false discovery rates. Bioinformatics, 24, 1461–1462.

Takei, N. and Nawa, H. (2014) mTOR signaling and its roles in normal and abnormal brain development. Front. Mol. Neurosci., 7, 1–12.

Tuke, J. et al. (2008) Gene profiling for determining pluripotent genes in a time course microarray experiment. Biostatistics, 10, 80–93.

Van Deun, K. et al. (2009) Testing the hypothesis of tissue selectivity: the intersection–union test and a Bayesian approach. Bioinformatics, 25, 2588–2594.

Wang, T. et al. (2020) Two novel TSC2 mutations in renal epithelioid angiomyolipoma sensitive to everolimus. Cancer Biol. Ther., 21, 4–11.

Xiong, Q. et al. (2012) Integrating genetic and gene expression evidence into genome-wide association analysis of gene sets. Genome Res., 22, 386–397.

Zhong, W. et al. (2019) Multi-SNP mediation intersection-union test. Bioinformatics, 35, 4724–4729.
