Motivation: An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have limitations because of the minimal assumptions they make or, under more specific assumptions, are computationally intensive.
Results: By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that adequately models the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
Often the first step, and indeed the major goal for many microarray studies, is the detection of genes that are differentially expressed in a known number of classes, C1, … , Cg. Statistical significance of differential expression can be tested by performing a test for each gene. When many hypotheses are tested, the probability that a type I error (a false positive error) is committed increases sharply with the number of hypotheses. In this paper, we focus on the use of a two-component mixture model to handle the multiplicity issue. This model is becoming more widely adopted in the context of microarrays, where one component density corresponds to that of the test statistic for genes that are not differentially expressed, and the other component density to that of the test statistic for genes that are differentially expressed. For the adopted test statistic, we propose that its values be transformed to z-scores, whose null and non-null distributions can be represented by a single normal each. We show how this two-component normal mixture model can be fitted very quickly via the EM algorithm started from a point that is completely determined by an initial specification of the proportion π0 of genes that are not differentially expressed. A procedure for determining suitable initial values for π0 is suggested in the case where the null density is taken to be standard normal (the theoretical null distribution). We also consider the provision of an initial partition of the genes into two groups for the application of the EM algorithm in the case where the adoption of the theoretical null distribution would appear not to be appropriate and an empirical null distribution needs to be used. We demonstrate our approach on three datasets that have been analyzed previously in the bioinformatics literature.
Although biological experiments vary considerably in their design, the data generated by microarrays can be viewed as a matrix of expression levels. For M microarray experiments (corresponding to M tissue samples), where we measure the expression levels of N genes in each experiment, the results can be represented by an N × M matrix. Typically, M is no more than 100 (usually much less in the present context), while the number of genes N is of the order of 10^4. The M tissue samples, each measured on the N available genes, are classified with respect to g different classes, and it is assumed that the (logged) expression levels have been preprocessed with adjustment for array effects.
2.2 Detection of differential expression
Differential expression of a gene means that the (class-conditional) distribution of its expression levels is not the same for all g classes. These distributions can differ in any possible way, but the statistics usually adopted are designed to be sensitive primarily to a difference in the means; for example, the one-way analysis of variance (ANOVA) F-statistic. Even so, the gene hypotheses being tested are of equality of distributions across the g classes, which allows the use of permutation methods to estimate P-values if necessary.
In the special case of g = 2 classes, the one-way ANOVA F-statistic reduces to the square of the classical (pooled) t-statistic. Various refinements of the t-statistic have been suggested; see, for example, the procedure of Tusher et al. (2001).
3 TWO-COMPONENT MIXTURE MODEL
3.1 Posterior probability of non-differential expression
In this paper, we focus on a decision-theoretic approach to the problem of finding genes that are differentially expressed. We use a prediction rule approach based on a two-component mixture model as formulated in Lee et al. (2000) and Efron et al. (2001). We let G denote the population of genes under consideration. It can be decomposed into two groups G0 and G1, where G0 is the group of genes that are not differentially expressed, and G1 is the complement of G0; that is, G1 contains the genes that are differentially expressed. We let πi denote the prior probability of a gene belonging to Gi (i = 0, 1), and assume that the common density of the test statistic Wj for a gene j in Gi is fi(wj). The unconditional density of Wj is then given by the two-component mixture model,
It can be seen from (2) that in order to use this posterior probability of non-differential expression in practice, we need to be able to estimate π0, the mixture density f(wj) and the null density f0(wj), or equivalently, the ratio of densities f0(wj)/f(wj). Efron et al. (2001) have developed a simple empirical Bayes approach to this problem with minimal assumptions. This problem has since been studied under more specific assumptions, including works by Newton et al. (2001, 2004), Lönnstedt and Speed (2002), Pan et al. (2003), Zhao and Pan (2003), Broët et al. (2004), Smyth (2004), Do et al. (2005) and Gottardo et al. (2006), among many others. The fully parametric methods that have been proposed are computationally intensive.
3.2 Bayes decision rule
Let e01 and e10 denote the two errors when a rule is used to assign a gene as being differentially expressed or not, where e01 is the probability of a false positive and e10 is the probability of a false negative. That is, the sensitivity is 1 − e10 and the specificity is 1 − e01. The so-called risk of allocation is given by
The Bayes rule, which is the rule that minimizes the risk (3), assigns a gene to G1 if τ0(wj) ≤ c; otherwise, the j-th gene is assigned to G0.
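The Bayes rule above is simple to apply once the mixture quantities are in hand. As a minimal sketch (the parameter values below are illustrative, not estimates from any of the datasets considered in this paper), the posterior probability τ0(wj) = π0 f0(wj)/f(wj) can be computed and thresholded as follows:

```python
import numpy as np
from scipy.stats import norm

def posterior_null_prob(z, pi0, mu0=0.0, sigma0=1.0, mu1=1.5, sigma1=1.0):
    """tau0(z) = pi0 * f0(z) / f(z) for a two-component normal mixture."""
    f0 = norm.pdf(z, mu0, sigma0)
    f1 = norm.pdf(z, mu1, sigma1)
    f = pi0 * f0 + (1.0 - pi0) * f1
    return pi0 * f0 / f

# Bayes rule: assign gene j to G1 (differentially expressed) if tau0 <= c.
z_scores = np.array([0.1, 2.8, -0.5, 3.5])   # hypothetical z-scores
tau0 = posterior_null_prob(z_scores, pi0=0.6)
declared = tau0 <= 0.1
```

With these illustrative parameters, only the two scores well out in the non-null tail fall below the threshold and are assigned to G1.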
4 SELECTION OF GENES
In practice, we do not know the prior probability π0 nor the densities f0(wj) and f(wj), which will have to be estimated. We shall shortly discuss a simple and quick approach to this estimation problem. If π̂0, f̂0(wj) and f̂1(wj) denote estimates of π0, f0(wj) and f1(wj), respectively, the gene-specific summaries of differential expression can be expressed in terms of the estimated posterior probabilities τ̂0(wj), where
When controlling the FDR, it is important to have a guide to the value of the associated FNR in particular, as setting the FDR too low may result in too many false negatives in situations where the genes of interest (related to a biological pathway or a drug target) are not necessarily the top-ranked genes; see, for example, Pawitan et al. (2005). The local FDR in the form of the posterior probability of non-differential expression of a gene has an advantage over the global measure of FDR in interpreting the data for an individual gene; see more details in Efron (2005b) and also the Supplementary information.
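Given the estimated posterior probabilities, plug-in estimates of the global rates follow by averaging, in the spirit of the paper's equations (5) and (6), which are not reproduced in this excerpt; the exact definitions below are therefore assumptions for illustration:

```python
import numpy as np

def estimated_fdr(tau0, c):
    """Average posterior null probability over the genes declared DE."""
    selected = tau0 <= c
    return float(tau0[selected].mean()) if selected.any() else 0.0

def estimated_fnr(tau0, c):
    """Estimated share of the truly non-null genes that are not declared DE."""
    tau1 = 1.0 - tau0
    return float(tau1[tau0 > c].sum() / tau1.sum())

tau0 = np.array([0.02, 0.05, 0.3, 0.8, 0.95])  # hypothetical posteriors
fdr = estimated_fdr(tau0, c=0.1)   # averages 0.02 and 0.05
fnr = estimated_fnr(tau0, c=0.1)
```

Both rates come for free from the fitted model, which is the point made above: the FDR and FNR can be inspected jointly before committing to a threshold c.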
5 USE OF Z-SCORE
5.1 Normal transformation
5.2 Permutation assessment of P-value
In cases where we are unwilling to assume the null distribution F0 of the original test statistic Wj for use in our normal transformation (11), we can obtain an assessment of the P-value Pj via permutation methods. We can use just permutations of the class labels for the gene-specific statistic Wj. This suffers from a granularity problem, since it estimates the P-value with a resolution of only 1/B, where B is the number of permutations. Hence it is common to pool over all N genes; see Supplementary information for details. The drawback of pooling the null statistics across the genes to assess the null distribution of Wj is that one is using different distributions unless all the null hypotheses Hj are true. The distribution of the null values of the differentially expressed genes is different from that of the truly null genes, and so the tails of the true null distribution of the test statistic are overestimated, leading to conservative inferences; see, for example, Pan (2003), Guo and Pan (2005) and Xie et al. (2005).
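A pooled permutation assessment can be sketched as follows on simulated data; transformation (11) is not reproduced in this excerpt, so the z-score step below assumes the common convention zj = Φ−1(1 − Pj) for a two-sided P-value:

```python
import numpy as np
from scipy.stats import norm, ttest_ind

rng = np.random.default_rng(0)
N, n1, n2 = 200, 7, 8                      # toy scale; real N is far larger
X = rng.normal(size=(N, n1 + n2))          # all-null expression matrix
labels = np.array([0] * n1 + [1] * n2)

# Observed pooled t-statistic for each gene
W = ttest_ind(X[:, labels == 0], X[:, labels == 1], axis=1).statistic

# Null statistics pooled over genes and B permutations of the class labels
B = 50
pool = []
for _ in range(B):
    perm = rng.permutation(labels)
    pool.append(ttest_ind(X[:, perm == 0], X[:, perm == 1], axis=1).statistic)
pool = np.concatenate(pool)

# Two-sided P-values from the pooled null, then z-scores (assumed convention)
P = np.array([(np.abs(pool) >= abs(w)).mean() for w in W])
P = np.clip(P, 1.0 / (N * B), 1.0 - 1.0 / (N * B))  # keep z-scores finite
z = norm.ppf(1.0 - P)
```

Pooling gives N × B null values, so the P-value resolution improves from 1/B to 1/(NB), at the cost of the bias discussed above when some null hypotheses are false.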
6 TWO-COMPONENT NORMAL MIXTURE
We now proceed to show that by working in terms of the zj-scores as defined by (11), we can provide a parametric version of the two-component mixture model (1) that is easy to fit. The density of the test statistic Zj corresponding to the use of the z-score (11) for the j-th gene is to be represented by the two-component normal mixture model
As pointed out in a series of papers by Efron (2004, 2005a, b), for some microarray datasets the normal scores do not appear to have the theoretical null distribution, which is the standard normal. In this case, Efron has considered the estimation of the actual null distribution called the empirical null as distinct from the theoretical null. As explained in Efron (2005b), the two-component mixture model (1) assumes two classes, null and non-null, whereas in reality the differences between the genes range smoothly from zero or near zero to very large.
In the case where the theoretical null distribution does not appear to be valid and the use of an empirical null distribution would seem appropriate, we shall adopt the two-component mixture model obtained by replacing the standard normal density by a normal with mean μ0 and variance σ0², both to be inferred from the data. That is, the density of the zj-score is modelled as
In the sequel, we shall model the density of the zj-score by (16). In the case of the theoretical N(0, 1) null being adopted, we shall set μ0 = 0 and σ0² = 1 in (16).
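Model (16) is simply a weighted sum of two normal densities; under the theoretical null one fixes μ0 = 0 and σ0² = 1. A minimal sketch (the non-null mean 1.49 and variance 0.94 are borrowed from the Section 8 fit purely for illustration, and π0 = 0.65 is a hypothetical mixing proportion):

```python
from scipy.stats import norm

def mixture_density(z, pi0, mu0, sigma0, mu1, sigma1):
    """f(z) = pi0 * N(z; mu0, sigma0^2) + (1 - pi0) * N(z; mu1, sigma1^2)."""
    return pi0 * norm.pdf(z, mu0, sigma0) + (1.0 - pi0) * norm.pdf(z, mu1, sigma1)

# Theoretical null: mu0 = 0, sigma0 = 1 are fixed, not estimated.
f_at_zero = mixture_density(0.0, pi0=0.65, mu0=0.0, sigma0=1.0,
                            mu1=1.49, sigma1=0.94 ** 0.5)
```

Under the empirical null, the same function is called with μ0 and σ0 set to their estimated values rather than 0 and 1.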
7 FITTING OF NORMAL MIXTURE MODEL
7.1 Theoretical null
We now consider the fitting of the two-component mixture model (15) to the zj, firstly with the theoretical N(0, 1) null adopted. In order to fit the two-component normal mixture (15), we need to be able to estimate π0, μ1 and σ1². This is effected by maximum likelihood (ML) via the EM algorithm of Dempster et al. (1977), using the EMMIX program as described in McLachlan and Peel (2000); see also McLachlan and Krishnan (1997). To provide a suitable starting value for the EM algorithm in this task, we note that the ML estimate of the parameters in a two-component mixture model satisfies the moment equations obtained by equating the sample mean and variance of the mixture to their population counterparts, which gives
Hence with the specification of an initial value for π0, initial values for the other parameters to be estimated, μ1 and σ1², are automatically obtained from (19) and (20). If there is a problem in so finding a suitable solution for μ1 and σ1², it gives a clue that perhaps the theoretical null is inappropriate and that consideration should be given to the use of an empirical null, as to be discussed shortly.
Following the approach of Storey and Tibshirani (2003) to the estimation of π0, we can obtain an initial estimate of π0 for use in (19) and (20) by taking it to be
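Equation (21) is not reproduced in this excerpt; a form consistent with equating the number of zj values below a threshold ξ to its expected number under the theoretical N(0, 1) null is π̂0(ξ) = #{zj ≤ ξ}/{NΦ(ξ)}, an assumption sketched here on simulated scores:

```python
import numpy as np
from scipy.stats import norm

def initial_pi0(z, xi):
    """Equate #{z_j <= xi} to its null expectation N * pi0 * Phi(xi)."""
    z = np.asarray(z)
    return min(float((z <= xi).mean() / norm.cdf(xi)), 1.0)

# Simulated scores: 70% null N(0,1), 30% non-null N(2,1)
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 700), rng.normal(2, 1, 300)])
pi0_init = initial_pi0(z, xi=-0.675)   # Phi(-0.675) is about 0.25
```

Choosing ξ in the lower tail makes the count nearly free of non-null contamination, so π̂0(ξ) is close to the true null proportion.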
7.2 Empirical null
In this case, we do not assume that the mean μ0 and variance σ0² of the null distribution are zero and one, respectively, but rather they are estimated in addition to the other parameters π0, μ1 and σ1². For an initial value for π0, we let n0 be the greatest integer less than or equal to Nπ0, and assign the n0 smallest values of the zj to one class corresponding to the null component and the remaining N − n0 to the other class corresponding to the alternative component. We then obtain initial values for the means and variances of the null and alternative components by taking them equal to the means and variances of the corresponding classes so formed. The two-component mixture model is then fitted from these starting values for the parameters. For the real datasets to be considered next, the fitting of the normal mixture model takes <0.1 s. The calculation of the t-statistics in the first instance for the individual genes takes ∼4 s for the Hedenfalk dataset and 2 s for the other two sets on a Pentium IV 1.8 GHz computer.
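The paper fits the model with the EMMIX program; the following is an independent sketch of the same EM scheme, using a moment-matching start in the spirit of (19) and (20) for the theoretical null and the rank-based partition start just described for the empirical null. Since (19) and (20) are not reproduced in this excerpt, the moment lines below are reconstructions from the standard mixture mean and variance identities:

```python
import numpy as np
from scipy.stats import norm

def fit_two_normal_em(z, pi0=0.7, fix_null=True, n_iter=500, tol=1e-8):
    """EM for f(z) = pi0 N(mu0, s0^2) + (1 - pi0) N(mu1, s1^2).
    fix_null=True keeps the theoretical N(0,1) null; otherwise the
    empirical-null parameters mu0, s0 are estimated as well."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    if fix_null:
        mu0, s0 = 0.0, 1.0
        # Moment-matching start: solve the mixture mean and variance
        # identities for mu1 and sigma1^2 given pi0, mu0 = 0, s0 = 1
        mu1 = z.mean() / (1.0 - pi0)
        v1 = (z.var() + z.mean() ** 2 - pi0) / (1.0 - pi0) - mu1 ** 2
        s1 = np.sqrt(max(v1, 1e-2))
    else:
        # Partition start: the n0 = floor(N * pi0) smallest scores form the
        # provisional null class, the rest the non-null class
        n0 = int(np.floor(n * pi0))
        zs = np.sort(z)
        z0, z1 = zs[:n0], zs[n0:]
        mu0, s0 = z0.mean(), max(z0.std(), 1e-3)
        mu1, s1 = z1.mean(), max(z1.std(), 1e-3)
    ll_old = -np.inf
    for _ in range(n_iter):
        a0 = pi0 * norm.pdf(z, mu0, s0)
        a1 = (1.0 - pi0) * norm.pdf(z, mu1, s1)
        f = a0 + a1
        tau0 = a0 / f                     # E-step: posterior null probabilities
        pi0 = tau0.mean()                 # M-step updates
        w1 = 1.0 - tau0
        mu1 = (w1 * z).sum() / w1.sum()
        s1 = np.sqrt((w1 * (z - mu1) ** 2).sum() / w1.sum())
        if not fix_null:
            mu0 = (tau0 * z).sum() / tau0.sum()
            s0 = np.sqrt((tau0 * (z - mu0) ** 2).sum() / tau0.sum())
        ll = np.log(f).sum()
        if ll - ll_old < tol:
            break
        ll_old = ll
    return pi0, mu0, s0, mu1, s1, tau0

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 1400), rng.normal(2, 1, 600)])
pi0_hat, mu0_hat, s0_hat, mu1_hat, s1_hat, tau0 = fit_two_normal_em(z)
```

On this well-separated simulated mixture the fit lands near the generating values; with real z-scores, several initial values of π0 should be tried, as in Section 8.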
8 EXAMPLE 1: BREAST CANCER DATA
For our first example, we consider some data from the study of Hedenfalk et al. (2001), which examined gene expressions in breast cancer tissues from women who were carriers of the hereditary BRCA1 or BRCA2 gene mutations, predisposing to breast cancer. The dataset comprised the measurement of N = 3226 genes using cDNA arrays, for n1 = 7 BRCA1 tumours and n2 = 8 BRCA2 tumours. We column normalized the logged expression values, and ran our analysis with the aim of finding differentially expressed genes between the tumours associated with the different mutations. As in Efron (2004), we adopted the classical pooled t-statistic as our test statistic Wj for each gene j and we used the t-distribution function with 13 degrees of freedom, F13, as the null distribution of Wj in the computation of the P-value Pj from (12).
We fitted the two-component normal mixture model (15) with the standard normal N(0, 1) as the theoretical null, using various initial values for π0, as obtained from (21). For example, using (21) with ξ = 0 and −0.675 led to the initial values of 0.70 and 0.66 for π0. The fit we obtained corresponds to the largest local maximum of the likelihood. In Figure 1, we display the fitted mixture density superimposed on the histogram of zj-scores, along with its two components, the theoretical N(0, 1) null density and the N(1.49, 0.94) non-null density, each weighted by its estimated prior probability. It can be seen that this two-component normal mixture model gives a good fit to the empirical distribution of the zj-scores.
In Table 1, we have listed the FDR estimated from (6) for various levels of the threshold co in (5). It can be seen, for example, that if co is set equal to 0.1, then the estimated FDR is 0.06 and Nr = 143 genes would be declared to be differentially expressed. It is not suggested that the FDR should be controlled to be around 0.05. It is just that in this example, its control at this approximate level yields a number (143) of differentially expressed genes that is not too unwieldy for a biologist to handle in subsequent confirmatory experiments; the choice of co is discussed in Efron (2005b).
In the original paper, Hedenfalk et al. (2001) selected 176 genes based on a modified F-test with a P-value cut-off of 0.001. Comparing their selection with our set of 143 genes, we found 107 in common, including genes involved in DNA repair and cell death that are over-expressed in BRCA1-mutation-positive tumours, such as MSH2 (DNA repair) and PDCD5 (induction of apoptosis). Storey and Tibshirani (2003), in their analysis of this dataset, selected 160 genes by thresholding genes with q-values less than or equal to α = 0.05 (an arbitrary cut-off value), of which 113 are in common with our set of 143. Overall, 101 genes were selected in common by all three studies, with 24 genes unique to our set. We searched publicly available databases for the biological functions of these genes and found that they included DNA repair, cell cycle control and cell death, suggesting good evidence for the inclusion of these genes (see the Supplementary information for a fuller description).
Concerning the other type of allocation rates for the choice of co = 0.1 in (5), the estimates of the FNDR, FNR and FPR are equal to 0.32, 0.88 and 0.004, respectively. The FNR of 0.88 means that there would be quite a few false negatives among the genes declared to be null (not differentially expressed).
Among other analyses of this dataset, π0 was estimated to be 0.52 by Broët et al. (2004), 0.64 by Gottardo et al. (2006), 0.61 by Ploner et al. (2006) and 0.47 by Storey (2002). In the fully parametric Bayesian approach of Broët et al. (2004), the mean of the null component was fixed at zero, but the variance was allowed to be free during the estimation process for computational convenience. In Ploner et al. (2006), 56 genes with highly extreme expression values were first removed as in Storey and Tibshirani (2003).
We also considered the fitting of the two-component normal mixture model (16) with the null component mean and variance, μ0 and σ0², now estimated in addition to π0 and the non-null mean and variance, μ1 and σ1². We found that this fit from using the empirical null in place of the N(0, 1) theoretical null is similar to the fit in Figure 1; see Figure 1 in the Supplementary information. Efron (2003) writes that ‘there is ample reason to distrust the theoretical null’ in the case of the Hedenfalk data. The difference in our findings may be due to the fact that our gene expression data seem to differ when compared with the expression data presented in Efron and Tibshirani (2002). In other analyses of this dataset, Newton et al. (2001), Tusher et al. (2001) and Gottardo et al. (2006) concluded that there were 375, 374 and 291 genes, respectively, differentially expressed when the FDR is controlled at the 10% level. It can be seen from Table 1 that our approach gives 338 genes as being differentially expressed when the threshold of co = 0.2 is imposed on the posterior probability of non-differential expression, for which the implied FDR is 11% and the FNR is 73%. The corresponding values with the use of the empirical null are 12 and 76% for the FDR and FNR, respectively, with 235 genes declared to be differentially expressed. According to the Bayesian information criterion (BIC), the empirical null would not be selected in favor of the theoretical N(0, 1) null (see Supplementary information). The (main) reason for fewer genes being declared differentially expressed with the use of the empirical than with the theoretical null is that the estimate of π0 is greater.
9 EXAMPLE 2: COLON CANCER DATA
We apply our method next to the well-known study of Alon et al. (1999), where Affymetrix arrays were used to compare the gene expressions from colon cancer tissues with normal colon tissues. They measured expressions of over 6500 genes in n1 = 40 tumor and n2 = 22 normal colon tissue samples. The samples were taken from 40 different patients, so that 22 patients supplied both a normal and a tumor tissue sample. We consider the set of N = 2000 genes with highest minimal intensity across the samples as in Alon et al. (1999).
As in the last example, we adopted the (pooled) t-statistic as our test statistic Wj for each gene j, and so we used the t-distribution function with 60 degrees of freedom, F60, as the null distribution of Wj in the computation of the P-value Pj from (12). We fitted the two-component normal mixture model (15) with the theoretical N(0, 1) null to the zj-scores. A plot of the fitted normal mixture density and its two components is displayed in Figure 2. Our estimate of 0.39 for π0 coincides with the empirical Bayes estimate reported in Do et al. (2005).
If we again consider a threshold of co = 0.1 in (5), then 433 genes would be declared to be differentially expressed with an implied FDR and FNR of 3 and 65%, respectively. The smooth muscle genes provide a positive control for differential gene expression, as the normal tissues were later found to have included smooth muscle tissue in the biopsy samples. We find that for the smooth muscle genes (J02854, T60155, M63391, D31885, X74295, X12369) each has an estimated posterior probability of non-differential expression that is <0.0015, and these genes are found within our top 66 ranked genes. For this dataset, Do et al. (2005) concluded that each gene in this cluster has an estimated posterior probability (of non-differential expression) of 0.002 or less.
In their original analysis, Alon et al. (1999) identified a ribosomal gene cluster associated with over-expression in tumor tissues relative to normal tissues. Of these 29 genes (as listed in Table 1 of Alon et al. 1999), only 10 are declared differentially expressed at a threshold co of 0.05 (within a subset of 296 genes). Do et al. (2005) similarly identified only 13 genes with posterior probabilities of non-differential expression of <0.05. Their two genes with weakest discriminatory ability were H77302 and T63484, with posterior probabilities of non-differential expression of 0.44 and 0.49; ours are H77302 and R85464, with posterior probabilities of 0.59 and 0.65. Our results, supported by the findings in Do et al. (2005), suggest that only a minority of the ribosomal genes are actually differentially expressed between the tumor and normal classes. We also considered the use of the empirical null instead of the theoretical N(0, 1) null for this dataset. The plot of the normal mixture density is given in Figure 2 in the Supplementary information. For a threshold of co = 0.1 in (5), 253 genes would be declared to be differentially expressed with an implied FDR and FNR of 4 and 74%, respectively. The reason for fewer genes being declared differentially expressed with the use of the empirical than with the theoretical null is that the estimate of π0 is greater. According to the BIC, the empirical null would not be selected in favor of the theoretical N(0, 1) null (see Supplementary Information).
10 EXAMPLE 3: HIV DATA
We consider finally the HIV dataset of van't Wout et al. (2003), as also discussed in Gottardo et al. (2005). van't Wout et al. used cDNA arrays to measure expression levels of N = 7680 genes in CD4-T-cell lines, at time t = 24 h after infection with the HIV-1 virus. There were four control slides (pooled mRNA from uninfected cells) and four slides run using pooled mRNA from the infected cells, so that n1 = 4 and n2 = 4. We column normalized the logged expression values, using the dataset as downloaded from . The dataset contains 12 HIV-1 genes which were used as positive controls, as they are known to be differentially expressed in infected versus uninfected cells. As in Efron (2004), we adopted the (pooled) t-statistic and used the t-distribution with 6 degrees of freedom for F0 in the computation of the P-value Pj from (12). As noted by Efron (2004, 2005b), this dataset is an example of one where the theoretical N(0, 1) null is inapplicable due to the underdispersion of the zj-scores about the origin (Fig. 3). A fit of a single normal to all the zj-scores gives an N(−0.16, 1.06) distribution, clearly showing that it is not appropriate to adopt the theoretical N(0, 1) null component in our two-component mixture model (15). As there are only 6 degrees of freedom available for the use of the parametric t assumption for the null distribution F0 of the test statistic, we also computed the P-values using the permutation estimate with B = 35 permutations (pooled over the genes). It gave a similar fit, N(−0.146, 0.997), for a single normal fitted to the zj-scores. As explained by Efron (2005b), permutation methods in the context of this problem should be considered as a means for providing improved versions of the theoretical null rather than empirical nulls.
We considered an empirical null for this dataset by fitting the normal mixture model (16). The empirical null so obtained is similar to that obtained by Efron (2005b) for this dataset. Using a fully Bayesian approach to model the gene expressions, Gottardo et al. (2006) estimated π0 to be 0.993, which is similar to our quick and easy estimate of 0.93.
On setting a threshold of co = 0.01 on the posterior probability of non-differential expression, we find that there is a subset of 15 genes that are differentially expressed, which contains the 12 genes known to be differentially expressed from external information. The implied FDR is 0.002. For co = 0.1, we obtain that 37 genes are differentially expressed with an implied FDR of 0.03.
The HIV dataset is unique among the examples discussed, as cell lines were used rather than tissue biopsies from patients. Additionally, each of the four microarray experiments within a class (uninfected or infected) used the same pooled RNA, so that each of these was a technical replicate rather than an individual sample. These factors may explain the decreased dispersion.
In this paper, we consider the problem of detecting which genes are differentially expressed in multiple classes of tissue samples, where the classes represent various clinical or experimental conditions. The available data consist of the expression levels of typically a very large number of genes for a limited number of tissues in each class. Usually, a test statistic such as the classical t-statistic in the case of two classes or the F-statistic in the case of multiple classes is formed for a test of equality of the class means. The key step in this approach is to transform the observed value of the test statistic for each gene j to a z-score zj by using the inverse standard normal distribution function of the implied P-value Pj, similar to its use in Efron (2004) and his subsequent papers on this problem. We demonstrate that a two-component normal mixture model is adequate for modelling the empirical distribution of the zj-scores, where the first component is the standard normal, corresponding to the null distribution of the score, and the second component is a normal density with unspecified (positive) mean and variance, corresponding to the non-null distribution of the score. This model can be used to provide a straightforward and easily implemented assessment of whether a gene is null (not differentially expressed) in terms of its posterior probability of being a null gene. Estimates of this posterior probability can be easily obtained by using the EM algorithm to fit the two-component normal mixture model via ML. As there are multiple local maximizers, consideration has to be given to the choice of starting values for the algorithm. We show that the specification of an initial value for the proportion π0 of null genes completely specifies a starting point for the fitting of the normal mixture model with the theoretical choice of N(0, 1) as the null component.
An interval of values for the initial estimate of π0 can be tried, and a guide to its endpoints is given by values of π0 obtained by equating the number of zj values less than a threshold ξ to the expected number under the theoretical N(0, 1) null component. We consider too the case where the theoretical N(0, 1) null is not tenable and an empirical null is adopted with mean and variance estimated from the data. Also, the estimation of the FDR and its control are considered, along with the estimation of other relevant rates such as the FNR. Note that it is not valid to make claims as to the relative superiority of the two models corresponding to the theoretical and empirical nulls on the basis of these error rates, as they are only valid for the model under which they were calculated. Our approach is demonstrated on three real datasets.
Concerning the choice between the use of the theoretical N(0, 1) null and an empirical null, the intent in the first instance is to use the former in modelling the density of the zj-scores. In some situations as with the HIV dataset above, it will be clear that the use of the theoretical null is inappropriate. In other situations, an informed choice between the theoretical and empirical null components can be made on the basis of the increase in the log likelihood due to the use of an empirical null with its two extra parameters. For this purpose we can use BIC as in the first two examples above.
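The BIC comparison referred to above amounts to penalizing the two extra empirical-null parameters against the gain in log likelihood; a minimal sketch (the log-likelihood values here are placeholders, not figures from any dataset in the paper):

```python
import numpy as np

def bic(loglik, n_params, n_genes):
    """BIC = -2 log L + d log N; the model with the smaller BIC is preferred."""
    return -2.0 * loglik + n_params * np.log(n_genes)

# Theoretical null: d = 3 free parameters (pi0, mu1, sigma1^2).
# Empirical null:   d = 5 (mu0 and sigma0^2 are estimated as well).
# The log-likelihood values are placeholders for illustration only.
bic_theoretical = bic(-5000.0, 3, 3226)
bic_empirical = bic(-4998.0, 5, 3226)
prefer_empirical = bic_empirical < bic_theoretical
```

With these placeholder values the modest log-likelihood gain does not cover the penalty of the two extra parameters, mirroring the outcome reported for the first two examples.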
The reliability of our approach obviously depends on how well the proposed two-component normal mixture model approximates the empirical distribution of the zj-scores. In the three examples here and in our analyses of other datasets not presented here, this normal mixture model provides an excellent approximation. Its fit can be assessed either by visual inspection of a plot of the fitted normal mixture density versus a histogram of the zj-scores or, more formally, by a likelihood ratio test for the need for an additional normal density to represent the non-null distribution of the zj-scores. On a similar note on the adequacy of a two-component normal mixture model, Pounds and Morris (2003) found that a two-component mixture of the uniform (0,1) distribution and a single beta component (with one unspecified unknown parameter) was adequate to model the distribution of the P-values in their analyses. However, it is advantageous to work as proposed here in terms of the zj-scores, which can be modelled by normal components on the real line rather than working in terms of the P-values.
Finally, we should mention explicitly that it has been assumed throughout that the genes are all independently distributed. Typically in practice, this independence assumption will not hold for all the genes. As cautioned by Qiu et al. (2005), care is needed in extrapolating results valid in the case of independence to dependent gene data. In the Supplementary information, we report the results of a few initial simulations that we have performed to investigate the effect of correlation between some of the genes on the distribution of the z-scores. For moderate correlation, the effect was small, while for strong correlation it was found that the use of the empirical null in place of the theoretical N(0, 1) null avoided any marked effect (see Figs 3 and 4 in the Supplementary information). Another approach to handle the effect of strong correlation between the genes would be first to cluster the (normalized) gene profiles on the basis of Euclidean distance, so that highly correlated genes are put in the same cluster. The genes in each of those clusters with highly correlated members can then be represented by a single gene or a linear combination of the member genes (a metagene). To incorporate a priori knowledge that some genes belong to the same pathway, we can take the cluster labels for these genes to be the same in the specification of the complete data in the EM framework for the problem.
Conflict of Interest: none declared.