Peng Liu, J. T. Gene Hwang, Quick calculation for sample size while controlling false discovery rate with application to microarray analysis, Bioinformatics, Volume 23, Issue 6, 15 March 2007, Pages 739–746, https://doi.org/10.1093/bioinformatics/btl664
Abstract
Motivation: Sample size calculation is important in experimental design and is even more so in microarray or proteomic experiments since only a few repetitions can be afforded. In the multiple testing problems involving these experiments, it is more powerful and more reasonable to control false discovery rate (FDR) or positive FDR (pFDR) instead of type I error, e.g. family-wise error rate (FWER). When controlling FDR, the traditional approach of estimating sample size by controlling type I error is no longer applicable.
Results: Our proposed method applies to controlling FDR. The sample size calculation is straightforward and requires minimal computation, as illustrated with two sample t-tests and F-tests. Based on simulation with the resultant sample size, the power is shown to be achievable by the q-value procedure.
Availability: A Matlab code implementing the described methods is available upon request.
Contact: pliu@iastate.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
1 INTRODUCTION
Microarray and proteomic experiments are becoming popular and important in many biological disciplines, such as neuroscience (Mandel et al., 2003), pharmacogenomics, genetic disease and cancer diagnosis (Heller, 2002). These experiments are rather costly in terms of both materials (samples, reagents, equipment, etc.) and laboratory manpower. Many microarray experiments employ only a small number of replicates (2–8) (Yang and Speed, 2003). In many cases, the sample size is not adequate to achieve reliable statistical inference, and resources are wasted as a result. Therefore, scientists often ask: how large should the sample size be?
To answer this question, we calculate the sample size that controls some error rate and achieves a desired power. When calculating sample size for a single test, the error rate to control is traditionally the type I error rate, the probability of a false positive, i.e. of rejecting a true null hypothesis. In microarray studies, however, a huge number of hypotheses are tested simultaneously, one for each gene; hence multiple testing is commonly applied in the analysis of microarray data. There are several kinds of error rates to control in this context, such as the family-wise error rate (FWER) or the false discovery rate (FDR). Assume there are m genes on the microarray chips and each gene is tested for significance of differential expression. The test outcomes are summarized in Table 1, where, for example, V is the number of false positives and R is the number of rejections among the m tests (Benjamini and Hochberg, 1995).
Table 1. Outcomes when testing m hypotheses

| Hypothesis | Accept | Reject | Total |
|---|---|---|---|
| Null true | U | V | m0 |
| Alternative true | T | S | m1 |
| Total | W | R | m |
For genomic data such as microarray, Storey and Tibshirani (2003) argued that it is more reasonable and more powerful to control FDR or pFDR instead of FWER. However, sample size has traditionally been calculated for a given type I error rate, and such calculations cannot be applied directly under FDR control.
Several articles have addressed the problem of sample size calculation in microarray experiments (Hwang et al., 2002; Lee and Whitmore, 2002; Warnes and Liu, 2006). Lee and Whitmore (2002) calculated sample size tables with an ANOVA model when controlling the expected number of false positives (E[V]). Hwang et al. (2002) proposed a method that first identifies differentially expressed genes and then calculates the power and sample size on a space reduced by Fisher discriminant analysis. Warnes and Liu (2006) proposed a method with a cumulative plot to visualize the trade-off between power and sample size. Other articles have addressed sample size calculation for different designs (Dobbin and Simon, 2005) or specific settings such as classification (Hua et al., 2005). All of the above methods control the type I error rate, not FDR.
Recently, a few articles have investigated the need to calculate sample size while controlling FDR and proposed ways to pursue this goal. Yang et al. (2003) applied several inequalities to obtain a type I error rate that corresponds to the controlled level of FDR; because of the inequalities applied, the sample size is likely overestimated. Pawitan et al. (2005) investigated several operating characteristic curves to visualize the relationship between FDR, sensitivity and sample size; although their approach can be useful in calculating the sample size, no simple direct algorithm was provided. Jung (2005) derived a formula that relates FDR to the type I error rate, so that FDR is controlled through an appropriate level of the type I error rate. Pounds and Cheng (2005) proposed an algorithm that iteratively searches for the sample size at which the desired power and controlled level of FDR can be achieved. Since FDR-controlling procedures are gaining popularity in multiple testing problems, including microarray analysis, it is important to be able to calculate the sample size needed to control FDR when designing an experiment.
Here, we propose a procedure to calculate the sample size for multiple testing while controlling FDR. First, for any estimate of the proportion of non-differentially expressed genes and the level of FDR to control, we find a rejection region for each sample size. Then power is calculated for the selected rejection region for each sample size. According to the desired power, a sample size is finally decided.
Jung's (2005) approach, which became known to us after we had finished our first draft, is more closely related to our proposed approach than the others. Both Jung's approach and ours are based on the same model assumptions, which lead to the same FDR expression. The FDR expression is then controlled by studying its relationship to a single quantity: the type I error rate for Jung, and the critical value (the rejection region) for us. Jung provided formulas for Z-tests and t-tests; applied to Jung's setting, our approach yields the same result. Our approach, however, is more graphical than Jung's. This allows the trade-off between power and sample size to be visualized and provides a quick answer when user-defined quantities such as power are modified.
In spite of the similarity, this article extends the approach in several directions, and we find our approach very satisfactory. First, we apply it to F-tests, which are widely used in microarray data analysis (Cui et al., 2005). Second, we study our approach carefully for the case when the means and variances of expression levels vary among genes, an important and practical setting for microarray. Third, we show by simulation that the q-value procedure for controlling FDR proposed by Storey et al. (2004), applied with our suggested sample size, achieves the target power to a satisfactory degree. This answers positively the question of whether any statistical procedure can realize the target power claimed by the proposed method. Finally, we compare our approach with those of Yang et al. (2003) and Pounds and Cheng (2005), which provide more well-defined algorithms than the other articles. Our simulation demonstrates that our proposed method is superior.
The article is organized as follows. Section 2 describes our proposed method, illustrated with two-sample t-tests and F-tests. Section 3 reports simulation studies that compare the power anticipated by the proposed method with the actual power achieved by the q-value procedure. Section 4 summarizes our results.
Matlab code implementing the proposed method is available upon request.
2 METHOD
In this section, we first illustrate our idea and then show how to apply the proposed method for two designs of microarray experiment.
2.1 Proposed method
2.2 Applications of proposed method
Microarray experiments are usually set up to find differentially expressed genes between treatments. The scanned intensity data usually go through quality control, transformation and normalization, as reviewed in Smyth et al. (2003) and Quackenbush (2002); we assume the data have gone through these steps before statistical tests are applied. Before the experiment, there are no observations with which to check the distribution, so it seems reasonable to make the convenient assumption that the pre-processed data are normally distributed, so that two-sample t-tests and F-tests are applicable. The same assumption is made by other proposed methods for calculating sample size (Dobbin and Simon, 2005; Jung, 2005; Hua et al., 2005; Hwang et al., 2002).
2.2.1 Two-sample comparison with t-test
For gene g, the two-sample t-statistic is Tg = (x̄T,g − x̄C,g) / [sg · sqrt(1/n1 + 1/n2)], where sg² is the pooled sample variance and x̄T,g and x̄C,g are the means of observed expression levels for gene g in the treatment and control groups, respectively. The test statistic Tg has a central t-distribution under the null hypothesis and a non-central t-distribution under the alternative hypothesis. We reject the null hypothesis if |Tg| > cg, where cg is to be determined. Applying Equation (4), we find the critical value cg that satisfies

f = π0 · 2[1 − Td(cg)] / {π0 · 2[1 − Td(cg)] + (1 − π0) · [1 − Td(cg | θg) + Td(−cg | θg)]},   (7)

where f is the FDR level to control, Td(· | θ) is the cumulative distribution function (c.d.f.) of a non-central t-distribution with d = n1 + n2 − 2 degrees of freedom and non-centrality parameter θ, and Td(·) is Td(· | θ) for θ = 0. In (7), the non-centrality parameter is θg = Δg / [σg · sqrt(1/n1 + 1/n2)], where Δg = μT,g − μC,g is the true difference between the mean expressions of the treatment and control groups and σg is the standard deviation for gene g. In this section, we assume a simplified case in which Δg and σg are identical for all genes; Section 2.2.3 deals with the more realistic case when Δg and σg vary among genes. The subscript g is therefore dropped for the rest of this section. The right-hand side of (7) is strictly decreasing in c, and hence the solution for c is unique when it exists; the same comment applies to Equations (14) and (17) in later sections (see Appendix C for a proof). In response to a referee's question, we discovered that the minimum (over c) level of FDR is positive, occurring as c → ∞. This is quite interesting since there is no such positive lower bound for the type I error rate. The minimum FDR, however, converges to zero very quickly as the sample size increases (see Figure S1 in the Appendix).
Figure 1 plots power versus sample size when FDR is controlled at 5%. As an example, suppose we want to determine the sample size when π0 = 90%. If a 2-fold change is the effect of interest (correspondingly, Δ = log2(2) = 1) and σ = 0.5 from previous knowledge, then Δ/σ = 2. Using the middle curve in Figure 1a, a desired power of 80% would require a sample size of 9 for each group.
Plot of power versus sample size for t-test. Controlling FDR at 5%, we applied the proposed method to calculate power for each sample size. Panel (a) is for Δ / σ = 2 and panel (b) is for Δ/ σ = 5.
We have included the case of a relatively small π0 (50%) in Figure 1. When π0 is small, the microarray data should be normalized with care, because normalization methods for microarray typically rely on the assumption of a large π0, i.e. a small number of differentially expressed genes. In this case, we suggest using housekeeping genes to perform the normalization. Our method remains applicable as long as a proper estimate of σ (based on appropriately normalized values) is used.
We shall take σ to be 0.2, the median of the standard deviations in the U133 microarray data set of Warnes and Liu (2006), i.e. gene expression levels of human smooth muscle cells from healthy volunteers. (One of the referees mentioned to us that the median σ is typically around 0.7 with human samples and U133A arrays; in such a case, we would set σ to 0.7 instead.) Also, in Cui et al. (2005), 0.2 is approximately the 90th percentile of the residual standard deviations for the granulosa cell tumor microarray data. (The 90th percentile is a conservative choice in that a smaller percentile would lead to a smaller required sample size.) If a 2-fold change (Δ = log2(2) = 1) is again considered to be the true effect size, then Δ/σ = 5. From the middle curve of Figure 1b, corresponding to π0 = 0.9, a sample size of 4 is needed to obtain at least 80% power.
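The two worked examples above (n = 9 for Δ/σ = 2 and n = 4 for Δ/σ = 5, both with π0 = 0.9, FDR at 5% and 80% power) can be reproduced with a short numerical sketch. The authors' implementation is in Matlab and available on request; the Python translation below is our own illustration, with hypothetical function names. It solves Equation (7) for the critical value c at each candidate sample size and returns the smallest n whose power reaches the target.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import t as t_dist
from scipy.stats import nct

def critical_value(n, delta_over_sigma, pi0, fdr, c_max=50.0):
    """Solve Equation (7) for the critical value c of the two-sample t-test
    with n replicates per group (equal group sizes assumed)."""
    df = 2 * n - 2
    theta = delta_over_sigma * np.sqrt(n / 2.0)  # non-centrality parameter

    def fdr_at(c):
        alpha = 2 * t_dist.sf(c, df)                      # type I error of |T| > c
        power = nct.sf(c, df, theta) + nct.cdf(-c, df, theta)
        return pi0 * alpha / (pi0 * alpha + (1 - pi0) * power)

    # The FDR expression is strictly decreasing in c; it has a positive lower
    # bound as c -> infinity, so a solution may not exist for small n.
    if fdr_at(c_max) > fdr:
        return None
    return brentq(lambda c: fdr_at(c) - fdr, 1e-6, c_max)

def power_at(n, delta_over_sigma, pi0, fdr):
    c = critical_value(n, delta_over_sigma, pi0, fdr)
    if c is None:
        return 0.0
    df, theta = 2 * n - 2, delta_over_sigma * np.sqrt(n / 2.0)
    return nct.sf(c, df, theta) + nct.cdf(-c, df, theta)

def sample_size(delta_over_sigma, pi0, fdr=0.05, target_power=0.8, n_max=100):
    """Smallest per-group n achieving the target power at the given FDR level."""
    for n in range(2, n_max + 1):
        if power_at(n, delta_over_sigma, pi0, fdr) >= target_power:
            return n
    return None

print(sample_size(2.0, 0.9))  # paper's Figure 1a example: 9
print(sample_size(5.0, 0.9))  # paper's Figure 1b example: 4
```

Sweeping `power_at` over n and plotting gives the power-versus-sample-size curves of Figure 1, which is the graphical trade-off the method emphasizes.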
2.2.2 Multi-sample comparison with F-test
For microarray experiments comparing several treatments, various design schemes are applied (Yang and Speed, 2003). Suppose that, without any replication, a design requires s slides; we call these s slides a 'set' for this design. For example, suppose we want to compare gene expression among three independent treatments, such as livers from three genotypes of mice (Horton et al., 2003). If we apply a loop design as shown in Figure 2, a 'set' of three slides is needed for a two-color microarray experiment. Whether the replicates are different biological samples or technical repetitions, our method is applicable as long as the appropriate parameters (means and variances) are used in the calculation. We recommend using different biological samples in the experiment because this provides more general conclusions. The question is how many sets of slides are adequate to obtain sufficient power with a controlled FDR.
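The F-test version of the method follows the same logic as the t-test case: the FDR equation is solved for a critical value, with central and non-central F distributions in place of the t distributions. The sketch below is our own illustration (not the paper's Matlab code) and assumes, for simplicity, a one-way layout with k treatments and n biological replicates each; the loop design of Figure 2 would use its own design-specific degrees of freedom and non-centrality parameter.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import f as f_dist
from scipy.stats import ncf

def f_test_power(n, means, sigma, pi0, fdr, c_max=1e3):
    """Per-gene power of the one-way ANOVA F-test when the critical value is
    chosen so that the FDR expression equals `fdr` (n replicates per treatment)."""
    k = len(means)
    df1, df2 = k - 1, k * (n - 1)
    # Non-centrality parameter of the F-test under the alternative
    lam = n * np.sum((np.asarray(means) - np.mean(means)) ** 2) / sigma ** 2

    def fdr_at(c):
        alpha = f_dist.sf(c, df1, df2)       # type I error of F > c
        power = ncf.sf(c, df1, df2, lam)     # power under non-central F
        return pi0 * alpha / (pi0 * alpha + (1 - pi0) * power)

    if fdr_at(c_max) > fdr:                  # target FDR unreachable at this n
        return 0.0
    c = brentq(lambda c: fdr_at(c) - fdr, 1e-9, c_max)
    return ncf.sf(c, df1, df2, lam)

# Hypothetical example: three treatments, one shifted mean, sigma = 0.5
for n in (2, 3, 4, 5):
    print(n, round(f_test_power(n, [0.0, 0.0, 1.0], 0.5, 0.9, 0.05), 3))
```

As with the t-test, the smallest n whose power reaches the target gives the number of replicate sets needed.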
A design example for microarray experiment to compare gene expressions among three treatments. By convention, each arrow represents one two-color array with the green-labeled sample at the tail and the red-labeled sample at the head of the arrow. This design needs three arrays for one loop.
The estimator of any linear contrast of the treatment means is also normally distributed, and we can apply this result to draw statistical inference for these parameters and their linear contrasts.

2.2.3 Case for unequal Δg and σg
So far, we have proceeded as if all genes have the same set of parameters. In such cases, the average power across all genes would be the same as the power for individual genes. In reality, each gene may have a different set of parameters. If we use the two-sample comparison as an example, the gene-specific parameters include σg, the standard deviation, and Δg, the true difference between the mean expressions of the treatment and the control group.
For the integration with respect to σg, we apply adaptive Lobatto quadrature, which allows the root c to be computed stably; the calculation with this numerical integration provides answers essentially instantly. Once c is obtained for each sample size, we calculate the corresponding power and determine the needed sample size from the power curve.
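As an illustration of this averaging step (our own sketch, not the authors' code): MATLAB's `quadl` implements adaptive Lobatto quadrature, and `scipy.integrate.quad` is a comparable adaptive routine in Python. Here we assume, as in the simulations of Section 3, that 1/σg² follows a gamma distribution, so σg² is inverse gamma; the average power at a fixed critical value c is the power integrated against that density.

```python
import numpy as np
from scipy import integrate
from scipy.stats import invgamma, nct

def average_power(c, n, delta, a=3.0, scale=1.0):
    """Average power over gene-specific variances, where 1/sigma_g^2 ~ Gamma(a, 1)
    so that sigma_g^2 ~ InvGamma(a, scale); two-sample t-test, n per group."""
    df = 2 * n - 2

    def integrand(s2):
        theta = (delta / np.sqrt(s2)) * np.sqrt(n / 2.0)
        if theta > 50:
            # power is numerically 1 for very small sigma_g; avoids extreme ncp
            power = 1.0
        else:
            power = nct.sf(c, df, theta) + nct.cdf(-c, df, theta)
        return power * invgamma.pdf(s2, a, scale=scale)

    val, _ = integrate.quad(integrand, 0.0, np.inf)
    return val
```

The same integral, taken with Δg also varying, gives the average power used in Section 2.2.3; solving for c then amounts to a one-dimensional root-find with this integral inside.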
3 SIMULATION
How realistic is the sample size calculated by the proposed method? More specifically, if the desired power is 80%, FDR = 5% and our approach yields a sample size of 9 for the two-sample comparison with a t-test, is there a statistical test that would actually achieve all these operating characteristics with 9 slides? To find out, we simulate data with the calculated sample size and perform multiple testing with an FDR-controlling procedure. We then check:
whether the multiple testing actually results in desired power for the calculated sample size, and
whether the observed FDR is comparable with the level that we want to control.
If we can find a statistical procedure that achieves the desired FDR and power at the calculated sample size, our procedure is then demonstrated to be practical. This is indeed the case.
There are several procedures to control FDR, such as the q-value procedure proposed by Storey and Tibshirani (2003) and Storey et al. (2004), and the procedures proposed by Benjamini and Hochberg (1995, 2000). All of these procedures control FDR conservatively (Storey et al., 2004). For the simulation study, we apply the q-value procedure as outlined in Storey et al. (2004). An earlier version of the manuscript applied the procedure in Storey and Tibshirani (2003), with results similar to those reported here.
We first test the proposed method when observations (genes) are independent of each other. In a microarray setting, we suppose there is a total of 5000 genes and equal sample sizes for the treatment and control groups (n1 = n2 = n). Gene-specific variances σg² are simulated from an inverse gamma distribution. As in Wright and Simon (2003), we chose 1/σg² ∼ Γ(3, 1) because this distribution approximates well several microarray data sets that we have been analyzing. For the control group, gene expression values are simulated from N(0, σg²). For the treatment group, we set Δg = 0 for non-differentially expressed genes and simulate Δg from N(0, σΔ²) for differentially expressed genes; gene expression values are then simulated from N(Δg, σg²).
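A minimal version of this simulation can be sketched as follows. This is our own illustration: for simplicity it applies the Benjamini–Hochberg step-up procedure in place of the q-value procedure (both control FDR conservatively), and it uses an illustrative sample size of n = 9 per group with π0 = 0.9 and σΔ = 2, one of the parameter settings in Table 2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
m, pi0, n, sigma_delta, q = 5000, 0.9, 9, 2.0, 0.05
m1 = int(m * (1 - pi0))                               # differentially expressed genes

# Gene-specific variances: 1/sigma_g^2 ~ Gamma(3, 1), as in Wright and Simon (2003)
sigma2 = 1.0 / rng.gamma(3.0, 1.0, size=m)
delta = np.zeros(m)
delta[:m1] = rng.normal(0.0, sigma_delta, size=m1)    # effect sizes for DE genes

sd = np.sqrt(sigma2)
control = rng.normal(0.0, sd[:, None], size=(m, n))
treatment = rng.normal(delta[:, None], sd[:, None], size=(m, n))

# Two-sample t-test per gene
_, pvals = stats.ttest_ind(treatment, control, axis=1)

# Benjamini-Hochberg step-up at level q (stand-in for the q-value procedure)
order = np.argsort(pvals)
below = pvals[order] <= q * np.arange(1, m + 1) / m
k = below.nonzero()[0].max() + 1 if below.any() else 0
rejected = np.zeros(m, dtype=bool)
rejected[order[:k]] = True

fdp = rejected[m1:].sum() / max(rejected.sum(), 1)    # observed false discovery proportion
power_obs = rejected[:m1].mean()                      # observed average power
print(f"rejections={rejected.sum()}, FDP={fdp:.3f}, power={power_obs:.3f}")
```

Repeating this over 200 data sets and averaging the observed power and FDP reproduces the comparison reported in this section.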
There are several parameters involved for the simulation, π0 (the proportion of non-differentially expressed genes), σΔ (the standard deviation of effect size) and for the dependent case, the correlation coefficient ρ. To evaluate the accuracy of our sample size calculation method, we perform the simulation with a factorial design and the levels (values) of each factor (parameter) are summarized in Table 2. For each of the 48 parameter settings, the FDR is controlled at 5% for multiple testing.
Table 2. Parameter values in the simulation study

| Parameter | Values in simulation |
|---|---|
| π0 | 0.995, 0.95, 0.9, 0.8 |
| σΔ | 0.2, 1, 2 |
| ρ | 0, 0.2, 0.5, 0.8 |
For each parameter setting of the independent case, we calculate the anticipated power for each sample size and generate the power curve as described in Section 2. We also simulate 200 data sets and perform t-tests on each, using the q-value procedure (Storey et al., 2004) to control FDR. The observed power is averaged over the 200 simulated data sets, and the observed proportion of false discoveries is also recorded. The anticipated power curves based on our calculation are almost indistinguishable from the simulated power curves for all parameter settings; examples are shown in Figure 3a. Hence, our proposed method provides an accurate estimate of sample size. The observed FDR is also close to the controlled level (5%), as shown in Figure 3b, confirming the validity of the procedure in Storey et al. (2004).
Simulation results. (a) Observed power curves are plotted with dashed lines while the anticipated power curves based on our calculation are plotted with solid lines for different π0's. For all three π0's, the difference between the anticipated and observed power are almost indistinguishable. (b) Observed false discovery rates (FDRs) for the three parameter settings corresponding to (a) are plotted. The controlled level of 5% is indicated with the dashed line.
Since many genes may function as groups, it is very likely that dependencies exist in gene expression data. To check the performance of the proposed method when the assumption of independence is violated, gene expression levels are also simulated according to a dependence structure (Ibrahim et al., 2002). Then the same procedure of testing as above is applied and the resulting power curves are compared with our calculation.
Expression levels Xg for the control group and Yg for the treatment group are simulated under this dependence structure, with Δg = μYg − μXg. Examples of power curves are presented in Figure 4. Of the 36 parameter settings in the dependent case, 34 show results similar to Figure 4a, demonstrating that the anticipated power approximates the actual power very well. For two settings the discrepancy between the anticipated power and the simulation is relatively larger; Figure 4b shows the worse of the two (ρ = 0.8). Even in this case, the anticipated power based on our calculated sample size is very close to the simulation results.
Simulation results. Observed power curves are plotted with dashed lines while the anticipated power curve based on our calculation is plotted with solid lines for different parameter settings in (a) and (b).
When Δg and σg² are the same for all genes, simulations show that our method provides accurate sample size estimates for both independent and dependent data, similar to the simulation results shown above.
Several articles have addressed the question of calculating sample size while controlling FDR; among them, Yang et al. (2003) and Pounds and Cheng (2005) provided clearly defined algorithms. We have compared our approach with these methods in the context of the two-sample t-test for fixed Δg and σg². Table 3 shows that the calculated sample size based on our proposed approach agrees with the actual sample size needed according to simulation. Yang's approach gives similar answers to ours, except that in some cases it is slightly conservative. Answers from Pounds and Cheng's algorithm are too liberal in one situation (Δ/σ = 1) and deviate from the correct answer considerably more than the other two methods.
Table 3. Comparison of sample size calculation methods: Yang's approach, Pounds and Cheng's approach (PC) and the method proposed in this paper (LH), against the actual simulation result (Simu)

| Δ/σ = 2 | Yang's | PC | LH | Simu |
|---|---|---|---|---|
| π0 = 0.5 | 8 | 7 | 6 | 6 |
| π0 = 0.9 | 10 | 10 | 9 | 10 |
| π0 = 0.95 | 11 | 11 | 11 | 11 |

| Δ/σ = 1 | Yang's | PC | LH | Simu |
|---|---|---|---|---|
| π0 = 0.5 | 22 | 12 | 18 | 18 |
| π0 = 0.9 | 30 | 16 | 29 | 30 |
| π0 = 0.95 | 34 | 18 | 33 | 33 |

The sample size is selected based on a desired power of 80% and FDR controlled at 5%.
4 DISCUSSION
The number of arrays included in a microarray experiment directly affects the power of the data analysis, so a guideline for selecting the sample size is critical. Because of the huge dimensionality of these data sets, controlling FWER is very conservative in many cases (Storey and Tibshirani, 2003). Instead, FDR, proposed by Benjamini and Hochberg (1995) and Storey (2002), seems a more appropriate error rate to control and has been widely applied in microarray analysis. Therefore, it is important to have a method that gives the sample size that controls FDR and guarantees a certain power.
The method is straightforward to apply, as described in Section 2 for t- and F-tests, and can be generalized to other tests as long as there is an explicit form for the type I error rate and power of an individual test. The method presented in this article allows an accurate sample size to be calculated with minimal effort when designing an experiment.
ACKNOWLEDGEMENTS
The authors thank the two reviewers and Dr Gregory R. Warnes for insightful comments and suggestions. We also thank Dr Chong Wang for pointing out the Lobatto Quadrature for numerical integration.
REFERENCES
Author notes
Associate Editor: Joaquin Dopazo