E. S. Venkatraman, Adam B. Olshen, A faster circular binary segmentation algorithm for the analysis of array CGH data, Bioinformatics, Volume 23, Issue 6, March 2007, Pages 657–663, https://doi.org/10.1093/bioinformatics/btl646
Abstract
Motivation: Array CGH technologies enable the simultaneous measurement of DNA copy number for thousands of sites on a genome. We developed the circular binary segmentation (CBS) algorithm to divide the genome into regions of equal copy number. The algorithm tests for change-points using a maximal t-statistic with a permutation reference distribution to obtain the corresponding P-value. The number of computations required for the maximal test statistic is O(N²), where N is the number of markers. This makes the full permutation approach computationally prohibitive for the newer arrays that contain tens of thousands of markers and highlights the need for a faster algorithm.
Results: We present a hybrid approach to obtain the P-value of the test statistic in linear time. We also introduce a rule for stopping early when there is strong evidence for the presence of a change. We show through simulations that the hybrid approach provides a substantial gain in speed with only a negligible loss in accuracy and that the stopping rule further increases speed. We also present the analyses of array CGH data from breast cancer cell lines to show the impact of the new approaches on the analysis of real data.
Availability: An R version of the CBS algorithm has been implemented in the “DNAcopy” package of the Bioconductor project. The proposed hybrid method for the P-value is available in version 1.2.1 or higher and the stopping rule for declaring a change early is available in version 1.5.1 or higher.
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
1 INTRODUCTION
The DNA copy number at a location in a genome is the number of copies of DNA. The normal copy number is two for the autosomal chromosomes in humans. Chromosomal aberrations in the form of copy number gains or losses are common in cancer, and studying them is a way of identifying and validating important cancer genes. For example, Whang-Peng et al. (1982) identified a deletion in chromosome 3p(14–23) in small cell lung cancer cell lines. Comparative genomic hybridization (CGH) (Kallioniemi et al., 1992; du Manoir et al., 1993) was the first method developed to measure the DNA copy number variation of entire genomes, at a 10–20 Mb resolution. Higher throughput techniques based on microarray technology (hence array CGH) have been developed to simultaneously measure DNA copy number at thousands of locations on a genome. Pinkel and Albertson (2005) present a review of the array CGH technologies.
The purpose of these technologies is to study variations in DNA copy number and to identify chromosomal regions that have been gained or lost. We developed the circular binary segmentation (CBS) algorithm (Olshen et al., 2004) to divide the genome into regions of equal DNA copy number. Several alternative algorithms have also been proposed for the analysis of array copy number data. Willenbrock and Fridlyand (2005), in a comparison of algorithms for array CGH data, concluded that DNAcopy (our software implementing the CBS algorithm) has the best operational characteristics in terms of its sensitivity and FDR for breakpoint detection. Lai et al. (2005) conducted a comparison of methods for analyzing array CGH data that included CBS and 10 other approaches. They concluded that CBS is one of the two methods that appear to perform consistently well. Unfortunately, they also found the CBS algorithm to be one of the slowest. In light of the proven ability of the CBS algorithm to identify the locations of copy number changes, it is desirable to improve its speed.
In this manuscript we present two speed enhancements to the original CBS algorithm. The first is a hybrid approach for the computation of the P-value of the maximal t-statistic using a tail probability approximation for the maxima of a Gaussian random field. The second is a sequential testing approach for deriving a stopping rule that reduces the number of permutations when there is strong evidence for the presence of a change-point. The enhanced algorithms use the same test statistic as the original to detect change-points but modify the procedure used to determine whether the change-points are statistically significant. We compare the performance of the new methods to the original permutation approach, in terms of both speed and accuracy, on simulated data as well as on real data from breast cancer cell lines.
2 METHODS
The CBS procedure formulates the analysis of array CGH data as a problem of detecting change-points, where the change-points are the genomic locations of copy number transitions. The algorithm starts with the whole chromosome and segments it recursively by testing for change-points; it stops when none can be found in any of the segments. The test statistic was chosen to enable us to detect a narrow changed segment in the middle of a large segment.
For the segment being tested, with marker data X1, …, Xm, the statistic is the maximal (circular) t-statistic

$$
T = \max_{1 \le i < j \le m} |T_{ij}|, \qquad
T_{ij} = \frac{\bar{X}_{ij} - \bar{X}'_{ij}}{s_{ij}\sqrt{\dfrac{1}{j-i} + \dfrac{1}{m-j+i}}},
$$

where $\bar{X}_{ij} = (X_{i+1} + \cdots + X_j)/(j-i)$ is the mean of the arc from marker i + 1 to marker j, $\bar{X}'_{ij} = (X_1 + \cdots + X_i + X_{j+1} + \cdots + X_m)/(m-j+i)$ is the mean of the complementary arc, and $s^2_{ij}$ is the corresponding mean squared error. Note that if we view the segment being tested as indexed by a circle, by connecting its two endpoints, then the method tests whether there are two complementary arcs that have unequal means. We declare a change to be statistically significant if the P-value is smaller than a threshold level α (typically 0.01) and estimate the locations of the change-points as the i and j (if j < m) that maximize the test statistic.
In the original implementation of the CBS algorithm, we compute the P-value using a permutation reference distribution because it is a robust nonparametric method. However, this results in the computation time growing quadratically with the number of markers on the array (this was also noticed by Lai et al., 2005). The reason is that the test statistic T is the maximum of m(m − 1)/2 different statistics for every segment considered, and m increases with the number of markers on the array. This is computationally burdensome when analyzing high resolution array CGH data because, although computing the statistic once is quick, it has to be repeated thousands of times to construct the permutation reference distribution. Another source of computational burden is that the procedure computes the entire set of permuted statistics in order to declare that a change-point exists, even if part way through the process there is overwhelming evidence for its existence. We now present a hybrid method to compute the P-value and derive a stopping rule to declare a change early, both of which speed up the CBS algorithm substantially.
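To make the computational burden concrete, the following minimal R sketch computes the maximal t-statistic for a single segment by brute force, together with the corresponding full permutation P-value. It is only an illustration of the quantities defined above, not the DNAcopy implementation (which uses compiled code and is organized quite differently); the function names are hypothetical.

```r
# Illustrative sketch only (hypothetical function names); the DNAcopy package
# implements these computations in compiled code and far more efficiently.

# Maximal circular t-statistic over all arc pairs: O(m^2) work per evaluation.
cbs_max_t <- function(x) {
  m <- length(x)
  s <- cumsum(x)                      # partial sums S_1, ..., S_m
  ss <- sum(x^2)
  tot <- s[m]
  tmax <- 0
  for (i in 0:(m - 2)) {
    si <- if (i == 0) 0 else s[i]
    for (j in (i + 1):(m - 1)) {
      n1 <- j - i                     # markers in the arc i+1, ..., j
      n2 <- m - n1                    # markers in the complementary arc
      mean1 <- (s[j] - si) / n1
      mean2 <- (tot - s[j] + si) / n2
      mse <- (ss - n1 * mean1^2 - n2 * mean2^2) / (m - 2)  # pooled MSE
      tij <- (mean1 - mean2) / sqrt(mse * (1 / n1 + 1 / n2))
      tmax <- max(tmax, abs(tij))
    }
  }
  tmax
}

# Full permutation P-value: each of the B permutations repeats the O(m^2)
# maximization, which is the source of the quadratic growth discussed above.
perm_pvalue <- function(x, B = 10000) {
  b_obs <- cbs_max_t(x)
  mean(replicate(B, cbs_max_t(sample(x)) >= b_obs))
}
```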
2.1 Hybrid P-value
The statistic T is the maximum of the Tij over all pairs (i, j). To obtain its P-value in essentially linear time, we split this maximum into two pieces: T1, the maximum of |Tij| over the pairs for which the smaller of the two arcs contains fewer than k markers, and T2, the maximum over the pairs for which both arcs contain at least k markers, so that T = max(T1, T2) and

P(T > b) ≤ P(T1 > b) + P(T2 > b).

The first term, P(T1 > b), is computed by permutation; since only O(mk) statistics enter T1, each permutation requires only linear (in m) work for a fixed k. The second term, P(T2 > b), is approximated using the tail probability approximation for the maximum of a Gaussian random field, and the sum of the two terms is used as the hybrid P-value.
The CBS algorithm is modified in the following manner. Let b be the value of the test statistic T on the observed X1, …, Xm. Since the approximation is asymptotic, we compute the full permutation P-value if the segment has fewer than m0 markers. Otherwise, we first compute P(T2 > b) from the approximation above. If it exceeds α we declare that there is no change; otherwise we compute P(T1 > b) by permutation, take the P-value to be the sum of the two, and declare a change if the sum is less than α. The only difference between the hybrid and the permutation approaches is the P-value. The choice of P-value method thus affects whether a change-point is detected but not its estimated location. Since the hybrid P-value is an approximation to an upper bound, it can result in fewer change-points being detected. We study empirically the impact of this on the procedure's ability to detect the true change-points.
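A minimal R sketch of this decision logic follows. It reuses cbs_max_t() and perm_pvalue() from the earlier sketch, and the Gaussian random field tail approximation itself is not reproduced here: approx_p_t2 is a hypothetical stand-in that the caller must supply, so the code only illustrates the control flow and the linear-time computation of T1.

```r
# Sketch of the hybrid P-value (hypothetical names; builds on the earlier
# sketch).  approx_p_t2(b, m, k) stands in for the Gaussian random field
# tail approximation of P(T2 > b), which is not reproduced here.

# Maximum of |T_ij| over splits whose smaller arc has fewer than k markers:
# only O(mk) statistics, so the permutation part stays linear in m.
max_t_small_arcs <- function(x, k) {
  m <- length(x)
  ss <- sum(x^2)
  sx <- sum(x)
  tmax <- 0
  for (d in 1:(k - 1)) {                # d = size of the small arc
    for (start in 1:m) {                # circular placement of the small arc
      idx <- ((start:(start + d - 1) - 1) %% m) + 1
      mean1 <- mean(x[idx])
      mean2 <- (sx - d * mean1) / (m - d)
      mse <- (ss - d * mean1^2 - (m - d) * mean2^2) / (m - 2)
      tij <- (mean1 - mean2) / sqrt(mse * (1 / d + 1 / (m - d)))
      tmax <- max(tmax, abs(tij))
    }
  }
  tmax
}

hybrid_pvalue <- function(x, approx_p_t2, alpha = 0.01, k = 25, m0 = 200,
                          B = 10000) {
  m <- length(x)
  if (m < m0) return(perm_pvalue(x, B)) # small segment: full permutation
  b_obs <- cbs_max_t(x)
  p2 <- approx_p_t2(b_obs, m, k)        # approximation for P(T2 > b)
  if (p2 > alpha) return(p2)            # already exceeds alpha: no change
  p1 <- mean(replicate(B, max_t_small_arcs(sample(x), k) >= b_obs))
  p1 + p2                               # upper bound used as the hybrid P-value
}
```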
2.2 Stopping rule to declare a change
In the CBS algorithm the magnitude of the P-value is relevant only to decide if it exceeds α. The permutation P-value is given by the proportion of times the permuted statistic exceeds the original statistic. Thus the permutations can be stopped and the null of no change accepted as soon as the permuted statistic exceeds the original more than αB times, where B is the number of permutations. However, at least (1-α)B permuted statistics must be computed to declare that a change exists even if there is overwhelming evidence earlier for it. For example, when B = 10000 and α = 0.01 the procedure cannot stop and declare that a change is present even if none of the first 1000 permuted statistics exceed the original statistic. We use concepts from sequential testing to derive a stopping rule that declares a change before all the permutations are completed. Such a stopping rule will also benefit the hybrid method since it has a permutation component.
Let R(b) denote the number of permuted statistics, among the first b permutations, that exceed the statistic computed on the observed data, and let r = αB, so that the usual rule declares a change at the end of the B permutations when R(B) < r. We monitor the permutations at intermediate points b1 < b2 < ⋯ < br and stop, declaring a change, at the first i (1 ≤ i ≤ r) for which R(bi) < i. The checkpoint bi is chosen so that i is (approximately) the η* quantile of the conditional distribution of R(bi) given R(B) = r, where η* is a small prespecified level that controls the additional type I error introduced by the repeated looks.
We will now give a heuristic proof of the claim that this stopping rule results in a minimal increase in the type I error. Under the null hypothesis of no change, R(B) has a uniform distribution on {0, …, B}. Using the Bonferroni inequality, we can bound P{R(bi) < i for any 1 ≤ i ≤ r | R(B) = l} by the sum of P{R(bi) < i | R(B) = l} over i = 1, …, r. The conditional distribution of R(bi) given R(B) = l is hypergeometric, with mean μl = l × bi/B and variance σl² = bi (l/B)(1 − l/B)(B − bi)/(B − 1). Recall that i is the η* quantile of R(bi) when l = r; since μl increases with l, the standardized value (i − μl)/σl becomes increasingly negative as l grows beyond r. Appealing to the normal approximation of the hypergeometric distribution, we see that P{R(bi) < i | R(B) = l} decreases rapidly as l increases. Empirically, when l = 2r, P{R(bi) < i | R(B) = l} is approximately (η*)² when i = 1 and becomes much smaller as i increases, resulting in an excess type I error smaller than αη*.
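The following R sketch shows one way to realize the stopping rule just described. The checkpoint construction from hypergeometric quantiles is our reading of the text and is an assumption; DNAcopy computes its own sequential boundary internally (through a helper that, if we recall correctly, is exposed as getbdry()), and its exact boundaries may differ. cbs_max_t() is the brute-force statistic from the earlier sketch.

```r
# Hypothetical construction of the checkpoints b_1 < ... < b_r: b_i is taken
# as (roughly) the largest number of permutations b at which i is still the
# eta_star quantile of R(b) given R(B) = r.  Conditionally on R(B) = r, R(b)
# is hypergeometric: r "exceedances" among B permutations, b of them seen.
sequential_boundary <- function(B = 10000, alpha = 0.01, eta_star = 0.05) {
  r <- round(alpha * B)
  sapply(1:r, function(i) {
    pr <- phyper(i - 1, m = r, n = B - r, k = 1:B)  # P{R(b) < i | R(B) = r}
    max(which(pr >= eta_star))
  })
}

# Permutation test with early stopping: declare a change as soon as R(b_i) < i
# at a checkpoint, and stop with "no change" once r exceedances have been seen.
perm_test_early <- function(x, B = 10000, alpha = 0.01, eta_star = 0.05) {
  b_obs <- cbs_max_t(x)               # statistic from the earlier sketch
  bdry <- sequential_boundary(B, alpha, eta_star)
  r <- length(bdry)
  exceed <- 0
  i <- 1
  for (b in 1:B) {
    exceed <- exceed + (cbs_max_t(sample(x)) >= b_obs)
    if (exceed >= r) return(FALSE)    # P-value can no longer fall below alpha
    while (i <= r && b >= bdry[i]) {
      if (exceed < i) return(TRUE)    # strong early evidence: declare a change
      i <- i + 1
    }
  }
  exceed < r                          # standard decision R(B) < r
}
```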
2.3 Simulation experiments
We conducted simulation experiments to evaluate the performance gain that the improved CBS algorithm provides and the cost in terms of its ability to detect change-points. We first show that the approximation for P(T2 > b) works well for large m and suitably chosen k and hence can be used to obtain the P-value for the statistic T. We then segment the same simulated data, with and without change-points, using the original permutation P-value and the new hybrid P-value, by itself and with early stopping. For all these simulations the marker data are generated from standard normal, uniform or beta (0.5, 1.0) distributions. Since the approximation was obtained for the maximum of a Gaussian random field, the normal data are where it is expected to perform best. The other two were chosen to provide different degrees of non-normality, with the uniform having a flat density and the beta a very skewed J-shaped one. The computing times for the procedures give us a measure of the performance gain, and the proportion of times change-points are detected gives us the impact on the ability to detect change-points (false positive or missed ones). All computations were done on a 3.2 GHz Intel Xeon PC running the Debian Linux operating system with the gcc and g77 compilers and the R statistical software.
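For concreteness, a minimal sketch of one simulated marker vector under each of the three distributions (variable names are arbitrary):

```r
m <- 1000
x_normal  <- rnorm(m)            # standard normal
x_uniform <- runif(m)            # flat density
x_beta    <- rbeta(m, 0.5, 1.0)  # very skewed, J-shaped density
```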
3 RESULTS
3.1 Evaluating the approximation
We generated 100 000 data sets with m markers, where m is one of 100, 200, 500 or 1000, from each of the three distributions. We calculated the statistic T2 for each of the data sets with k = 25. This choice of k was used so that the differences in means in Tij are based on a sufficiently large number of observations for a normal approximation to be reasonable. We computed the tail probability P(T2 > b) empirically as well as by using the approximation for a range of values of b for each m. The results are shown in Figure 1. Observe that the approximation substantially underestimates the tail probability when m is only 100. For larger values of m the approximation is close for all three marker distributions. Hence we recommend a minimum of 200 markers (the recommended value of m0 in Section 2.1) when using the hybrid approach. Since the full permutation approach for this number of markers can be accomplished with modest computing effort, this requirement is not a serious burden.
Fig. 1. The empirical tail probability distribution of the statistic T2 when the data are normal (red), uniform (green) or beta (0.5, 1.0) (blue) for m = 100, 200, 500 or 1000 and k = 25. The black line is the approximation.
The tail probability approximation is an asymptotic result with k/m a constant. Thus one concern is the use of a fixed k for all values of m. In order to assess this we chose m to be 5000 and evaluated the tail probability empirically when k is 25 or 50. This is shown in Figure 2, along with the approximation. Observe that there is a larger difference between the empirical values and the approximation when k is 25, while the degree to which the approximation agrees with the empirical values when k is 50 is similar to that found in the m = 1000 panel of Figure 1. These simulations show that with a properly chosen k the approximation for P(T2 > b) works well for larger m and can be used to obtain the P-value for the segmentation procedure as described in the previous section. They also suggest that it is prudent to choose a larger k as m increases. We recommend that k be set at 25 for m smaller than 1000 and be increased in increments of 5 as the number of markers doubles.
Fig. 2. The empirical distribution of the statistic T2 when the data are normal (red), uniform (green) or beta (0.5, 1.0) (blue) for 5000 markers and k = 25 or 50.
3.2 Performance of the segmentation procedure
In the previous section we showed that the approximation to P(T2 > b) works well. We will now evaluate the operating characteristics of the CBS algorithm when the P-value is approximated by P(T1 > b) + P(T2 > b), as well as when the stopping rule to declare a change early is included. Specifically, we wish to assess the reduction in computing time achieved by these modifications and their cost in terms of false positive or missed change-points. We compare the CBS algorithm with the hybrid P-value alone, and combined with the stopping rule to declare a change early, to the original algorithm that uses the full permutation P-value.
We simulated the case of no change-points by generating 5000 data sets for each of 250, 500 and 1000 markers and for each of the three distributions. The segmentations were performed using a P-value threshold (α) of 0.01 or 0.05. Table 1 shows the percent of times the procedure segments the data (size) and the CPU time used. The size values are very similar for the three algorithms and are consistent with the nominal α level. The sizes of the new algorithms tend to be lower than those of the original, which is consistent with the hybrid P-value being an approximate upper bound. Also, the addition of the stopping rule results in only a negligible increase in the significance level of the test, as the achieved size barely increases. However, the newer algorithms show an enormous gain in computing speed. For example, the normal data with 1000 markers and α = 0.05 took 29 h using the original algorithm but only a bit over an hour with the hybrid algorithms. Since the stopping rule only enables us to declare a change early, we do not expect it to affect the time taken when there is no change, and the results confirm this. Notice also that the computing time for the original algorithm quadruples when the number of markers doubles, whereas it grows linearly for the hybrid algorithms.
Table 1.

|   | Method | Size (m = 250) | Time^a | Size (m = 500) | Time^a | Size (m = 1000) | Time^a |
|---|---|---|---|---|---|---|---|
| N | Permutation | 1.04 | 36.6 | 0.92 | 124.6 | 1.08 | 500.1 |
| N | Hybrid | 1.08 | 11.2 | 0.88 | 18.7 | 1.00 | 35.3 |
| N | Hybrid + ES | 1.08 | 10.2 | 0.84 | 17.2 | 1.08 | 32.3 |
| U | Permutation | 1.12 | 36.6 | 1.00 | 132.5 | 1.22 | 492.1 |
| U | Hybrid | 1.16 | 11.4 | 0.90 | 19.1 | 1.00 | 34.5 |
| U | Hybrid + ES | 1.18 | 10.3 | 0.94 | 17.2 | 1.08 | 31.8 |
| B | Permutation | 0.86 | 35.4 | 0.86 | 128.6 | 1.14 | 498.0 |
| B | Hybrid | 1.04 | 11.1 | 0.92 | 19.4 | 1.08 | 39.5 |
| B | Hybrid + ES | 1.04 | 10.5 | 0.92 | 17.8 | 1.06 | 36.1 |
| N | Permutation | 5.44 | 121.1 | 4.56 | 441.7 | 4.72 | 1749.7 |
| N | Hybrid | 5.22 | 22.4 | 4.20 | 36.5 | 4.44 | 68.6 |
| N | Hybrid + ES | 5.20 | 16.9 | 4.24 | 27.6 | 4.46 | 50.9 |
| U | Permutation | 5.24 | 120.7 | 5.08 | 451.2 | 4.60 | 1710.4 |
| U | Hybrid | 4.82 | 19.9 | 4.44 | 32.8 | 3.64 | 49.8 |
| U | Hybrid + ES | 4.92 | 14.9 | 4.50 | 23.4 | 3.64 | 39.5 |
| B | Permutation | 5.04 | 119.5 | 5.12 | 451.6 | 4.96 | 1765.7 |
| B | Hybrid | 4.94 | 21.8 | 4.80 | 35.6 | 4.64 | 68.7 |
| B | Hybrid + ES | 5.02 | 16.7 | 4.80 | 27.2 | 4.54 | 49.3 |

^a Elapsed times on a 3.2 GHz Pentium 4 computer.

The size column gives the percent of data sets when change-points were detected (false positives) and the time column gives the user cpu time in minutes to segment the 5000 data sets. The top and bottom halves correspond to α = 0.01 and 0.05, respectively. The N, U and B in the first column indicate the marker distribution. Permutation used the original full permutation P-value, Hybrid used the approximation and Hybrid + ES used the approximation and early stopping.
For the alternative of there being change-points, we generated 1000 data sets for each setting and added μ to 10 contiguous markers in the middle of each set, where μ is chosen (empirically) to give approximately 80% power of detecting the change when α = 0.01. These results are in Table 2, with power being the percent of times the change was detected. The power of the three procedures is comparable, with the original only minimally more powerful than the hybrid algorithms. Thus the fact that the size of the hybrid procedure could be moderately conservative has minimal impact on the power of the procedure. As in the no-change case, the hybrid algorithms show enormous speed gains over the original. The addition of the stopping rule makes the procedure 3 to 4 times faster than the hybrid alone.
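A minimal sketch of one data set under this single-change alternative (the shift size here is a placeholder; the paper tunes it empirically to give roughly 80% power at α = 0.01):

```r
m <- 500
shift <- 1.0                          # hypothetical effect size
x <- rnorm(m)                         # or runif(m), or rbeta(m, 0.5, 1.0)
mid <- (m %/% 2 - 4):(m %/% 2 + 5)    # 10 contiguous markers in the middle
x[mid] <- x[mid] + shift
```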
Table 2.

|   | Method | Power (m = 250) | Time^a | Power (m = 500) | Time^a | Power (m = 1000) | Time^a |
|---|---|---|---|---|---|---|---|
| N | Permutation | 79.8 | 102.9 | 80.1 | 380.6 | 78.2 | 1509.0 |
| N | Hybrid | 80.3 | 26.0 | 80.0 | 47.0 | 78.0 | 91.5 |
| N | Hybrid + ES | 80.4 | 8.6 | 79.9 | 13.0 | 78.3 | 26.4 |
| U | Permutation | 80.0 | 102.8 | 78.6 | 379.9 | 75.7 | 1485.3 |
| U | Hybrid | 80.5 | 25.8 | 78.4 | 45.6 | 74.8 | 86.3 |
| U | Hybrid + ES | 80.6 | 7.9 | 78.3 | 12.0 | 74.7 | 22.3 |
| B | Permutation | 76.7 | 99.5 | 76.8 | 378.6 | 72.7 | 1450.9 |
| B | Hybrid | 77.1 | 24.8 | 76.8 | 46.3 | 73.0 | 86.6 |
| B | Hybrid + ES | 77.0 | 8.3 | 77.0 | 14.2 | 72.7 | 26.7 |
| N | Permutation | 92.1 | 121.2 | 87.8 | 441.8 | 89.1 | 1830.8 |
| N | Hybrid | 91.7 | 36.8 | 87.7 | 54.7 | 88.4 | 110.2 |
| N | Hybrid + ES | 91.6 | 13.6 | 87.5 | 13.6 | 88.7 | 28.5 |
| U | Permutation | 91.5 | 119.7 | 89.1 | 445.3 | 88.4 | 1824.0 |
| U | Hybrid | 91.1 | 36.0 | 88.0 | 54.2 | 86.8 | 107.6 |
| U | Hybrid + ES | 91.2 | 13.1 | 88.1 | 12.8 | 86.8 | 25.6 |
| B | Permutation | 90.1 | 119.5 | 90.3 | 452.3 | 87.3 | 1823.9 |
| B | Hybrid | 89.8 | 35.9 | 90.0 | 56.3 | 86.9 | 108.5 |
| B | Hybrid + ES | 90.0 | 13.6 | 90.0 | 14.3 | 87.2 | 28.2 |

^a Elapsed times in minutes on a 3.2 GHz Pentium 4 computer.

The power column gives the percent of data sets when change-points were detected (100 − power is the percent of missed change-points). The description of the other elements of the table is the same as the one for Table 1. The times are the total time to segment the 1000 data sets of each type.
A final set of simulations was done to represent the case of multiple change-points in a chromosome. We reproduce a subset of the cases considered in the second set of simulations in Olshen et al. (2004). There are 497 markers in the chromosome, with six change-points and the mean log-ratios (μ) for the markers given by:
| Marker i | 1–137 | 138–225 | 225–241 | 242–298 | 299–307 | 308–331 | 332–497 |
|---|---|---|---|---|---|---|---|
| μi | −0.18 | 0.08 | 1.07 | −0.53 | 0.16 | −0.69 | −0.16 |
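A minimal R sketch of this scenario is given below; the segment lengths are one consistent reading of the boundaries printed above (taking the second and third segments as markers 138–225 and 226–241), and the data are generated as c·μ + Z as described in the next paragraph.

```r
seg_len <- c(137, 88, 16, 57, 9, 24, 166)                   # 497 markers in all
seg_mu  <- c(-0.18, 0.08, 1.07, -0.53, 0.16, -0.69, -0.16)  # segment means
mu <- rep(seg_mu, times = seg_len)
c_scale <- 10                           # 10 = low noise, 5 = high noise
y <- c_scale * mu + rnorm(length(mu))   # one simulated chromosome
```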
One thousand observed log-ratio data sets were generated as c·μ + Z, where Z is standard normal noise and the scale factor c is 10 or 5 to represent the low and high noise scenarios, respectively. The data were segmented using the permutation, the hybrid and the hybrid with early stopping procedures with α = 0.01 and 0.05. The numbers of change-points detected by the three procedures are reported in Table 3 and show that the three are nearly identical. These numbers are consistent with the results in Olshen et al. (2004). The number of data sets for which all three procedures identified the same set of change-points is:
Table 3.

| c | α | Method | 6 | 7 | 8 | 9 | 10 | 11+ | Time^a |
|---|---|---|---|---|---|---|---|---|---|
| 10 | 0.01 | P | 918 | 45 | 36 | 1 | 0 | 0 | 667.5 |
| 10 | 0.01 | H | 914 | 46 | 39 | 1 | 0 | 0 | 123.5 |
| 10 | 0.01 | H+ES | 915 | 48 | 36 | 1 | 0 | 0 | 20.5 |
| 10 | 0.05 | P | 712 | 67 | 167 | 24 | 23 | 7 | 684.2 |
| 10 | 0.05 | H | 719 | 66 | 161 | 24 | 23 | 7 | 139.7 |
| 10 | 0.05 | H+ES | 709 | 69 | 167 | 24 | 24 | 7 | 23.3 |
| 5 | 0.01 | P | 859 | 103 | 35 | 3 | 0 | 0 | 669.2 |
| 5 | 0.01 | H | 859 | 103 | 34 | 3 | 1 | 0 | 125.9 |
| 5 | 0.01 | H+ES | 859 | 99 | 38 | 4 | 0 | 0 | 31.1 |
| 5 | 0.05 | P | 572 | 190 | 168 | 41 | 21 | 8 | 685.4 |
| 5 | 0.05 | H | 574 | 184 | 169 | 42 | 24 | 7 | 142.1 |
| 5 | 0.05 | H+ES | 571 | 190 | 170 | 40 | 21 | 8 | 33.7 |

^a Elapsed times in minutes on a 3.2 GHz Pentium 4 computer. The times are the total time to segment the 1000 data sets of each type.

The columns labeled 6 through 11+ give the number of data sets (out of 1000) in which that many change-points were detected; P, H and H+ES denote the permutation, hybrid and hybrid with early stopping procedures.
| α, c | (0.01, 10) | (0.01, 5) | (0.05, 10) | (0.05, 5) |
|---|---|---|---|---|
| Identical | 985 | 975 | 977 | 963 |
The differences between the three procedures are minimal and could be partly explained by the differences in the stream of random numbers in their permutation component.
These simulation results clearly demonstrate that the new CBS algorithm, with the hybrid P-value and the stopping rule for declaring a change early, provides speed gains that make CBS a more practical approach for analyzing high resolution arrays.
4 EXAMPLE
In this section we present two analyses of data from cell lines in order to evaluate how the improved algorithms perform on real data. The first is the analysis of the data from the ROMA (Lucito et al., 2003) array example shown in Figure 4 of Olshen et al. (2004). There are 9820 markers in this array, with a maximum of 824 on chromosome 2. The data were segmented with all three procedures at an α level of 0.01. We used the hybrid P-value when the segment being tested had more than 200 markers and used k = 25. All three algorithms found the same 48 segments in the genome, as can be seen in Figure 3. The original CBS algorithm segmented the data in 347 s, which was reduced to 44 s with the hybrid P-value and further reduced to 13 s with the inclusion of the stopping rule to declare a change early.
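For readers who wish to reproduce this kind of analysis, the sketch below shows how a single sample might be segmented with the Bioconductor DNAcopy package. The objects log.ratio, chrom and maploc are assumed to hold the log-ratios, chromosome labels and marker positions (they are not provided here), and the argument names reflect our recollection of the package documentation; they should be checked against the installed version.

```r
library(DNAcopy)

# Hypothetical input vectors: log.ratio, chrom, maploc (one ROMA sample).
cna <- CNA(log.ratio, chrom, maploc, data.type = "logratio", sampleid = "ROMA")
cna <- smooth.CNA(cna)                 # optional outlier smoothing step
fit <- segment(cna, alpha = 0.01, nperm = 10000, p.method = "hybrid")
head(fit$output)                       # one row per detected segment
```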
Fig. 3. The ROMA array on a breast cancer cell line shown in Olshen et al. (2004). The segments by the original full permutation P-value (black), the new hybrid P-value (red) and the hybrid P-value along with the early stopping to declare a change (green).
The second example is the analysis of data from three breast cancer cell lines (MCF7, SKBR3 and ZR75) evaluated using the Affymetrix 100k SNP platform. These data are available on the SNPscan website [http://pevsnerlab.kennedykrieger.org/snpscan.htm; Ting et al. (2006)]. These 100k SNP array data have a total of 115571 markers over 23 chromosomes (none from Y) with chromosome 2 having the maximum number (10352). The data were segmented using all three procedures. The numbers of change-points detected, the number identical and the total computing time per procedure are in the following table.
| Method | MCF7 | SKBR3 | ZR75 | Time (min) |
|---|---|---|---|---|
| P | 220 | 242 | 270 | 7085.8 |
| H | 217 | 242 | 271 | 100.2 |
| H+ES | 216 | 243 | 269 | 25.0 |
| Identical | 213 | 242 | 263 |   |
A detailed breakdown of the number of change-points by cell line and chromosome is provided in the supplementary table. This example demonstrates the speed gain the new procedures achieve (the total computing time for the three cell lines is nearly 5 days for the original algorithm versus 100 min for the hybrid alone and 25 min for the hybrid with early stopping) without sacrificing efficacy.
5 DISCUSSION
We developed the CBS algorithm as a robust non-parametric method for segmenting array CGH data in order to facilitate the identification of chromosomal regions of gain or loss. This method has been successfully applied in several studies to characterize the copy number variation in different types of cancer (Aguirre et al., 2004; Brennan et al., 2004; Chen et al., 2006; Zhao et al., 2005). In their comparative study, Lai et al. (2005) found that the CBS algorithm performed consistently well but was slow. In this manuscript, we presented the hybrid approach to compute the P-value for the maximal statistic, and added a stopping rule to declare a change early, in order to increase speed. The simulation study we conducted shows that the improved methods perform as well as the full permutation approach and result in the desired speed gains. The similarity of the results among the methods was demonstrated on real data from breast cancer cell lines. We conclude that the CBS algorithm is a natural choice for the analysis of high resolution array CGH data.
ACKNOWLEDGEMENT
We thank Mike Wigler for the breast cancer cell line data and Glenn Heller for helpful discussions.
Conflict of Interest: none declared.
REFERENCES
Author notes
Associate Editor: Chris Stoeckert