Daniela M. Witten, Robert Tibshirani, Trevor Hastie, A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis, Biostatistics, Volume 10, Issue 3, July 2009, Pages 515–534, https://doi.org/10.1093/biostatistics/kxp008
Abstract
We present a penalized matrix decomposition (PMD), a new framework for computing a rank-K approximation to a matrix. We approximate the matrix X as X̂ = ∑_{k=1}^{K} d_k u_k v_k^T, where d_k, u_k, and v_k minimize the squared Frobenius norm of X − X̂, subject to penalties on u_k and v_k. This results in a regularized version of the singular value decomposition. Of particular interest is the use of L1-penalties on u_k and v_k, which yields a decomposition of X using sparse vectors. We show that when the PMD is applied using an L1-penalty on v_k but not on u_k, a method for sparse principal components results. In fact, this yields an efficient algorithm for the "SCoTLASS" proposal (Jolliffe and others 2003) for obtaining sparse principal components. This method is demonstrated on a publicly available gene expression data set. We also establish connections between the SCoTLASS method for sparse principal component analysis and the method of Zou and others (2006). In addition, we show that when the PMD is applied to a cross-products matrix, it results in a method for penalized canonical correlation analysis (CCA). We apply this penalized CCA method to simulated data and to a genomic data set consisting of gene expression and DNA copy number measurements on the same set of samples.
1. INTRODUCTION

In this paper, we propose the penalized matrix decomposition (PMD), a regularized version of the singular value decomposition in which the factors are subject to penalties. We will show that this decomposition has many uses:
1. Applying the PMD to a data matrix can yield interpretable factors that provide insight into the data.
2. Applying the PMD to a data matrix with L1-constraints on the columns but not the rows yields an efficient algorithm for the SCoTLASS method for finding sparse principal components. This is similar to a method of Shen and Huang (2008).
3. Applying the PMD to a cross-products matrix yields a new method for penalized CCA.
The main area of application in this paper relates to (3) above. In recent years, it has become increasingly common for biologists to perform 2 different assays on the same set of samples. For instance, both gene expression and DNA copy number measurements are often available on a set of patient samples. In this situation, an integrative analysis of both the gene expression and the copy number data is desired. If X and Y are n × p and n × q matrices with standardized columns, then the PMD applied to the matrix of cross-products X^T Y results in an efficient method for performing penalized CCA. This method can be applied to gene expression and copy number data in order to identify sets of genes that are correlated with regions of copy number change. We will demonstrate the use of our penalized CCA method for this purpose on a publicly available breast cancer data set.
In Section 2, we present the PMD. We show that the PMD can be used to identify shared regions of gain and loss in simulated DNA copy number data. In Section 3, we use the PMD to arrive at an efficient algorithm for finding sparse principal components, and we use PMD to unify preexisting methods for sparse principal component analysis (PCA). In Section 4, we extend the PMD framework in order to develop a method for penalized CCA, and we demonstrate its use on a breast cancer data set consisting of both gene expression and DNA copy number measurements on the same set of patients. Section 5 contains the discussion.
2. THE PMD
2.1 General form of PMD


Consider the singular value decomposition of an n × p matrix X:

X = U D V^T = ∑_{k=1}^{K} d_k u_k v_k^T, (2.1)

where K = rank(X), the columns u_k of U and v_k of V are orthonormal, and D is diagonal with entries d_1 ≥ d_2 ≥ … ≥ d_K > 0. Truncating this sum after its leading terms gives the best approximation of that rank to X in the Frobenius norm.

In this paper, we develop generalizations of this decomposition by imposing additional constraints on the elements of U and V. We start with a rank-1 approximation: we seek the single factor d u v^T that solves

minimize_{d,u,v} ½ ‖X − d u v^T‖_F^2 subject to ‖u‖_2^2 = 1, ‖v‖_2^2 = 1, P_1(u) ≤ c_1, P_2(v) ≤ c_2, d ≥ 0, (2.7)

where P_1 and P_2 are convex penalty functions that can take a variety of forms. Two of particular interest are the
- lasso: P_1(u) = ∑_{i=1}^{n} |u_i| and the
- fused lasso: P_1(u) = ∑_{i=1}^{n} |u_i| + λ ∑_{i=2}^{n} |u_i − u_{i−1}|, where λ > 0.

(The values of c_1 and c_2 govern how sparse the solutions are; their choice is discussed in Section 2.4.) We now derive a more convenient form for this criterion. It can be shown (Theorem 2.1, proved in Appendix A.1) that solving (2.7) is equivalent to solving

maximize_{u,v} u^T X v subject to ‖u‖_2^2 ≤ 1, ‖v‖_2^2 ≤ 1, P_1(u) ≤ c_1, P_2(v) ≤ c_2,

with d = u^T X v at the solution. With u fixed, this problem is convex in v, and with v fixed it is convex in u; that is, the criterion is biconvex.
The following iterative algorithm is used to optimize the criterion for the rank-1 PMD.
Algorithm 1: Computation of single-factor PMD model
1. Initialize v to have L2-norm 1.
2. Iterate until convergence:
(a) u ← argmax_u u^T X v subject to P_1(u) ≤ c_1 and ‖u‖_2^2 ≤ 1.
(b) v ← argmax_v u^T X v subject to P_2(v) ≤ c_2 and ‖v‖_2^2 ≤ 1.
3. d ← u^T X v.

In Section 2.2, we present an algorithm for obtaining multiple-factor solutions for the PMD. When P_1 and P_2 are both L1-penalties, the maximizations in Steps 2(a) and 2(b) are simple; this is explained in Algorithm 3 in Section 2.3.

In practice, we suggest using the first right singular vector of X as the initial value of v. In general, Algorithm 1 is not guaranteed to converge to a global optimum of (2.7); however, our empirical studies indicate that it does converge to interpretable factors for appropriate choices of the penalty terms. Note that each iteration of Step 2 in Algorithm 1 decreases the objective in (2.7).
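To make the alternating updates concrete, the following numpy sketch implements Algorithm 1 in the unpenalized special case (no P_1 or P_2), where Steps 2(a) and 2(b) have the closed forms u ← Xv/‖Xv‖_2 and v ← X^T u/‖X^T u‖_2, so the iteration recovers the leading singular vectors of X. This is our illustration, not code from the paper or the PMA package.

```python
import numpy as np

def single_factor_pmd_unpenalized(X, n_iter=100, tol=1e-10):
    """Algorithm 1 with no P1/P2 penalties: Steps 2(a) and 2(b) reduce to
    u = Xv/||Xv||_2 and v = X^T u/||X^T u||_2, so the iteration converges
    to the leading singular triplet of X."""
    # Step 1: initialize v with L2-norm 1 (here, the first right singular
    # vector of X, as suggested in the text).
    v = np.linalg.svd(X, full_matrices=False)[2][0]
    for _ in range(n_iter):                 # Step 2: iterate until convergence
        u = X @ v
        u /= np.linalg.norm(u)              # Step 2(a), unpenalized
        v_new = X.T @ u
        v_new /= np.linalg.norm(v_new)      # Step 2(b), unpenalized
        if np.linalg.norm(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    d = u @ X @ v                           # Step 3: d = u^T X v
    return d, u, v
```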
The PMD is similar to a method of Shen and Huang (2008) for identifying sparse principal components; we will elaborate on the relationship between the 2 methods in Section 3.
2.2 PMD for multiple factors
In order to obtain multiple factors of the PMD, we minimize the single-factor criterion (2.7) repeatedly, each time using as the X matrix the residuals obtained by subtracting from the data matrix the previous factors found. The algorithm is as follows.
Algorithm 2: Computation of K factors of PMD
1. Let X_1 ← X.
2. For k ∈ 1, …, K:
(a) Find u_k, v_k, and d_k by applying the single-factor PMD algorithm (Algorithm 1) to data X_k.
(b) X_{k+1} ← X_k − d_k u_k v_k^T.

Without the P_1- and P_2-penalty constraints, it can be shown that the K-factor PMD algorithm leads to the rank-K SVD of X. In particular, the successive solutions are orthogonal: u_k and v_k are in the column and row spaces of X_k, which has been orthogonalized with respect to u_j, v_j for j ∈ 1, …, k − 1. With P_1 and/or P_2 present, the solutions are no longer in these column and row spaces, and so the orthogonality does not hold. In Section 3.2, we discuss an alternative multifactor approach, in the setting where the PMD is specialized to sparse principal components. A code sketch of this deflation loop follows.
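A minimal sketch of Algorithm 2; `single_factor` can be any rank-1 routine returning (d, u, v), such as the single_factor_pmd_unpenalized sketch above (the function names are ours).

```python
import numpy as np

def pmd_k_factors(X, K, single_factor):
    """Algorithm 2: fit one factor at a time, deflating the residual."""
    Xk = X.copy()                            # Step 1: X_1 <- X
    ds, us, vs = [], [], []
    for _ in range(K):                       # Step 2
        d, u, v = single_factor(Xk)          # (a) single-factor PMD on X_k
        Xk = Xk - d * np.outer(u, v)         # (b) X_{k+1} <- X_k - d_k u_k v_k^T
        ds.append(d)
        us.append(u)
        vs.append(v)
    return np.array(ds), np.column_stack(us), np.column_stack(vs)
```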
2.3 Forms of PMD of special interest

PMD(L1, L1): With L1-penalties on both u and v, the single-factor criterion becomes

maximize_{u,v} u^T X v subject to ‖u‖_2^2 ≤ 1, ‖v‖_2^2 ≤ 1, ‖u‖_1 ≤ c_1, ‖v‖_1 ≤ c_2. (2.11)

Figure 1: A graphical representation of the L1- and L2-constraints on u in the PMD(L1, L1) criterion. The constraints are ‖u‖_2^2 ≤ 1 and ‖u‖_1 ≤ c. Here, u is two-dimensional, and the grey lines indicate the coordinate axes u_1 and u_2. Left: the L2-constraint is the solid circle. For both the L1- and L2-constraints to be active, c must be between 1 and √2; the constraints ‖u‖_1 = 1 and ‖u‖_1 = √2 are shown using dashed lines. Right: the L2- and L1-constraints on u are shown for some c between 1 and √2. Small circles indicate the points where both the L1- and the L2-constraints are active. The solid arcs indicate the solutions that occur when Δ_1 = 0 in Algorithm 3. The figure shows that in two dimensions, the points where both constraints are active do not have either u_1 or u_2 equal to 0.
A graphical representation of the L1- and L2-constraints on u in the PMD(L1, L1) criterion. The constraints are as follows: ‖u‖22 ≤ 1 and ‖u‖1 ≤ c. Here, u is two-dimensional, and the grey lines indicate the coordinate axes u1 and u2. Left: the L2-constraint is the solid circle. For both the L1- and L2-constraints to be active, c must be between 1 and
. The constraints ‖u‖1 = 1 and ‖u‖1=
are shown using dashed lines. Right: The L2- and L1-constraints on u are shown for some c between 1 and
. Small circles indicate the points where both the L1- and the L2-constraints are active. The solid arcs indicate the solutions that occur when Δ1 = 0 in Algorithm 3. The figure shows that in 2D, the points where both the L1- and L2-constraints are active do not have either u1 or u2 equal to 0.
Let S denote the soft thresholding operator; that is, S(a, c) = sgn(a)(|a| − c)_+, where c > 0 is a constant and x_+ is defined to equal x if x > 0 and 0 if x ≤ 0. We have the following lemma.
Lemma 2.2. Consider the problem maximize_u u^T a subject to ‖u‖_2^2 ≤ 1 and ‖u‖_1 ≤ c. The solution satisfies u = S(a, Δ)/‖S(a, Δ)‖_2, with Δ = 0 if this results in ‖u‖_1 ≤ c; otherwise, Δ is chosen so that ‖u‖_1 = c. The proof is given in Appendix A.2. We solve the PMD criterion in (2.11) using Algorithm 1, with Steps 2(a) and 2(b) adjusted as follows.
Algorithm 3: Computation of single-factor PMD(L1, L1) model
1. Initialize v to have L2-norm 1.
2. Iterate until convergence:
(a) u ← S(Xv, Δ_1)/‖S(Xv, Δ_1)‖_2, where Δ_1 = 0 if this results in ‖u‖_1 ≤ c_1; otherwise, Δ_1 is chosen to be a positive constant such that ‖u‖_1 = c_1.
(b) v ← S(X^T u, Δ_2)/‖S(X^T u, Δ_2)‖_2, where Δ_2 = 0 if this results in ‖v‖_1 ≤ c_2; otherwise, Δ_2 is chosen to be a positive constant such that ‖v‖_1 = c_2.
3. d ← u^T X v.

For each update of u and v, Δ_1 and Δ_2 are chosen by a binary search. If one desires that u and v be equally sparse, one can simply fix a constant c and set c_1 = c√n and c_2 = c√p. A code sketch of this algorithm, including the binary search, follows.
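The following numpy sketch implements Algorithm 3, with the Δ's found by binary search as in Lemma 2.2; feasibility requires 1 ≤ c_1 ≤ √n and 1 ≤ c_2 ≤ √p. Again, this is our sketch rather than the PMA package's implementation.

```python
import numpy as np

def soft(a, delta):
    """Soft-thresholding operator S(a, delta), applied elementwise."""
    return np.sign(a) * np.maximum(np.abs(a) - delta, 0.0)

def l1_l2_update(a, c, n_search=60):
    """Maximize a^T u over ||u||_2 <= 1, ||u||_1 <= c (Lemma 2.2):
    u = S(a, delta)/||S(a, delta)||_2, with delta = 0 if that is feasible,
    else delta chosen by binary search so that ||u||_1 = c."""
    u = a / np.linalg.norm(a)
    if np.abs(u).sum() <= c:
        return u                             # delta = 0 case
    lo, hi = 0.0, np.abs(a).max()            # ||u||_1 shrinks as delta grows
    for _ in range(n_search):
        mid = 0.5 * (lo + hi)
        su = soft(a, mid)
        if np.abs(su).sum() > c * np.linalg.norm(su):
            lo = mid                         # still too dense: raise delta
        else:
            hi = mid                         # feasible: lower delta toward ||u||_1 = c
    su = soft(a, hi)
    return su / np.linalg.norm(su)

def pmd_l1_l1(X, c1, c2, n_iter=100):
    """Algorithm 3: single-factor PMD(L1, L1)."""
    v = np.linalg.svd(X, full_matrices=False)[2][0]   # Step 1
    for _ in range(n_iter):                           # Step 2
        u = l1_l2_update(X @ v, c1)                   # Step 2(a)
        v = l1_l2_update(X.T @ u, c2)                 # Step 2(b)
    d = u @ X @ v                                     # Step 3
    return d, u, v
```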
Figure 1 shows a graphical representation of the L1- and L2-constraints on u that are present in the PMD(L1, L1) criterion: namely, ‖u‖_2^2 ≤ 1 and ‖u‖_1 ≤ c_1. From the figure, it is clear that in two dimensions, the intersection of the L1- and L2-constraints results in both u_1 and u_2 nonzero. However, when n, the dimension of u, is at least 3, the right panel of Figure 1 can be thought of as the hyperplane {u_i = 0, ∀ i > 2}. In this case, the small circles indicate regions where both constraints are active and the solution is sparse (since u_i = 0 for i > 2).


PMD(L1, FL): We also consider an L1-penalty on u together with a fused lasso penalty on v, using the Lagrange form of the fused lasso constraint; the resulting update for v, given in Step 2(b) below, solves a fused lasso regression problem.

Algorithm 4: Computation of single-factor PMD(L1, FL) model
1. Initialize v to have L2-norm 1.
2. Iterate until convergence:
(a) u ← S(Xv, Δ)/‖S(Xv, Δ)‖_2, where Δ = 0 if this results in ‖u‖_1 ≤ c; otherwise, Δ is chosen to be a positive constant such that ‖u‖_1 = c.
(b) v ← argmin_v { ½‖X^T u − v‖^2 + λ_1 ∑_j |v_j| + λ_2 ∑_j |v_j − v_{j−1}| }.
3. d ← u^T X v.

Step 2(b) can be performed using fast software implementing fused lasso regression, as described in Friedman and others (2007), Tibshirani and Wang (2008), and Hoefling (2009); a generic convex-programming rendering is sketched below.
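Step 2(b) has no closed form; the dedicated fused-lasso software cited above solves it quickly, but for illustration the same subproblem can be handed to a generic convex solver. The sketch below uses cvxpy (our rendering, not the algorithms of the cited papers); cp.tv(v) is the sum of absolute successive differences.

```python
import cvxpy as cp

def fused_lasso_update(a, lam1, lam2):
    """Step 2(b) of Algorithm 4 with a = X^T u: minimize over v
    (1/2)||a - v||^2 + lam1 * sum_j |v_j| + lam2 * sum_j |v_j - v_{j-1}|."""
    v = cp.Variable(a.shape[0])
    objective = (0.5 * cp.sum_squares(a - v)
                 + lam1 * cp.norm1(v)
                 + lam2 * cp.tv(v))
    cp.Problem(cp.Minimize(objective)).solve()
    return v.value
```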
2.4 PMD for missing data and choice of c1 and c2

When some elements of X are missing, the PMD criterion can still be optimized: the sums in the criterion and in the updates of Algorithm 1 are simply taken over only the nonmissing elements of X. The possibility of computing the PMD in the presence of missing data leads to a simple and automated method for the selection of the constants c_1 and c_2 in the PMD criterion. We can treat c_1 and c_2 as tuning parameters and take an approach similar to cross-validation in order to select their values. For simplicity, we demonstrate this method for the rank-1 case here.
Algorithm 5: Selection of tuning parameters for PMD
1. From the original data matrix X, construct 10 data matrices X^1, …, X^10, each of which is missing a nonoverlapping one-tenth of the elements of X, sampled at random from the rows and columns.
2. For each candidate value of c_1 and c_2:
(a) For i ∈ 1, …, 10:
i. Fit the PMD to X^i with tuning parameters c_1 and c_2, and calculate X̂^i = d u v^T, the resulting estimate of X^i.
ii. Record the mean squared error of the estimate X̂^i: the mean of the squared differences between the elements of X and the corresponding elements of X̂^i, taken only over the elements that are missing from X^i.
(b) Record the average mean squared error across X^1, …, X^10 for tuning parameters c_1 and c_2.
3. The optimal values of c_1 and c_2 are those corresponding to the lowest mean squared error.

Note that in Step 1, we construct each X^i by randomly removing scattered elements of the matrix X; that is, we remove individual elements of the data matrix rather than entire rows or columns. Similar approaches are taken in Wold (1978) and Owen and Perry (2009). A code sketch of this procedure follows.
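A sketch of Algorithm 5. It assumes a rank-1 PMD fitter `pmd_rank1_missing(Xm, c1, c2)` that tolerates missing entries, for example a variant of the Algorithm 3 sketch that ignores NaNs when forming Xv and X^T u; that helper name is ours, not the paper's.

```python
import numpy as np

def cv_mse(X, c1, c2, n_folds=10, seed=0):
    """Algorithm 5: score (c1, c2) by the mean squared error of the
    rank-1 PMD reconstruction on scattered held-out elements."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(X.size)             # scatter elements across folds
    folds = np.array_split(idx, n_folds)
    errs = []
    for f in folds:
        Xm = X.ravel().copy()
        Xm[f] = np.nan                        # hold out one-tenth of the entries
        Xm = Xm.reshape(X.shape)
        d, u, v = pmd_rank1_missing(Xm, c1, c2)   # hypothetical NaN-aware fitter
        Xhat = d * np.outer(u, v)
        errs.append(np.mean((X.ravel()[f] - Xhat.ravel()[f]) ** 2))
    return np.mean(errs)                      # average MSE over the folds
```

The optimal (c_1, c_2) pair is then the one minimizing cv_mse over a grid of candidate values.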
Though c1 and c2 can always be chosen as described above, for certain applications cross-validation may not be necessary. If the PMD is applied to a data set as a descriptive method, in order to obtain an intuitive understanding of the data, then one might simply fix c1 and c2 based on some other criterion. For instance, one could select small values of c1 and c2 in order to obtain factors that have a desirable level of sparsity.
2.5 Relationship between PMD and other matrix decompositions
In the statistical and machine learning literature, a number of matrix decompositions have been developed. We present some of them here, as they are related to the PMD. The best known of these decompositions is the SVD, which takes the form of (2.1). The SVD has a number of attractive properties, but the vectors u_k and v_k of the SVD in general have no zero elements, and their elements may be positive or negative. As a result, the vectors u_k and v_k are often not interpretable.

Lee and Seung (1999, 2001) proposed the nonnegative matrix factorization (NNMF), which approximates a matrix X with nonnegative elements as the product of two low-rank matrices whose elements are also nonnegative.

Hoyer (2002, 2004) presents nonnegative sparse coding (NNSC), an extension of the NNMF that results in nonnegative vectors v_k and u_k, one or both of which may be sparse. Sparsity is achieved using an L1-penalty. Since NNSC enforces a nonnegativity constraint, the resulting vectors can be quite different from those obtained via the PMD; moreover, the iterative algorithm for finding the NNSC vectors is not guaranteed to decrease the objective at each step.

2.6 An example: PMD for DNA copy number data

DNA copy number data, measured for instance by comparative genomic hybridization (CGH), consist of the (log) ratio of the number of copies of each gene or chromosomal region in a sample relative to a reference, ordered along the chromosome. Tibshirani and Wang (2008) propose denoising a single CGH sample x by solving the fused lasso regression problem minimize_v ½∑_j (x_j − v_j)^2 + λ_1 ∑_j |v_j| + λ_2 ∑_j |v_j − v_{j−1}|, whose solution v̂ is a smoothed estimate of the copy number. Note that λ_1, λ_2 ≥ 0. Now, suppose that multiple CGH samples are available. We expect some patterns of gain and loss to be shared between some of the samples, and we wish to identify those patterns and samples. Let X denote the data matrix; the n rows denote the samples, and the p columns correspond to (ordered) CGH spots. In this case, the use of PMD(L1, FL) is appropriate because we wish to encourage sparsity in u (corresponding to a subset of samples) and sparsity and smoothness in v (corresponding to chromosomal regions). The use of PMD(L1, FL) in this context is related to ongoing work by Nowak and others (2009). One could apply PMD(L1, FL) to all chromosomes together (making sure that smoothness in the fused lasso penalty is not required between chromosomes), or one could apply it to each chromosome separately.
We demonstrate this method on a simple simulated example. We simulate 12 samples, each of which consists of copy number measurements on 1000 spots on a single chromosome. Five of the 12 samples contain a region of gain from spots 100–500. In Figure 2, we compare the results of PMD(L1, L1) to PMD(L1, FL). It is clear that the latter method precisely uncovers the region of gain and the set of samples in which that gained region is present. Simulation details are given in the Appendix (Section A.3).
Figure 2: Simulated CGH data. Top: results of PMD(L1, FL); middle: results of PMD(L1, L1); bottom: generative model. PMD(L1, FL) successfully identifies both the region of gain and the subset of samples for which that region is present.
3. SPARSE PCA VIA PMD
3.1 Three methods for sparse PCA
Here, we begin with an n × p data matrix X, with centered columns. Several methods have been proposed for estimating sparse principal components, based on either the maximum variance property of principal components or the regression/reconstruction error property. In this section, we present 2 existing methods for sparse PCA from the literature, as well as a new method based on the PMD. We will then go on to show that these 3 methods are closely related to each other. We will use the connection between PMD and one of the other methods to develop a fast algorithm for what was previously a computationally difficult formulation for sparse PCA.
The 3 methods for sparse PCA are as follows:
- SPCA: Zou and others (2006) exploit the regression/reconstruction error property of principal components in order to obtain sparse principal components. For a single component, their sparse principal components analysis (SPCA) technique solves

minimize_{θ,v} ‖X − X v θ^T‖_F^2 + λ_1 ‖v‖_2^2 + λ_2 ‖v‖_1 subject to ‖θ‖_2 = 1, (3.1)

where λ_1, λ_2 ≥ 0 and v and θ are p-vectors. The criterion can equivalently be written with an inequality L2 bound on θ, in which case it is biconvex in θ and v.

- SCoTLASS: The SCoTLASS procedure of Jolliffe and others (2003) uses the maximal variance characterization of principal components. The first sparse principal component solves the problem

maximize_v v^T X^T X v subject to ‖v‖_2^2 ≤ 1, ‖v‖_1 ≤ c, (3.2)

and subsequent components solve the same problem with the additional constraint that they must be orthogonal to the previous components. This problem is not convex, since a convex objective must be maximized, and the computations are difficult. Trendafilov and Jolliffe (2006) provide a projected gradient algorithm for optimizing (3.2). We will show that this criterion can be optimized much more simply by direct application of Algorithm 3 in Section 2.3.

- SPC: We propose a new method for sparse PCA, based on the PMD. Consider the PMD criterion with P_2(v) = ‖v‖_1 and no P_1-constraint on u. We call this criterion PMD(·, L1); it can be written as

maximize_{u,v} u^T X v subject to ‖v‖_1 ≤ c_2, ‖u‖_2^2 ≤ 1, ‖v‖_2^2 ≤ 1. (3.3)

The algorithm for PMD(·, L1) is obtained by replacing Step 2(a) of Algorithm 3 (the single-factor PMD(L1, L1) algorithm) with the simpler update u ← Xv/‖Xv‖_2. We will refer to this method as "sparse principal components," or SPC. At the solution, u = Xv/‖Xv‖_2, so that u^T X v = ‖Xv‖_2 = (v^T X^T X v)^{1/2}. Therefore, the v that maximizes (3.3) also maximizes v^T X^T X v subject to ‖v‖_1 ≤ c_2 and ‖v‖_2^2 ≤ 1; that is, it solves the SCoTLASS criterion (3.2). A sketch of the SPC iteration follows.
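A minimal sketch of single-factor SPC, reusing the l1_l2_update helper from the Algorithm 3 sketch above (our code):

```python
import numpy as np

def spc(X, c2, n_iter=100):
    """Single-factor SPC, i.e. PMD(., L1): Step 2(a) is the simple update
    u = Xv/||Xv||_2; Step 2(b) is the L1 update of Algorithm 3
    (the l1_l2_update helper defined earlier)."""
    v = np.linalg.svd(X, full_matrices=False)[2][0]
    for _ in range(n_iter):
        u = X @ v
        u /= np.linalg.norm(u)               # simpler Step 2(a): no penalty on u
        v = l1_l2_update(X.T @ u, c2)        # Step 2(b): L1-penalized update
    d = u @ X @ v
    return d, u, v
```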







We compare the proportion of variance explained by SPC and SPCA on a publicly available gene expression data set from http://icbp.lbl.gov/breastcancer/, described in Chin and others (2006), consisting of 19 672 gene expression measurements on 89 samples. (For consistency with Section 4.3, we use the subset of the samples for which both gene expression and CGH measurements are available.) For computational reasons, we use only the 5% of genes with the highest variance. We compute the first 25 sparse principal components for SPC, using the constraint on v that results in an average of 195 genes with nonzero elements per sparse component. We then perform SPCA on the same data, using tuning parameters such that each loading has the same number of nonzero elements as obtained with SPC. Figure 3 shows the proportion of variance explained by the first k sparse principal components, defined as tr(X_k^T X_k), where X_k = X V_k (V_k^T V_k)^{-1} V_k^T and V_k is the matrix that has the first k sparse principal components as its columns. (This definition is proposed in Shen and Huang, 2008.) SPC results in a substantially greater proportion of variance explained, as expected.
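This quantity can be computed directly from the definition; in the sketch below we normalize tr(X_k^T X_k) by tr(X^T X) so that the result is a proportion (our code):

```python
import numpy as np

def prop_var_explained(X, Vk):
    """tr(X_k^T X_k) / tr(X^T X), with X_k = X V_k (V_k^T V_k)^{-1} V_k^T,
    the projection of X onto the span of the first k sparse components
    (definition of Shen and Huang, 2008)."""
    Xk = X @ Vk @ np.linalg.solve(Vk.T @ Vk, Vk.T)
    return np.trace(Xk.T @ Xk) / np.trace(X.T @ X)
```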
Figure 3: Breast cancer gene expression data: a greater proportion of variance is explained when SPC is used to obtain the sparse principal components, rather than SPCA. Multiple SPC components were obtained as described in Algorithm 2.

Shen and Huang (2008) propose a regularized low-rank matrix approximation: they minimize ‖X − u v^T‖_F^2 + P_λ(v) over v and over unit vectors u, and v/‖v‖_2 is the first sparse principal component of their method. They present a number of forms for P_λ(v), including P_λ(v) = ‖v‖_1. This is very close in spirit to PMD(·, L1), and in fact the algorithm is almost the same. Since Shen and Huang (2008) use the Lagrange form of the constraint on v, their formulation does not solve the SCoTLASS criterion. Our method unifies the regularized low-rank matrix approximation approach of Shen and Huang (2008) with the maximum variance criterion of Jolliffe and others (2003) and the SPCA method of Zou and others (2006).

To summarize, in our view the SCoTLASS criterion (3.2) is the simplest and most natural way to define the notion of sparse principal components. Unfortunately, the criterion is difficult to optimize. Our SPC criterion (3.3) recasts this problem as a biconvex one, leading to an extremely simple algorithm for the solution of the first SCoTLASS component. The SPCA criterion (3.1), by contrast, is somewhat complex; but we have shown that when a natural symmetric constraint is added to (3.1), it too is equivalent to (3.2) and (3.3). Taken as a whole, these arguments point to the SPC criterion (3.3) as the criterion of choice for this problem, at least for a single component.
3.2 Another option for SPC with multiple factors
As mentioned in Section 3.1, the first sparse principal component of our SPC method optimizes the SCoTLASS criterion. But subsequent sparse principal components obtained using SPC do not, since we do not enforce that v_k be orthogonal to v_1, …, v_{k−1}. It is not obvious that SPC can be extended to achieve orthogonality among the subsequent v_k's, or even that orthogonality is desirable. However, SPC can easily be extended to give something similar to orthogonality.





4. PENALIZED CCA VIA PMD
4.1 PMD and other approaches to penalized CCA

Suppose X and Y are n × p and n × q matrices of measurements on the same n observations, with standardized columns. Classical CCA seeks vectors u ∈ R^p and v ∈ R^q that maximize cor(Xu, Yv). CCA results in vectors u, v that are not sparse, and these vectors are not unique if p or q exceeds n. In certain applications, especially if p or q is large, one might be interested in finding a linear combination of the variables in X and Y that has large correlation but is also sparse in the variables used.


We obtain a penalized CCA criterion by applying the PMD to the cross-products matrix X^T Y; that is, we substitute X^T Y for X in the single-factor criterion:

maximize_{u,v} u^T X^T Y v subject to ‖u‖_2^2 ≤ 1, ‖v‖_2^2 ≤ 1, P_1(u) ≤ c_1, P_2(v) ≤ c_2.

This amounts to treating the covariance matrices X^T X and Y^T Y as if they were diagonal, a reasonable simplification when p and q are large relative to n. PMD(L1, L1) yields sparse vectors u and v for c_1 and c_2 sufficiently small. Waaijenborg and others (2008) also present a sparse CCA method, and their algorithm for finding the sparse canonical variates is quite similar to PMD(L1, L1). However, they arrive at their algorithm in a circuitous way, as an approximation to the elastic net, and they do not state the exact criterion that they are solving. Moreover, their method involves the Lagrange form rather than the bound form of the L1-constraints on u and v; as a result, it yields a different solution. Parkhomenko and others (2007) present a sparse CCA algorithm without an exact criterion, and the method of Wiesel and others (2008) is not biconvex. Our method for sparse CCA is very closely related to the method of Parkhomenko and others (2009), which we encountered after our paper was submitted.
When L1-penalties are used for both P1 and P2, then the values of c1 and c2 can be chosen by cross-validation, where c1 and c2 are chosen using a grid search to maximize (across the cross-validation folds) the quantity cor(Xu,Yv), where u and v are computed on a training set and X and Y constitute an independent test set. Alternatively, values of c1 and c2 can simply be chosen to result in desired amounts of sparsity of u and v.
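Concretely, penalized CCA with L1-penalties is the PMD(L1, L1) machinery applied to Z = X^T Y. The sketch below reuses the l1_l2_update helper from the Algorithm 3 sketch (our code); the closing comment indicates the grid-search tuning just described.

```python
import numpy as np

def sparse_cca(X, Y, c1, c2, n_iter=100):
    """Penalized CCA via PMD(L1, L1) on the cross-products matrix X^T Y;
    the columns of X and Y are assumed standardized."""
    Z = X.T @ Y                                       # p x q cross-products matrix
    v = np.linalg.svd(Z, full_matrices=False)[2][0]
    for _ in range(n_iter):
        u = l1_l2_update(Z @ v, c1)                   # canonical vector for X
        v = l1_l2_update(Z.T @ u, c2)                 # canonical vector for Y
    return u, v

# Tuning: for each (c1, c2) on a grid, fit (u, v) on training samples and
# record the held-out correlation np.corrcoef(X_te @ u, Y_te @ v)[0, 1];
# choose the pair maximizing the average test correlation.
```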
4.2 Sparse CCA applied to simulated data
We demonstrate the PMD(L1, L1) method on a simple simulated example. In this simulation, p, q > n, so classical CCA cannot be used. There are 2 sparse latent factors that generate X and Y; the factors are orthogonal to each other. Results can be seen in Figure 4. For comparison, we also computed the SVD of X^T Y. Compared to the SVD, PMD(L1, L1) does fairly well at identifying linear combinations of the underlying factors. Details of the simulation are given in Section A.4 of the Appendix.
Figure 4: The efficacy of PMD(L1, L1) is demonstrated using a simulation in which X is generated from 2 sparse latent factors, u1 and u2, and Y is generated from 2 sparse latent factors, v1 and v2. The PMD(L1, L1) method identifies linear combinations of these sparse factors. Details of the simulation are given in Section A.4 of the Appendix.
4.3 Application of penalized CCA to genomic data
In genomic research, it is becoming increasingly common for researchers to use multiple assays in order to characterize a single set of samples. For instance, gene expression measurements and DNA copy number data might be available on the same set of tissue samples. The patients might also be genotyped. Examples of studies that combine gene expression and copy number and/or single-nucleotide polymorphism (SNP) data include Hyman and others (2002), Pollack and others (2002), Morley and others (2004), and Stranger and others (2005, 2007). While much research has gone into developing methods for the identification of genes and SNPs that are associated with an outcome based on a single gene expression or SNP data set, the question of how to combine the results of multiple assays in order to perform inference across the data sets has not been thoroughly investigated.
If both gene expression data and genotype data are available on the same set of samples, then a natural goal is to identify sets of genes that are correlated with sets of SNPs. Parkhomenko and others (2007, 2009) and Waaijenborg and others (2008) demonstrate the use of sparse CCA for this purpose. Similarly, if CGH and gene expression data are both available for a set of cancer samples, then one might wish to identify a set of genes whose expression is correlated with a set of chromosomal gains or losses.
In the case of gene expression and CGH data, it makes sense to perform penalized CCA with an L1-penalty on the canonical variate corresponding to genes and a fused lasso penalty on the canonical variate corresponding to copy number—in other words, we might use PMD(L1, FL). There are 2 ways that this could be done. Penalized CCA could be applied using all available gene expression data and copy number data on all chromosomes (being sure that the fused lasso smoothness penalty is not applied between chromosomes). Alternatively, one could perform penalized CCA once per chromosome, each time using copy number data on that chromosome and all the available gene expression data. (The expression data are not restricted to a particular chromosome.) We choose to pursue this latter approach.
We examine the performance of PMD(L1, FL) on the breast cancer data set publicly available at http://icbp.lbl.gov/breastcancer/ and described in Chin and others (2006). In addition to the gene expression data described earlier, CGH measurements are available on the same set of 89 samples. There are p = 19 672 gene expression measurements and q = 2149 CGH measurements. For convenience and interpretability of the results, we ran PMD(L1, FL) using values of the tuning parameters that resulted in approximately 25 nonzero genes per chromosome (i.e. 25 nonzero elements of u) and a very smooth and somewhat sparse v. Since we performed PMD(L1, FL) once for each of the 23 chromosomes, we obtained 23 v vectors, shown in the left panel of Figure 5. Nonzero u and v were found for all chromosomes except chromosome 2. It is clear that PMD(L1, FL) resulted in both sparsity and smoothness of the v vectors.
Figure 5: PMD(L1, FL) was performed for the breast cancer data set. Left: for each chromosome, the weights of v obtained using PMD(L1, FL) are shown. All the v weights shown are positive, but the results would not be affected by flipping the signs of both v and u. On chromosome 2, v has no nonzero elements. Right: for each chromosome, u and v were computed on a training set consisting of 3/4 of the samples, and cor(Xu, Yv) is plotted, where X and Y are the training (dashed) and test (solid) data.
The genes corresponding to nonzero weights in each u vector can also be examined. Consider Table 1, which shows the genes that had nonzero weights when sparse CCA was run using the CGH spots on chromosome 1 and all the available genes. Notably, only genes located on chromosome 1 were given nonzero weights. This is intuitive: a copy number change on chromosome 1 should be correlated with expression changes in the genes that were amplified or deleted. Similar results were seen when PMD(L1, FL) was run using the CGH spots on other chromosomes. If one is interested only in discovering genes not on chromosome k that are correlated with copy number change on chromosome k, then one could perform PMD(L1, FL) using the CGH spots on chromosome k and only genes that are not on chromosome k.
Table 1: PMD(L1, FL) was performed using the CGH spots on chromosome 1 and gene expression measurements on all chromosomes. The genes corresponding to nonzero elements of u are shown. Notably, all the genes with nonzero u_i are located on chromosome 1. Similar results hold for the other chromosomes.
| i | Gene | Chromosome | u_i |
| --- | --- | --- | --- |
| 1 | Jumping translocation break point | 1 | 0.039 |
| 2 | Translocated promoter region (to activated MET oncogene) | 1 | 0.153 |
| 3 | Glyceronephosphate O-acyltransferase | 1 | 0.255 |
| 4 | NADH dehydrogenase (ubiquinone) Fe-S protein 2 | 1 | 0.265 |
| 5 | Nucleoporin 133kD | 1 | 0.007 |
| 6 | Geranylgeranyl diphosphate synthase 1 | 1 | 0.131 |
| 7 | Rab3 GTPase-activating protein, noncatalytic subunit (150 kD) | 1 | 0.283 |
| 8 | Peroxisomal biogenesis factor 11B | 1 | 0.154 |
| 9 | Phosphatidylinositol glycan, class C | 1 | 0.124 |
| 10 | Tubulin-specific chaperone e | 1 | 0.069 |
| 11 | Protoporphyrinogen oxidase | 1 | 0.052 |
| 12 | Tuftelin 1 | 1 | 0.037 |
| 13 | Papillary renal cell carcinoma (translocation associated) | 1 | 0.055 |
| 14 | Splicing factor 3b, subunit 4, 49 kD | 1 | 0.469 |
| 15 | UDP-Gal:betaGlcNAc beta 1,4- galactosyltransferase | 1 | 0.27 |
| 16 | Hypothetical protein FLJ12671 | 1 | 0.229 |
| 17 | Hypothetical protein HSPC155 | 1 | 0.168 |
| 18 | Mitochondrial ribosomal protein L24 | 1 | 0.195 |
| 19 | HSPC003 protein | 1 | 0.391 |
| 20 | Hypothetical protein FLJ10876 | 1 | 0.091 |
| 21 | CGI-78 protein | 1 | 0.154 |
| 22 | Chromosome 1 open reading frame 27 | 1 | 0.133 |
| 23 | Hypothetical protein My014 | 1 | 0.278 |
In order to assess whether PMD(L1, FL) is capturing real structure in the breast cancer data, we computed p-values for the penalized canonical variates. For each chromosome, a p-value was computed as follows.
1. Let u, v denote the penalized canonical variates found for this chromosome, and record c = cor(Xu, Yv).
2. For i ∈ 1, …, B, permute the samples in X to obtain X*; then compute u* and v*, the penalized canonical variates based on the data (X*, Y). Record c_i = cor(X*u*, Yv*).
3. The p-value is given by (1/B) ∑_{i=1}^{B} 1_{(c_i ≥ c)}, the fraction of permutations yielding a correlation at least as large as the one observed.

With the exception of chromosome 2, all p-values were significant.
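A sketch of this permutation test; `fit` is any routine returning the penalized canonical variates (u, v), such as the sparse_cca sketch above (our code):

```python
import numpy as np

def cca_permutation_pvalue(X, Y, fit, B=100, seed=0):
    """Permutation p-value for penalized canonical variates."""
    rng = np.random.default_rng(seed)
    u, v = fit(X, Y)
    c = np.corrcoef(X @ u, Y @ v)[0, 1]           # observed correlation
    exceed = 0
    for _ in range(B):
        Xp = X[rng.permutation(X.shape[0])]       # permute the samples of X
        up, vp = fit(Xp, Y)
        ci = np.corrcoef(Xp @ up, Y @ vp)[0, 1]   # correlation under the null
        exceed += ci >= c
    return exceed / B
```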
In order to further assess the penalized canonical variates that we obtained, we used a training set/test set approach, as follows.
1. We repeatedly divided the 89 samples into a training set (X_tr, Y_tr) containing 3/4 of the samples and a test set (X_te, Y_te) containing the remaining samples.
2. Penalized CCA was performed on the training set to obtain the vectors u_tr and v_tr.
3. cor(X_tr u_tr, Y_tr v_tr) and cor(X_te u_tr, Y_te v_tr) were computed.

The right panel of Figure 5 shows the average values of cor(X_tr u_tr, Y_tr v_tr) and cor(X_te u_tr, Y_te v_tr) for each chromosome. Even on the test set, quite high correlations result for most chromosomes. In the absence of signal, the average value of cor(X_te u_tr, Y_te v_tr) would be 0.
5. DISCUSSION
We have developed a method for finding a PMD in an efficient manner. This decomposition builds upon a variety of existing matrix decompositions, such as the SVD, the NNMF (Lee and Seung, 1999, 2001), and the plaid model (Lazzeroni and Owen, 2002). We are most interested in obtaining a decomposition made up of sparse vectors. To do this, we use an L1-penalty on the rows and columns of our decomposition. We also explore the use of an L1-penalty on the rows and a fused lasso penalty on the columns; this is appropriate if the columns correspond to DNA copy number measurements ordered by chromosomal location. We exploit the biconvex nature of the PMD criterion in order to minimize it via an alternating algorithm.
We have applied the PMD to give attractive solutions to 2 additional problems: sparse PCA and sparse CCA. We used the resulting sparse CCA method to identify the sets of genes that are correlated with regions of DNA copy number change using a data set consisting of DNA copy number change and gene expression measurements on the same set of samples. In addition, we have established connections between 3 different methods for obtaining sparse principal components.
An R package implementing these methods, called PMA (for penalized multivariate analysis), is available on CRAN.
FUNDING
National Defense Science and Engineering Graduate Fellowship to D.W.; National Science Foundation (DMS-9971405 to R.T., DMS-0505676 to T.H.); National Institutes of Health (N01-HV-28183 to R.T., 2R01 CA 72028-07 to T.H.).
We thank an associate editor and 2 referees for helpful comments, Stephen Boyd for a useful discussion of biconvexity, and Holger Hoefling for the use of his R code for fused lasso regression. Conflict of Interest: None declared.
APPENDIX
A.1 Proof of Theorem 2.1

A.2 Proof of Lemma 2.2





A.3 Simulation details for Figure 2
Let X be a 12 × 1000 matrix of data. The elements of X are generated as follows:
For i ∈ 1, …, 5 and j ∈ 100, …, 500, X_ij ∼ N(1, 1).
Otherwise, X_ij ∼ N(0, 1).

In other words, the first 5 samples have a region of gain between positions 100 and 500. A numpy rendering of this model follows.
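A direct numpy rendering of this generative model (the seed is our arbitrary choice; spots are 1-based in the text and 0-based in the code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(12, 1000))   # X_ij ~ N(0, 1) everywhere
X[:5, 99:500] += 1.0                        # samples 1-5, spots 100-500: N(1, 1)
```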
A.4 Simulation details for Figure 4
We generate matrices X and Y, each with n = 50 rows and p = q = 100 columns, as follows.
Let u1 be a vector of length p, with 20 1s, 20 −1s, and 60 0s.
Let u2 be a vector of length p, with 10 −1s, 10 1s, 10 −1s, 10 1s, and 60 0s.
Let v1 be a vector of length p, with 60 0s, 20 −1s, and 20 1s.
Let v2 be a vector of length p, with 60 0s, 10 1s, 10 −1s, 10 1s, and 10 −1s.
Let w1 and w2 be orthonormal vectors of length n.
Generate the data matrices as follows: X_ij ∼ N(w_{1i} u_{1j} + w_{2i} u_{2j}, 0.3^2) and Y_ij ∼ N(w_{1i} v_{1j} + w_{2i} v_{2j}, 0.3^2). A numpy rendering follows.
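A direct numpy rendering of this simulation (the seed and the QR-based construction of the orthonormal pair w1, w2 are our choices):

```python
import numpy as np

n, p = 50, 100
rng = np.random.default_rng(0)

u1 = np.concatenate([np.ones(20), -np.ones(20), np.zeros(60)])
u2 = np.concatenate([-np.ones(10), np.ones(10), -np.ones(10), np.ones(10), np.zeros(60)])
v1 = np.concatenate([np.zeros(60), -np.ones(20), np.ones(20)])
v2 = np.concatenate([np.zeros(60), np.ones(10), -np.ones(10), np.ones(10), -np.ones(10)])

# Orthonormal length-n factors w1, w2 via the QR decomposition.
w = np.linalg.qr(rng.normal(size=(n, 2)))[0]
w1, w2 = w[:, 0], w[:, 1]

# X_ij ~ N(w1_i u1_j + w2_i u2_j, 0.3^2), and similarly for Y.
X = rng.normal(np.outer(w1, u1) + np.outer(w2, u2), 0.3)
Y = rng.normal(np.outer(w1, v1) + np.outer(w2, v2), 0.3)
```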








