ACE of Space: Estimating Genetic Components of High-Dimensional Imaging Data

It is of great interest to quantify the contributions of genetic variation to brain structure and function, which are usually measured by high-dimensional imaging data (e.g., magnetic resonance imaging). In addition to the variance, the covariance patterns in the genetic effects of a functional phenotype are of biological importance, and covariance patterns have been linked to psychiatric disorders. The aim of this paper is to develop a scalable method to estimate heritability and the non-stationary covariance components in high-dimensional imaging data from twin studies. Our motivating example is from the Human Connectome Project (HCP). Several major big-data challenges arise from estimating the genetic and environmental covariance functions of functional phenotypes extracted from imaging data, such as cortical thickness with 60,000 vertices. Notably, truncating to positive eigenvalues and their eigenfunctions from unconstrained estimators can result in large bias. This motivated our development of a novel estimator ensuring positive semidefiniteness. Simulation studies demonstrate large improvements over existing approaches, both with respect to heritability estimates and covariance estimation. We applied the proposed method to cortical thickness data from the HCP. Our analysis suggests fine-scale differences in covariance patterns, identifying locations in which genetic control is correlated with large areas of the brain and locations where it is highly localized.


Introduction
It is of great interest to quantify the contribution of genetic effects to brain structure and function, but scientific understanding of such effects is in its infancy (Chen et al., 2013). Measuring the relative size of genetic and environmental effects on brain traits may provide insight into the etiology of neurological and neurodegenerative disorders (Kendler, 2001). One method to estimate genetic variation in brain phenotypes is to use twin studies. A major goal of the young-adult Human Connectome Project (HCP) is to map the heritability and genetic underpinnings of brain traits (Van Essen et al., 2013). This dataset includes imaging data on approximately 1200 subjects with over 200 twin pairs. The traditional heritability model uses monozygotic and dizygotic twins to decompose genetic and environmental components of univariate phenotypes, and it provides guidance for molecular genetic studies (Van Dongen et al., 2012). Brain structure and function, however, are often measured by high-dimensional imaging data and commonly represented as functional phenotypes. For instance, various shape analysis methods have been developed to characterize brain cortical and subcortical structures in humans. Functional phenotypes extracted from shape analysis may be effective for the identification of causal genes and a mechanistic understanding of the pathophysiological processes of neurological disorders (Zhao and Castellanos, 2016).
As an illustration, we consider the cortical thickness dataset obtained from the HCP, where each cortical thickness functional phenotype is measured at approximately 60,000 locations on the cortical surface. The cerebral cortex is the outer layer of the brain and consists of a highly folded sheet of gray matter varying in thickness from approximately two to four millimeters. Cortical thickness is important to cognition and intelligence, and cortical thinning may be associated with dementia (Dickerson et al., 2009).
Correlations in cortical thickness between subregions measured across a population are biologically important, but the extent to which these correlations result from genetic influences is poorly understood (Evans, 2013). These correlations, also called cortical networks, may reflect coordinated developmental pathways and recapitulate certain white-matter tracts and functional networks (Alexander-Bloch et al., 2013). Cortical correlations have been associated with psychiatric and neurological disorders including depression and Alzheimer's disease (Wang et al., 2016; He et al., 2008). Cortical correlations are typically estimated between different regions using a parcellation, but additional insight may be gained by examining higher resolution spatial covariance functions.
In this paper, we will use the HCP dataset to measure the heritability and various covariance structures (e.g., genetic) of cortical thickness in the healthy human brain. In particular, we focus on the additive genetic covariance function, Σ_a(v, v'), described in Section 2.1. The clinical meaning of Σ_a(v, v') is the covariance in the genetic component of cortical thickness between two locations.
For example, a positive value indicates that an individual with a thicker cortex at location v tends to have a thicker cortex at location v' due to genetic factors.
• We estimate the covariance patterns in the genetic effects on cortical thickness in the Human Connectome Project, which provides detailed insight into cortical networks.
The remainder of this paper is organized as follows. In Section 2, we present Fisher's model for heritability and a recent functional extension. We then present our estimators and algorithm.
In Section 3, we conduct a simulation study. In Section 4, we analyze the HCP cortical thickness data, and we conclude with a discussion in Section 5.

The ACE of space model
Fisher's ACDE model proposes additive genetic (A), dominant genetic (D), common environmental (C), and unique environmental (E) components of variation in a phenotype. Additive genetic effects can be estimated by assuming the correlation between genetic traits is 1 for monozygotic (MZ) twins and 0.5 for dizygotic (DZ) twins, which is based on genetic theory. The correlation for dominant effects is 1 for MZ and 0.25 for DZ, but dominant and additive effects are not simultaneously identifiable in the basic twin study design. The ACE model controls for effects due to the shared environment and is most appropriate for polygenic phenotypes. Heritability estimates from the ACE model are called narrow-sense heritability, denoted by h^2, which contrasts with broad-sense heritability, H^2, which includes dominant effects.
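To make these assumed twin correlations concrete, the classic method-of-moments (Falconer-style) decomposition solves for the A, C, and E shares from the MZ and DZ phenotypic correlations. This is a back-of-envelope illustration of the identification logic, not the estimator developed in this paper, and the input correlations below are made up:

```python
# Under the ACE model, the within-pair covariances of a standardized trait are
#   r_MZ = h^2 + c^2   and   r_DZ = 0.5 * h^2 + c^2,
# so the components can be solved by the classic Falconer-style equations.
def ace_from_twin_correlations(r_mz: float, r_dz: float) -> dict:
    """Method-of-moments ACE decomposition from standardized twin correlations."""
    h2 = 2.0 * (r_mz - r_dz)   # narrow-sense heritability (A)
    c2 = 2.0 * r_dz - r_mz     # common environment (C)
    e2 = 1.0 - r_mz            # unique environment (E)
    return {"A": h2, "C": c2, "E": e2}

# Illustrative correlations, not values from the HCP analysis.
components = ace_from_twin_correlations(r_mz=0.6, r_dz=0.45)
print(components)  # A = 0.30, C = 0.30, E = 0.40 (up to float rounding)
```

Note that the three shares always sum to one, and negative solutions signal model misfit (e.g., dominance effects).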
Let i ∈ {1, . . ., n}, where i indexes the family and n the total number of families. Let n_1 denote the number of MZ pairs and N_1 the set of families with MZ pairs; n_2 and N_2 denote the number and set, respectively, of DZ pairs; and n_3 and N_3 denote the number and set of singletons with no relatives in the dataset.
Let m_i = 2 if the ith family contains twins and m_i = 1 if the ith family comprises a singleton, and let j index the individual within the ith family. For clarity, we here assume that all observations y_ij belong to one of these three classes, but inclusion of non-twin siblings is discussed in Section 2.2. We define the standard ACE model using the mixed model formulation (Rabe-Hesketh et al., 2008) as follows:

y_ij = X_ij^T β + 1l(i ∈ N_1) a_i + 1l(i ∈ N_2) (a_i + a_ij)/√2 + 1l(i ∈ N_3) a_ij + c_i + e_ij,   (1)

where 1l(•) is an indicator function; X_ij ∈ R^p are fixed covariates, which are typically effects we want to control for that are not of primary interest; β ∈ R^p is a vector of coefficients; a_i iid∼ N(0, σ_a^2) and a_ij iid∼ N(0, σ_a^2) are the additive genetic effects, with the 1/√2 scaling for DZ pairs giving each twin variance σ_a^2 and within-pair genetic correlation 0.5; c_i iid∼ N(0, σ_c^2) is the common environmental effect shared within a family; e_ij iid∼ N(0, σ_e^2) is the random effect for the total unique variance, which is equal to the sum of the unique environmental effects plus measurement error; and a_ij, a_i, c_i, and e_ij are mutually independent. In the next section, we will decompose e_ij into the unique environmental effect and measurement error. The ACE model can also be formulated as a structural equation model. The formulation in (1) assumes that there are no dominant effects (no non-additive genetic effects), no gene-gene interactions (epistasis), no gene-environment interactions, and no assortative mating. The standard approach for heritability estimates in neuroimaging is to pre-smooth the data and then estimate a separate model at each location, where the amount of smoothing is fixed a priori.
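The implied within-pair correlation structure can be checked by simulation: MZ twins share one additive genetic draw, while each DZ twin receives a mixture of a shared and an individual draw so that the genetic correlation is 0.5. The variance components below are illustrative, not estimates from the paper:

```python
import numpy as np

# Simulate univariate twin phenotypes under the ACE mixed-model formulation.
# MZ twins share a single additive genetic effect; for DZ twins each twin gets
# (shared + own)/sqrt(2), so the within-pair genetic correlation is 0.5.
rng = np.random.default_rng(0)
s_a, s_c, s_e = 1.0, 0.5, 0.8     # illustrative SDs of the A, C, E components
n_pairs = 200_000                 # large n so the empirical correlation is stable

def simulate_pairs(zygosity: str) -> np.ndarray:
    a_shared = rng.normal(0, s_a, n_pairs)
    c = rng.normal(0, s_c, n_pairs)          # common environment, shared by pair
    e = rng.normal(0, s_e, (n_pairs, 2))     # unique environment + error
    if zygosity == "MZ":
        a = np.stack([a_shared, a_shared], axis=1)
    else:  # DZ: each twin's effect has variance s_a^2, within-pair corr 0.5
        a_own = rng.normal(0, s_a, (n_pairs, 2))
        a = (a_shared[:, None] + a_own) / np.sqrt(2)
    return a + c[:, None] + e                # phenotype per twin (no covariates)

for z in ("MZ", "DZ"):
    y = simulate_pairs(z)
    print(z, round(float(np.corrcoef(y[:, 0], y[:, 1])[0, 1]), 3))
# MZ correlation ≈ (s_a^2 + s_c^2) / total ≈ 0.66; DZ ≈ (0.5 s_a^2 + s_c^2) / total ≈ 0.40
```

The gap between the two empirical correlations is what identifies σ_a^2 in the twin design.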
Next, we apply the functional model in Luo et al. (2018) to a spatial domain, which we call the ACE of space. Let V denote a spatial domain and v an arbitrary location. In our application, V is the cortical surface, in which the subjects are aligned in a common domain consisting of two 2D manifolds (one for each hemisphere) embedded in 3D space, and data (e.g., cortical thickness) are measured at approximately 60,000 locations (vertices). We use the spherical representation of each manifold in which each cerebral "hemisphere" is represented by its own sphere, with null values in the non-cortical areas corresponding to the connection between the two hemispheres (Figure S.8 in the Supplementary Materials). This is the common template in the processed data, in which cortical thickness at a given vertex represents the same location across subjects relative to the aligned cortical folding patterns of the sulci and gyri. The location of each vertex is denoted by a triple consisting of its spherical coordinates and a last coordinate denoting the cerebral hemisphere. The locations with data are denoted V_0. For conciseness, we hereafter denote these locations with single indices v ∈ {1, . . ., V} or v ∈ V_0. Modeling will incorporate the local spatial correlation using kernel regression with geodesic distance, as measured using the great circle distance, and the kernel is equal to zero when vertices are in different hemispheres.
See Appendix C. More generally, long-distance correlations will be estimated from the data; see Section 2.2. For display, the vertices are mapped to the Conte69 atlas, which is an average of cortical folding patterns from subjects in an independent dataset.
Let e_ij,G(v) denote the unique environmental effect generated from a Gaussian process, and let e_ij,L(v) denote the measurement error. Then, the functional ACE model is

y_ij(v) = X_ij^T β(v) + 1l(i ∈ N_1) a_i(v) + 1l(i ∈ N_2) (a_i(v) + a_ij(v))/√2 + 1l(i ∈ N_3) a_ij(v) + c_i(v) + e_ij,G(v) + e_ij,L(v),

where the additive genetic processes a_i(v) and a_ij(v), the common environmental process c_i(v), and the processes e_ij,G(v) and e_ij,L(v) are mutually independent mean-zero Gaussian processes with covariance functions Σ_a(v, v'), Σ_c(v, v'), Σ_e,G(v, v'), and Σ_e,L(v, v') = σ_e,L^2(v) 1l(v = v'), respectively. To simplify notation, we let σ_a^2(v) = Σ_a(v, v), σ_c^2(v) = Σ_c(v, v), and σ_e,G^2(v) = Σ_e,G(v, v). Then narrow-sense heritability is defined as

h^2(v) = σ_a^2(v) / {σ_a^2(v) + σ_c^2(v) + σ_e,G^2(v)},

which is corrected for the measurement error due to σ_e,L^2(v).
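The corrected heritability above is a simple pointwise ratio once the variance functions are in hand; a minimal sketch with made-up variance values (the 0.015, 0.010, and 0.12 magnitudes echo the simulation design later in the paper):

```python
import numpy as np

# Narrow-sense heritability at each location, corrected for measurement error:
#   h^2(v) = sigma_a^2(v) / (sigma_a^2(v) + sigma_c^2(v) + sigma_eG^2(v)).
# The measurement-error variance sigma_eL^2(v) is excluded from the denominator.
def heritability(var_a, var_c, var_eG):
    var_a, var_c, var_eG = map(np.asarray, (var_a, var_c, var_eG))
    return var_a / (var_a + var_c + var_eG)

# Illustrative per-vertex variance values, not HCP estimates.
h2 = heritability(var_a=[0.015, 0.02], var_c=[0.010, 0.01], var_eG=[0.12, 0.05])
print(np.round(h2, 3))  # ≈ [0.103, 0.25]
```

Without the correction, measurement error would inflate the denominator and bias heritability toward zero, which is the motivation for estimating σ_e,L^2(v) separately in Step 3.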
Pairwise covariance estimators were proposed in Luo et al. (2018) based on kernel regression and applied to 150 locations on the corpus callosum for 129 twin pairs. We examine these estimators, called S-FSEM (Symmetric estimators from the Functional Structural Equation Model), in simulations (Section 3). This approach can result in negative estimates of variance parameters, particularly for V ≫ n. Here, we develop a method that jointly estimates the covariance functions (under PSD constraints) for thousands of locations, which can improve both estimates of heritability and correlation patterns.

Covariance estimation
When the estimate of a covariance function is not PSD, it is common to truncate to the positive eigenvalues and their associated eigenfunctions. This approach generally results in lower MISE than the original symmetric functions (Hall et al., 2008). Truncation was used to estimate a genetic covariance function in a pedigree model for cow growth in Lei et al. (2015). In the ACE of space model, we initially truncated to positive eigenvalues of the FSEM, called PSD-FSEM, but this resulted in large biases, as examined in Section 3. This may be more problematic in our V ≫ n application. This motivated estimation of the ACE model under PSD constraints (PSD-ACE), which we summarize in six steps.
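The truncation repair itself is straightforward; a minimal NumPy sketch on a small synthetic indefinite matrix (illustrating the generic operation, not the FSEM estimator):

```python
import numpy as np

# Repair an indefinite symmetric covariance estimate by keeping only the
# positive eigenvalues and their eigenvectors.
def truncate_to_psd(S: np.ndarray) -> np.ndarray:
    vals, vecs = np.linalg.eigh((S + S.T) / 2.0)   # symmetrize, eigendecompose
    keep = vals > 0
    return (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T

# Synthetic 3 x 3 "estimate" with one negative eigenvalue (1 - 0.9*sqrt(2) < 0).
S = np.array([[1.0, 0.9, 0.0],
              [0.9, 1.0, 0.9],
              [0.0, 0.9, 1.0]])
S_psd = truncate_to_psd(S)
print(np.linalg.eigvalsh(S_psd).min() >= -1e-12)  # True: result is PSD
```

Truncation is the nearest PSD matrix in Frobenius norm, but as the simulations below show, applying it to noisy pairwise estimates can introduce substantial bias, which motivates the constrained joint estimation of Step 6.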
• Step 1. Calculate point-wise MLEs of unknown parameters in model (1).
• Step 2. Smooth the parameter estimates using bandwidths selected using GCV and calculate the fixed-effect residuals.
• Step 3. Estimate the measurement error.
• Step 4. Use the fixed-effect residuals and estimates of measurement error as input to an initial estimator of the covariance functions, which has a convenient closed-form solution, and estimate the bandwidths using GCV.
• Step 5. Estimate the rank of the covariance functions from the number of positive eigenvalues (aided by scree plots) and truncate to the corresponding positive eigenvalues/vectors.
• Step 6. Estimate the covariance functions under PSD constraints initialized from Step 5.
Step 1 is straightforward; see, e.g., Rabe-Hesketh et al. (2008). In our implementation, we numerically optimize the log of the variance parameters, which constrains their estimates to be positive.
Step 2 aims to decrease the mean square error (MSE) of the point-wise MLEs, since the MLEs in one location tend to be similar to those in adjacent locations. The smoothed MLEs (SMLEs) are calculated using a biweight (quartic) kernel with bandwidth h, denoted k_h(v, v_0), based on the geodesic distance d(v, v_0) between locations v and v_0. The biweight kernel is defined as

k_h(v, v_0) = {1 − (d(v, v_0)/h)^2}^2 1l(d(v, v_0) < h),

and is used throughout. We use the convention that v is the focal vertex, which here we restrict to v ∈ V_0, and v_0 ∈ V_0 are locations that can contribute to the estimate at the focal vertex v. A sparse V × V smoothing matrix K_h collects the kernel weights, normalized so that each row sums to one. Let σ̂_e,MLE^2 be a vector of length V of the MLE estimates of the unique environmental plus measurement error variance. Then define σ̂_e,SMLE-h^2 = K_h σ̂_e,MLE^2. GCV is an approximation to leave-one-location-out cross-validation and is calculated as

GCV(h) = V^{-1} ||σ̂_e,MLE^2 − K_h σ̂_e,MLE^2||^2 / {1 − tr(K_h)/V}^2,

where tr is the trace operator. The GCV is calculated for a grid of values of h, and the value minimizing the GCV is chosen. Note that this approach allows a separate bandwidth to be chosen for each parameter, which contrasts with maximum weighted likelihood. This procedure is repeated for β̂_MLE, and the residuals from the smoothed estimates are calculated.
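Step 2 can be sketched as follows. For simplicity the sketch uses 1-D Euclidean distance in place of geodesic distance on the sphere, and a synthetic noisy "MLE" vector; the kernel, row-normalization, and GCV criterion follow the description above:

```python
import numpy as np

# Biweight-kernel smoothing of point-wise estimates with GCV bandwidth selection.
def biweight_weights(d: np.ndarray, h: float) -> np.ndarray:
    u = d / h
    w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)   # quartic kernel, support < h
    return w / w.sum(axis=1, keepdims=True)             # rows sum to one

def gcv_smooth(x: np.ndarray, locs: np.ndarray, bandwidths) -> np.ndarray:
    V = len(x)
    d = np.abs(locs[:, None] - locs[None, :])           # pairwise distances
    best_gcv, best_fit = np.inf, x
    for h in bandwidths:
        K = biweight_weights(d, h)
        resid = x - K @ x
        gcv = (resid @ resid / V) / (1 - np.trace(K) / V) ** 2
        if gcv < best_gcv:
            best_gcv, best_fit = gcv, K @ x
    return best_fit

# Synthetic example: smooth a noisy curve and compare MSE to the raw input.
rng = np.random.default_rng(1)
locs = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * locs)
noisy = truth + rng.normal(0, 0.3, 200)
smoothed = gcv_smooth(noisy, locs, bandwidths=[0.02, 0.05, 0.1, 0.2])
print(np.mean((smoothed - truth) ** 2) < np.mean((noisy - truth) ** 2))  # True
```

In the paper a separate GCV search is run per parameter (e.g., for σ̂_e^2 and each coordinate of β̂), which the same routine supports by calling it once per parameter vector.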
Step 3 uses the difference between the estimates of the total variance from the SMLEs and estimates of the variance due to genetic and environmental effects derived based on smoothness assumptions. We use kernel regression with the biweight kernel to estimate the smooth sum of covariance functions in which diagonal elements are excluded, as described in Appendix A.1, which has similarities to estimating the nugget effect in spatial statistics.
Step 4 is related to the closed-form solutions presented in the FSEM in Luo et al. (2018), whose method is described in Appendix A.2. Our modification lends itself to GCV for bandwidth selection and is described in Appendix A.3. We call this modification the sandwich estimator in the spirit of Xiao et al. (2016), abbreviated as SW hereafter.
Step 5 chooses the rank based on the scree plot. When V is greater than the number of subjects, as in our data application, the rank is clearly determined by the number of twins and subjects due to constraints on the maximum possible rank (e.g., Figure S.11 of the Appendix). Hence, the approach is arguably less ad hoc in our application than its use in standard PCA.
Step 6 greatly improves upon the initial estimates. Let m_i denote the number of individuals in a family. We order the families such that N_1 = {1, . . ., n_1}, N_2 = {n_1 + 1, . . ., n_1 + n_2}, and N_3 = {n_1 + n_2 + 1, . . ., n}, and order the individuals such that the twins are indexed by j = 1 and 2. Let R_ij(v) denote the fixed-effect residuals obtained from Step 2, and σ̂_e,L^2(v_0) the estimate of the measurement error obtained from Step 3. We consider the discrete problem restricting the covariance functions Σ_a(•, •), Σ_c(•, •), and Σ_e,G(•, •) to the observed locations, with ranks d_a = rank(Σ_a), d_c = rank(Σ_c), and d_e,G = rank(Σ_e,G). Consider the class of V × V (symmetric) PSD matrices A of rank d_a.
We can define this class as {Z_a Z_a^T : Z_a ∈ R^{V × d_a}}, with rows z_a^v ∈ R^{d_a}. Similarly define z_c^v and z_e,G^v as rows of Z_c ∈ R^{V × d_c} and Z_e,G ∈ R^{V × d_e,G}. For focal locations v, v' ∈ V_0 and contributing locations v_0, v_0' ∈ V_0 (i.e., locations with non-zero kernel weight when nearby), we define the PSD-ACE objective function (8), a kernel-weighted least-squares criterion that matches products of residuals to the covariances implied by the low-rank factors. We note that this decomposition is not identifiable because the minimum is not unique. However, we are not interested in the decompositions beyond their usefulness as a memory-efficient representation of the covariance matrices. The key observation is that this re-parameterization allows the calculation of an analytic gradient that is scalable. We can then optimize (8) by initializing from the closed-form solution in Step 5. Extensive simulations support the estimation steps outlined here. The steps and iterations of the gradient descent algorithm can be seen as progressively decreasing the MSE: truncating the eigenvalues/vectors decreases the MSE relative to the symmetric (closed-form) estimator, and the gradient steps then result in additional improvements. This will be seen in the simulations; see also the discussion in Section 3.2. In our applications, we did not have issues with convergence, but an approach using diagonalization after each iteration could also be pursued.
Also note that this objective function uses the product of two bivariate functions, k_h(v, v_0) k_h(v', v_0'), as the kernel for covariance estimation, which results in computational simplifications that enable estimation for large datasets. We again use the biweight kernel, and note its finite support decreases computational expense.
The computational complexity of (8) is O(V^4), which makes it impracticable for modestly sized V, much less V = 60,000. However, we can derive a gradient descent algorithm in which there is a one-time cost of O(V^3) and updates are O(V^2); the formulas for the analytic gradients appear in Appendix A.4. We initially use a modestly sized learning rate. In each iteration, the algorithm checks whether the norm of the gradient increased relative to the previous iteration, and if so, halves the learning rate. Convergence is assessed relative to the initial size of the gradient; see Algorithm 1.
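The Z Z^T re-parameterization and the adaptive learning rate can be illustrated on a toy single-covariance problem. The sketch below minimizes a kernel-weighted Frobenius criterion ||W ∘ (S − Z Z^T)||_F^2, initialized from the truncated eigendecomposition (as in Step 5); the target S, the Gaussian-shaped weights W, and the learning-rate schedule are all simplifications of the full joint A/C/E objective (8):

```python
import numpy as np

# Toy Step 6: low-rank PSD fit by gradient descent with an adaptive step size.
rng = np.random.default_rng(2)
V, d = 60, 5
Z0 = rng.normal(size=(V, d))
S = Z0 @ Z0.T + rng.normal(0, 0.5, (V, V))   # noisy symmetric target
S = (S + S.T) / 2

idx = np.arange(V)
W = np.exp(-((idx[:, None] - idx[None, :]) / 10.0) ** 2)  # kernel-style weights

def loss(Z):
    return float(np.sum(W * (S - Z @ Z.T) ** 2))

# Step 5 initialization: top-d eigenpairs, negative eigenvalues clipped to zero.
vals, vecs = np.linalg.eigh(S)
Z = vecs[:, -d:] * np.sqrt(np.clip(vals[-d:], 0.0, None))
err_init = loss(Z)

lr, prev_norm = 1e-5, np.inf
for _ in range(1000):
    G = 4 * (W * (Z @ Z.T - S)) @ Z          # analytic gradient in Z
    norm = float(np.linalg.norm(G))
    if norm > prev_norm:
        lr /= 2                              # gradient grew: shrink the step
    prev_norm = norm
    Z -= lr * G

print(loss(Z) < err_init)  # True: gradient steps improve on the truncated start
```

The fit stays PSD by construction (it is always Z Z^T), each update costs O(V^2 d) here, and the non-uniqueness of Z (any rotation Z Q with Q orthogonal gives the same fit) is harmless because only Z Z^T is reported.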

[Algorithm 1 (gradient descent for PSD-ACE): while the relative gradient norm exceeds the tolerance, take a gradient step in Z_a; if ||∇||_F^2 increased, return to the previous step and decrease the learning rate; and similarly for Z_c and Z_e,G.]

For clarity, we described (8) for twins and unrelated singletons; however, we can also use information from non-twin siblings. By looking at the expected values of the products of residuals in (6), it can be seen that non-twin siblings can be treated in the same way as singletons, and thus are included in the first summand in (8). In a similar manner, we can treat non-twin siblings as singletons in Steps 3-5. Note that we do not assume that non-twin siblings share the same common environment as their twin siblings. This procedure results in three covariance matrices, which are the covariance functions evaluated at V ≈ 60,000 locations. However, it may be desirable to obtain the corresponding covariance functions, which can be used to evaluate the heritabilities and covariances at unobserved grid locations; see Appendix A.5. The approach can also be used to estimate the covariance functions from a sample of points from the spatial domain to decrease computational expense, particularly with respect to memory usage. We can partition the locations, estimate the covariance functions for each subset, interpolate to all locations, and subsequently combine the estimates. Code to implement this low-memory approach is available on the author's GitHub.
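The partition-and-combine pattern can be sketched on a simpler object. Below, a V × V empirical covariance is assembled block by block from column partitions, so only one n × block slice of the data and one output block are touched at a time; the full method additionally interpolates each subset's covariance-function estimate to all locations, which this sketch omits:

```python
import numpy as np

# Memory-light block-wise assembly of a V x V covariance estimate.
def blockwise_cov(Y: np.ndarray, n_blocks: int) -> np.ndarray:
    n, V = Y.shape
    Yc = Y - Y.mean(axis=0)                       # center each location
    edges = np.linspace(0, V, n_blocks + 1).astype(int)
    out = np.empty((V, V))
    for a in range(n_blocks):
        for b in range(n_blocks):
            ra = slice(edges[a], edges[a + 1])
            rb = slice(edges[b], edges[b + 1])
            out[ra, rb] = Yc[:, ra].T @ Yc[:, rb] / (n - 1)
    return out

# Sanity check on synthetic data: block-wise result matches the direct estimate.
Y = np.random.default_rng(3).normal(size=(50, 12))
print(np.allclose(blockwise_cov(Y, n_blocks=3), np.cov(Y, rowvar=False)))  # True
```

The same looping structure keeps peak memory proportional to the largest block rather than the full V × V matrix when blocks are written to disk as they are computed.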

Simulation design
We simulated functional spatial data for 100 MZ pairs, 100 DZ pairs, and 200 singletons with 1,002 grid points on the unit sphere, with the sizes of the variances motivated from the HCP cortical thickness data and the basis functions chosen to result in a realistic range of variances and covariances. We defined covariance functions using sixth-order even spherical harmonics, which comprise twenty-eight basis functions, x_1(v), . . ., x_28(v). This order was chosen to result in a mixture of high and low frequency fluctuations. Higher frequencies capture rapid changes in correlation structure, which is motivated by the patterns observed in Section 4. We then defined Σ_a using five basis functions as Σ_a(v, v') = α_a Σ_{k ∈ {1,7,13,19,25}} x_k(v) x_k(v'), where α_a was chosen such that Σ_v Σ_a(v, v)/V = 0.015. The value of 0.015 is approximately equal to the average of the MLE of the genetic variance (in mm^2) in cortical thickness across all locations, as calculated in Section 4. Similarly, we defined Σ_c using five of the basis functions and scaled it such that Σ_v Σ_c(v, v)/V = 0.010. The value of 0.010 is approximately equal to the average common environmental variance in Section 4. Next, Σ_e,G(v, v') was defined with basis functions k ∈ {1, 3, 9, 15, 21, 27} and scaled so that the average variance was 0.12. This resulted in heritability that ranged from 0.016 to 0.498 with mean 0.126. Finally, σ_e,L(v) was defined from the diagonal of the matrix formed from the basis functions k ∈ {1, 4, 10, 16, 22, 28} and scaled to have average equal to 0.03. Estimation included a design matrix with a column of ones and a continuous covariate, while their true coefficients were equal to zero.
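The construction above — a low-rank covariance built from basis functions and rescaled to hit a target average variance — can be sketched as follows. For simplicity we use a Fourier basis on [0, 1] in place of spherical harmonics; the target 0.015 mirrors the Σ_a scaling in the design:

```python
import numpy as np

# Build a rank-5 covariance Sigma_a(v, v') = alpha * sum_k x_k(v) x_k(v')
# and scale it so the average variance (mean of the diagonal) is 0.015.
V = 500
v = np.linspace(0, 1, V)
basis = np.stack([np.cos(2 * np.pi * k * v) for k in (1, 3, 5, 7, 9)])  # 5 x V

Sigma_a = basis.T @ basis                   # V x V, PSD, rank 5
alpha_a = 0.015 / (np.trace(Sigma_a) / V)   # solve for the scale factor
Sigma_a *= alpha_a
print(round(float(np.trace(Sigma_a) / V), 3))  # 0.015
```

The same recipe with different basis subsets and targets yields Σ_c, Σ_e,G, and the measurement-error variances used in the simulation.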
We evaluate the estimators using the integrated squared error (ISE) and its mean over simulations (MISE). We include the primary estimators S-FSEM, PSD-FSEM, and PSD-ACE in the main manuscript.
Based on an inspection of the scree plots from a few hundred simulations, we chose eight, eight, and six eigenvalues for Σ̂_a, Σ̂_c, and Σ̂_e,G, respectively (where the true ranks were 5, 5, and 6) for the PSD-ACE. We also compared the variances and heritabilities with the point-wise MLE and the maximum weighted likelihood estimator (MWLE) with bandwidth selected using 5-fold CV as in Luo et al. (2018) (defined in Appendix A.6).

Simulation results
Overall, the MISEs for Σ̂_a and Σ̂_c are much lower for PSD-ACE than for the other methods (figure: boxplots of the ISE for Σ̂_a, Σ̂_c, and Σ̂_e,G by method). Note that the PSD-ACE improved upon the initial estimators in all simulations. Previous literature has noted that when imposing positive definite constraints with the parameterization Σ = ZZ^T, non-uniqueness can lead to issues with convergence when optima are close to each other (Pinheiro and Bates, 1996). Here, this does not appear to detrimentally affect parameter estimation, where we used an adaptive learning rate. We used a strict convergence criterion (ratio of the norm of the current gradient to the initial gradient less than 0.0001), and found that when the algorithm did not converge (in the sense that the size of the gradient failed to get smaller for vanishingly small learning rates), the estimate appeared to have adequately minimized the objective function (e.g., Figure S.1). In practice, we found that convergence can be improved by increasing the ranks in PSD-ACE, but this does not necessarily improve the estimate.
In summary, PSD-ACE has the lowest MISE, albeit with more bias than the S-FSEM. Restricting the analysis to heritability, the bias in PSD-ACE was less than in the MLE and MWLE, while also having the lowest overall MISE. Truncating covariance functions (PSD-FSEM) from symmetric estimates led to a large amount of bias in σ̂_a^2(v) and σ̂_c^2(v).

Application to HCP
We used the 32k (per hemisphere) preprocessed cortical thickness data from the 1200-subject HCP data release. We controlled for age, gender, and total intracranial volume (Appendix C). The HCP dataset contains cortical thickness for 1094 subjects, which includes twins, non-twin siblings, and unrelated individuals. In this sample, the age (mean ± sd) was 28.8 ± 3.7 years.
There were 595 females and 499 males, with 75% White, 15% African-American, 6% Asian/Native Hawaiian/Pacific Islander, and 4% "other". Of these, 452 were genotyped, which revealed that 31 of 109 twin pairs that self-reported DZ twin status were in fact MZ. In contrast, 151 out of 151 genotyped twin pairs that self-reported MZ were in fact MZ. Thus, self-reported MZ status was 100% accurate, while self-reported DZ status was only 72% accurate. Consequently, we included all self-reported MZs but excluded self-reported DZs without genotype data. This resulted in 151 MZ and 78 DZ pairs.
For S-FSEM and PSD-ACE, we included all 1094 subjects, where non-twin siblings were treated as singletons as discussed in Section 2.2. We excluded non-twin siblings from the MLEs and MWLEs, where the likelihoods assume independence or require the assumption that siblings can be treated in the same manner as DZ pairs (i.e., the common environmental variance effects, c_i(v), are the same for siblings of different ages and twins). Similarly, we elected to include one randomly selected member of each family with siblings and no twins, which resulted in 676 subjects in the MLE and MWLE analyses.
We estimated heritabilities using the point-wise MLE, the point-wise MWLE with 5-fold CV (holding out whole families), S-FSEM, and PSD-ACE. Overall, the selected bandwidths were notably small. Step 6 is the most computationally intensive step. In the Appendix, we provide scripts with the option to partition the data to decrease memory overhead, where the number of partitions can be tuned to meet a user's memory limits. Here, we used the full data on a high-memory server (requiring approximately 1.8 TB) and 24 CPUs. The PSD-ACE took approximately 26 hours to fit, while the S-FSEM took approximately 0.5 hours and the PSD-FSEM approximately 1 hour.
We assessed the sensitivity of the PSD-ACE to the selected ranks. We re-ran Step 6 with the ranks reduced by 10 for each covariance and found negligible changes; see Appendix C.4. This is expected because the PSD-ACE is initiated from the ordered positive eigenvalues/vectors of the symmetric estimates, such that excluding the smallest eigenvalues/vectors should have negligible impact. We also assessed convergence of the PSD-ACE by running an additional 600 iterations and found negligible changes; see Appendix C.5.
In general, heritability was higher near the central sulci and medial areas near the corpus callosum (Figure 4; also see the annotated PSD-ACE in Figure S.13 of the Appendix). The heritability was higher in PSD-ACE than in MLE and MWLE, and there were many zeros in the MLE and MWLE estimates but not in PSD-ACE. S-FSEM had some negative heritabilities due to negative estimates of σ_a^2(v) and tended to have estimates that were higher than MLE and MWLE but lower than PSD-ACE. The mean ± sd across all vertices was 0.272 ± 0.034 for PSD-ACE and 0.164 ± 0.305 for S-FSEM, while MLE and MWLE were 0.088 ± 0.093 and 0.088 ± 0.089, respectively. Using σ̂_e,L from Step 3 in the PSD-ACE estimation, we can also construct estimates of measurement-error-corrected heritability for MLE and MWLE: 0.108 ± 0.115 and 0.108 ± 0.110, respectively. Here, the measurement error accounts for only a small proportion of the differences. The patterns of higher heritability in the central sulci and medial areas near the corpus callosum are similar to Shen et al. (2016), which was based on young adults (22.8 ± 2.3 years) and used 10 mm Laplace-Beltrami pre-smoothing. Their vertex-wise heritability estimates averaged across ROIs ranged from 0.026 to 0.523. Since our data were not pre-smoothed and the GCV-selected bandwidth was small, we suggest our Figure 4 has a higher effective resolution than Figure 2 of Shen et al. (2016). There are some notable differences. Shen et al. (2016) found strong heritability in medial frontal areas in the left hemisphere (labeled as "right" using the radiological convention in their paper), whereas we did not find high heritability in these areas in either hemisphere. We found higher heritability in the parahippocampal gyrus and entorhinal cortex (ventral to the medial wall; see annotated Figure S.13 in the Appendix), whereas this pattern was less evident in Shen et al. (2016). The entorhinal cortex is involved in memory, and interestingly, thinning of the entorhinal cortex may interact with the APOE-4 gene in Alzheimer's disease (Thompson et al., 2011).
The genetic covariance function can be efficiently explored by creating an animation that progresses through different seeds (see https://youtu.be/ew-Pq-Enf1I). We have created figures from selected seeds that highlight findings of scientific interest (Figure 5). The covariances are normalized to define correlations. First, the seed map for vertex 1577 in the right cortex (top left), located in the parahippocampal gyrus near the boundary with the isthmus cingulate, suggests that this location is a potential hub, as it is relatively highly correlated with many areas of the cerebral cortex. Thus the genetic control of cortical thickness for this location is related to the genetic control over cortical thickness across broad areas of the brain. In contrast, the seed map for vertex 161 (top right), also located in the parahippocampal gyrus, exhibits high correlations along a narrow ridge, with substantially lower overall correlations and much more localized genetic control. The parahippocampal gyrus is associated with memory encoding and retrieval, and our analysis indicates heterogeneous genetic patterns in this region. Finally, the bottom row of Figure 5 illustrates that the local patterns of correlation can differ greatly between nearby locations.

Discussion
We present a method to estimate the genetic covariance function and heritability of brain traits from neuroimaging data. Our main contribution is the development of an estimation method that can handle a large number of grid points. In simulations, our approach improves estimates of heritability, genetic covariances, and environmental covariances. We apply our method to gain novel insights into the heritability and genetic correlations from the Human Connectome Project.
Our approach reveals fine-scale differences in covariance patterns, identifying locations in which genetic control is correlated with large areas of the brain and locations where it is highly localized.
This enables insight into the genetic underpinnings of structural networks of cortical thickness.
Our analysis reveals tradeoffs between bias and variance, which is an important consideration for scientific interpretation. Here we discuss three bias-variance tradeoffs in estimates of heritability in neuroimaging: 1) bias from smoothing; 2) bias from constraining the covariance matrix to be PSD; and 3) bias from measurement error. With respect to smoothing, we use GCV for a data-based selection of the bias-variance tradeoff, which in general will reduce the mean squared error relative to using an a priori determined degree of smoothing. The impacts of smoothing on bias in twin studies are discussed in Li et al. (2012). At higher resolutions, there is greater measurement error, which has historically motivated the use of a large amount of smoothing, e.g., a Gaussian kernel with full-width at half-maximum equal to 25-30 mm, or the use of brain traits averaged across a smaller number of regions. These approaches decrease the variance of estimators, which can increase power, but can also decrease the spatial precision, which can increase false positives. The small amount of smoothing selected via GCV in our estimates will generally result in more variable estimates than approaches using larger amounts of smoothing, but it also preserves fine-scaled changes in correlation. Different approaches have different costs and benefits, and in this respect can complement one another.
A second form of bias arises when imposing PSD constraints, as revealed by large differences between the unconstrained, truncated, and constrained estimators.Truncating the covariance functions to positive eigenvalues/eigenvectors results in lower MISE but large bias, and this large bias motivated our development of the PSD-ACE estimator.We view PSD-ACE as a compromise that results in dramatically lower variance relative to the symmetric estimators at the cost of some bias and dramatically lower bias relative to truncated estimators at additional computational cost.
In practice, one approach is to estimate both PSD-ACE and S-FSEM to compare the estimators with better variance properties versus better bias properties.
A third form of bias, measurement error, is common in heritability studies. When repeated scans on the same subject are available, Ge et al. (2017) proposed the use of linear mixed effects models with repeated measures, leading to large improvements in estimates of heritability. A functional ACE model for repeated measurements is an important avenue for future research. In the absence of repeated measurements, we utilize the assumption that the underlying genetic and environmental functions are smooth Gaussian processes, which allows the estimation of measurement error in a manner similar to the nugget effect in spatial statistics. In our simulations, likelihood-based approaches were more biased than PSD-ACE in heritability estimates because they conflate measurement error and unique environmental variance.
In this study, we have not addressed inference. In particular, many of the correlations may not be statistically significant, including the large changes in genetic correlation observed over short distances. For datasets with fewer locations, future research could develop permutation tests to calculate FWER-corrected p-values for genetic correlations. Note that for variance components, Luo et al. (2018) proposed a significance test using the MWLE. Another important avenue for future research is the extension of the PSD-ACE to more general pedigree models, which could allow the estimation of genetic components from large datasets like UK Biobank. This would also be useful in evaluating the replicability of cortical thickness heritability.
We applied our method to tens of thousands of points and produced a detailed atlas of the covariance in cortical thickness related to genetic factors. By determining the degree of smoothing from the data, our approach allows a finer spatial resolution. We used the same kernel and bandwidth for all locations across the cortical surface. However, the large changes in correlation patterns over small distances, e.g., the bottom row of Figure 5, together with the small bandwidth selected by GCV and 5-fold CV, suggest that future research could explore additional modeling approaches. Locally adaptive procedures have been developed for image smoothing, regression, and maximum weighted likelihood (e.g., Li et al. 2012). A recent method for functional PCA based on the Laplace-Beltrami operator for the cortical surface may better characterize local features than a fixed kernel (Lila et al., 2017). The most flexible approach would be to allow jump discontinuities, for example extending Zhu et al. (2014). Developing these approaches to estimate multiple covariance functions in big neuroimaging twin studies is challenging.
where $R_{ij}(v_0)$ are the SMLE residuals. The solution then has a closed form in terms of $S_0$, defined in (S.6), where "./" denotes element-wise division, $J$ is the $V \times V$ matrix of ones, and "$\odot$" denotes element-wise multiplication. Let $\hat{\sigma}^2_T$ denote the resulting estimate of the total variance at each location. Then $\hat{\sigma}^2_{e,L} = \hat{\sigma}^2_T - \operatorname{diag}\{\hat{\Sigma}_G\}$.
The bandwidth in (S.1) cannot be chosen using GCV because the trace is zero, so we chose $h$ using the matrix $S_1$ generated by the monozygotic twin pairs, defined in (S.6). Note here $E[S_1] \approx \Sigma_a + \Sigma_c$, which we assume is a sufficient proxy for the smoothness of $\Sigma_{e,G}$. The error is calculated as $\|S_1 - K S_1 K\|_F^2$ with $K$ described in Section 2.2. We found that GCV in this manner appeared to produce better results than 5-fold CV on family ID with the estimator in (S.1), where the latter appeared to under-smooth. Additionally, GCV is computationally more tractable.

A.2 FSEM Covariance Estimators
Here we present the original objective function used in covariance estimation from Luo et al. (2018).
Recall the definitions of $U_{i,j,v_0,v_0'}$ and $U_{i,v_0,v_0'}$ from (2.5) and (2.6) of the main manuscript, and define the objective function accordingly. The objective function is equivalent to local constant regression (e.g., (A.3) in Yao et al. (2005)), i.e., kernel regression, and we call the solution S-FSEM, where the S denotes symmetric. The solution to this objective function has a closed form,
where $S_{w0}(v, v')$ is a function defined using all individuals, and $S_{w2}(v, v')$ is a function defined using the DZ twins. In our application, we use geodesic distance in lieu of $\|v - v_0\|$.

A.3 Sandwich Estimators
Here we use kernel regression wherein the objective function differs from Luo et al. (2018) in that we include $v_0 = v_0'$, where we subtract the estimates of the measurement error variance from $R^2_{ij}(v_0)$, as in (2.5) of the main manuscript. The solution is similar to (S.3) and (S.4) except that the terms include $v_0 = v_0'$ and the normalizing constant is replaced by $w$. If we restrict ourselves to the covariance matrices corresponding to the $V$ locations, then the previous estimators can be written in a simple form that is also useful for computations. GCV is based on the observation that
$$\operatorname{vec}\{\hat{K}_h S_a \hat{K}_h\} = (\hat{K}_h \otimes \hat{K}_h)\operatorname{vec}\{S_a\}. \tag{S.7}$$
Note that here we apply GCV spatially, which approximates leave-one-location-out CV.
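The identity in (S.7) is the standard Kronecker-vec relation (in row-major convention, $\operatorname{vec}(AXB^T) = (A \otimes B)\operatorname{vec}(X)$), specialized to a symmetric kernel matrix. A quick numerical check, with arbitrary illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
V = 6
# symmetric kernel matrix K_h and a symmetric "raw" covariance S_a
K = rng.random((V, V)); K = (K + K.T) / 2
S = rng.random((V, V)); S = (S + S.T) / 2

lhs = (K @ S @ K).ravel()          # vec{K_h S_a K_h}
rhs = np.kron(K, K) @ S.ravel()    # (K_h kron K_h) vec{S_a}
assert np.allclose(lhs, rhs)
```

This is what makes spatial GCV tractable: the smoother acting on the covariance matrix is itself a linear operator $\hat{K}_h \otimes \hat{K}_h$ on the vectorized matrix.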

A.4 Analytic gradients of the PSD-ACE
Let $K$ be the $V \times V$ matrix with entries $K_{v,v_0} = k_h(v, v_0)$ (not divided by $w(v)$). Let $\hat{\Sigma}_{e,L}$ be the diagonal matrix of smoothed estimates of the measurement error; let $R$ denote the $N \times V$ matrix of residuals for all individuals and all locations; let $R_{11}$ denote the $n_1 \times V$ matrix of residuals for the first individual in each MZ pair; let $R_{12}$ denote the second individual in each MZ pair; let $R_{21}$ denote the $n_2 \times V$ matrix for the first individual in each DZ pair; and let $R_{22}$ denote the second individual in each DZ pair. Then we can derive the partial derivatives of (2.7) of the main manuscript for the elements corresponding to $Z_a$, where $\odot$ is element-wise multiplication and $S^*_a = 2S_0 + 2S_1 + S_2$. With respect to $Z_c$, we have $S^*_c = 2S_0 + 2S_1 + 2S_2$. With respect to $Z_{e,G}$, we have (S.10), which are used in Algorithm 1 of the main manuscript.

A.5 Estimating covariance functions from covariance matrices
In this section, we consider the problem of estimating the covariance functions, which have a continuous domain, from the covariance matrices, defined for the ≈60,000 locations. Towards this, we want to estimate an unsmoothed basis for each covariance function based on $K^{-1}\hat{Z}_a$, $K^{-1}\hat{Z}_c$, and $K^{-1}\hat{Z}_{e,G}$. However, $K$ is typically poorly conditioned. Using the inverse of a poorly conditioned $K$ can lead to ringing artifacts when evaluating the function for some $v \notin \mathcal{V}_0$. Instead, we use a robust inverse as follows. We take the eigenvalue decomposition $K = U \Lambda U^T$ and define $K^- = U_Q \Lambda_Q^{-1} U_Q^T$, where $\Lambda_Q$ denotes the diagonal matrix of the $Q$ eigenvalues meeting the criterion $\lambda_q > 0.0001$. Then we set $\hat{W}_a = K^- \hat{Z}_a$, $\hat{W}_c = K^- \hat{Z}_c$, and $\hat{W}_{e,G} = K^- \hat{Z}_{e,G}$, and estimate the covariance functions for any $v, v' \in \mathcal{V}$. As shown in Proposition 1, these estimators are positive semidefinite functions. Moreover, this additional step is only necessary for evaluating the covariance function between unobserved locations or for a data partitioning approach.
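A minimal sketch of this robust (truncated) inverse, assuming a symmetric kernel matrix and the stated eigenvalue threshold of 0.0001; the rank-deficient test matrix is an illustrative stand-in for a poorly conditioned $K$:

```python
import numpy as np

def robust_inverse(K, tol=1e-4):
    """Truncated-eigendecomposition inverse: K^- = U_Q diag(1/lam_Q) U_Q^T,
    keeping only eigenvalues lam_q > tol."""
    lam, U = np.linalg.eigh(K)            # K = U diag(lam) U^T for symmetric K
    keep = lam > tol
    return (U[:, keep] / lam[keep]) @ U[:, keep].T

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3))
K = A @ A.T                               # rank-3, hence poorly conditioned, 8x8
Kinv = robust_inverse(K)

# Moore-Penrose-style properties hold on the retained subspace
assert np.allclose(Kinv @ K @ Kinv, Kinv)
assert np.allclose(K @ Kinv @ K, K)
```

Unlike a plain inverse, the truncation discards directions with near-zero eigenvalues, which is what suppresses the ringing artifacts when evaluating off-grid locations.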
Proposition 1. Consider a domain $\mathcal{V}$ and locations $\{v_1, \ldots, v_V\} \subset \mathcal{V}$. Let $k(v, v')$ be a kernel for any $v \in \mathcal{V}$, and define $F$ accordingly. Then
$$a^T F a \ge 0. \tag{S.15}$$
Note that (S.15) is the definition of a positive semidefinite function $f(v, v')$, e.g., p. 46, (2.29) of Hsing and Eubank (2015).
Proof. Let $v^* \in \mathcal{V}^m$ be given. Let $K^*$ be the $V \times m$ matrix with columns $k(v^*_i)$. Then for any $a \in \mathbb{R}^m$, we have

A.6 MWLE
Throughout this study, we use the biweight kernel, $k(u) = \tfrac{15}{16}(1 - u^2)^2$ for $|u| \le 1$ and $0$ otherwise. Next, we summarize the maximum weighted likelihood estimate (MWLE) proposed in Luo et al. (2018). Suppose we have obtained $\hat{\beta}_{v_0}$ for all $v_0 \in \mathcal{V}_0$, e.g., point-wise MLE or OLS estimates. Let $R(v_0)$ be the fixed-effect residuals at location $v_0$ for all subjects. Let $d(v, v_0)$ denote the geodesic distance between two vertices. Let $\sigma^2(v) = \{\sigma^2_a(v), \sigma^2_c(v), \sigma^2_e(v)\}$. They define a weighted log-likelihood (S.17). Five-fold leave-family-out cross-validation is used to select $h$, in which the residuals are treated as data.
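The formula above is the standard biweight (quartic) kernel; a small sketch, where the vectorized form and the numerical integration check are our own illustration:

```python
import numpy as np

def biweight(u):
    """Biweight (quartic) kernel: (15/16)(1 - u^2)^2 on |u| <= 1, else 0."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= 1, 15.0 / 16.0 * (1.0 - u ** 2) ** 2, 0.0)

def k_h(d, h):
    """Kernel weight for a geodesic distance d and bandwidth h."""
    return biweight(d / h)

assert biweight(0.0) == 15.0 / 16.0      # maximum at the focal vertex
assert biweight(1.0) == 0.0              # compact support: zero beyond one bandwidth
u = np.linspace(-1.0, 1.0, 100001)
du = u[1] - u[0]
assert abs(biweight(u).sum() * du - 1.0) < 1e-4   # integrates to 1 on [-1, 1]
```

The compact support is convenient here: vertices farther than one bandwidth receive exactly zero weight, so the kernel matrices are sparse.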

B Additional simulation results
Detailed results containing the additional model estimation methods (S-SW, PSD-SW, PSD-ACE-O) are included in Table S.1. We also present normalized variances of the ISE.

[Figure: boxplots of the normalized ISE by method (S-FSEM, PSD-FSEM, PSD-ACE) for $\hat{\Sigma}_{e,G}$, $\hat{\Sigma}_a$, and $\hat{\Sigma}_c$.]

…myelination and functional connectivity during registration (Robinson et al., 2014) and alignment to the 164k fs LR standard surface mesh, in which the left and right hemispheres are in geographic correspondence (Van Essen et al., 2012). The registered surface was then transformed back to each subject's surface space by applying the inverse of the subject-to-atlas mapping. The pial surface (the outer boundary of the cortical surface) was also estimated, with the recon-all pipeline customized to use both T1w and T2w images. Cortical thickness is then equal to the distance between the pial and white matter surfaces. We utilize the lower-resolution version of the cortical thickness data, in which each hemisphere of the brain is represented by a sphere with 32,492 vertices at approximately 2 mm spacing, labeled 32k fs LR in the HCP data. Data for a single subject are depicted in Figure S.8.

C.2 Selection of model covariates
We considered controlling for the following covariates: age, gender, handedness, height, weight, body mass index, and total intracranial volume (TIV). We inspected the p-values from the Wald statistics of point-wise MLE estimates from the 29,716 vertices (excluding the medial wall) in the right hemisphere and found that the p-values for handedness, height, weight, and body mass index were roughly uniformly distributed, whereas the p-values for age, gender, and TIV were right-skewed (Figure S.9). Consequently, we included age, gender, and TIV in all subsequent models.

C.3 Geodesic distance
We calculate geodesic distance along the 32k fs LR spherical template in MSMAll, which represents a common space in which cortical thickness at a given vertex represents the same location across subjects relative to the aligned cortical folding patterns of the sulci and gyri (as used in FreeSurfer) and the patterns of myelination and functional connectivity (the additional information used in MSMAll). These distances have been used in previous spatial studies (Bernal-Rusiel et al., 2013; Risk et al., 2016). There are two alternative approaches: 1) use the geodesic distance along the group-averaged cortex rather than the 32k fs LR spheres, or 2) use the registered individual surfaces wherein vertices are in correspondence but distances between vertices vary by subject. Regarding the first alternative, using a group template is not advised because folding patterns are averaged, resulting in a smoothed surface with distances affected in undesirable ways. Regarding the second alternative, it is unclear whether using the individual-specific distances would improve or degrade performance, and it creates additional computational and mathematical challenges. The subject-specific distances would require a separate kernel matrix for every subject, causing scalability issues that prevent the application of Algorithm 1. Additionally, we define our covariance function for a common domain $\mathcal{V}$. In a subject-specific approach, the nodes $v_1, \ldots, v_V$ are common across subjects but the map to $\mathbb{R}^3$ is subject-specific. Then conceptually we would be attempting to define a common covariance function for different manifolds. Note the eigenvalues/vectors were estimated using the Matlab function eigs to calculate a reduced number of eigenvalues; a naive implementation of eig for large datasets may be impracticable.
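On a spherical template, the geodesic between two vertices is the great-circle arc. The following is a minimal sketch of that computation (not the implementation used for the 32k fs LR template); the vertex coordinates are illustrative:

```python
import numpy as np

def sphere_arclength(p, q, radius=1.0):
    """Great-circle (geodesic) distance between points p and q on a sphere
    centered at the origin."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    cosang = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q))
    # clip guards against floating-point values just outside [-1, 1]
    return radius * np.arccos(np.clip(cosang, -1.0, 1.0))

# a quarter great circle on the unit sphere is pi/2 (90 degrees of arc)
d = sphere_arclength([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
assert abs(d - np.pi / 2) < 1e-12
```

On a unit sphere the arc length equals the central angle, which is why bandwidths on the spherical template can be reported in degrees of arc.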
We also assessed the sensitivity of PSD-ACE to the selected ranks. We re-ran Step 6 with the ranks reduced by 10 for each covariance (i.e., the ranks were set to 219, 219, and 933 for the additive genetic, common environmental, and unique environmental covariances). There were negligible changes in heritability (max difference = 0.0006), additive genetic correlations (max diff = 0.0017), common environmental correlations (max diff = 0.0009), and unique environmental correlations (max diff = 0.0035). The results appear to be relatively robust to the rank selection. This is expected because the PSD-ACE is initiated from the PSD-FSEM, in which the eigenvectors are ordered by their variance, such that excluding the smallest eigenvalue/eigenvector pairs should have negligible impacts.

C.5 Assessing convergence
We ran 1000 iterations of Algorithm 1, which took 25 hours. The magnitude of the gradient after 1000 iterations was 0.02% of the magnitude in the first iteration; for the additional iterations, this decreased to 0.01%. With the additional iterations, there were negligible changes in the heritability estimates (max difference across all vertices = 0.0048), additive genetic correlations (max diff in correlation = 0.0007), common environmental correlations (max diff = 0.0008), and unique environmental correlations (max diff = 0.0082). A visual examination of the heritability results suggests they are nearly indistinguishable (Figure S.12). Thus we used the results from 1000 iterations.

Algorithm 1: Covariance estimation
Inputs: the $N \times V$ data matrix $Y$ and design matrix $X$; tolerance $\epsilon$, e.g., 0.0001; learning rate $\lambda$, e.g., 0.1.
Result: $\hat{\Sigma}^{PSD\text{-}ACE}_a$, $\hat{\Sigma}^{PSD\text{-}ACE}_c$, and $\hat{\Sigma}^{PSD\text{-}ACE}_{e,G}$.
1. Estimate the measurement error, $\hat{\sigma}_{e,L}$, and residuals, $R$, using SMLE with input $Y$ and $X$. (Steps 1–…)
2. …; bandwidths are chosen using GCV. These bandwidths will be used in subsequent estimators. (Step 4)
3. Choose the rank $d_a$ based on the scree plot for $\hat{\Sigma}^{S\text{-}SW}_a$. Use the selected eigenvalue/eigenvector pairs to generate an initial value $Z$…
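To illustrate the central device of Algorithm 1, the sketch below fits a rank-constrained, PSD-by-construction covariance via gradient descent on a factor $Z$ with $\Sigma = ZZ^T$. The simple least-squares objective, dimensions, seed, and learning rate are illustrative assumptions, not the objective (2.7) of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
V, rank = 20, 4

# hypothetical "target" playing the role of an unconstrained (possibly indefinite) estimate
B = rng.standard_normal((V, rank))
target = B @ B.T + 0.1 * rng.standard_normal((V, V))
target = (target + target.T) / 2

# parametrize Sigma = Z Z^T so the estimate is PSD by construction
Z = 0.1 * rng.standard_normal((V, rank))
lr = 0.001
for _ in range(5000):
    resid = Z @ Z.T - target
    # d/dZ ||Z Z^T - T||_F^2 = 4 (Z Z^T - T) Z for symmetric T
    Z -= lr * (4.0 * resid @ Z)

Sigma = Z @ Z.T
assert np.linalg.eigvalsh(Sigma).min() >= -1e-8   # PSD up to floating point
```

Because positive semidefiniteness is built into the parametrization, no eigenvalue truncation is needed after the fact; the rank of $\Sigma$ is bounded by the number of columns of $Z$.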

We use $\mathrm{ISE} = \|\hat{\Sigma} - \Sigma\|_F^2 / V^2$ and $\mathrm{MISE} = \sum_{t=1}^{T} \mathrm{ISE}^{(t)} / T$ as measures of error, where $t$ denotes the $t$th simulation. We also present normalized versions in Appendix B. We compare three estimators of the covariance functions: (i) the symmetric FSEM proposed in Luo et al. (2018), S-FSEM, defined in Appendix A.2; (ii) the PSD analogue of S-FSEM based on truncating to positive eigenvalues (PSD-FSEM); and (iii) the PSD-ACE estimator. We also compared three additional estimators of the covariance functions: (iv) the symmetric sandwich estimator (S-SW) defined in Appendix A.3 and used in initialization in Step 4 of the PSD-ACE; (v) the PSD-SW based on truncating to positive eigenvalues; and (vi) the PSD-ACE oracle estimator (PSD-ACE-O), based on Algorithm 1 but using the true ranks. The S-SW results are very similar to S-FSEM, and the PSD-SW results are very similar to PSD-FSEM; additionally, the PSD-ACE-O results are very similar to PSD-ACE (Tables S.1 and S.2 and Figures S.1-S.4 of the Appendix).
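The error measures can be sketched as follows; the squared-Frobenius form and the $1/V^2$ normalization are our reading of the definitions in this section:

```python
import numpy as np

def ise(Sigma_hat, Sigma):
    """ISE = ||Sigma_hat - Sigma||_F^2 / V^2 (assumed form of the error measure)."""
    V = Sigma.shape[0]
    return np.sum((Sigma_hat - Sigma) ** 2) / V ** 2

def mise(Sigma_hats, Sigma):
    """MISE = average of ISE over simulations t = 1, ..., T."""
    return np.mean([ise(S, Sigma) for S in Sigma_hats])

Sigma = np.eye(3)
hats = [np.eye(3), 2 * np.eye(3)]
# ISE for 2I vs I: 3 / 9 = 1/3; MISE = (0 + 1/3) / 2 = 1/6
assert abs(mise(hats, Sigma) - 1.0 / 6.0) < 1e-12
```

The $V^2$ normalization makes ISE comparable across discretization resolutions, which is why Table S.1 reports values multiplied back by $V^2$.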
Figure 1, Figure S.1 of the Appendix). In all individual simulations, PSD-ACE has a lower ISE than S-FSEM and PSD-FSEM for all covariance matrices. For $\Sigma_a$ and $\Sigma_c$, S-FSEM has the lowest bias but largest variance, whereas PSD-FSEM has high bias but smaller variance, while PSD-ACE has some bias, but less than PSD-FSEM, and dramatically lower variance (Figure S.1a and b). A different pattern emerges for $\Sigma_{e,G}$. The S-FSEM version of this estimator is PSD in most simulations. Consequently, the S-FSEM and PSD-FSEM versions are very similar. For $\Sigma_{e,G}$, PSD-ACE again has the best MISE, driven by lower variance, but now has more bias relative to PSD-FSEM. To visualize the bias, we can examine plots of the covariance between a focal vertex, i.e., a seed, and the 1,002 vertices of the discretized spatial domain, which corresponds to a row of the covariance matrix of the $V$ locations. PSD-FSEM tends to inflate differences between locations, in particular having higher values near the seed, whereas the constrained estimates tend to shrink the differences towards zero (Figure 2, Figure S.2 of the Appendix).

Figure 1: MISE of covariance functions for the S-FSEM, PSD-FSEM (truncated eigenvalues of S-FSEM), and PSD-ACE. Panels d, e, and f depict boxplots of the ISE from 1,000 simulations. See Figure S.1 in the Appendix for a normalized version of this figure.

Figure 3: MISE of $\sigma^2_a(v)$, $\sigma^2_c(v)$, $\sigma^2_{e,G}$, and heritability across $V = 1{,}002$ locations. (See Figures S.4-S.7 for a visualization of the bias across space.) In panel c), point-wise MLE and point-wise MWLE estimates of $\sigma^2_e(v) = \sigma^2_{e,G}(v) + \sigma^2_{e,L}(v)$ are presented because these methods do not separate $\sigma^2_{e,G}$, which leads to bias. Panel e) includes boxplots of the ISE from 1,000 simulations, where the y-axis is on the log10 scale due to large differences between methods. See Figure S.3 in the Appendix for a normalized version of this figure.
details, see Appendix C.4. An inspection of the scree plots of the eigenvalues clearly indicates that the ranks of the covariance functions are determined by the number of twin pairs and individuals: for $\hat{\Sigma}^{SW}_a$ and $\hat{\Sigma}^{SW}_c$, the rank equals the number of twin families (229); for $\hat{\Sigma}^{SW}_{e,G}$, the rank equals $N - n_1$ (943) (Figure S.10 of the Appendix).
Vertex 180 (bottom right) and vertex 239 (bottom left), both located in the isthmus cingulate, have very different local correlation patterns.

Figure 5: Genetic correlation function evaluated at selected seeds, as estimated using PSD-ACE. Surface vertex indices from the right cortex of the fs LR template, clockwise from top left: 1577, 161, 180, and 239. An animation depicting hundreds of seeds is available in the Supplementary Materials.

and $S_{e,G} = S_0 - S_1$. The estimators take the form $\hat{\Sigma}^{S\text{-}SW}_{e,G} = \hat{K}_h S_{e,G} \hat{K}_h^T$, where S-SW is an abbreviation for symmetric sandwich. We define the PSD-SW by calculating the EVD and truncating to the positive eigenvalue/eigenvector pairs.
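The truncation step that defines PSD-SW (and, analogously, PSD-FSEM) is an eigenvalue thresholding of the symmetric estimate; a minimal illustration:

```python
import numpy as np

def truncate_to_psd(S):
    """Project a symmetric estimate onto the PSD cone by keeping only the
    positive eigenvalue/eigenvector pairs."""
    lam, U = np.linalg.eigh(S)
    keep = lam > 0
    return (U[:, keep] * lam[keep]) @ U[:, keep].T

rng = np.random.default_rng(4)
S = rng.standard_normal((6, 6))
S = (S + S.T) / 2          # symmetric but generally indefinite
S_psd = truncate_to_psd(S)
assert np.linalg.eigvalsh(S_psd).min() >= -1e-10
```

This truncation is in fact the Frobenius-nearest PSD matrix to $S$, but, as the simulations in this paper show, nearest in matrix norm does not imply small bias in the resulting covariance estimates.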
Figure S.1: Normalized MISE of covariance functions for the S-FSEM, PSD-FSEM (truncated eigenvalues of S-FSEM), and PSD-ACE. Panels d, e, and f depict boxplots of the normalized ISE from 1,000 simulations.

Figure S.2: Visualizing bias in covariance estimation at a randomly selected seed location. Plotted is the average across 1,000 simulations of $\hat{\Sigma}^{(t)}_a(\cdot, v)$, where $t$ denotes the simulation and the covariance between a fixed location ($v = 888$) and the 1,002 locations is evaluated. Note the same color scale is used in all plots.

Figure S.4: Visualizing bias. Estimates of $\sigma^2_a(v)$ averaged across 1,000 simulations. Note the same color scale is used in all plots.

Figure S.5: Visualizing bias. Estimates of $\sigma^2_c(v)$ averaged across 1,000 simulations. Note the same color scale is used in all plots.

Figure S.6: Visualizing bias in the estimation of heritability. Estimates of $h^2(v)$ averaged across 1,000 simulations. Note the same color scale is used in all plots.

Figure S.7: Visualizing bias in the estimation of heritability. Estimates of $h^2(v)$ averaged across 1,000 simulations. Note a different color scale is used in each plot.

Figure S.8: Example cortical thickness data from subject 101915. Top four images: data on the subject's inflated surface. The gray regions represent the medial wall. Figures on the left represent the left hemisphere. The top row depicts the lateral view. The second row depicts the medial views. Bottom: data on the 32k fs LR spherical template.

C.4 Bandwidth selection, rank selection, and sensitivity to rank selection

In the MWLE analysis, 5-fold CV selected a bandwidth of 1.8 arclengths (in degrees). The bandwidth chosen in Step 2 of PSD-ACE was 1.4 for each of the three variance parameters; the bandwidth chosen in Step 3 for $\hat{\Sigma}_G$ equaled 1.3; and the bandwidths chosen for $\hat{\Sigma}_a$, $\hat{\Sigma}_c$, and $\hat{\Sigma}_{e,G}$ were 1.3 (Figure S.10). Note the triangles on the 32k fs LR tessellation of the sphere are not equal in area. A typical vertex has two closest neighbors at a distance of approximately 1.15, then two neighbors at 1.2, then two more neighbors near 1.3, which compose the set of vertices of the triangles containing the focal vertex; the next closest distance is approximately 2, which represents a vertex in a set of triangles that do not contain the focal vertex. For bw = 1.3, the average weights are 0.878, 0.044, 0.044, 0.015, 0.015, 0.002, and 0.002 for the focal and neighboring vertices.
Figure S.9: Variable selection in the ACE model. Covariates handedness, height, weight, and BMI were excluded from subsequent analysis because the distribution of their p-values across all vertices in the right hemisphere (V = 29,716) was approximately uniform, whereas gender, age, and total intracranial volume (TIV) were controlled for in all estimates of heritability.

Figure S.10: Bandwidth selection using GCV to determine the smoothing of the MLE estimates in the HCP data (Step 2) (top panels), the smooth Gaussian process variance $\Sigma_G$ (Step 3) (bottom left), and the covariance functions $\Sigma_a$, $\Sigma_c$, and $\Sigma_{e,G}$ in Step 4 (bottom right).

Figure S.11: Eigenvalues of $\hat{\Sigma}^{SW}_a$, $\hat{\Sigma}^{SW}_c$, and $\hat{\Sigma}^{SW}_{e,G}$ for the HCP data. The red vertical line in a), b), d), and e) corresponds to the number of twin pairs. The green line in c) and f) corresponds to the total number of individuals minus the number of monozygotic twin families. Since we used the power algorithm to iteratively estimate the eigenvalues, we stopped the algorithm when the eigenvalue was numerically zero.

Figure S.12: Heritabilities estimated from the HCP data using 1000 iterations (left, as in Figure 4 of the main manuscript) and an additional 600 iterations (right), demonstrating convergence.

Figure S.13: Heritability from the PSD-ACE. The blue ovals correspond to the central sulci, the green ovals correspond to medial areas dorsal to the corpus callosum, and the fuchsia ovals correspond to the parahippocampal gyrus and entorhinal cortex.

Figure S.14: Correlation between a middle frontal seed vertex in the left hemisphere (black dot; surface vertex for the left cortex of the fs LR template: 30,456) and all other locations. The contralateral location is highlighted in green. The right images are thresholded at correlations whose absolute values are greater than 0.15.

Table S.1: Bias, variance, and MISE of the covariance functions multiplied by $V^2$, where $V = 1{,}002$.
Table S.2: Squared bias, variance, and MISE of the variance and heritability estimates multiplied by $V = 1{,}002$. On a side note, SMLE with GCV (approximating leave-one-location-out) had lower overall MISE than MWLE with 5-fold CV on family ID (leaving surfaces out). In general, GCV tended to choose a larger bandwidth than 5-fold CV, resulting in lower MISE, in addition to computational savings.