Efficient penalized generalized linear mixed models for variable selection and genetic risk prediction in high-dimensional data

Abstract Motivation Sparse regularized regression methods are now widely used in genome-wide association studies (GWAS) to address the multiple testing burden that limits discovery of potentially important predictors. Linear mixed models (LMMs) have become an attractive alternative to principal components (PCs) adjustment to account for population structure and relatedness in high-dimensional penalized models. However, their use in binary trait GWAS relies on the invalid assumption that the residual variance does not depend on the estimated regression coefficients. Moreover, LMMs use a single spectral decomposition of the covariance matrix of the responses, which is no longer possible in generalized linear mixed models (GLMMs). Results We introduce a new method called pglmm, a penalized GLMM that simultaneously selects genetic markers and estimates their effects, accounting for between-individual correlations and the binary nature of the trait. We develop a computationally efficient algorithm based on penalized quasi-likelihood estimation that allows regularized mixed models to scale to high-dimensional binary trait GWAS. We show through simulations that when the dimensionality of the relatedness matrix is high, penalized LMM and logistic regression with PC adjustment fail to select important predictors and have inferior prediction accuracy compared to pglmm. Further, we demonstrate through the analysis of two polygenic binary traits in a subset of 6731 related individuals from the UK Biobank data with 320K SNPs that our method can achieve higher predictive performance, while also selecting fewer predictors than a sparse regularized logistic lasso with PC adjustment. Availability and implementation Our Julia package PenalizedGLMM.jl is publicly available on GitHub: https://github.com/julstpierre/PenalizedGLMM. Supplementary information Supplementary data are available at Bioinformatics online.


Introduction
Genome-wide association studies (GWAS) have led to the identification of hundreds of common genetic variants, or single nucleotide polymorphisms (SNPs), associated with complex traits (Visscher et al., 2017) and are typically conducted by testing each SNP for association independently. However, these studies are plagued by the multiple testing burden that limits discovery of potentially important predictors. Moreover, GWAS have brought to light the problem of missing heritability, that is, identified variants only explain a low fraction of the total observed variability for the traits under study (Manolio et al., 2009). Multivariable regression methods, on the other hand, simultaneously fit many SNPs in a single model and are exempt from the multiple testing burden. In both simulations and analyses of high-dimensional data, sparse regularized logistic models have been shown to achieve lower false-positive rates and higher precision than methods based on univariable GWAS summary statistics in case-control studies (Hoggart et al., 2008; Privé et al., 2019). Contrary to univariable methods, which implicitly assume that SNPs are independent, a regularized model makes use of the linkage disequilibrium (LD) structure between different loci, assigning weights to SNPs based on their relative importance after accounting for all other SNPs already in the model.
Confounding due to population structure or subject relatedness is another major issue in genetic association studies. Modern large-scale cohorts will often include participants from different ethnic groups as well as admixed individuals, that is, subjects with individual-specific proportions of ancestries, or individuals with known or unknown familial relatedness, the latter known as cryptic relatedness (Sul et al., 2018).
Confounding arises because allele frequencies can differ greatly between individuals who do not share similar ancestry. When ignored, population structure and subject relatedness can decrease power and lead to spurious associations (Astle and Balding, 2009; Price et al., 2010). Common practice is still to drop samples by applying filters for relatedness or genetic ancestry, which can reduce the sample size by nearly 30% (Loh et al., 2018) in the full UK Biobank data set (Bycroft et al., 2018).
Principal component analysis (PCA) can control for the confounding effect of population structure by including the top eigenvectors of a genetic similarity matrix (GSM) as fixed effects in the regression model (Price et al., 2006). Because admixture and population structure are low-dimensional fixed-effect processes, they can be correctly accounted for by a relatively small number of PCs (e.g. 10) (Astle and Balding, 2009; Novembre and Stephens, 2008). However, using too few PCs can result in residual bias leading to false positives, while adding too many PCs as covariates can lead to a loss of efficiency (Zhao et al., 2018). Alternatively, using mixed models (MMs), one can model population structure and/or closer relatedness by including a polygenic random effect with variance-covariance structure proportional to the GSM (Yu et al., 2005). Indeed, kinship is a high-dimensional process that cannot be fully captured by a few PCs (Hoffman, 2013); it would require the inclusion of too many PCs as covariates relative to the sample size. Hence, while both PCA and MMs share the same underlying model, MMs are more robust in the sense that they do not require distinguishing between the different types of confounders (Price et al., 2010). Moreover, MMs alleviate the need to determine the optimal number of PCs to retain in the model as fixed effects.
Several authors have proposed combining penalized quasi-likelihood (PQL) estimation with sparsity-inducing regularization to perform selection of fixed and/or random effects in generalized linear mixed models (GLMMs) (Groll and Tutz, 2014; Hui et al., 2017). However, none of these methods are currently scalable to modern large-scale genome-wide data, nor can they directly incorporate relatedness structure through the use of a kinship matrix. Indeed, the computational efficiency of recent multivariable methods for high-dimensional MMs relies on performing a spectral decomposition of the covariance matrix to rotate the phenotype and design matrix such that the transformed data become uncorrelated (Bhatnagar et al., 2020b; Rakitsch et al., 2012). These methods are typically restricted to linear models, since in GLMMs it is no longer possible to perform a single spectral decomposition to rotate the phenotype and design matrix: the covariance matrix depends on the sample weights, which in turn depend on the estimated regression coefficients that are being iteratively updated. This has limited the application of high-dimensional MMs in the analysis of binary traits in genetic association studies.
In this paper, we introduce a new method called pglmm that simultaneously selects variables and estimates their effects, accounting for between-individual correlations and the binary nature of the trait.
We develop a scalable algorithm based on PQL estimation which makes it possible, for the first time, to fit penalized GLMMs on high-dimensional GWAS data. First, to speed up computation, we estimate the variance components and dispersion parameter of the model under the null hypothesis of no genetic effect. Second, we use an upper bound for the inverse variance-covariance matrix in order to perform a single spectral decomposition of the GSM and greatly reduce memory usage. Finally, we implement an efficient block coordinate descent algorithm to find the optimal estimates for the fixed and random effects parameters. Our method is implemented in an open-source Julia (Bezanson et al., 2017) package called PenalizedGLMM.jl, freely available at https://github.com/julstpierre/PenalizedGLMM. The rest of this paper is structured as follows. In Section 2 we present our model and describe the block coordinate gradient descent algorithm used to estimate the model parameters. We also discuss several approaches to selecting the optimal tuning parameter in regularized models, and we detail how predictions are obtained in GLMs with PC adjustment versus our proposed mixed model. In Section 3, we show through simulations that both an LMM and a logistic model with PC adjustment fail to correctly select important predictors and estimate their effects when the dimensionality of the kinship matrix is high. Further, we demonstrate through the analysis of two polygenic binary traits in the UKBB data that our method achieves higher predictive performance, while also selecting consistently fewer predictors than a logistic lasso with PC adjustment. We finish with a discussion of our results, some limitations and future directions in Section 4.

Model
We consider the following GLMM, for i = 1, ..., n:

$g(\mu_i) = X_i \alpha + G_i \gamma + b_i, \quad (1)$

where $\mu_i = E(y_i | X_i, G_i, b_i)$, $X_i$ is a $1 \times m$ row vector of covariates for subject $i$, $\alpha$ is an $m \times 1$ column vector of fixed covariate effects including the intercept, $G_i$ is a $1 \times p$ row vector of genotypes for subject $i$ taking values $\{0, 1, 2\}$ as the number of copies of the minor allele, and $\gamma$ is a $p \times 1$ column vector of fixed additive genotype effects. We assume that $b = (b_1, ..., b_n)^\top \sim N(0, \sum_{s=1}^S \tau_s V_s)$ is an $n \times 1$ column vector of random effects, $\tau = (\tau_1, ..., \tau_S)$ are variance component parameters, and the $V_s$ are known $n \times n$ relatedness matrices. The phenotypes $y_i$ are assumed to be conditionally independent and identically distributed given $(X_i, G_i, b)$ and follow any exponential family distribution with canonical link function $g(\cdot)$, mean $E(y_i | b) = \mu_i$ and variance $\mathrm{Var}(y_i | b) = \phi a_i^{-1} \nu(\mu_i)$, where $\phi$ is a dispersion parameter, the $a_i$ are known weights and $\nu(\cdot)$ is the variance function. In order to estimate the parameters of interest and perform variable selection, we need to use an approximation method to obtain a closed analytical form for the marginal likelihood of model (1). Following the derivation of Chen et al. (2016), we propose to fit (1) using a penalized quasi-likelihood (PQL) method, from which the log integrated quasi-likelihood function is equal to

$ql(\alpha, \gamma, \phi, \tau) \approx -\frac{1}{2} \log \left| \sum_{s=1}^S \tau_s V_s W + I_n \right| + \sum_{i=1}^n ql_i(\alpha, \gamma \,|\, \hat{b}) - \frac{1}{2} \hat{b}^\top \Big( \sum_{s=1}^S \tau_s V_s \Big)^{-1} \hat{b}, \quad (2)$

where $W = \mathrm{diag}\big\{ a_i / \big( \phi \nu(\mu_i) [g'(\mu_i)]^2 \big) \big\}$ is a diagonal matrix containing weights for each observation, $ql_i(\alpha, \gamma | b) = \int_{y_i}^{\mu_i} \frac{a_i (y_i - \mu)}{\phi \nu(\mu)} d\mu$ is the quasi-likelihood for the $i$th individual given the random effects $b$, and $\hat{b}$ is the solution which maximizes (2).
In typical genome-wide studies, the number of predictors is much greater than the number of observations (p > n), and the parameter vector γ becomes underdetermined when modelling all SNPs jointly. Thus, we propose to add a lasso regularization term (Tibshirani, 1996) to the negative quasi-likelihood function in (2) to seek a sparse subset of γ that gives an adequate fit to the data. Because ql(α, γ, φ, τ) is a nonconvex loss function, we propose a two-step estimation method to reduce the computational complexity.
First, we obtain the variance component estimates $\hat{\phi}$ and $\hat{\tau}$ under the null hypothesis of no genetic effect ($\gamma = 0$) using the AI-REML algorithm (Chen et al., 2016; Gilmour et al., 1995) detailed in Appendix A of the supplementary material. Assuming that the weights in $W$ vary slowly with the conditional mean, we drop the first term in (2) (Breslow and Clayton, 1993) and define the following objective function, which we seek to minimize with respect to $(\alpha, \gamma, b)$:

$Q_\lambda(\alpha, \gamma, b) = -\sum_{i=1}^n ql_i(\alpha, \gamma \,|\, b) + \frac{1}{2} b^\top \Big( \sum_{s=1}^S \hat{\tau}_s V_s \Big)^{-1} b + \lambda \sum_{j=1}^p v_j |\gamma_j|, \quad (3)$

where $\lambda$ is a nonnegative regularization parameter, and $v_j$ is a penalty factor for the $j$th predictor.
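The lasso-penalized least-squares update at the core of such coordinate-wise schemes can be sketched as follows (an illustrative Python re-implementation of generic weighted-lasso coordinate descent, not the PenalizedGLMM.jl code; `lasso_cd` and its arguments are hypothetical names):

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, v, n_iter=200):
    """Cyclic coordinate descent for the weighted-lasso least-squares problem:
    minimize (1/2n) * ||y - X @ beta||^2 + lam * sum_j v[j] * |beta[j]|."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()            # residual y - X @ beta (beta starts at 0)
    col_sq = (X ** 2).sum(axis=0) / n     # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            r = r + X[:, j] * beta[j]     # partial residual excluding coordinate j
            z = X[:, j] @ r / n
            beta[j] = soft_threshold(z, lam * v[j]) / col_sq[j]
            r = r - X[:, j] * beta[j]     # restore full residual
    return beta
```

Each coordinate update has a closed form because the lasso penalty is separable; this is the same structural idea exploited by the block CGD algorithm, which additionally handles the random-effect block.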
In Appendix B, we detail our proposed general-purpose block coordinate gradient descent (CGD) algorithm to solve (3) and obtain regularized PQL estimates for $\alpha$, $\gamma$ and $b$. Briefly, our algorithm is equivalent to iteratively solving the two penalized generalized least squares (GLS) problems

$\hat{b} = \arg\min_b \; \tfrac{1}{2} (\tilde{Y} - X\hat{\beta} - b)^\top W (\tilde{Y} - X\hat{\beta} - b) + \tfrac{1}{2} b^\top \Big( \sum_{s=1}^S \hat{\tau}_s V_s \Big)^{-1} b$

and

$\hat{\beta} = \arg\min_\beta \; \tfrac{1}{2} (\tilde{Y} - X\beta - \hat{b})^\top W (\tilde{Y} - X\beta - \hat{b}) + \lambda \sum_j v_j |\beta_j|,$

where $\Sigma = W^{-1} + \sum_{s=1}^S \hat{\tau}_s V_s$ is the covariance matrix of the working response vector $\tilde{Y}$, $X = [X; G]$ and $\beta = (\alpha^\top, \gamma^\top)^\top$. We use the spectral decomposition of $\Sigma$ to rotate $\tilde{Y}$, $X$ and $b$ such that the transformed data become uncorrelated. For binary data, because the covariance matrix $\Sigma$ depends on the sample weights $W$, we use an upper bound on $\Sigma^{-1}$ to ensure a single spectral decomposition is performed (Böhning and Lindsay, 1988). By cycling through the coordinates and minimizing the objective function with respect to one parameter at a time, $b$ can be estimated by fitting a generalized ridge-like model with a diagonal penalty matrix equal to the inverse of the eigenvalues of $\sum_{s=1}^S \hat{\tau}_s V_s$. Then, conditional on $\hat{b}$, $\beta$ is estimated by solving a weighted least squares (WLS) problem with a lasso regularization term.
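The bound-based whitening step can be sketched as follows (a minimal Python illustration with a single variance component; `rotation` and `whiten` are hypothetical helper names, not part of PenalizedGLMM.jl):

```python
import numpy as np

# For logistic models the working weights w_i = mu_i * (1 - mu_i) satisfy
# w_i <= 1/4 (Böhning and Lindsay, 1988), so replacing W by I/4 yields a
# covariance bound Sigma_ub = 4*I + tau*V that no longer depends on the
# iteratively updated coefficients. A single eigendecomposition of V then
# suffices to whiten the working data at every iteration.

def rotation(V, tau):
    """Eigendecompose V once; return U and the eigenvalues of Sigma_ub."""
    evals, U = np.linalg.eigh(V)
    return U, 4.0 + tau * evals

def whiten(U, d, M):
    """Return Sigma_ub^{-1/2} @ M, decorrelating the rows of M."""
    return (U / np.sqrt(d)) @ (U.T @ M)
```

Because the bound is fixed across iterations, the expensive eigendecomposition is paid only once, which is the key to scaling the algorithm to large n.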
All calculations and algorithmic steps are detailed in Appendix B.

Model selection
Approaches to selecting the optimal tuning parameter in regularized models are of primary interest since, in real data analysis, the underlying true model is unknown. A popular strategy is to select the value of the tuning parameter that minimizes out-of-sample prediction error, e.g., by cross-validation (CV), which is asymptotically equivalent to the Akaike information criterion (AIC) (Akaike, 1998; Yang, 2005).
While conceptually attractive, CV becomes computationally expensive for very high-dimensional data. Moreover, in studies where the proportion of related subjects is substantial, whether through known or cryptic relatedness, the CV prediction error is no longer an unbiased estimator of the generalization error (Rabinowicz and Rosset, 2020). Through simulation studies and real data analysis, Wang et al. (2020) found that LD and minor allele frequency (MAF) differences between ancestries could explain between 70 and 80% of the loss of relative accuracy of European-based prediction models in individuals of African ancestry for traits like body mass index and type 2 diabetes. Thus, there is no clear approach to how multiple admixed and/or similar populations should be split when using CV to minimize out-of-sample prediction error.
Alternatively, we can use the generalized information criterion (GIC) to choose the optimal value of the tuning parameter $\lambda$, defined as

$\mathrm{GIC}_\lambda = 2\, PQL_\lambda + a_n \cdot df_\lambda,$

where $PQL_\lambda$ is the minimized objective defined in (3), and $df_\lambda = |\{1 \le k \le p : \hat{\beta}_k \neq 0\}| + \dim(\hat{\tau})$ is the number of nonzero fixed-effects coefficients (Zou et al., 2007) plus the number of variance components. Special cases of the GIC include the AIC ($a_n = 2$) and the Bayesian information criterion (BIC) (Schwarz, 1978) ($a_n = \log(n)$).
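Once the minimized objective value and the active set are available, computing the GIC is straightforward; the sketch below takes them as given (function names are illustrative, not from PenalizedGLMM.jl):

```python
import numpy as np

def df_lambda(beta_hat, n_variance_components):
    """Degrees of freedom: number of nonzero fixed-effect coefficients
    (Zou et al., 2007) plus the number of variance components."""
    return int(np.count_nonzero(beta_hat)) + n_variance_components

def gic(pql, df, n, criterion="BIC"):
    """GIC = 2 * PQL + a_n * df, with a_n = 2 for AIC and a_n = log(n)
    for BIC; pql is the minimized objective value."""
    a_n = {"AIC": 2.0, "BIC": np.log(n)}[criterion]
    return 2.0 * pql + a_n * df
```

In practice, one evaluates this over the whole lambda path and keeps the model with the smallest criterion value.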

Prediction
It is often of interest in genetic association studies to make predictions for a new set of individuals, e.g., the genetic risk of developing a disease for a binary response, or the expected outcome in the case of a continuous response. In what follows, we compare how predictions are obtained in pglmm versus a GLM with PC adjustment.

pglmm
For the sake of comparison with the GLM with PC adjustment, we suppose a sampling design in which a testing set of $n_s$ individuals is distinct from the $n$ training subjects used to fit the GLMM (1). We iteratively fit on the training set the working linear mixed model

$\tilde{Y} = X\beta + b + \epsilon, \quad \epsilon \sim N(0, W^{-1}).$

It follows from standard normal theory that the predicted random effects for the testing set are

$\hat{b}_s = \Sigma_{12} \Sigma^{-1} (\tilde{Y} - X\hat{\beta}),$

where $\Sigma_{12} = \hat{\tau}_1 V_{12}$ and $V_{12}$ is the $n_s \times n$ GSM between the testing and training individuals. The estimated mean response $\hat{\mu}_s$ for the testing set is given by

$\hat{\mu}_s = g^{-1}\big( X_s \hat{\beta} + \Sigma_{12} \Sigma^{-1} (\tilde{Y} - X\hat{\beta}) \big), \quad (5)$

where $g(\cdot)$ is the link function. Writing $V = U D U^\top$ for the spectral decomposition of the GSM for the training subjects, $\tilde{U} = UD$ is the $n \times n$ matrix of PCs, and (5) can equivalently be expressed in terms of the training PCs projected onto the testing set.
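A minimal sketch of this prediction step for a binary trait, assuming the working residuals, covariance matrix and between-set GSM are already computed (all names are illustrative, not part of PenalizedGLMM.jl):

```python
import numpy as np

def predict_test_means(eta_fixed_test, resid_train, Sigma, V12, tau1):
    """Predict test-set means for a binary trait (sketch of equation (5)):
    b_s = Sigma_12 @ Sigma^{-1} @ (Ytilde - X @ beta_hat), with
    Sigma_12 = tau1 * V12, then apply the inverse logit link."""
    b_s = (tau1 * V12) @ np.linalg.solve(Sigma, resid_train)
    return 1.0 / (1.0 + np.exp(-(eta_fixed_test + b_s)))
```

Note that the prediction borrows information from related training subjects through V12: an unrelated test subject (a zero row in V12) receives only the fixed-effect prediction.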

GLM with PC adjustment
Another approach to control for population structure and/or subject relatedness is to use the first $r$ columns of $\tilde{U}$ as unpenalized fixed-effect covariates. This leads to the following GLM:

$g(\mu) = X\beta + \tilde{U}_r \delta, \quad (6)$

where $X = [X; G]$ is the $n \times (m + p)$ design matrix for non-genetic and genetic predictors, $\beta \in \mathbb{R}^{m+p}$ is the corresponding sparse vector of fixed effects, $\tilde{U}_r$ is the $n \times r$ design matrix for the first $r$ PCs and $\delta \in \mathbb{R}^r$ is the corresponding vector of fixed effects. Letting $\tilde{Y} = X\beta + \tilde{U}_r \delta + g'(\mu)(y - \mu)$ be the working response vector, one can show that

$\hat{\delta} = (\tilde{U}_r^\top W \tilde{U}_r)^{-1} \tilde{U}_r^\top W (\tilde{Y} - X\hat{\beta}),$

where $W$ is the diagonal matrix of GLM weights. Let $V_{12}$ be the $n_s \times n$ GSM between a testing set of $n_s$ individuals and the $n$ training individuals, such that the projected PCs on the testing subjects are equal to $V_{12} U_r$. Then, the estimated mean response $\hat{\mu}_s$ for the testing set is given by

$\hat{\mu}_s = g^{-1}\big( X_s \hat{\beta} + V_{12} U_r \hat{\delta} \big). \quad (7)$

By comparing (5) and (7), we see that both the GLM with PC adjustment and pglmm use a projection of the training PCs onto the testing set to predict new responses, but with different coefficients for the projected PCs. For the former, the estimated coefficients for the first $r$ projected PCs in (6) are obtained by iteratively solving generalized least squares (GLS) on the partial working residuals $\tilde{Y} - X\hat{\beta}$. For pglmm, the estimated coefficients for all projected PCs are also obtained by iteratively solving GLS on the partial working residuals $\tilde{Y} - X\hat{\beta}$, but with an extra ridge penalty on each coefficient equal to $(\hat{\tau}_1 \Lambda_i)^{-1}$, with $\Lambda_i$ the $i$th eigenvalue of $V$ associated with the $i$th PC.
From a Bayesian point of view, the fixed-effect GLM assumes that each of the $r$ selected PCs has equal prior probability, while the remaining $n - r$ components are given a prior with unit mass at zero (Astle and Balding, 2009). In contrast, the MM puts a Gaussian prior on the regression coefficients, with variances proportional to the corresponding eigenvalues. This implies that the PCs with the largest eigenvalues have a higher prior probability of explaining the phenotype. Hence, pglmm shrinks PC coefficients in a smooth way, while the fixed-effect GLM uses a thresholding approach: the first $r$ predictors, with the largest eigenvalues, are kept intact, and the others are completely removed. The latter approach assumes that the confounding effect from population structure and/or relatedness on the phenotype is fully captured by the first $r$ PCs.
As we show in the simulation results, departure from this assumption leads to less accurate coefficient estimation, lower prediction accuracy and higher variance in predictions.
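The contrast between the two shrinkage schemes can be illustrated numerically; under a unit-information simplification (an assumption made for illustration only), a ridge penalty of $(\tau \Lambda_i)^{-1}$ shrinks the $i$th PC coefficient by the factor $\tau\Lambda_i / (\tau\Lambda_i + 1)$:

```python
import numpy as np

def pc_shrinkage_weights(eigvals, tau, r):
    """Illustrative comparison of PC-coefficient treatment:
    - fixed-effect GLM: first r PCs unshrunk (weight 1), the rest dropped (0);
    - mixed model: smooth ridge factor tau*L_i / (tau*L_i + 1), implied by a
      per-coefficient penalty of (tau*L_i)^{-1} under unit information."""
    glm = np.where(np.arange(len(eigvals)) < r, 1.0, 0.0)
    mm = tau * eigvals / (tau * eigvals + 1.0)
    return glm, mm
```

PCs with large eigenvalues are barely shrunk by the mixed model, while small-eigenvalue PCs are shrunk towards zero smoothly rather than being discarded outright.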

Simulation design
We evaluated the performance of our proposed method against that of a lasso LMM, using the R package ggmix (Bhatnagar et al., 2020a), and a logistic lasso, using the Julia package GLMNet, which wraps the Fortran code from the original R package glmnet (Friedman et al., 2010). We compared glmnet with (glmnetPC) and without the first 10 PCs included in the model. We performed a total of 50 replications for each simulation scenario, drawing genotypes and simulating traits anew each time. Values for all simulation parameters are presented in Table 1.

Simulated genotype from the admixture model
In the first scenario, we studied the performance of all methods for different population structures by simulating random genotypes from the BN-PSD admixture model for 10 or 20 subpopulations with 1D geography or independent subpopulations, using the bnpsd package in R (Ochoa and Storey, 2016a,b).
Sample size was set to n = 2500. We simulated p candidate SNPs and randomly selected p × c of them to be causal. The kinship matrix V and PCs were calculated using a set of 50,000 additional simulated SNPs.
We simulated covariates for age and sex using Normal and Binomial distributions, respectively.

Real genotypes from the UK Biobank data
In the second scenario, we compared the performance of all methods when a high proportion of related individuals is present, using real genotype data from the UK Biobank. We retained a total of 6731 subjects of White British ancestry having an estimated 1st-, 2nd- or 3rd-degree relationship with at least one other individual. We sampled p candidate SNPs among all chromosomes and randomly selected p × c of them to be causal. We used the PCs provided with the data set. These were computed using a set of unrelated samples and high-quality markers pruned to minimise LD (Bycroft et al., 2018). All subjects were then projected onto the principal components using the corresponding loadings. Since the markers used to compute the PCs were potentially sampled as candidate causal markers in our simulations, we included all candidate SNPs in the set of markers used for calculating the kinship matrix V. We simulated age using a Normal distribution and used the sex covariate as provided with the data.

Simulation model
The number of candidate SNPs and the fraction of causal SNPs were set to p = 5000 and c = 0.01, respectively.
Let $S$ be the set of candidate causal SNPs, with $|S| = p \times c$; the causal SNP fixed effects $\beta_j$ were generated from a Gaussian distribution $N(0, h^2_g \sigma^2 / |S|)$, where $h^2_g$ is the fraction of variance on the logit scale that is due to total additive genetic fixed effects. That is, we assumed the candidate causal markers explained a fraction of the total polygenic heritability, and the rest was explained by a random polygenic effect $b \sim N(0, h^2_b \sigma^2 V)$. We simulated an SNR equal to 1 for the fixed genetic effects ($h^2_g = 50\%$) under strong random polygenic effects ($h^2_b = 40\%$). Finally, we simulated a binary phenotype using a logistic link function:

$\mathrm{logit}(\mu_i) = \log\Big(\frac{\pi_0}{1 - \pi_0}\Big) + X_i \alpha + \sum_{j \in S} \tilde{G}_{ij} \beta_j + b_i,$

where the parameter $\pi_0$ was chosen to specify the prevalence under the null, $\tilde{G}_j$ is the $j$th column of the standardized genotype matrix with entries $\tilde{g}_{ij} = (g_{ij} - 2p_j)/\sqrt{2p_j(1 - p_j)}$, and $p_j$ is the MAF of the $j$th SNP.
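A sketch of this generative model in Python (simplified for illustration: the covariate part is omitted, $\sigma^2$ is fixed at the logistic residual variance $\pi^2/3$, and the function and argument names are assumptions, not details from the paper):

```python
import numpy as np

def simulate_binary_trait(G, h2_g, h2_b, L, pi0, c, rng):
    """Simulate a binary trait from standardized genotypes G (n x p).
    L is a matrix square root of the kinship V (e.g. Cholesky factor);
    pi0 is the prevalence under the null; c the causal fraction."""
    n, p = G.shape
    maf = G.mean(axis=0) / 2.0                     # per-SNP allele frequency
    G_std = (G - 2.0 * maf) / np.sqrt(2.0 * maf * (1.0 - maf))
    sigma2 = np.pi ** 2 / 3.0                      # logistic residual variance (assumption)
    S = rng.choice(p, size=max(1, int(p * c)), replace=False)
    beta = np.zeros(p)
    beta[S] = rng.normal(0.0, np.sqrt(h2_g * sigma2 / len(S)), size=len(S))
    b = np.sqrt(h2_b * sigma2) * (L @ rng.normal(size=n))  # polygenic effect
    eta = np.log(pi0 / (1.0 - pi0)) + G_std @ beta + b
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    return y, beta, S
```

Scaling the causal-effect variance by 1/|S| keeps the total fixed genetic variance at h2_g * sigma2 regardless of how many SNPs are causal.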

Metric of comparison
For each replication, subjects were partitioned into training and test sets using an 80/20 ratio, ensuring all related individuals were assigned to the same set in the second scenario. Variable selection and coefficient estimation were performed on training subjects for all methods. We compared each method at a fixed number of predictors, ranging from 5 to 50, where 50 corresponds to the number of true causal SNPs.
Comparisons were based on three criteria: the ability to retrieve the causal predictors, measured by the true positive rate $\mathrm{TPR} = |\{1 \le k \le p : \hat{\beta}_k \neq 0 \text{ and } \beta_k \neq 0\}| \,/\, |\{1 \le k \le p : \beta_k \neq 0\}|$; the ability to accurately estimate coefficients, measured by the root mean squared error $\mathrm{RMSE} = \sqrt{\frac{1}{p} \sum_{k=1}^p (\hat{\beta}_k - \beta_k)^2}$; and the ability to predict outcomes in the test sets, measured by the area under the ROC curve (AUC).
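The first two criteria translate directly into code (illustrative Python; AUC is omitted as it is standard):

```python
import numpy as np

def true_positive_rate(beta_hat, beta_true):
    """TPR = |{k : beta_hat_k != 0 and beta_k != 0}| / |{k : beta_k != 0}|."""
    causal = beta_true != 0
    return np.count_nonzero((beta_hat != 0) & causal) / np.count_nonzero(causal)

def rmse(beta_hat, beta_true):
    """Root mean squared error over all p coefficients."""
    return float(np.sqrt(np.mean((beta_hat - beta_true) ** 2)))
```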
In addition, we evaluated the performance of our proposed method when using either AIC, BIC or cross-validation as the model selection criterion, rather than fixing the number of predictors in the model. For this, subjects from the real UKBB data were randomly split into training (40%), validation (30%) and test (30%) sets, again ensuring all related individuals were assigned to the same set. For cross-validation, the full lasso solution path was fitted on the training set, and the regularization parameter was taken from the model which maximized AUC on the validation set. We compared the methods' performance on the basis of TPR, AUC on the test sets and RMSE. Additionally, we compared each model selection approach on the total number of predictors selected and on the model precision, defined as the proportion of selected predictors that are true positives.
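The requirement that related individuals never straddle the training/validation/test boundary can be met by splitting at the level of relatedness clusters rather than individual subjects (an illustrative sketch; `family_aware_split` is a hypothetical helper and clusters are assumed precomputed, e.g. from pedigrees):

```python
import numpy as np

def family_aware_split(family_ids, fractions=(0.4, 0.3, 0.3), seed=0):
    """Assign subjects to train/validation/test (labels 0/1/2) so that
    every member of a relatedness cluster lands in the same set."""
    rng = np.random.default_rng(seed)
    fams = np.unique(family_ids)
    rng.shuffle(fams)                    # randomize cluster order
    cuts = np.cumsum(np.array(fractions[:-1]) * len(fams)).astype(int)
    groups = np.split(fams, cuts)        # partition clusters by the fractions
    label = {f: k for k, grp in enumerate(groups) for f in grp}
    return np.array([label[f] for f in family_ids])
```

Splitting by cluster keeps the test-set prediction error closer to an honest estimate of generalization error in the presence of relatedness.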

Real data application
We used the real UK Biobank data set presented in Section 2.4 to illustrate the potential advantages of pglmm over a logistic lasso with PC adjustment for constructing a PRS for two highly heritable binary traits, asthma and high cholesterol, in a set of related individuals. Asthma is a common respiratory disease characterized by inflammation and partial obstruction of the bronchi in the lungs that results in difficulty breathing (Anderson, 2008). High cholesterol can form plaques and fatty deposits on the walls of the arteries, preventing blood from circulating to the heart and brain. It is one of the main controllable risk factors for coronary artery disease, heart attack and stroke (Kathiresan et al., 2008).
After removing SNPs with a missing rate greater than 0.01, a MAF below 0.05, or a p-value for the Hardy-Weinberg exact test below 10^{-6}, a total of 320K genotyped SNPs remained. To better understand the contribution of the PRS to predicting asthma and high cholesterol, we fitted for each trait a null model with only age, sex, genotyping array and the first 10 PCs as main effects. For our proposed pglmm method, we did not include any PCs, since kinship is accounted for by a random effect. Finally, we compared with a logistic lasso in which the top 10 PCs were included as unpenalized covariates in addition to age, sex and genotyping array. To find the optimal regularization parameter for both methods, we split the subjects into training (60%), validation (20%) and test (20%) sets a total of 40 times. For each replication, the full lasso solution path was fitted on the training set, and the regularization parameter was taken from the model which maximized AUC on the validation set. We compared mean prediction accuracy on the test sets as well as the median number of predictors included in all models. Finally, we also compared our method's performance when the best model was chosen using BIC on the training fit.
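The SNP filters described above can be sketched as a boolean mask (illustrative Python; missing genotypes are coded as np.nan and the Hardy-Weinberg exact-test p-values are assumed precomputed, e.g. by PLINK):

```python
import numpy as np

def snp_qc_mask(G, hwe_pvals, max_missing=0.01, min_maf=0.05, min_hwe_p=1e-6):
    """Boolean mask of SNPs passing QC: missing rate at most 1%, MAF at
    least 5% and Hardy-Weinberg exact-test p-value at least 1e-6."""
    miss = np.isnan(G).mean(axis=0)          # per-SNP missing rate
    freq = np.nanmean(G, axis=0) / 2.0       # allele frequency from dosages
    maf = np.minimum(freq, 1.0 - freq)       # fold to the minor allele
    return (miss <= max_missing) & (maf >= min_maf) & (hwe_pvals >= min_hwe_p)
```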

Simulation results from the admixture model
Results for selection of important predictors in the first simulation scenario, as measured by the mean TPR over 50 replications, are presented in Figure 1. For both 1D linear admixture and independent subpopulations, glmnet without PC adjustment failed to retrieve causal markers compared to all other methods. This is expected under population stratification: SNPs that differ in frequency between subpopulations are identified as important predictors because prevalence is not constant across groups.
When the first 10 PCs were added as unpenalized covariates, glmnetPC's ability to select causal predictors was inferior to that of pglmm and ggmix for the 20 independent subpopulations. In this case, the rank of the GSM used to infer the PCs is close to 20, and including only 10 PCs in the model does not correctly capture the confounding structure. Because there is less overlap between subpopulations in the admixture data compared to the independent populations (Reisetter and Breheny, 2021), a greater proportion of the simulated polygenic random effect is explained by the GSM, and including only 10 PCs is enough to correct for confounding even when K = 20 (bottom-left panel of Figure 1). On the other hand, including a random effect with variance-covariance structure proportional to the GSM correctly adjusts for population structure in all scenarios, while alleviating the burden of choosing the right number of fixed predictors to include in the model. Even though ggmix assumes a standard LMM for the binary trait, it was able to identify causal markers at the same rate as pglmm.
Results for estimation of SNP effects, as measured by the mean RMSE over 50 replications, are presented in Figure 2. Results are consistent with the TPR results in that glmnet without PC adjustment performed poorly in all scenarios, while pglmm outperformed all other methods for the 20 independent subpopulations and performed comparably with glmnetPC in all other settings. As expected, ggmix had higher RMSE compared to pglmm and glmnetPC. Thus, even though ggmix was able to identify causal markers at the same rate as other methods that accounted for the binary nature of the response, the resulting estimates of the SNP effects were not accurate.
For both 1D linear admixture and independent subpopulations, ggmix and glmnet had poor predictive performance for K = 10 and K = 20, as reported in Figure 3. Also, the predictive performance of glmnetPC was greatly reduced when K = 20 for both admixture and independent populations. In the case of the admixture data, the RMSE for estimation of SNP effects was comparable for glmnetPC and pglmm. This means that the observed discrepancy in predictive accuracy is due to the difference in how each method handles the confounding effects. Using only 10 PCs as fixed effects when K = 20 likely results in overfitted coefficients, which may decrease prediction accuracy and increase the variance of predictions in independent subjects. By using a ridge-like estimator for the random effects, pglmm is less likely to overfit the confounding effects compared to glmnetPC. This is supported by the results of Table 2, where the relative decrease in AUC standard deviation for the predictions obtained by pglmm could be as high as 16% for K = 20 subpopulations.

Simulation results from real genotype data
Results for selection of important predictors, estimation of SNP effects and prediction accuracy in the second simulation scenario are presented in Figure 4. We compared the ability of glmnetPC and pglmm to adjust for potential confounding stemming from subject relatedness. Both methods' ability to retrieve important predictors was comparable as measured by mean TPR, with pglmm having a slight advantage. In terms of predictor effect estimation, pglmm had lower reported mean RMSE. Furthermore, pglmm outperformed glmnetPC when making predictions in independent test sets. Once again, this is explained by the fact that pglmm uses a random effect parameterized by the n-dimensional kinship matrix, which we have shown in Section 2.3 to be equivalent to including all PCs as predictors in the model and shrinking their coefficients in a smooth way, proportionally to their relative importance. On the other hand, for glmnetPC, only the first 10 PCs with the largest eigenvalues are kept intact, and the others are completely removed. As the confounding effect from relatedness on the phenotype cannot be fully captured by the first 10 PCs alone, prediction accuracy is greatly reduced.

Model selection
Boxplots of the model selection simulation results are presented in Figure 5. As expected, BIC tended to choose sparser models with very high precision values, compared to AIC and CV, which tended to select larger models with only negligibly higher prediction performance. Thus, using BIC as the model selection criterion traded a small amount of prediction accuracy for a large boost in model precision. In many situations where it is of interest to identify a smaller set containing the most important predictors, BIC should be preferred over AIC and CV. Moreover, BIC alleviates the computational challenge of performing out-of-sample predictions, which includes identifying pedigrees to ensure independence between training, validation and testing sets.

PRS for the UK Biobank
Results for the asthma and high cholesterol PRSs are summarized in Table 3. For asthma, pglmm with either BIC or CV as the model selection criterion performed better than glmnetPC and the null model with covariates only when comparing mean AUC on the test sets. When using CV for both methods, the median number of predictors selected by glmnetPC was four times higher than for pglmm. Moreover, the variability in the predictors selected was greater for glmnetPC, as reported by an IQR value equal to 486, compared to 145 for our method. pglmm with BIC selected 1 predictor (IQR: 1) compared to 16 (IQR: 145) for pglmm with CV. This is consistent with our simulation results showing that BIC yields sparser models with comparable predictive power. For high cholesterol, very few genetic predictors were selected by any model, which suggests that it may not be a highly polygenic trait. In fact, using only the non-genetic covariates and the first 10 PCs resulted in the best model for high cholesterol based on mean test-set AUC.

Discussion
We have introduced a new method called pglmm, based on regularized PQL estimation, for selecting important predictors and estimating their effects in high-dimensional GWAS data, accounting for population structure, close relatedness and the binary nature of the trait. Through a variety of simulations, using both simulated and real genotype data, we showed that pglmm was markedly better than a logistic lasso with PC adjustment when the number of subpopulations was greater than the number of PCs included, or when a high proportion of related subjects was present. We also showed that a lasso LMM was unable to estimate predictor effects accurately for binary responses, which greatly decreased its predictive performance. Performance assessment was based on the TPR of selected predictors, the RMSE of estimated effects and the AUC of predictions. These results strongly advocate for using methods that explicitly account for the binary nature of the trait while effectively controlling for population structure and relatedness in genetic studies.
When the dimensionality of the confounding structure was low, we showed that pglmm was equivalent to a logistic lasso with PC adjustment. Hence, adjusting a GLM with PCs is at best equivalent to pglmm, but with the additional burden of selecting an appropriate number of PCs to retain in the model. Estimating the dimensionality of real datasets, and thus the number of PCs to include as fixed effects in a regression model, can prove challenging because the estimated eigenvalues have biased distributions (Yao and Ochoa, 2022). Another strategy involves selecting the appropriate number of PCs based on the Tracy-Widom test (Tracy and Widom, 1994). However, this test is known to select a very large number of PCs (Lin and Zeng, 2011), causing convergence problems when too many predictors are fitted. On the other hand, modeling the population structure with a random polygenic effect correctly accounts for both low- and high-dimensional confounding structures, while fitting only one extra variance component parameter.
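As a toy illustration of why choosing the number of PCs is delicate, the sketch below (Python, with hypothetical simulation settings, not the paper's procedure) contrasts the eigenvalue spectrum of the genetic relationship matrix for a homogeneous cohort with that of a cohort drawn from five diverged subpopulations. With K subpopulations, the top K − 1 eigenvalues separate from the noise bulk, but deciding where to draw the cutoff on real data is far less obvious without a formal test such as Tracy-Widom.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_gap_ratio(G, k):
    """Ratio of the k-th to the (k+1)-th largest eigenvalue of the
    standardized genetic relationship matrix (GRM)."""
    Gs = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-8)  # standardize SNPs
    K = Gs @ Gs.T / Gs.shape[1]                         # n x n GRM
    evals = np.sort(np.linalg.eigvalsh(K))[::-1]
    return evals[k - 1] / evals[k]

n, p = 200, 500
# Homogeneous cohort: one allele frequency per SNP, no structure.
maf = rng.uniform(0.1, 0.5, size=p)
G_homog = rng.binomial(2, maf, size=(n, p))
# Structured cohort: 5 subpopulations with diverged allele frequencies.
freqs = rng.uniform(0.05, 0.95, size=(5, p))
labels = rng.integers(0, 5, size=n)
G_struct = rng.binomial(2, freqs[labels])

# With 5 subpopulations, the 4th eigenvalue separates sharply from the
# bulk; in the homogeneous cohort there is no such gap.
print(top_gap_ratio(G_homog, 4), top_gap_ratio(G_struct, 4))
```

In this idealized setting the gap after the (K − 1)-th eigenvalue is unmistakable; with admixture, unequal drift or relatedness, the spectrum decays smoothly and the choice becomes ambiguous, which is the scenario where a single variance component is more convenient.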
We used real genotype data from the UK Biobank to simulate binary responses and showed that BIC effectively selected sparser models with very high precision and prediction accuracy compared to AIC and CV. Using the same dataset, we illustrated the potential advantages of pglmm over a logistic lasso with PC adjustment for constructing a PRS for two highly heritable binary traits in a set of related individuals.
Results showed that pglmm had higher predictive performance for asthma, while also consistently selecting fewer predictors, as reflected by the median and IQR values.
In this study, we focused solely on the lasso as a regularization penalty for the genetic marker effects. However, it is known that effects estimated by the lasso are substantially biased, because the resulting shrinkage is constant irrespective of the magnitude of the effects. Alternative regularizations such as the Smoothly Clipped Absolute Deviation (SCAD) penalty (Fan and Li, 2001) should be explored, although we note that the number of nonzero coefficients in the SCAD estimate is no longer an unbiased estimate of its degrees of freedom. Other alternatives include the relaxed lasso, which has been shown to produce sparser models with equal or lower prediction loss than the regular lasso estimator in high-dimensional data (Meinshausen, 2007). It would also be of interest to explore whether tuning the generalized ridge penalty term on the random effects that arises from the PQL loss could result in better predictive performance.
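The contrast between the two penalties is easiest to see through their univariate thresholding operators, sketched below in Python (the closed forms are standard; a = 3.7 is the value suggested by Fan and Li, 2001). The lasso shrinks every surviving coefficient by λ, whereas SCAD behaves like the lasso near zero but leaves coefficients beyond aλ untouched, removing the bias on large effects.

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso: shrinks every coefficient by lam, so large effects stay biased."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def scad_threshold(z, lam, a=3.7):
    """SCAD (Fan & Li, 2001): lasso-like for |z| <= 2*lam, linear blend in
    between, and no shrinkage at all once |z| > a*lam."""
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= 2 * lam,
                    np.sign(z) * np.maximum(np.abs(z) - lam, 0.0),
                    np.where(np.abs(z) <= a * lam,
                             ((a - 1) * z - np.sign(z) * a * lam) / (a - 2),
                             z))

print(soft_threshold(5.0, 1.0))  # 4.0 : constant shrinkage even for a large effect
print(scad_threshold(5.0, 1.0))  # 5.0 : large effect left unshrunk
```

In a coordinate-descent fit, swapping the lasso for SCAD amounts to replacing the soft-thresholding update with this operator, at the cost of a nonconvex objective.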
A limitation of pglmm compared to a logistic lasso with PC adjustment is the computational cost of the repeated matrix calculations required to estimate the variance components under the null. Indeed, at each iteration we perform a matrix inversion based on a Cholesky decomposition, with complexity O(n³), and matrix multiplications with complexity O(mn² + S²n² + p²n), where n is the sample size, m is the number of non-genetic covariates, S is the number of variance components and p is the number of genetic predictors.
We then need to perform a spectral decomposition of the covariance matrix, with computation time O(n³). These computations become prohibitive for large cohorts such as the full UK Biobank, with a total of 500K samples. A solution to explore for increasing computation speed and decreasing memory usage would be the use of conjugate gradient methods with a diagonal preconditioner matrix, as proposed by Zhou et al. (2018).
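A minimal sketch of that idea, in Python with SciPy (the covariance matrix below, its size and the variance-component value are illustrative assumptions, not the paper's model): a Jacobi (diagonal) preconditioner lets conjugate gradient solve the n × n linear system iteratively, without ever forming a Cholesky or spectral factorization.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)

# Toy stand-in for an n x n working covariance V = I + tau * K, where K is
# a relatedness-like PSD matrix; in a full-cohort analysis n can be ~500K.
n = 300
A = rng.normal(size=(n, 2 * n))
K = A @ A.T / (2 * n)
V = np.eye(n) + 0.5 * K
b = rng.normal(size=n)

# Jacobi preconditioner: multiplication by 1/diag(V), applied matrix-free.
d_inv = 1.0 / np.diag(V)
M = LinearOperator((n, n), matvec=lambda x: d_inv * x)

# Conjugate gradient solve of V x = b; info == 0 signals convergence.
x, info = cg(V, b, M=M, maxiter=1000)
print(info, float(np.linalg.norm(V @ x - b)))
```

Because CG only needs matrix-vector products, V never has to be inverted or even stored densely if K admits a fast multiply, which is what makes the approach of Zhou et al. (2018) attractive at biobank scale.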
Finally, we can take advantage of the fact that it is possible to allow for multiple random effects to account for complex sampling designs, and extend pglmm to a wide variety of models.

Table 1:

Figure 3: Mean of 50 AUCs in test sets of the simulated genotype data. K represents the number of intermediate subpopulations in the 1d linear admixture data (left panel), and the number of independent subpopulations in the independent data (right panel).

Figure 4: Mean of 50 AUCs, RMSEs and TPRs for the UK Biobank genotype data with related subjects.
Figure 5:
Let Ỹs denote the working vector in a set of individuals with predictor set Xs that were not used in the model training, let ns denote the number of observations in the testing set, and n the number of observations in the training set. Similar to Bhatnagar et al. (2020b), we assume that the marginal joint distribution of Ỹs and Ỹ is multivariate normal:
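Under this joint-normality assumption, predicted values for the test set follow from the standard Gaussian conditioning identity; in generic notation (the mean μ and partitioned covariance Σ below are illustrative, not the paper's exact parameterization):

```latex
\tilde{Y}_s \mid \tilde{Y} \sim \mathcal{N}\!\Big(
  \mu_s + \Sigma_{s\cdot}\,\Sigma^{-1}\big(\tilde{Y} - \mu\big),\;
  \Sigma_{ss} - \Sigma_{s\cdot}\,\Sigma^{-1}\Sigma_{\cdot s}
\Big)
```

The conditional mean plays the role of a BLUP-type predictor, borrowing strength from relatedness between training and test individuals.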

Table 2: Mean and standard deviation of AUCs in test sets for 50 replications of the simulated genotype data. Model size represents the number of genetic predictors selected by each model. K represents the number of intermediate subpopulations in the 1d linear admixture data, and the number of independent subpopulations in the independent data. %∆ std represents the relative decrease in AUC standard deviation for the predictions obtained by pglmm.

Table 3: PRS results for asthma and high cholesterol in the UK Biobank. We report the mean and standard deviation of AUC over a total of 40 random splits. For model size, we report the median and interquartile range of the number of genetic predictors selected. For pglmm, we compare performance when the model is selected using BIC or CV: for BIC, the best model is chosen based on training fit; for CV, the best model is chosen based on maximum AUC on the validation set.