Beyond the average patient: how neuroimaging models can address heterogeneity in dementia

Abstract Dementia is a highly heterogeneous condition, with pronounced individual differences in age of onset, clinical presentation, progression rates and neuropathological hallmarks, even within a specific diagnostic group. However, the most common statistical designs used in dementia research studies and clinical trials overlook this heterogeneity, instead relying on comparisons of group average differences (e.g. patient versus control or treatment versus placebo), implicitly assuming within-group homogeneity. This one-size-fits-all approach potentially limits our understanding of dementia aetiology, hindering the identification of effective treatments. Neuroimaging has enabled the characterization of the average neuroanatomical substrates of dementias; however, the increasing availability of large open neuroimaging datasets provides the opportunity to examine patterns of neuroanatomical variability in individual patients. In this update, we outline the causes and consequences of heterogeneity in dementia and discuss recent research that aims to tackle heterogeneity directly, rather than assuming that dementia affects everyone in the same way. We introduce spatial normative modelling as an emerging data-driven technique, which can be applied to dementia data to model neuroanatomical variation, capturing individualized neurobiological ‘fingerprints’. Such methods have the potential to detect clinically relevant subtypes, track an individual’s disease progression or evaluate treatment responses, with the goal of moving towards precision medicine for dementia.


Introduction
Heterogeneity is an underlying characteristic of the presentation and progression of dementia. Variability is observed in the underlying neuropathology, genetic risk factors, imaging and fluid biomarkers and in clinical and behavioural manifestations, reinforcing the idea that each dementia patient is unique. However, it is challenging to capture this heterogeneity when studying dementia and it is often not attempted. The conventional analytical approach focuses on characterizing group averages, not individual differences, assuming homogeneity between dementia patients (Fig. 1A). The failure to incorporate heterogeneity in statistical models of dementia may have limited our understanding of the pathophysiological mechanisms and slowed the development of treatments.
Despite thousands of treatment trials, only one drug (aducanumab) shows any promise for disease modification. 1 This paucity of treatments, in combination with the rapid ageing of the global population, adds to the societal burden of dementia. 2 This motivates the re-evaluation of common experimental approaches in dementia research and clinical trials with the goal of optimizing statistical design. In this update, we review current and emerging neuroimaging analysis methods that are able to account for the intrinsic heterogeneity, to help further our understanding of dementia and improve the prospect of developing effective treatments.

Heterogeneity in dementia
Dementia is characterized by progressive cognitive decline, over and above that seen in normal ageing, with subsequent impact on activities of daily living. Dementia is the end point of multiple diseases, including Alzheimer's disease, vascular dementia, Parkinson's disease dementia, dementia with Lewy bodies, frontotemporal dementia and limbic-predominant age-related TDP-43 encephalopathy (LATE). 3,4 Neuropathological factors vary between these diseases, though commonly include amyloid, tau and Lewy body accumulation 5,6 and neurodegeneration.
The neuropathological hallmarks of the dementia syndromes have been well characterized in some cases, while others continue to be defined. Indeed, the Braak stages are an established method for characterizing and defining Alzheimer's disease and Parkinson's disease dementia pathology. 7,8 However, neurobiological evidence suggests that there are many individual exceptions to these general rules. This could be because of the complex relationship between the clinical syndrome and underlying neuropathology, as well as individual differences in brain structure and function that predate the pathological onset. For instance, neocortical neuritic amyloid plaques have been observed post-mortem in over 50% of non-demented older adults, while almost 20% of dementia patients had no such plaques at death. 9 Beyond amyloid, other dementia risk factors such as tau tangles, white matter lesions and vascular pathologies have been recorded in dementia-free older adults. 9 This suggests that these pathological hallmarks of dementia are not universal, with some individuals resilient to insults that may be sufficient to cause dementia in others.
The mechanisms driving these pathological changes are yet to be fully determined, although they likely vary between individuals, both between and within diagnostic categories. Genetic and environmental risk factors also show considerable variability here. 10,11 For example, APOE is the best-known genetic risk for sporadic Alzheimer's disease but is only semi-dominant and moderately penetrant; at an age of 85 years, between 30% and 50% of APOE e4 homozygotes do not have dementia. 12 Potentially, different genetic and molecular mechanisms (or combinations of mechanisms) can result in dementia.
In addition, separate pathologies have broad phenotypic correspondence, and clinicopathological relationships can be varied (e.g. aphasia or behavioural disturbances in frontotemporal lobar degeneration). 13 A specific dementia phenotype may be the result of different pathological processes, and conversely a single molecular pathology may result in multiple different dementia phenotypes. 14 A specific diagnosis (e.g. Alzheimer's disease) can also include pathological features characteristic of another dementia disease (e.g. TDP-43 proteinopathy). 15 Symptoms often do not conform to diagnostic boundaries. 16,17 For instance, diverse symptoms are observed in frontotemporal lobar degeneration syndromes, but these do not easily fall within existing clinical categories or a single disease entity. 18 Furthermore, heterogeneity can be seen in the severity of symptoms, 19 rates of change 20 and influence on activities of daily life. 21 To add further complexity, many dementia patients have non-dementia comorbidities, for example neuropsychiatric and gastrointestinal diagnoses, 22 all of which may impact clinical presentation and disease progression. Age is also a key risk factor for pathophysiological changes; disentangling disease-related variation from the ageing process is challenging, for example when differentiating between normal cognitive decline and mild cognitive impairment as a prodromal phase of Alzheimer's disease. 23

Neuroimaging can offer in vivo neurobiological insights into the inter-individual variability in dementia. 24-26 Structural MRI has uncovered anatomical differences in dementia patients, reflected in patterns of atrophy 27 and anatomical symmetry. 28 Differences in pathological hallmarks can be seen using PET ligands; for instance, tau (e.g. 18F-AV1451) and amyloid (e.g. 11C-PiB) ligand binding has been shown to vary between patients with Alzheimer's disease. 29 Functional MRI has also been used to capture differences in connectivity. 30

Neuroimaging is now commonly implemented in clinical trials to provide secondary outcome measures of treatment effectiveness. 31-33 Here, the relationship between variance and statistical power must be considered; variance increases noise in typical case-control designs and subsequently reduces the statistical power to detect change. For example, in early trials of solanezumab, both amyloid-negative and amyloid-positive Alzheimer's disease patients (defined using PET) were recruited, reducing the sensitivity to detect treatment effects, as some patients were likely following different pathological trajectories. 34,35 Subsequently, it has been argued that a stringent approach to selecting optimal participants and biomarkers of interest for clinical trials will increase the likelihood of success when evaluating average differences between patients. 36 However, there are limitations to this approach. For example, hippocampal involvement in Parkinson's disease dementia is widely disputed, 37 and in Alzheimer's disease both higher 38 and lower 39 caudate nucleus volumes have been reported compared with healthy controls. Such inconsistencies could be due to differences within patient groups in either disease subtype or disease stage. Until both disease heterogeneity and disease dynamics are better understood, using biomarkers for stratified trial enrolment will likely remain contentious.

Figure 1 Differences between case-control and data-driven subtype approaches. (A) The conventional case-control approach. Despite underlying neurobiological heterogeneity, as illustrated by the red, green and blue (RGB) profiles, all patients are analysed together to calculate the group average. This is used to compare with healthy controls to highlight differences between the two groups. Here, the average patient (an average of the RGB profiles, circled) assumes neurobiological homogeneity, potentially masking underlying subtypes or individual differences. (B) The data-driven neuroimaging approach. The cases present neurobiological heterogeneity and are subtyped according to their different neurobiological patterns. This informs the division of the case population into its respective subtypes (distinguished RGB profiles). These subtypes can then inform stratification for further investigation, such as clinical interventions, longitudinal monitoring or genome-wide association studies.
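The variance-power relationship described above can be illustrated with a minimal simulation (entirely synthetic data, not from any real trial). Two hidden subtypes with opposing effects, analogous to the higher and lower caudate volumes reported in Alzheimer's disease, erode the power of a standard two-sample comparison even when the group mean difference is identical:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, n_sims, alpha = 50, 2000, 0.05

def estimate_power(draw_patients):
    """Fraction of simulated case-control studies reaching p < alpha."""
    hits = 0
    for _ in range(n_sims):
        controls = rng.normal(0.0, 1.0, n)
        if ttest_ind(controls, draw_patients()).pvalue < alpha:
            hits += 1
    return hits / n_sims

# Homogeneous disease: every patient's measure shifted by -0.5 SD
power_hom = estimate_power(lambda: rng.normal(-0.5, 1.0, n))

# Two hidden subtypes with opposite shifts (-1.5 and +0.5 SD):
# the group mean is still -0.5 SD, but within-group variance is larger
power_mix = estimate_power(lambda: np.concatenate([
    rng.normal(-1.5, 1.0, n // 2),
    rng.normal(+0.5, 1.0, n // 2)]))

print(f"power (homogeneous): {power_hom:.2f}")
print(f"power (two subtypes): {power_mix:.2f}")
```

Under these arbitrary effect sizes, the heterogeneous group yields noticeably lower power despite an identical mean effect; this is the statistical cost of averaging over unmodelled subtypes.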
A key limitation of current statistical approaches is the assumption in traditional case-control studies that experimental groups are homogeneous, discrete entities (Fig. 1A). Here, tests of statistical significance are based on group means, generally regarding individual differences as error or noise. In other words, this approach is fundamentally oriented to comparing the 'average patient'. Even in sophisticated multivariate analyses (e.g. machine learning), the focus tends to be on the discovery of canonical patterns across sets of variables that differentiate one group from another. This assumption of within-group homogeneity is reflected neither in real-world clinical populations nor in the heterogeneous pathological nature of neurodegenerative diseases. Inconsistencies are commonly seen in treatment effects, 2,40,41 but these could reflect unmeasured individual differences rather than poor efficacy. The assumption that there are uniform effects of dementia or of treatment on the brain may be hindering the discovery of disease-modifying treatments, especially when translating to heterogeneous clinical settings. This motivates the incorporation of heterogeneity into trial designs. 42 Given the importance of the brain in dementia, we outline ways to model heterogeneity using neuroimaging and illustrate the impact this could have on fundamental research and clinical trials.

Data-driven statistical methods
Measuring and statistically modelling neurobiological heterogeneity in a clinical population requires large datasets. Fortunately, large neuroimaging datasets are increasingly available for dementia; these include the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Open Access Series of Imaging Studies (OASIS) and the National Alzheimer's Coordinating Center (NACC). These datasets allow for more flexible and powerful statistical testing 43-45 and have supported the development and application of novel data-driven methods in dementia research (Box 1).

Recently, data-driven methods have enabled the estimation of disease subtypes from neuroimaging data, a promising way to disentangle heterogeneity by grouping patients according to distinctive neurobiological and cognitive characteristics 46 and disease progression. 47,48 For instance, hierarchical clustering algorithms have been utilized to understand variation in cortical thickness, 49,50 grey matter 51 and progressive neurodegeneration. 52 Clustering techniques applied to large Alzheimer's disease neuroimaging datasets have suggested that there are disease subtypes with distinct patterns of cortical thinning, with atrophy-based groupings defined as medial-temporal, parietal or widespread ('diffuse'). 49,53 Interestingly, these subtypes have also been associated with patterns of both amyloid 50 and tau deposition 54 and with cognitive phenotypes. 55,56 In frontotemporal dementia, distinct atrophy subtypes have been reported, corresponding with temporal-dominant, temporofrontoparietal, frontal-dominant and frontotemporal patterns. 51,57 In dementia with Lewy bodies, distinct atrophy subtypes have also been reported: non-atrophic, parietotemporal atrophy and occipitofrontal atrophy, with corresponding distinctions in cognitive performance. 58,59 Despite some consistency between reports, other studies have generated a different number of subtypes, while still others have reported subtypes with anatomical overlap, such as occipital areas overlapping with parietal areas and mild atrophic patterns. 60

Most studies using clustering methods have analysed cross-sectional, single time-point data; however, given the heterogeneity in disease progression, mapping longitudinal trajectories is an important focus for dementia research. Young and colleagues recently combined disease progression modelling and clustering techniques to enable joint inference of subtype and disease stage. 52 Here, three distinct spatiotemporal atrophy patterns were observed in Alzheimer's disease, with atrophy starting in either the medial temporal lobe, frontotemporal areas or basal ganglia. In addition, four distinct spatiotemporal atrophy patterns were observed in frontotemporal dementia, corresponding with different genetic subtypes. 52 Future efforts could continue to explore heterogeneity in disease progression, including the examination of presymptomatic and prodromal disease phases. Parsing this longitudinal heterogeneity should enable stratification of dementia patients into groups with differing disease progression rates, with treatments and interventions tailored to these groups accordingly.
Defining biologically meaningful subtypes may have implications for fundamental research. Case-control genome-wide association studies are hindered by heterogeneity in patients and controls alike, 61 and dementia is likely to be no exception. Restricting genome-wide association studies to more homogeneous subtypes should increase the sensitivity and reliability of such research to detect genetic risk factors for dementia.
Despite promising initial results from subtyping studies, there are key issues to consider prior to translating such models into clinical settings. The number of subtypes generated, subtype distinguishability and the stability of subtypes over the disease course should be considered. 46 It is possible that subtypes may be confounded by statistical decisions (e.g. hyperparameter choices), technical factors (e.g. scanner), biomedical factors (e.g. age, sex or comorbid disease) or sampling bias. The validation of clustering-derived subtypes is challenging in the absence of ground truth. 62 Therefore, to ensure that subtypes are biologically meaningful, external validation steps using independent datasets and long-term clinical outcome data (e.g. mortality rates and post-mortem data 63) are essential. Recently, promising results have emerged using post-mortem histopathological data, which have yielded transdiagnostic disease clusters. 64 However, the availability of datasets that enable such validation is limited, causing a bottleneck in the progress of dementia subtyping research. Hopefully, efforts to access existing hospital and community data will be successful in providing the data necessary for clinical validation.
It is also important to consider whether discrete subtypes can explain the range of variability observed. By design, clustering assumes homogeneity within each cluster, which may itself not be valid. 65 This motivates research into less discrete or overlapping subtypes, such as Zhang and colleagues' Bayesian latent factor analysis, in which multiple subtype factors contribute to explaining the pattern of brain structure in any particular individual. 66

Spatial normative modelling

Going further, it is possible to assess the neurobiology of dementia at the level of the individual patient, providing still greater precision than subtyping. To that end, normative modelling techniques have been developed to parse spatial heterogeneity (i.e. individual-level regional variation) in neuroimaging data. In principle, normative modelling involves estimating the distribution of a measure in a reference population, then assessing how much an individual deviates from that distribution. Spatial normative modelling specifically uses neuroimaging data (e.g. cortical thickness) to estimate this variation for a given brain region. 67,68 This is detailed in Box 2 and illustrated in Fig. 2. The extent to which an individual deviates from the norm can be mapped across brain regions, providing an idiosyncratic map of individual variability. These 'z-score' maps can be further summarized to provide a patient-level index of deviation, potentially reflecting general brain health.
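As a concrete toy illustration of this idea, the sketch below fits a simple per-region linear model of cortical thickness on age and sex in a synthetic healthy reference cohort, then expresses a new individual's regional measurements as z-scores relative to that norm. All values are simulated and the region indices are arbitrary; real applications use dedicated normative-modelling toolboxes and far richer models:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ref, n_regions = 500, 68  # synthetic healthy reference cohort

# Covariates: intercept, age (years), sex (0/1)
age = rng.uniform(45, 90, n_ref)
sex = rng.integers(0, 2, n_ref).astype(float)
X = np.column_stack([np.ones(n_ref), age, sex])

# Simulated cortical thickness (mm): thins with age, plus regional noise
thickness = 3.0 - 0.01 * age[:, None] + rng.normal(0.0, 0.1, (n_ref, n_regions))

# Fit one linear normative model per region (ordinary least squares)
beta, *_ = np.linalg.lstsq(X, thickness, rcond=None)
residual_sd = (thickness - X @ beta).std(axis=0)

# A new 72-year-old individual with pronounced focal thinning in region 10
x_new = np.array([1.0, 72.0, 1.0])
observed = 3.0 - 0.01 * 72.0 + rng.normal(0.0, 0.1, n_regions)
observed[10] -= 0.6

# Regional z-score map: how far each region deviates from the norm
z_map = (observed - x_new @ beta) / residual_sd
print("most deviant region:", z_map.argmin())
```

Averaging the absolute z-scores across regions would then give the kind of patient-level deviation index mentioned above.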
Multiple algorithms have been proposed for spatial normative modelling. A common approach employs Gaussian process regression to estimate normative models of the brain. 67 More recent developments include a neural process model, which does not rely on fixed parametric kernels and scales better to large datasets by learning optimal feature representations and covariance structures for random effects and noise (via global latent variables), 69 and a hierarchical Bayesian regression approach, which has been shown to accommodate inter-site variation efficiently and to scale computationally, useful when working with large studies or when combining smaller studies acquired across multiple sites. 70

Preliminary studies have used normative modelling in the context of brain ageing and dementia. Distinct patterns of deviation from normal ranges were observed in people with mild cognitive impairment and patients with Alzheimer's disease. 71 Additionally, quantile regression techniques have been used to map deviations in brain morphology between cognitively normal individuals and patients with Alzheimer's disease; here, differences between patients and healthy controls were partly attributed to accelerated ageing. 72 Recently, a spatial pattern of atrophy index, reflecting normative variability, was used to demonstrate greater age-related atrophy in patients with Alzheimer's disease compared with normative trends of age-related change in brain structure. 73 Spatial normative modelling has also been applied to neuroimaging data in the contexts of attention-deficit hyperactivity disorder, 74 autism, 75,76 bipolar disorder and schizophrenia. 77 Results show that it is uncommon for patients to have uniform patterns of structural alterations across the brain.
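The Gaussian process approach can be sketched with scikit-learn (a stand-in for the dedicated normative-modelling toolboxes; the data are synthetic). The predictive mean and standard deviation at a given age provide both the normative trend and the uncertainty needed to z-score a new patient:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic healthy reference data: one region's thickness (mm) vs age
age = rng.uniform(50, 90, 200)[:, None]
thickness = 2.8 - 0.008 * age.ravel() + rng.normal(0.0, 0.05, 200)

# RBF kernel for smooth age trends + WhiteKernel for observation noise,
# so the predictive SD reflects both model uncertainty and normal variation
kernel = ConstantKernel(1.0) * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.05**2)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(age, thickness)

# Z-score a 70-year-old patient with 0.3 mm of atrophy beyond the norm
norm_mean, norm_sd = gpr.predict(np.array([[70.0]]), return_std=True)
z = (2.8 - 0.008 * 70.0 - 0.30 - norm_mean[0]) / norm_sd[0]
print(f"z = {z:.1f}")
```

Because the predictive distribution is explicit, the same fit yields the centile curves and confidence intervals described in Box 2.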
Individualized maps of regional differences derived from normative models generate distinct findings compared with case-control approaches, for example voxel-based morphometry, 75,76 which rely on modelling average differences between groups at each voxel. The predominant focus of neuroimaging research on group-level differences has potentially masked heterogeneity among patients within diagnostic groups. 74,77 Therefore, spatial normative modelling provides a new approach to examining the neurobiological correlates of neurodevelopmental and psychiatric disorders and could well be applied in dementia.

Box 1 Data-driven statistical methods
Data-driven techniques seek to investigate the relationships between data variables without imposing a priori knowledge of those relationships. Some machine learning techniques can be considered data-driven, whereby a computer automatically learns (e.g. updates model parameters) to optimize performance from experience (i.e. examples of labelled data). This process involves discovering and exploiting regularities in 'training data'. Many different problems can be approached with machine learning methods, including anomaly detection, clustering, classification and regression. Broadly speaking, techniques fall into three main groups: supervised, semi-supervised and unsupervised learning.

For supervised learning, a set of input variables is associated with labels prior to estimating the model; regression analyses (e.g. predicting continuous symptom scores) and classification tasks (e.g. discriminating patients from healthy controls) are examples. Unsupervised learning models the underlying latent structure or distribution of the data to uncover meaningful patterns without supplying a label for each data-point. Clustering is considered unsupervised learning because the input variables are unlabelled. Clustering aims to identify subtypes, which can be conceptualized as a way to parse a single heterogeneous dataset into a number of more homogeneous subsets (Fig. 1B). Models can be derived based on the distance, density, connectivity and distribution of the data to be clustered, though such metrics tend to be correlated. Clustering has been the predominant data-driven approach used to explore heterogeneity in dementia. 46 Numerous methods have been implemented, differing in the input features used, the clustering algorithm and the validation approach. Common clustering algorithms include agglomerative, graph-based and forest-tree based. For example, in agglomerative clustering, the proximity of individual data-points (e.g. based on tissue volumes or cortical thickness) is calculated, similar clusters are merged to form larger clusters, the proximity of the new clusters is recalculated, and these steps are iterated until all clusters have been merged into a single cluster. 78 When implementing these methods, it is important to consider the type of data used (e.g. its dimensionality), the choice of algorithm and distance function, the order of the model, the clustering subspace and whether clusters are mutually exclusive (hard clustering) or probabilistic (soft clustering). After implementation, it is also important to assess the number and validity of the clusters generated. 65

Semi-supervised learning, in which training data-points can be either labelled or unlabelled, can also be a powerful approach, using clustering techniques alongside supervised classification or regression models to bolster sensitivity in cases of limited training data (e.g. rare diseases); here, unlabelled data-points aid the learning of a better classifier (or vice versa). This can sometimes address common confounding effects (e.g. age and sex) by clustering the disease effect as a transformation from the normal control distribution to the patient distribution, rather than as the largest factor of data variability. 25
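The agglomerative procedure described in Box 1 can be sketched with SciPy on synthetic data; the two 'subtypes' (temporal- versus frontal-predominant atrophy) and the four-region layout are invented purely for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic regional atrophy z-scores for 60 patients over 4 regions,
# drawn from two hypothetical subtypes with distinct spatial patterns
temporal_predominant = rng.normal([-2.0, -2.0, 0.0, 0.0], 0.5, (30, 4))
frontal_predominant = rng.normal([0.0, 0.0, -2.0, -2.0], 0.5, (30, 4))
X = np.vstack([temporal_predominant, frontal_predominant])

# Ward linkage: start from single patients and iteratively merge the
# closest clusters until one cluster remains (the full dendrogram)
Z = linkage(X, method="ward")

# Cut the dendrogram to obtain two candidate subtypes (hard labels)
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])
```

`fcluster` returns hard cluster labels; the validity checks discussed above (choice of cluster number, stability, external validation) would follow this step in practice.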
As spatial normative modelling can utilize any continuous or categorical phenotype, a range of dementia-related neuroimaging features (e.g. local volumes, cortical thickness, diffusion microstructural indices or PET tracer binding) could be used in the model to ascertain heterogeneity across multiple aspects of neurobiology. In addition, a multimodal approach can be adopted (e.g. combining neuroanatomical volumetric measures and PET amyloid-β binding) to parse heterogeneity at both the molecular and the structural level.
Going beyond neuroimaging, other data such as fluid biomarkers, physiological measures, cognitive assessments and genetic markers can be used to disentangle the heterogeneity in dementia. Such information could be incorporated explicitly in spatial normative models as predictors (alongside age and sex) of brain structure. For instance, tau CSF levels are currently pivotal in diagnosis and treatment planning; therefore, tau CSF could define a normative model of neuroanatomical measures to test whether elevated tau is associated with uniform or heterogeneous impacts on the brain. Alternatively, spatial normative models could be stratified, for example defining separate models for APOE4 carriers or people who are amyloid positive. While promising, these approaches require sufficiently large samples of normative data with that biomarker, which is potentially challenging when data collection involves invasive measures like CSF sampling. Importantly, non-imaging markers provide a means of validating subtypes from neuroimaging normative models, with the assumption that neuroanatomically homogeneous subtypes would be more homogeneous in terms of genetic and environmental risks and fluid biomarker readouts.
Furthermore, spatial normative modelling could be particularly informative with longitudinal data. Temporal heterogeneity could be modelled using two or more time points to understand within-subject changes. In this instance, greater changes in abnormality patterns could reflect faster disease progression (Fig. 2D). Similarly, evidence of reduced or reversed changes in abnormality patterns could be indicative of treatment efficacy, using an outcome measure tailored to each individual's brain (Fig. 2E).
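A minimal sketch of this longitudinal use (all numbers synthetic, the 68-region parcellation arbitrary): given z-score maps from two visits, per-region annualized change and a single progression index can be computed for each patient:

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions = 68

# Synthetic z-score maps for one patient at baseline and follow-up
z_baseline = rng.normal(0.0, 1.0, n_regions)
z_baseline[:8] -= 1.0          # mild deviation in 8 regions at baseline
z_followup = z_baseline + rng.normal(0.0, 0.2, n_regions)
z_followup[:8] -= 1.5          # the same regions worsen over time

# Annualized per-region change in deviation from the norm
interval_years = 1.5
dz = (z_followup - z_baseline) / interval_years

# Patient-level progression index: mean change across the brain
progression_index = dz.mean()
print(f"progression index: {progression_index:.2f} z/year")
```

Under a treatment, a reduced (or reversed) progression index relative to placebo would be the individualized outcome measure suggested above, regardless of which regions change in which patient.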

Summary
In conclusion, while dementia is associated with marked clinical, aetiological and neuropathological variability, research studies and clinical trials often overlook this inherent heterogeneity. While neuroimaging has provided many insights into the neuroanatomy of dementia and has helped to assess treatment efficacy, the reliance on group-average statistical methods may have hindered efforts to understand aetiology and prognosis, and has contributed to failures in clinical drug development. We have outlined how data-driven neuroimaging statistical techniques enable explicit modelling of heterogeneity in the brain. We propose that the application of spatial normative modelling methods to dementia neuroimaging studies is a promising avenue for mapping regional variation at the individual level; efforts should include the investigation of various neuroanatomical markers derived from neuroimaging, validated using independent datasets. Here, our clinicopathological understanding of anatomical variation could be enhanced by multimodal neuroimaging techniques and by combining other biological data as predictors in normative models. Importantly, these spatial normative modelling techniques are not designed to replace, or even improve on, diagnoses based on clinical evidence and well-established biomarkers; the goal is to better capture the variability within diagnostic groups based on individual patterns of brain structure, or potentially to define neuroanatomical subtypes, rather than to span diagnostic boundaries. Employing these methods could be highly advantageous in mapping neurobiological abnormalities. In particular, the use of serial neuroimaging to define patient-level longitudinal trajectories of neuroanatomical variability has the potential to improve predictions of disease progression or treatment response for the individual patient, thereby paving the way towards precision medicine for dementia.

Box 2 Spatial normative modelling
Normative models provide statistical inferences at the level of the individual with respect to an expected 'normative' distribution or trajectory over time. This framework is commonly used in growth charts to map developmental changes in body weight and height as a function of age; deviations from a normal growth curve manifest as outliers from the normative range at each age point. 79 Spatial normative modelling adopts this concept by modelling the relationship between neurobiological variables (e.g. neuroimaging features, which are represented in 'space') and covariates (e.g. demographic variables such as age and sex) to map centiles of variation across a cohort (Fig. 2A). An individual can then be located within the normative distribution to establish to what extent they are an outlier on a given measure. By applying this approach to derive spatial normative models at local brain regions, a map can be generated of where and to what extent an individual's brain differs from the norm (Fig. 2B). Furthermore, by modelling the covariance across the normative cohort at each brain region, confidence intervals can be derived for each point prediction, giving a measure of uncertainty that can be useful for clinical interpretation and subsequent decision-making (Fig. 2D and E).

In the context of dementia, the process used to generate these individualized spatial normative brain maps could be as follows. Using a separate large reference dataset of healthy participants, spatial normative models of cortical thickness for separate brain regions are estimated based on age and sex. Next, the parameters of these models are calibrated using cortical thickness measures derived from a subsample of the cohort under investigation (e.g. dementia patients and scanner-matched controls). From this, z-scores relative to the normative range are generated for each brain region, resulting in a 'z-score map' of cortical thickness for each participant in the remaining experimental sample (Fig. 2B and C). These z-score maps could then be utilized in a variety of research or clinical settings. For example, patients could be clustered based on these neuroanatomical patterns to provide biologically relevant subtypes that may have distinct clinical or biomarker signatures (Fig. 1B). This could provide a new mechanistic understanding of dementia as well as facilitate the discovery of genetic influences on dementia-related brain atrophy: rather than assuming that dementia patients will show common patterns of brain changes, genome-wide association studies could attempt to identify genetic variants that distinguish these biologically more homogeneous subtypes from healthy controls. Such subtyping could also be used to stratify enrolment in clinical trials, including only specific subtypes to reduce heterogeneity and increase the power to detect average effects. This could substantially increase sensitivity to treatment effects, reducing the duration and costs of clinical trials. Going beyond subtyping, the individual patient z-score maps could be used as surrogate outcome measures of treatment efficacy. Rather than simply assessing whether a treatment reduces hippocampal or whole-brain atrophy on average, the magnitude of longitudinal change in z-score maps could be compared between treatment and placebo groups. Importantly, this overcomes the assumption of homogeneity, i.e. that a treatment must affect all patients' brains in the same way, slowing atrophy in the same regions. By capturing neuroanatomical heterogeneity at an individual level, spatial normative modelling could indicate whether a treatment slows brain atrophy in different regions in different people, whilst still generating standard effect sizes and confidence intervals for rigorous statistical evaluation.

Figure 2 (A) The spatial normative model maps centiles of variation across dementia patients and healthy controls. The Gaussian distribution curve (right) illustrates the statistical inference at the level of a dementia patient (red highlighted subject) with respect to the normative model. (B) An example individual z-score map for the left hemisphere based on cortical thickness. Red indicates thinner cortices, relative to the norm, and blue indicates thicker cortices. (C) Spatial inferences. Spatial normative models are estimated for each sampled brain location in regional space. This can be understood as a set of functions y = f(x), which use covariates (x) to predict the regional neurobiological variable (y), derived from neuroimaging. (D) Procedural summary of spatial normative modelling. The spatial normative model is estimated using a healthy reference cohort. Next, this is validated on withheld data (e.g. using cross-validation techniques) to ensure the accuracy of the model. The model can then be applied to a dementia cohort. (E) Detecting individual trajectories relative to the norm in three different brain regions. Longitudinal data can be used to observe how spatial differences change over time. (F) An example of a single spatial trajectory before and after an intervention in a clinical trial. Red = dementia group; grey = healthy subjects. For the graphs in C, E and F, the unlabelled axes are x = covariates, y = neurobiological variable.
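The calibration-then-z-scoring pipeline described in Box 2 can be sketched as follows; the normative means, the scanner offset and the patient data are all synthetic stand-ins for a fitted reference model and real scans:

```python
import numpy as np

rng = np.random.default_rng(3)
n_regions = 68

# Stand-in for a reference model fitted on a large healthy cohort:
# per-region normative mean and SD of cortical thickness (mm)
norm_mean = rng.normal(2.5, 0.2, n_regions)
norm_sd = np.full(n_regions, 0.12)

# Scanner-matched local controls, measured with a systematic site offset
true_offset = 0.05
local_controls = norm_mean + true_offset + rng.normal(0.0, 0.12, (40, n_regions))

# Calibration step: estimate the per-region offset from the local controls
site_offset = (local_controls - norm_mean).mean(axis=0)

# Z-score map for a patient scanned at the same site, with focal atrophy
patient = norm_mean + true_offset + rng.normal(0.0, 0.12, n_regions)
patient[:5] -= 0.6
z_map = (patient - norm_mean - site_offset) / norm_sd
print("mean |z| in atrophic regions:", round(float(np.abs(z_map[:5]).mean()), 1))
```

Without the calibration step, the scanner offset would inflate every region's z-score; with it, only the genuinely atrophic regions stand out, which is the property that makes such maps usable across sites and trials.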