There is substantial interest in identifying new risk factors for cardiovascular disease, to improve our understanding of disease biology and to account for the cases of heart disease that cannot be explained by known risk factors. Investigation of putative risk factors frequently involves the study of circulating biomarkers. In recent years, a spirited debate has arisen regarding the validity and usefulness of these new measures. A careful assessment of the evidence suggests that most newer biomarkers are not ready for routine clinical use in the primary prevention setting. The good news is that traditional risk factors perform quite well with regard to the prediction of future cardiovascular risk. Inadequate recognition and control of the ‘classic’ risk factors continues to account for a large number of avoidable cardiovascular events. At the same time, new insights into disease mechanisms should lead to the development of novel preventive therapies, regardless of how well biomarkers themselves perform in risk stratification. Furthermore, state-of-the-art technologies that allow the profiling of large panels of genes, transcripts, proteins, or small molecules should facilitate the discovery of newer biomarkers capable of providing both mechanistic insight and true prognostic utility.
Cardiovascular disease remains a leading cause of death worldwide, a fact that underscores the importance of primary prevention. Effective prevention relies on the accurate identification of individuals at risk of developing heart disease. Traditional risk factors—including age, gender, hypertension, dyslipidaemia, smoking, and diabetes—form the foundation for all cardiovascular risk prediction models. Yet, there is substantial interest in identifying novel cardiovascular risk factors, to improve our understanding of disease biology and to account for the cases of heart disease that cannot be explained by known risk factors.
Novel ‘risk factors’ cited in the cardiovascular literature more often than not involve biochemical markers measurable in the plasma or serum, including C-reactive protein, lipoprotein(a), homocysteine, and others. In recent years, a spirited debate has arisen regarding the validity and usefulness of these new measures, as evidenced by the growing number of epidemiological studies and reviews addressing this question.1–4 The article by Smulders et al. reviews several recent studies that address two central questions in that debate: How much room is there for new cardiovascular risk factors, after the conventional ones have been taken into account? Are novel biomarker measures clinically useful? The authors conclude that new risk factors not only exist, but also provide clinically useful information.
The aim of this commentary is to explain why the authors' assertion regarding the utility of novel risk factors is only partially correct. Uncovering new risk factors for heart disease is certainly an important goal, and rapidly advancing genomic and non-genomic technologies promise to yield exciting discoveries in this area in the years ahead. On the other hand, many of the biochemical markers proposed as novel risk factors are likely not risk factors at all, because they are not causally related to heart disease. Although the lack of a causal association alone does not exclude clinical utility, the weight of the statistical and epidemiological evidence suggests that most current cardiovascular biomarkers add little to traditional risk factors.
Following Smulders et al., the first section of this commentary concerns the INTERHEART data, as it applies to the potential contributions of conventional and novel cardiovascular risk factors. The next section addresses the apparent paradox that many putative risk factors seem to be poor predictors of risk.
Do new cardiovascular risk factors exist?
The INTERHEART study is one of several studies in recent years that have re-affirmed the major contribution of conventional risk factors to the burden of cardiovascular disease. The specifics of INTERHEART are reviewed by Smulders et al. The INTERHEART investigators find that modifiable cardiovascular risk factors account for ∼90% of the population attributable risk (PAR) of myocardial infarction. This number could be interpreted as meaning there is little room for improvement in terms of discovering new causal factors for disease, but Smulders et al. correctly explain that this conclusion arises out of a false belief that PARs must sum to 100%. Indeed, the sum of PARs has no upper limit, implying that any number of additional causal risk factors may exist.
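A toy calculation illustrates why PARs need not sum to 100%. The sketch below uses hypothetical prevalences and relative risks (not INTERHEART estimates) with Levin's formula for the PAR of a single factor; because exposures overlap in the population, the individual PARs of even a few common factors can sum well past 1.

```python
# Illustrative only: hypothetical prevalences and relative risks,
# not INTERHEART estimates.

def par(prevalence, relative_risk):
    """Levin's formula: PAR = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Three common, overlapping risk factors: (prevalence, relative risk)
factors = {
    "smoking": (0.40, 3.0),
    "hypertension": (0.50, 3.0),
    "dyslipidaemia": (0.45, 2.5),
}
pars = {name: par(p, rr) for name, (p, rr) in factors.items()}

# Because exposures overlap, the individual PARs sum to roughly 1.35
# here, i.e. beyond 100%, leaving ample 'room' for further causal
# factors even when each known factor carries a large PAR.
total = sum(pars.values())
```

The point of the sketch is purely arithmetical: a PAR describes the burden removable by eliminating one factor in isolation, so summing PARs across correlated factors double-counts overlapping cases.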
In addition to the methodological explanation provided by Smulders et al., it is useful to consider how terminology contributes to difficulties in interpreting PAR results. ‘Attributable risk’ implies causality, but causality is a biological construct rather than a statistical or epidemiological one. Indeed, as pointed out by others,5 calculating attributable risks for non-causal or surrogate factors can artificially inflate our perception of how much of the risk of disease is explained. In this regard, it is worthwhile noting that some of the factors considered by the INTERHEART investigators, such as physical activity, may be surrogates for other processes. This does not diminish the importance of controlling these factors, but suggests that the issue of causality is not completely settled.
The PAR is not meant to reflect the proportion of cases that are ‘explained’ by the risk factors, or ‘attributable’ to the risk factors in a causal manner. Rather, the PAR denotes the proportion of cases that would be eliminated if that risk factor were eliminated. Accordingly, the PAR is much more useful for prioritizing public health interventions or estimating the societal impact of risk factor control policies, than for characterizing putative aetiological factors for disease. In this regard, the 90% PAR for classical risk factors in the INTERHEART study highlights the importance of addressing known, modifiable risk factors, while leaving the door open for the discovery of novel risk factors in the future.
Are new cardiovascular biomarkers clinically useful?
Efforts to identify new cardiovascular biomarkers are certainly justified, in part because they can lead to useful insights regarding the biological underpinnings of heart disease. However, whether these novel biomarkers should be incorporated into clinical practice is a separate question. Indeed, biomarkers that are biologically informative are not necessarily clinically informative.
In order to understand why some biomarkers might not be useful clinically, it is helpful to consider the issue of causality again. When we refer to something as a ‘risk factor,’ we are implying that it plays an aetiologic or causal role in the development of disease. Accordingly, any treatment that reduces the level of the risk factor would be expected to reduce the risk of developing disease. This is true of the classic modifiable cardiovascular risk factors, such as blood pressure and dyslipidaemia. On the other hand, a ‘risk marker’ is not assumed to play a direct causal role in disease. A risk marker could be a surrogate for an important biological process, or serve as a marker of subclinical disease. However, the risk marker itself (as opposed to the process it represents) generally makes a poor target for therapy.
Most cardiovascular biomarkers are risk markers rather than risk factors. Examples include cardiac troponins, B-type natriuretic peptide, and microalbuminuria, which are all markers of cardiac or vascular injury. The jury is still out on C-reactive protein and homocysteine—although some have argued that they are causal risk factors, the biological evidence is not conclusive, and no clinical trial data yet exist to suggest that they are bona fide targets for therapy.
Although both risk factors and risk markers can convey important information, it is necessary to know which one is at hand, because this determines the criteria by which clinical utility should be judged. Risk factors that constitute viable targets for therapy, such as LDL cholesterol, warrant routine clinical measurement, as long as measurement of the risk factor is practical and cost-effective. We would expect the level of the risk factor to be statistically associated with clinical outcomes, as evidenced by an elevated odds ratio or relative risk. Other indices, such as the area-under-the-curve (discussed in what follows), sensitivity, or specificity are less important in this setting.
Risk markers, on the other hand, are mainly useful if they improve our ability to predict risk. Establishing a statistically significant association is necessary but not sufficient. Indeed, even strong statistical associations do not ensure that the biomarker will facilitate risk prediction at the level of individual patients, for reasons that have been articulated previously.3,6 To assess predictive accuracy, investigators commonly examine whether the new biomarker improves the discrimination or calibration of risk models. Discrimination refers to the ability of a model to distinguish individuals with and without disease, and it is most often assessed using the ‘area under the receiver-operating-characteristic curve’ (AUC) or the C-statistic. The AUC incorporates the two classic measures of test accuracy: sensitivity (ability to detect disease when it is present, i.e. the true positive rate) and specificity (ability to exclude disease when it is absent, i.e. the true negative rate). Calibration refers to how closely the predicted risks from a model correspond with the actual risks, after splitting the population into deciles or other convenient categories. Several measures of calibration exist, the most popular of which is the Hosmer–Lemeshow statistic.
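The C-statistic admits a simple interpretation that a short sketch can make concrete: it is the probability that a randomly chosen case is assigned a higher predicted risk than a randomly chosen non-case, with ties counted as half-concordant. The data below are illustrative only, not drawn from any cited study.

```python
# Toy data; not from any cited study.

def c_statistic(case_risks, control_risks):
    """Probability that a random case outranks a random non-case;
    ties count as half-concordant. Equivalent to the AUC."""
    concordant = 0.0
    for rc in case_risks:
        for rn in control_risks:
            if rc > rn:
                concordant += 1.0
            elif rc == rn:
                concordant += 0.5
    return concordant / (len(case_risks) * len(control_risks))

cases = [0.30, 0.22, 0.18]           # predicted risks among events
controls = [0.10, 0.25, 0.05, 0.12]  # predicted risks among non-events
# 10 of the 12 case/non-case pairs are concordant, so the
# C-statistic is 10/12, about 0.83; 0.5 would indicate a model
# no better than chance at discrimination.
```

Seen this way, an AUC of 0.75 for the Framingham score means that three times out of four the score correctly ranks a future case above a non-case.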
Smulders et al. cite recent studies that have examined discrimination and calibration of contemporary biomarkers. With regard to discrimination, these studies consistently find that current biomarkers, either alone1 or in combination,2 do little to raise the AUC. The authors downplay these poor results by arguing that discrimination is not an appropriate criterion for assessing biomarkers. This assertion is questionable, because the ultimate goal of any risk prediction model should be to successfully distinguish those who will develop disease from those who will not (i.e. discrimination). Although predicting future events is difficult, it is worth remembering that the Framingham risk score (with six variables) does a reasonable job at this task (AUC around 0.75),7 as do some standard clinical tools such as the electrocardiogram.8 Furthermore, in secondary prevention, biomarkers such as N-terminal pro-BNP can raise the AUC substantially, above and beyond conventional risk factors.9,10
Turning to calibration, the authors claim that contemporary biomarkers such as C-reactive protein substantially improve the correspondence of predicted and actual risks. This assertion warrants closer scrutiny as well. The claim is based primarily on data from the Women's Health Study, a randomized trial performed in more than 26 000 perimenopausal women. Using data from this cohort, investigators developed a new risk scoring algorithm, known as the Reynolds Risk Score, that incorporates C-reactive protein concentrations along with a variety of clinical characteristics.4 Although C-reactive protein was a statistically significant predictor of cardiovascular events, the Reynolds Risk Score did not exhibit improved calibration compared with models based on standard ATP III covariates alone. For instance, the Hosmer–Lemeshow P-value for the ‘best-fitting’ Reynolds model was 0.38, compared with a Hosmer–Lemeshow P-value of 0.45 for a model based on ATP-III covariates. Because higher P-values denote better correspondence between observed and predicted risks, these data provide no evidence to support improved calibration with the addition of C-reactive protein. Other epidemiological studies provide similar results regarding the lack of improved calibration with the addition of C-reactive protein and related inflammatory biomarkers.11
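For readers unfamiliar with the Hosmer–Lemeshow statistic, the following minimal sketch (toy data; a real analysis would use established statistical software) shows the underlying arithmetic: subjects are grouped by predicted risk, observed event counts are compared with the counts expected under the model, and the statistic is referred to a chi-square distribution with (groups − 2) degrees of freedom, which is why larger P-values denote better calibration.

```python
# Minimal sketch of the Hosmer-Lemeshow statistic; toy data only.

def hosmer_lemeshow(predicted_risks, outcomes, groups=10):
    """Group subjects by predicted risk and sum, over groups,
    (observed - expected)^2 / (n * mean_p * (1 - mean_p)).
    Compare the result to chi-square with (groups - 2) d.f."""
    pairs = sorted(zip(predicted_risks, outcomes))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        expected = sum(p for p, _ in chunk)   # model-predicted events
        observed = sum(y for _, y in chunk)   # events actually seen
        mean_p = expected / len(chunk)
        denom = len(chunk) * mean_p * (1.0 - mean_p)
        if denom > 0:
            stat += (observed - expected) ** 2 / denom
    return stat

# Perfectly calibrated toy example: 10 subjects each predicted 0.5,
# of whom exactly 5 have events, gives a statistic of 0.
```

A statistic of zero (and hence a P-value of 1) indicates perfect agreement between observed and predicted risks within groups; the 0.38 vs. 0.45 P-values quoted above are thus both unremarkable, and neither favours the biomarker-augmented model.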
A recently proposed approach to assessing new biomarkers is known as ‘reclassification,’ which is related to calibration. Reclassification examines how individuals are assigned to categories of risk, and how this assignment is altered by the addition of a new risk marker. The rationale for this approach is that clinical decision-making relies on discrete categories, with thresholds between low- and high-risk individuals, rather than continuous predicted risk estimates.
The effort to make the assessment of risk models more clinically relevant has merit, but the interpretation of reclassification data is also subject to debate.12 The most commonly cited figures describing the proportion of people reclassified typically include both upward (to a higher category) and downward (to a lower category) shifts. However, while movement into a high-risk category (usually >20% predicted risk of a coronary event over 10 years) could affect management, the significance of a downward shift for primary prevention is unclear. If the patient is not receiving a particular therapy at baseline, downward reclassification would not alter management. If the patient is already being treated on the basis of the clinical data, the withdrawal of therapy based on a biomarker result has no support in the clinical literature. Another consideration is whether the ‘proportion reclassified’ includes everyone in the denominator or just a subset of people; the latter would of course produce higher estimates of the proportion reclassified.
The infrequency with which modest risk markers such as C-reactive protein lead to clinically meaningful reclassification is again illustrated by data from the Women's Health Study.4 In a validation sample (n = 8149), use of the Reynolds Risk Score led to a shift in category for 469 women. However, the vast majority of these shifts occurred between low-risk categories or involved movement downwards into low or intermediate risk. Only 38 women (out of 8149, or 0.5%) were moved from any category up into the high-risk category. Moreover, it is likely that most of the shifts represented movement from just below to just above a given threshold, because only small movements in absolute risk were observed with use of the score.4
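The arithmetic behind these figures is simple but easy to gloss over; the sketch below recomputes the proportions from the counts reported above and shows how the headline ‘proportion reclassified’ dwarfs the clinically actionable fraction.

```python
# Counts taken from the Women's Health Study example discussed above.
n_total = 8149        # validation sample
n_any_shift = 469     # women whose risk category changed at all
n_up_to_high = 38     # women moved upward into the high-risk category

any_shift_pct = 100.0 * n_any_shift / n_total    # about 5.8%
actionable_pct = 100.0 * n_up_to_high / n_total  # about 0.5%
```

In other words, a headline reclassification rate near 6% shrinks by an order of magnitude once attention is restricted to the upward shifts that could plausibly change management.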
The search for novel risk factors for cardiovascular disease has contributed to the interest in circulating biomarkers from pathways implicated in cardiovascular disease. Studies of these biomarkers have provided valuable clinical data concerning underlying mechanisms of disease. They have also engendered the hope that these same biomarkers may be used to guide clinical care. However, it is important to recognize that studying biomarkers for the sake of understanding disease mechanisms is a distinct task from identifying biomarkers for use in the clinic. Indeed, a careful assessment of the evidence leads to the unavoidable conclusion that most current biomarkers are not ready for routine clinical use in the ambulatory setting.
The good news is that recent clinical studies underscore just how well traditional risk factors perform with regard to the prediction of future cardiovascular risk. Inadequate recognition and control of the ‘classic’ risk factors continues to account for a large number of avoidable cardiovascular events. At the same time, new insights into disease mechanisms will almost certainly lead to the development of novel preventive therapies, regardless of how well biomarkers themselves perform in risk stratification. Finally, state-of-the-art technologies that allow the profiling of large panels of genes, transcripts, proteins, or small molecules should facilitate the discovery of newer biomarkers capable of providing both mechanistic insight and true prognostic utility.
Conflict of interest: none declared.
Dr. Wang is supported by grants from the NIH/NHLBI (R01-HL-083197, R01-HL-086875) and the American Heart Association.