Schwartz et al. recently reported results from a study of the relationship between particulate matter less than 2.5 µm in diameter (PM2.5) and the daily number of deaths in Boston, Massachusetts (1). In their abstract, the authors state, “We found a causal association of PM2.5 with mortality … Given these results, prior studies, and extensive toxicological support, the association between PM2.5 and deaths is almost certainly causal” (1, p. 644). This claim is striking, partly because much toxicological (and epidemiologic) evidence does not support such a causal association at realistic exposure levels (2), and partly because the counterfactual modeling (propensity score) and instrumental variable methods the authors applied cannot by themselves determine from data whether, or to what extent, an association is causal. That is, their reported finding of a causal association does not result solely from an evaluation of empirical facts or from data analysis.

For example, they provide no indication that exposure is actually a Granger cause of mortality in this data set. Instead, they apply idiosyncratic methods, software, and a sensitivity analysis that they state “[follow] on the ideas in Granger causality” (1, p. 647), yet they appear to omit the key idea: objectively testing, using standard methods and software, whether mortality predictions are improved by including exposure history as a predictor. Nor do they develop a well-tested and validated causal graph (directed acyclic graph) or structural equation model showing exposure to be a potential causal predecessor of mortality (in the limited sense that mortality rates are not conditionally independent of exposure after conditioning on other variables).
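For illustration only, a conventional Granger-type check can be run in a few lines of R with the widely used lmtest package. The data frame boston and its columns deaths and pm25 below are hypothetical placeholders rather than the authors' data, and a serious analysis of daily death counts would use Poisson regression with controls for season and weather; the sketch simply shows the standard, reportable test of whether exposure history improves prediction of mortality.

library(lmtest)

# Hypothetical daily data frame `boston` with numeric columns `deaths`
# (daily death count) and `pm25` (daily mean PM2.5 concentration).
# Null hypothesis: 7 days of lagged PM2.5 add no predictive value for
# today's deaths beyond 7 days of lagged deaths.
grangertest(deaths ~ pm25, order = 7, data = boston)

A small P value from such a test would indicate only that exposure history improves prediction of mortality in this limited, linear sense, but reporting it alongside the instrumental variable results would make the “Granger causality” framing verifiable by readers.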

Instead, the authors rely on untested modeling assumptions and personal beliefs for their key conclusion about causality. As the text of the article explains:

Nothing is for free. We have traded the untestable common assumption made in most causal analyses (that there are no omitted confounders) for a different untestable assumption (that the instrument is not associated with any of the confounders) [emphasis added] (1, p. 646).

In our case, we believe that we can identify a valid instrument, which is implausibly associated with other predictors of mortality, and which explains enough of the day-to-day variation in particulate air pollution to have reasonable power [emphasis added] (1, p. 646).

Under these conditions, we believe that we have used a valid instrument (1, p. 648).

Assuming that our instrument is valid, we have demonstrated a causal association between PM2.5 and daily deaths  …  Assuming that there are no unmeasured variables that are correlated with both exposure and mortality rates within strata of propensity score, this [propensity score] analysis also provides a causal estimate [emphasis added] (1, p. 648).

Thus, the key conclusion that “We found a causal association of PM2.5 with mortality” should be coupled with an important caveat: “assuming the validity of untested conditions and beliefs that would jointly imply this conclusion.”

In practice, the causal conclusions drawn from counterfactual analyses, including propensity score methods and more elaborate marginal structural models, are typically very sensitive to model specification errors and untested assumptions (e.g., see Moore et al. (3)). Other methods of causal analysis are therefore needed to test for, and to develop conclusions robust against, unobserved confounders (e.g., see Tashiro et al. (4)) and model uncertainty (5).
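This specification sensitivity is straightforward to check directly. The R sketch below again uses the hypothetical data frame boston, here assumed (with complete data) to contain a 0/1 high-exposure indicator exposed, daily death counts deaths, and covariates temp and season, none of which are the authors' variables. It compares the exposure coefficient from a propensity-score-stratified Poisson model under two different propensity model specifications; materially different answers would signal that the “causal estimate” depends on modeling choices that the data alone do not settle.

# Two propensity model specifications for the same hypothetical exposure indicator
ps1 <- glm(exposed ~ temp + season, family = binomial, data = boston)$fitted.values
ps2 <- glm(exposed ~ poly(temp, 3) * season, family = binomial, data = boston)$fitted.values

effect_given_ps <- function(ps) {
  # Quintile strata of the estimated propensity score
  ps_strata <- cut(ps, breaks = quantile(ps, probs = seq(0, 1, 0.2)),
                   include.lowest = TRUE)
  # Stratum-adjusted log rate ratio for high-exposure days
  fit <- glm(deaths ~ exposed + ps_strata, family = poisson, data = boston)
  coef(fit)["exposed"]
}

c(spec1 = effect_given_ps(ps1), spec2 = effect_given_ps(ps2))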

Sarewitz recently warned, “Growing concerns about the quality of published scientific results have often singled out bad statistical practices and modelling assumptions … Scientists must have the self-awareness to recognize and openly acknowledge the relationship between their political convictions and how they assess scientific evidence” (6, p. 159). This warning is nowhere more important than in addressing issues of causation. Making fuller use of standard methods and software, such as Granger causality and conditional independence testing with existing, well-developed R packages (https://www.r-project.org/), and carefully qualifying statements of causal conclusions by emphasizing their dependence on any uncertain assumptions, will help practitioners approach this ideal of openly acknowledged assumptions.
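As a final illustration, the conditional independence checks mentioned above are equally routine. The sketch below uses the ci.test function in the bnlearn package on the same hypothetical data frame boston, restricted to assumed numeric columns deaths, pm25, temp, and dewpoint (again placeholders, not the authors' variables); the Pearson correlation version of the test presumes roughly linear, Gaussian relationships and is shown only to indicate how such a check could be reported.

library(bnlearn)

# Is mortality conditionally independent of PM2.5 given weather covariates?
# test = "cor" is the exact t test for a partial Pearson correlation and
# assumes continuous, approximately Gaussian variables.
ci.test(x = "deaths", y = "pm25", z = c("temp", "dewpoint"),
        data = boston[, c("deaths", "pm25", "temp", "dewpoint")],
        test = "cor")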

Acknowledgments

Conflict of interest: none declared.

References

1. Schwartz J, Austin E, Bind MA, et al. Estimating causal associations of fine particles with daily deaths in Boston. Am J Epidemiol. 2015;182(7):644-650.
2. Cox LA Jr. Hormesis for fine particulate matter (PM2.5). Dose Response. 2012;10(2):209-218.
3. Moore KL, Neugebauer R, van der Laan MJ, et al. Causal inference in epidemiological studies with strong confounding. Stat Med. 2012;31(13):1380-1404.
4. Tashiro T, Shimizu S, Hyvärinen A, et al. ParceLiNGAM: a causal ordering method robust against latent confounders. Neural Comput. 2014;26(1):57-83.
5. Díaz I, Hubbard A, Decker A, et al. Variable importance and prediction methods for longitudinal problems with missing variables. PLoS One. 2015;10(3):e0120031.
6. Sarewitz D. Reproducibility will not cure what ails science. Nature. 2015;525(7568):159.