When counterfactual questions are posed too far from the data, statistical inference can become highly dependent on seemingly trivial but indefensible modeling assumptions. Choosing one or only a few specifications to publish in the presence of model dependence makes conclusions largely nonempirical and subject to criticism from others who make different choices and find contradictory results. Until now, only relatively narrow attempts to assess model dependence have been used in the literature (such as trying a few different functional form specifications) and, as a result, many published inferences are far more uncertain than standard errors and confidence intervals indicate. When researchers ignore the uncertainty due to model dependence, scholarly works tend to have the flavor of merely showing that it is possible to find results consistent with ex ante hypotheses, rather than demonstrating their veracity. Consequently, the opportunities for researchers to use methods such as...
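The model-dependence problem described above can be illustrated with a small sketch (a hypothetical example constructed for this summary, not taken from the article): two specifications that fit the observed data almost equally well can give very different answers to a counterfactual query posed far outside the range of the data.

```python
# Hypothetical illustration of model dependence: a linear and a cubic
# specification agree closely where there is data, but diverge when
# asked a counterfactual question far from it.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)                      # observed covariates lie in [0, 1]
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, 50)     # true relationship is linear

# Two competing specifications, both fit by least squares.
lin = np.polynomial.Polynomial.fit(x, y, deg=1)
cub = np.polynomial.Polynomial.fit(x, y, deg=3)

def rmse(model):
    """In-sample root-mean-squared error of a fitted polynomial."""
    return float(np.sqrt(np.mean((model(x) - y) ** 2)))

# In-sample, the fits are nearly indistinguishable...
print(f"in-sample RMSE: linear={rmse(lin):.3f}, cubic={rmse(cub):.3f}")

# ...but a counterfactual prediction at x = 5, far outside [0, 1],
# depends heavily on which specification was chosen.
print(f"prediction at x=5: linear={lin(5):.2f}, cubic={cub(5):.2f}")
```

The in-sample fit gives almost no basis for choosing between the two models, yet the extrapolated answers differ; selecting one specification to publish makes the counterfactual conclusion rest on that modeling choice rather than on the data.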
