Abstract

This study assesses how survey outcome distributions change over repeated calls made to addresses in face-to-face household interview surveys. We consider this question for 541 survey variables, drawn from six major face-to-face UK surveys that have different sample designs, cover different topic areas, and achieve response rates between 54 and 76 percent. Using a multilevel meta-analytic framework, we estimate for each survey variable the expected absolute difference between the point estimate for a proportion at call n and for the full achieved sample. Results show that most variables are surprisingly close to the final achieved-sample distribution after only one or two call attempts and before any post-stratification weighting has been applied; the mean expected absolute difference from the final sample proportion across all 541 variables after one call is 1.6 percentage points, dropping to 0.7 percentage points after three calls and to 0.4 percentage points after five calls. These estimates vary only marginally across the six surveys and the different types of questions examined. Our findings add weight to the body of evidence that questions the strength of the relationship between response rate and nonresponse bias. In practical terms, our results suggest that making large numbers of calls at sampled addresses and converting “soft” refusals into interviews are not cost-effective means of minimizing survey error.
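The core quantity in the abstract, for a binary variable, is the absolute difference between the proportion estimated from interviews achieved by call n and the proportion in the full achieved sample. The sketch below illustrates this calculation on simulated data; it is not the authors' code, and the sample size, call distribution, and the mild dependence of the outcome on call number are all invented for illustration.

```python
# Illustrative sketch (simulated data, not the study's): for one binary
# survey variable, compute the absolute difference, in percentage points,
# between the estimate based on interviews achieved by call n and the
# final achieved-sample estimate.
import numpy as np

rng = np.random.default_rng(42)

n_resp = 2000
# Hypothetical call number (1-8) at which each interview was achieved.
call_at_interview = rng.integers(1, 9, size=n_resp)
# Hypothetical binary outcome, mildly correlated with call number so
# that later responders differ slightly from earlier ones.
p_true = 0.40 + 0.01 * call_at_interview
outcome = rng.random(n_resp) < p_true

final_prop = outcome.mean()  # full achieved-sample proportion
for n in range(1, 9):
    achieved_by_n = call_at_interview <= n
    diff_pp = abs(outcome[achieved_by_n].mean() - final_prop) * 100
    print(f"after call {n}: |difference| = {diff_pp:.2f} pp")
```

By construction the difference shrinks to zero at the maximum call number, since the call-n sample then coincides with the full achieved sample; the study's finding is that, empirically, most of this convergence happens within the first few calls.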
