Adam Glynn is an associate professor of government at Harvard University, Cambridge, MA, USA. An earlier version of this article was presented at the 2010 Midwest Political Science Association conference and at the 2010 Summer Conference for the Society of Political Methodology. The author would like to thank the editors and three anonymous reviewers as well as Adam Berinsky, Matt Blackwell, Justin Grimmer, Chase Harrison, Sunshine Hillygus, Kosuke Imai, Gary King, Richard Nielsen, Dave Peterson, Kevin Quinn, and Dustin Tingley for their helpful comments and suggestions. The usual caveat applies.
Due to the inherent sensitivity of many survey questions, a number of researchers have adopted an indirect questioning technique known as the list experiment (or the item-count technique) in order to reduce dishonest or evasive responses. However, standard practice with the list experiment requires a large sample size, utilizes only a difference-in-means estimator, and does not provide a measure of the sensitive item for each respondent. This paper addresses all of these issues. First, the paper presents design principles for the standard list experiment (and the double list experiment) that reduce bias and variance, and it provides sample-size formulas for the planning of studies. Second, the paper proves that a respondent-level probabilistic measure of the sensitive item can be derived, which provides a basis for diagnostics, improved estimation, and regression analysis. The techniques in this paper are illustrated with a list experiment from the 2008–2009 American National Election Studies (ANES) Panel Study and an adaptation of this experiment.
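To fix ideas, the difference-in-means estimator mentioned above can be sketched with simulated data. In a standard list experiment, control respondents report how many of J innocuous items apply to them, while treatment respondents see the same list plus the sensitive item; the difference in mean counts estimates the prevalence of the sensitive item. The sketch below is purely illustrative and is not from the paper: the number of items, the prevalence value, and the independence assumptions are all hypothetical.

```python
import random

random.seed(0)

# Hypothetical simulation of a list experiment (item-count technique).
# Control respondents count how many of J innocuous items apply to them;
# treatment respondents see the same list plus one sensitive item.
J = 3                    # number of innocuous (control) items (assumed)
true_prevalence = 0.30   # assumed share holding the sensitive trait
n = 5000                 # respondents per arm (assumed)

def innocuous_count():
    # Each of the J control items applies independently with probability 0.5
    # (a simplifying assumption for illustration only).
    return sum(random.random() < 0.5 for _ in range(J))

control = [innocuous_count() for _ in range(n)]
treatment = [innocuous_count() + (random.random() < true_prevalence)
             for _ in range(n)]

# Difference-in-means estimator of the sensitive item's prevalence:
# mean count in the treatment arm minus mean count in the control arm.
estimate = sum(treatment) / n - sum(control) / n
print(f"estimated prevalence: {estimate:.2f}")
```

Because the innocuous counts have the same expectation in both arms under randomization, their contribution cancels in expectation, leaving an unbiased estimate of the sensitive item's prevalence; the large variance of the item counts, however, is one reason the technique typically demands the large samples discussed above.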