Abstract

By the late 1970s, household telephone coverage in the United States exceeded 90 percent, and by the mid-1980s telephone surveying of the general public had become commonplace. Twenty years later, however, the ability of researchers to reach representative samples of the U.S. public via landline (wired) telephone surveys and gather reliable data is being seriously challenged for many reasons, especially those related to cell phones and the growth of the “cell phone only” population. Yet at present there exists no widely accepted set of Cell Phone Surveying “best practices” for U.S. survey researchers to follow. Despite what some appear to believe, surveying persons reached on cell phone numbers in the United States currently is a very complex undertaking if one wants to do it “right,” i.e., legally, ethically, and in ways that optimally allocate one's finite resources to gather the highest quality data and to analyze and interpret those data accurately. This final “wrap-up” article in the special issue reviews the empirical articles in the issue, focusing on their practical implications for the decisions researchers must make regarding sampling, coverage, nonresponse, measurement, and weighting in surveys that include interviews with persons reached on cell phones. The article also highlights the practical implications of a number of legal, ethical, and other issues that relate to U.S. surveys that include cell phone numbers. Surveying the U.S. cell phone population is possible, albeit at a higher cost than surveying the landline population and, at present, with less precision. The next five years should see considerable growth in the methodological and statistical know-how that the survey community uses to plan, implement, and interpret cell phone surveys. There is a great deal that still must be learned.

Survey methods that sample and gather data from respondents on their landline telephones have undergone concerted and serious methodological development only in the past 30 years (cf. Groves et al. 1988; Lepkowski et al. 2007). Prior to that time, the penetration (coverage) of households with telephones in the United States, Europe, and elsewhere had been thought to be too low to justify using the telephone as a representative sampling mode for surveys of the general public, and thus only a relatively sparse body of methodological literature had developed. However, by the late 1970s, household telephone coverage grew to exceed 90 percent in the United States, and by the mid-1980s telephone surveying of the general public had become commonplace for academic, commercial, and government surveys. As these changes took place, the methodological literature on telephone survey methods began to mushroom.

Nevertheless, 20 years later, the ability of researchers to reach representative samples of the U.S. public via wired/landline telephone surveys and gather reliable data is being seriously challenged. The main reasons are (a) changes in the lifestyle preferences of Americans pertaining to their use of personal telephones, in particular the switch to becoming “cell phone only”; (b) changes in U.S. government telecommunication regulations; and (c) business practices in the U.S. telecommunication industry related to wireless cell phones.

There are unique challenges to telephone surveying of the general public in the United States, though telephone surveys worldwide also have had to adjust to new modes of telecommunication. The challenges are not necessarily threats to the validity of using telephone surveys in the United States to reach special subsets of the population covered by list frames (e.g., employees within a corporation; students currently enrolled at a university; members of a professional organization; clients in satisfaction surveys; subsequent waves of panel surveys in which the first wave was conducted in person; etc.). However, for researchers who want to conduct representative and otherwise valid surveys of the general public of the United States using the telephone mode of sampling and data collection, there are many new developments to consider, including several for which at present there is no satisfactory body of evidence upon which to base confident decisions about how to proceed. These challenges span the gamut of concerns that together define the concept of Total Survey Error (cf. Groves 1989; Lavrakas 1996), including coverage, sampling, nonresponse, measurement, and other analytic issues.

It is for this reason that a group of approximately 30 telephone survey professionals in the United States began working together, and with others, over five years ago to (a) identify the known problems that must be solved and the unknowns that must be investigated to close knowledge gaps concerning the integration of cell phone numbers into U.S. telephone surveys, (b) generate new empirical evidence to help close these knowledge gaps, and (c) bring together groups of scholars and practitioners from the academic, commercial, and government sectors to share knowledge, debate the meaning and implications of that knowledge, and identify the needed additional research.1

These efforts have included (a) two so-called Cell Phone Sampling Summits held in 2003 and 2005 in New York City (organized and sponsored by Nielsen Media Research; see http://www.nielsenmedia.com/cellphonesummit/cellphone.html); (b) a three-day “cell phone mini-conference” within the 2007 annual conference of the American Association for Public Opinion Research (AAPOR) in Anaheim, CA; (c) this special issue of Public Opinion Quarterly focusing on cell phone numbers in U.S. telephone surveying; and (d) a Task Force on Cell Phone Surveying in the United States, convened in the summer of 2007 under the auspices of the AAPOR Standards Committee, to allow AAPOR to issue a “state-of-the-art” report in 2008 – a report the AAPOR Executive Council anticipates will be updated on a regular basis as new knowledge and experience are gained.

As of the time this special issue of Public Opinion Quarterly is being published, there exists no widely accepted set of Cell Phone Surveying “best practices” for U.S. survey researchers to follow regarding how to plan, conduct, and interpret surveys of respondents who are reached on wireless cell phone numbers. There is nonetheless a fair amount of empirically based methodological and statistical knowledge and experience. Further, many experts agree on certain legal, ethical, and operational matters that researchers should consider as they plan, conduct, and interpret U.S. telephone surveys that include cell phone numbers. Despite what some appear to believe, surveying persons reached on cell phone numbers in the United States is currently a very complex undertaking if one wants to do it “right.” And by “right,” we mean to do it legally, ethically, and in ways that optimally allocate one's finite resources to gather the highest quality data, and to analyze and interpret those data accurately.

Empirical Evidence about Cell Phone Numbers and Telephone Surveys in the United States

This special issue of Public Opinion Quarterly contains some of the latest and most important empirical articles on the state of conducting telephone surveys in the United States that include cell phone numbers. The research reported in this special issue has built on other empirical work that is well summarized by Steeh and Piekarski (2007). Much of that earlier work also is referenced in many of the articles in this special issue and will not be repeated here. This section highlights the “implications” of these articles for researchers considering how best to plan, implement, and interpret telephone surveys that sample respondents on cell phone numbers in the United States.

The Ehlen and Ehlen (2007) article in this issue uses an econometric model to forecast the growth of the “cell phone only” population and some of its demographic characteristics, using data from 39 semi-annual observations starting in the first half of 1987. Complementing other research that has gathered original survey data about the proportion of U.S. adults who can be reached only on a cell phone if one is conducting a telephone survey (e.g., Blumberg and Luke 2007a), the Ehlen and Ehlen model is based on secondary analyses of 20 years of existing academic, commercial, and government survey data and “posits that a stable behavioral process, the rate of habit retention, can be estimated from prior wireless lifestyle adoption in the U.S.” Their “cell phone only” modeling predicts that by the end of 2009 more than 40 percent of U.S. adults under 30 years of age will have adopted a “cell phone only” lifestyle, while less than 5 percent of adults 65 years of age and older will have done so. This growing gap between what telephone surveys must do to reach younger adults and what they must do to reach older adults makes it all but certain that U.S. telephone researchers will need to sample both landline and cell phone numbers for the foreseeable future, and these dual frame surveys will need to determine the likelihood that each respondent could have been reached via either frame. Furthermore, as Ehlen and Ehlen suggest, researchers should consider their forecasts of the size of the “cell phone only” population by age cohort when making weighting adjustments to national telephone surveys that sample both cell phone and landline numbers, given that (a) the size of the “cell phone only” population is changing rapidly and (b) national survey estimates of these parameters (e.g., those from the NHIS) are essentially a year or more out of date by the time they are released.
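To make the “either frame” likelihood issue concrete, consider the textbook dual-frame composite estimator (a Hartley-type sketch offered purely for illustration; it is not the specific estimator used by any article in this issue). Respondents are partitioned into landline-only, overlap, and cell-only domains, and the overlap domain, reachable through both frames, is composited:

$$\hat{Y} \;=\; \hat{Y}^{L}_{\text{landline-only}} \;+\; \lambda\,\hat{Y}^{L}_{\text{overlap}} \;+\; (1-\lambda)\,\hat{Y}^{C}_{\text{overlap}} \;+\; \hat{Y}^{C}_{\text{cell-only}}, \qquad 0 \le \lambda \le 1,$$

where the superscripts denote the frame (L = landline, C = cell) from which each domain estimate is computed and $\lambda$ is a mixing parameter. Using any such estimator requires knowing each respondent's domain, which is why dual frame surveys must ask about telephone service in both frames.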

The article by Blumberg and Luke (2007b) adds new information to the published literature about the non-negligible (non-ignorable) coverage error that may occur in telephone surveys that use only a landline frame, in particular when younger and/or lower income adults are being surveyed. As these authors report, “Even after [performing] statistical adjustments that account for demographic differences between adults living in households with and without landlines, telephone surveys of landlines [still] will underestimate the prevalence of health behaviors such as binge drinking, smoking, and HIV testing” (cf. Delnevo et al. in press). Blumberg and Luke also identify other health-related behaviors that appear to be overestimated if cell phone numbers are not included in a telephone survey of the general public. Furthermore, they note that although sample weighting for multiple demographics continues to be needed to help adjust for coverage bias in landline samples, these adjustments by themselves may not be adequate to eliminate that bias for certain demographic groups, such as low-income young adults.

As these authors speculate, given that the “cell phone only” population will grow in size, the need to sample this cohort will also grow for researchers who use telephone surveys if they want to avoid coverage errors that have practical significance. Blumberg and Luke further observe that, “These results suggest that there are unknown differences between adults with and without landlines and that these unknown differences are related to [other] lifestyle preferences… .” New research in the coming years will help to identify much of what now is unknown, but for the present time researchers are forewarned that it is their responsibility to their sponsors/clients to make informed decisions about whether or not the general public telephone surveys they conduct need to include respondents sampled from the U.S. cell phone number frame.

The article by Kennedy (2007) provides important insights into coverage, nonresponse, and measurement error issues for those surveying U.S. respondents via the telephone, along with important information about cost implications. The author notes that, “Screening out adults with landline telephones from the cellular sample does not affect the coverage properties of a dual frame survey, but it may affect other sources of error… .” In particular, if those with both a landline and a cell phone have different response propensities depending on whether they are contacted via their landline phone or their cell phone – and if they also have different attitudes, behaviors, and other key characteristics that are being measured by a given survey – then screening them out of one mode of contact or the other (e.g., screening out those contacted via a cell phone who also have a landline) may inflate nonresponse bias. Thus, although it is cost-effective to screen in for “cell phone only” status when sampling the cell phone frame, doing so may not be cost-beneficial from a total survey error standpoint.
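The mechanism Kennedy describes can be made concrete with the standard first-order approximation for the nonresponse bias of a respondent mean (a general result from the nonresponse literature, not a formula taken from Kennedy's article):

$$\operatorname{bias}(\bar{y}_r) \;\approx\; \frac{\operatorname{cov}(\rho, y)}{\bar{\rho}},$$

where $\rho$ is a person's response propensity, $y$ is the survey variable, and $\bar{\rho}$ is the mean propensity. Screening dual-service respondents out of the cell phone frame drives their propensity of responding through that frame to zero; if their values of $y$ differ systematically from those of the respondents who remain, the covariance term, and hence the bias, grows.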

One interpretation of Kennedy's findings is that researchers using the telephone to survey the U.S. general public, at least during the next few years, should, whenever funding allows, guard against the error that may result from screening out of the cell phone sample those who also have a landline. Thus, it currently appears prudent to interview those sampled via the cell phone frame who also have a landline, rather than screening them out.

Keeter et al. (2007) provide additional evidence about what happens in telephone surveys of the U.S. general public that do not include interviews with those reached on cell phone numbers – evidence that on the surface might be read as conflicting with the coverage bias findings reported by other researchers. Overall, using the results of four national surveys conducted in 2006, they found that merging data from “cell phone only” respondents with data from landline respondents (i.e., sampled respondents who had both a landline and a cell phone or who had only a landline) had essentially negligible (ignorable) effects on survey statistics at the level of the entire adult population. However, consistent with the findings of Blumberg and Luke (2007b), Keeter et al. determined that excluding the “cell phone only” population from a telephone survey will bias some survey statistics for the 18- to 25-year-old adult cohort. Thus, given Ehlen and Ehlen's (2007) forecast that more than two-fifths of this cohort will be “cell phone only” by the end of 2009, very sizable coverage errors associated with young adults are possible in future U.S. general population telephone surveys that do not sample cell phone numbers (cf. Delnevo et al. in press).

The article by Brick, Edwards, and Lee (2007) reports on a California state health survey that used the cell phone frame to sample and screen for “cell phone only” households. A unique aspect of this study was its use of a within-household selection procedure to sample one adult from “cell phone only” households that share their cell phone with other adult household members. The authors concluded that “the study was not definitive about the [efficacy of interviewing] a different adult when a cell phone is shared.” Thus, researchers will need to follow future methodological reports on this topic, and also track findings from other studies that determine the proportion and characteristics of Americans who share a cell phone compared to those whose cell phones are not shared.

The Brick et al. study also demonstrated that interviews with those reached on a cell phone need not necessarily be kept short. Rather, they found that their respondents were willing to complete a survey questionnaire that took 30 minutes on average. (That the questionnaire concerned a topic, health, that interests many in the general public, and was conducted for the state government, may well have contributed to this level of cooperation given its length.) These authors also provide a detailed explication of the weighting methods they used with the “cell phone only” sample and the considerations that guided how best to do this. All telephone researchers who include respondents sampled on cell phones are encouraged to take note of what Brick et al. report on the weighting of “cell phone only” completions.
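Brick et al.'s article should be consulted for the details of their weighting scheme; purely as a sketch of the general kind of adjustment involved, the routine below performs minimal raking (iterative proportional fitting) of base weights to known population margins. The data, margins, and variable names are hypothetical.

```python
from collections import defaultdict

def rake(rows, base_weights, margins, max_iter=100, tol=1e-8):
    """Iterative proportional fitting: adjust base weights until weighted
    category totals of each variable match the supplied population margins.
    `rows` is a list of dicts; `margins` maps variable -> {category: total}."""
    w = list(base_weights)
    for _ in range(max_iter):
        max_shift = 0.0
        for var, targets in margins.items():
            totals = defaultdict(float)
            for row, wi in zip(rows, w):
                totals[row[var]] += wi  # current weighted total per category
            for i, row in enumerate(rows):
                factor = targets[row[var]] / totals[row[var]]
                w[i] *= factor  # scale so this margin matches its target
                max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:  # all margins (nearly) satisfied
            break
    return w

# Hypothetical "cell phone only" completions raked to two margins
# (population totals invented for illustration).
sample = [
    {"age": "18-29", "sex": "F"},
    {"age": "18-29", "sex": "M"},
    {"age": "30-49", "sex": "F"},
    {"age": "30-49", "sex": "M"},
]
weights = rake(sample, [1.0] * len(sample),
               {"age": {"18-29": 55.0, "30-49": 45.0},
                "sex": {"F": 52.0, "M": 48.0}})
print([round(x, 2) for x in weights])
```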

Link et al. (2007) reported on three state-level surveys in which a “cell phone only” sample was interviewed, as well as a sample of respondents who had both a cell phone and a landline. A unique part of their study was a series of debriefings with their telephone interviewers, which helped them identify a number of operational lessons that other researchers should consider when planning and conducting U.S. surveys of respondents reached via cell phone. These include the following:

1. Evening weekday hours were the most productive times for completing interviews with those reached on their cell phones.

2. Interviewers should let a call to a cell phone ring longer than is traditional when calling a landline, to make certain that voice mail picks up or an operator message plays if a human respondent does not answer first.

3. Even if a cell phone respondent has said that s/he is willing to proceed with the interview, the interviewer should stay alert to cues suggesting that it would be best not to continue with data collection at that time and instead to schedule a callback (e.g., instances in which the interviewer can tell that the respondent is operating a motor vehicle).

4. In part because most people reached on a cell phone are likely to be “the designated respondent” for that telephone number, the procedures used to decide whether to try to convert cell phone refusals need to differ from those generally used for landline household-level refusals – which often are given by someone other than the designated respondent – both in when to recontact the number (e.g., how many days to wait) and in how to do so (e.g., what the interviewer first says when making contact on the conversion attempt).

5. Additional disposition codes need to be devised to identify call outcomes that are unique to U.S. cell phones compared to the standard codes used in U.S. landline telephone surveys (viz. AAPOR 2006), and new calling rules need to be programmed into computer-assisted telephone interviewing (CATI) systems to properly handle further processing of numbers with temporary cell-phone-specific dispositions (see the sketch following this list).
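As a purely hypothetical illustration of the fifth lesson, the sketch below pairs invented cell-specific temporary dispositions with the kind of callback rules a CATI scheduler might apply to them. Neither the labels nor the delays are drawn from AAPOR (2006) or from Link et al.; they are assumptions made for illustration only.

```python
from enum import Enum
from datetime import datetime, timedelta

class CellDisposition(Enum):
    """Hypothetical temporary dispositions specific to cell phone calls."""
    VOICEMAIL = "voice mail reached"
    DRIVING_CALLBACK = "respondent driving; callback arranged"
    OUT_OF_MINUTES = "respondent out of prepaid minutes"
    DEVICE_OFF = "carrier message: phone off or unreachable"

# Hypothetical calling rules: how long the CATI scheduler should wait
# before re-releasing a number with each temporary disposition.
CALLBACK_DELAY = {
    CellDisposition.VOICEMAIL: timedelta(days=1),
    CellDisposition.DRIVING_CALLBACK: timedelta(hours=2),
    CellDisposition.OUT_OF_MINUTES: timedelta(days=7),
    CellDisposition.DEVICE_OFF: timedelta(days=2),
}

def next_attempt(last_attempt: datetime, disposition: CellDisposition) -> datetime:
    """Earliest time at which the scheduler may redial the number."""
    return last_attempt + CALLBACK_DELAY[disposition]

# Example: a weekday-evening voice mail outcome is redialed the next evening.
print(next_attempt(datetime(2007, 11, 5, 19, 30), CellDisposition.VOICEMAIL))
```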

Link et al. also confirmed what others have reported: that response rates in cell phone samples in the United States are lower than in otherwise comparable landline samples. They also provide evidence consistent with several of the other articles in this special issue that, “Although most landline surveys post-stratify by factors such as sex, age, race, and education, it appears that many of those in [“cell phone only”] households are different to a non-negligible extent in at least some of the [key variables gathered from them] despite these characteristics, such that weighting to these factors does not account fully for the differences.”

There is another result from the Link et al. study that all U.S. telephone researchers need to watch very closely in the next few years. This is the finding that the measured characteristics of those respondents who have a cell phone may differ depending on whether they are sampled and interviewed via a landline or a cell phone. Others (e.g., Steeh 2007) have called attention to this phenomenon. Until this is better understood, it can be argued that this is a further reason that telephone researchers of the U.S. general public should not screen their cell phone samples to interview just “cell phone only” respondents.

Additional Considerations for U.S. Surveys Calling Cell Phone Numbers

In addition to the empirical evidence reported in the articles in this issue and by others (Brick et al. 2007; Keeter 2006; Steeh and Piekarski 2007), there is a growing body of “recommendations” that anyone who is planning a telephone survey in the United States that includes cell phone numbers should consider. These considerations stem from the professional knowledge-sharing and discussions that have taken place in the past five years and which will appear in a more detailed and comprehensive form in 2008 from AAPOR.

Legal Considerations in Calling U.S. Cell Phone Numbers

Stemming from the 1991 Telephone Consumer Protection Act (TCPA) (47 U.S.C. 227), the U.S. Federal Communications Commission (FCC) (47 C.F.R. 64.1200) has set regulations that many, including CMOR,2 have interpreted to mean that survey organizations trying to contact U.S. cell phone numbers should never dial those numbers with any type of mechanized dialer without express prior consent. In the absence of such consent, survey organizations instead should have an interviewer hand (manually) dial cell phone numbers. Once contact has been made with a respondent on a cell phone number, and if the respondent gives permission for the survey organization to call back on that number, no legal restriction prevents the organization from using a mechanized dialer to recontact that cell phone number.

The effects of the regulations and their current interpretation on gaining cooperation from cell phone owners in RDD surveys are two-fold (cf. Lavrakas 2004). First, they place real limitations that carry the weight of the federal and state governments on what survey researchers should and should not be doing when conducting RDD surveys, including those that use only a landline frame. Second, and as important, not everyone who learns about the existence of telephone regulations has an accurate understanding of them. There are a large number of adults in the United States who hold incorrect perceptions about what a survey organization can and cannot do in sampling them via their cell phone number. These individuals are likely to be less willing to cooperate when sampled in an RDD survey, even when survey organizations have trained their interviewers how to try to address these misunderstandings. Evidence of the confusion comes from a CMOR-sponsored national mail survey of nearly 500 U.S. adults conducted in 2002 that found that one in five did not perceive any difference whatsoever between the calls they receive from telemarketers and those they receive from survey and market researchers (Lavrakas, Lai and Shepard 2003). Other research reported at the October 2003 CASRO3 annual conference indicated that about 30–40 percent of Americans would like to have the equivalent of a “Do Not Call” list for survey and market research studies.

For survey organizations that do not use mechanized dialing systems, there appear to be no FCC-imposed dialing restrictions if their RDD sampling reaches U.S. cell phone numbers. However, survey organizations that use predictive dialers or other mechanized dialers – even ones that simply use modems hooked to their CATI systems to place calls for their interviewers – must know in advance of dialing which numbers are cell phones in order to avoid violating the federal regulations.

For U.S. cell phone number sampling frames, all of the numbers need to be hand-dialed. For U.S. landline frames, all of which include cell phone numbers for various reasons, there are several ways that an RDD sample can be examined to predetermine which numbers are cell phones that should be hand-dialed. Neustar and Telcordia are organizations that provide data that can be used to identify cell phone numbers. In addition, survey-sampling companies such as Marketing Systems Group (MSG) and Survey Sampling Inc. (SSI) will screen RDD sampling pools against their extensive cell phone number databases. However, none of these approaches is 100 percent reliable in identifying all the cell phones within an RDD landline sample. Thus, any survey organization placing a call with a mechanized dialing system as part of an RDD U.S. landline survey may be in violation of the federal regulations, and subject to penalties, when it reaches someone on a cell phone.
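In operational terms, this pre-dial screening amounts to partitioning the sample before it is loaded into the dialer. The sketch below assumes the researcher has obtained a set of known wireless exchanges from one of the vendors named above; the data format and the numbers themselves are hypothetical. As the text stresses, no such list is complete, so this routing reduces, but cannot eliminate, the risk of auto-dialing a cell phone.

```python
# Hypothetical set of known wireless (area code, exchange) pairs, as might
# be compiled from vendor data; real vendor files have their own formats.
WIRELESS_EXCHANGES = {("614", "580"), ("203", "727")}

def route_number(phone: str) -> str:
    """Route a 10-digit RDD number: flagged numbers go to hand dialing;
    the rest may be loaded into the mechanized dialer."""
    area, exchange = phone[:3], phone[3:6]
    return "hand_dial" if (area, exchange) in WIRELESS_EXCHANGES else "auto_dial"

sample = ["6145801234", "2125551234", "2037275678"]
queues = {"hand_dial": [], "auto_dial": []}
for number in sample:
    queues[route_number(number)].append(number)
print(queues)
```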

The TCPA stipulates that anyone using an automatic dialer (including someone conducting a survey) who reaches a cell phone number is subject to “private action” being filed against the organization in state court by the “victim” (i.e., the cell phone owner) for each number so contacted.4 If the cell phone owner objects the first time s/he is reached on her/his cell phone by a survey and voices that objection to the interviewer, or if s/he does not object but notes to the interviewer that s/he was reached on a cell phone, then a survey organization using a mechanized dialing system would be in a position to remove that cell phone number from its future RDD landline samples to avoid possible future violations of the U.S. regulations. Or, the organization could process that cell number manually in subsequent contact attempts, if surveying those reached on cell phone numbers is a requirement of its sampling design. However, not all respondents contacted on their cell phones by survey organizations using mechanized dialers will say anything to alert the interviewer that the contact was made to a cell phone, unless the interviewer asks about this explicitly.

Thus, the most conservative approach for a U.S. telephone survey organization would be to ask, in the introduction or early in the questionnaire, whether the number dialed is a cell phone or a landline every time contact is made on an assumed landline. Ideally, this would be done both for respondents who cooperate with the survey and for those who refuse. (Obviously, with refusing respondents it will be much harder to gather the needed information about the nature of the phone line reached.) In addition, whenever an answering machine is reached and an interviewer can listen to the message – as opposed to a predictive dialer using software to determine whether a human or a machine was reached – a survey organization using a mechanized dialer should consider modifying its outcome codes so as to capture, whenever possible, information about every number that reaches an answering device/service whose message indicates that it is, or is likely to be, a cell phone number.

Through these various approaches, survey firms using mechanized dialers can do quite an extensive job of eliminating cell phone numbers from their future RDD landline sampling, and/or dialing those numbers manually, and thereby reduce the chance that they will be in violation of the U.S. federal regulations. But this can never be done with 100 percent accuracy.

An additional legal concern for those planning a U.S. telephone survey that includes cell phone numbers is the use of text messaging. Researchers considering sending text messages to cell phones as part of their survey protocols (e.g., as the equivalent of an advance contact letter) face not only the TCPA restrictions on sending a text message via any automated mechanism without express prior consent, but could also be subject to the CAN-SPAM Act (16 CFR Part 316), which regulates commercial email (spam). Although this rule is under dispute following several court cases, telephone researchers should consider including opt-out notices in text messages as a precaution, or should avoid sending text messages entirely.

Ethical Considerations

A first ethical consideration relates to the nature of cell phone technology, which allows a respondent to be engaged in numerous activities, and to be physically present in various locations, that normally would not be expected when reaching someone on a landline number. In particular, the operation of a motor vehicle or any other type of potentially harmful machinery by a respondent during a cell phone interview presents a potential hazard to the respondent and to anyone else in the general vicinity (e.g., fellow passengers in the car). Recognizing this, any researcher who conducts a survey that includes cell phone respondents should take appropriate measures to protect the safety of the respondent and those around the respondent. This will decrease the chance that harm will result from someone participating in an interview on a cell phone and the chance that the survey organization will be held accountable for that harm.

Second, because of the current cost structure of cell phone billing in the United States, there likely will be a financial burden on the respondent for an incoming survey call to a cell phone – something that does not occur when one is sampled and interviewed on a landline phone. Therefore, when appropriate, survey respondents reached on their cell phones should be offered proper remuneration for their time on a research call. This reimbursement should be viewed as a goodwill gesture on the part of the survey organization – one that is separate from any incentive that researchers may choose to offer cell phone respondents to increase their response propensity. However, it may turn out that many respondents will not claim the reimbursement, because they must provide contact information to receive it and may prefer not to do so.

Third, researchers planning telephone surveys of those reached on their cell phone should strive to leave the respondent with “a good experience.” Because people often are under special time constraints and special pressures when speaking on their cell phones, survey researchers should take this explicitly into account whenever planning the length of a questionnaire that will be used to interview someone on a cell phone. Until more empirical evidence comes forth about the effects of questionnaire length in U.S. cell phone surveys on nonresponse and data quality, researchers should try especially hard to keep cell phone interviews short (e.g., 15 minutes or less).

Fourth, surveys of cell phone numbers in the United States will reach proportionally more non-adults (i.e., minors) than will surveys of landline numbers. Thus, researchers should take extra care to determine whether calls have reached an adult, and interviewers should be well trained in what to do when they have not.

Other Data Quality Considerations

Current experience suggests that many cell phone users are willing to talk in all kinds of locations, including public spaces and semi-private places, seemingly oblivious to those around them. Nevertheless, a survey respondent reached on a cell phone may consciously or unconsciously censor responses because of a lack of privacy. Thus the accuracy of responses, depending on the sensitivity of the research questions, may suffer.

In addition, the level of effort that a survey respondent commits to answering questions accurately may be diminished when being interviewed on a cell phone versus a landline. Since people appear to engage in more multitasking when speaking on cell phones, the attention that respondents give to the question-answering task may be lessened. Concern about data quality may be exacerbated by the generally lower fidelity of telecommunications over a cell phone compared to a landline in the United States.

Because of these factors, researchers should have interviewers try to determine whether a cell phone respondent is in an environment and is reached at a time that is conducive to providing full and accurate answers. One way to do this is to have interviewers explicitly ask respondents if they are in a position to provide full and accurate data; if not, scheduling a callback should be considered. Another way is for researchers to build into their instruments short sequences of questions that offer insight into the quality of the data the respondent is providing, including test–retest reliability questioning, social desirability items, and/or factual knowledge testing. Researchers also should investigate the amount and nature of missing data from cell phone respondents and the richness of detail that is given to open-ended questions. Finally, until more is known on this topic, researchers should consider asking all their cell phone respondents whether they have been reached at home or away from home and then compare data quality between the two groups using various demographic controls.
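As a small sketch of the last suggestion, using entirely hypothetical records, the code below compares item nonresponse between cell respondents interviewed at home and away from home; the same pattern extends to the other checks mentioned, such as the length of open-ended answers.

```python
def item_missing_rate(respondents, items):
    """Fraction of item responses that are missing (None) across `items`."""
    flags = [(r[q] is None) for r in respondents for q in items]
    return sum(flags) / len(flags)

# Hypothetical cell phone completions flagged by interview location.
respondents = [
    {"at_home": True,  "q1": 3,    "q2": "a detailed open-ended answer"},
    {"at_home": True,  "q1": 4,    "q2": "another full answer"},
    {"at_home": False, "q1": None, "q2": None},
    {"at_home": False, "q1": 5,    "q2": "short"},
]
for label, flag in (("at home", True), ("away from home", False)):
    group = [r for r in respondents if r["at_home"] == flag]
    print(label, round(item_missing_rate(group, ["q1", "q2"]), 2))
```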

Geography Considerations

The geographic precision with which one can sample U.S. residents on cell phone numbers is considerably less than when one samples from a landline telephone frame, a particular problem for non-national surveys. There are many reasons for this, including number portability, which went into effect in the United States in November 2003 and is linked to federal regulations permitting a household to keep its existing telephone number(s) when it moves, changes carriers, and/or switches from landline to cell phone service. To be certain that one is sampling only those who are geographically eligible for a given telephone survey, researchers will need to build some form of geographic screening into their introductory scripts or questionnaires.

Geographic screening in RDD sampling is not a new issue. Telephone researchers have had to devise ways to determine whether a sampled household/respondent lives within the eligible geographic area (cf. Lavrakas 1993, 118–19). A general rule of thumb here is that the larger the geographic area that is being sampled, the easier it is to accurately screen in those who live within the area and to screen out those who live outside the area.

For example, for geo-screening in an RDD survey of an area with known geopolitical boundaries in the United States, such as a state, county, or city, all respondents can be asked whether they live within “X,” where X is the name of the state, county, or city. The vast majority of respondents will agree to answer these questions, and very few will give erroneous answers, provided they do not know what answer(s) will qualify them for, or disqualify them from, the survey (cf. Lavrakas 1993, 31–32). Thus, in such instances, geo-screening will yield very few errors of omission (false negatives) or errors of commission (false positives).

In other cases the U.S. geography that needs to be sampled can be defined by a set of postal zip codes. This too can be effectively used to screen people in or out of the area that is to be sampled. And, again, experience indicates that very few people will refuse to cooperate if a zip code screening procedure is added at the end of the introduction of a survey (cf. Schejbal and Lavrakas 1995).
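Operationally, a zip-code screener is very simple, which is one reason it adds so little burden at the end of a survey introduction. The short sketch below, with a hypothetical eligible-zip set, is essentially all the logic required; as noted above, self-report errors remain possible.

```python
# Hypothetical set of zip codes defining the target sampling area.
ELIGIBLE_ZIPS = {"60201", "60202", "60208"}

def screens_in(reported_zip: str) -> bool:
    """True if the respondent's self-reported zip code is in the area."""
    return reported_zip.strip() in ELIGIBLE_ZIPS

assert screens_in("60202") and not screens_in("10001")
```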

It is when the geographic area that needs to be sampled does not conform to any well-known geopolitical boundary that screening for RDD surveys becomes difficult, but not necessarily impossible. In these instances, e.g., sampling people who live within a certain set of metropolitan police districts, the researcher must devise a way to describe the boundaries of the area and determine whether the sampled respondent lives within or outside them. Often, this is difficult to do and not surprisingly it leads to many more errors of omission and commission when deployed (Schejbal and Lavrakas 1995).

As it relates to cell phone numbers and RDD sampling, the need for geographic screening will increase as the percentage of the public who have ported their phone numbers, or simply moved to another geopolitical area, grows. Geographic screening as part of an RDD survey's introduction is reasonably feasible in most instances, but it is costly and will contribute coverage error when certain people (e.g., those with lower educational attainment and those who are “geographically challenged”) answer the screening questions incorrectly. Furthermore, it likely will add to nonresponse, with more people refusing to continue the interview, depending on how complex and intrusive the geo-screening sequence is.

Another factor related to geography and the surveying of cell phone numbers is the weighting adjustments that U.S. surveys that are non-national in scope will need to deploy. Currently there are no reliable non-national parameters for the U.S. cell phone population to which researchers can geographically adjust their samples, nor are there any current plans to start collecting such data so that those doing cell phone surveys at the state and local levels can have such parameters. Some experts have called for the American Community Survey to begin to gather such data (e.g., Frankel 2007). But until such data are available, researchers who conduct non-national U.S. telephone surveys that include respondents sampled on cell phones will need to decide for themselves how to handle this thorny issue, and they should always disclose the geographic weighting decisions they make.

Conclusion

The era has arrived in which researchers conducting telephone surveys of the general U.S. public need to make careful and explicit decisions about how to handle the (a) sampling for, (b) data collection from, and (c) analysis of data gathered from respondents reached on cell phones.

As shown by the articles in this special issue and other publications, surveying the U.S. cell phone population is possible, albeit at a higher cost than surveying the landline population and, at present, with less precision. The next five years should see considerable growth in the methodological and statistical know-how that the survey community uses to plan, implement, and interpret cell phone surveys. There is a great deal that still must be learned.

And although we can no longer benefit from current research and advice from Warren Mitofsky and Joseph Waksberg, we still can benefit from their legacies and exceptional leadership as we engage in our collective pursuit of solutions to all the challenges that surveying cell phone numbers in the United States now poses.

References

Blumberg, Stephen J., and Julian V. Luke. 2007a. “Wireless Substitution: Early Release of Estimates Based on Data from the National Health Interview Survey, July–December 2006.” National Center for Health Statistics.
Blumberg, Stephen J., and Julian V. Luke. 2007b. “Coverage Bias in Traditional Telephone Surveys of Low-Income and Young Adults.” Public Opinion Quarterly 71(5). doi:10.1093/poq/nfm047.
Brick, J. Michael, Pat D. Brick, Sarah Dipko, Stanley Presser, Clyde Tucker, and Yangyang Yuan. 2007. “Cell Phone Survey Feasibility in the U.S.: Sampling and Calling Cell Numbers Versus Landline Numbers.” Public Opinion Quarterly 71:23–39.
Brick, J. Michael, W. S. Edwards, and S. Lee. 2007. “Sampling Telephone Numbers and Adults, Interview Length, and Weighting in the California Health Interview Survey Cell Phone Pilot Study.” Public Opinion Quarterly 71(5). doi:10.1093/poq/nfm052.
Brick, J. Michael, and Clyde Tucker. 2007. “Mitofsky-Waksberg: Learning from the Past.” Public Opinion Quarterly 71(5). doi:10.1093/poq/nfm049.
Delnevo, C. D., D. A. Gundersen, and B. T. Hagman. 2007. “Declining Estimated Prevalence of Alcohol Drinking and Smoking among Young Adults Nationally: Artifacts of Sample Undercoverage?” American Journal of Epidemiology. Advance Access published October 31, 2007. doi:10.1093/aje/kwm313.
Ehlen, J., and P. Ehlen. 2007. “Cellular-Only Substitution in the U.S. as Lifestyle Adoption: Implications for Telephone Survey Coverage.” Public Opinion Quarterly 71(5). doi:10.1093/poq/nfm048.
Frankel, M. 2007. Comments presented in “Cell Phone Surveying: Where Do We Go From Here?” at the 62nd Annual American Association for Public Opinion Research Conference, Anaheim, CA.
Groves, R. M., P. N. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg. 1988. Telephone Survey Methodology. New York: John Wiley.
Groves, R. M. 1989. Survey Errors and Survey Costs. New York: John Wiley.
Keeter, Scott. 2006. “The Cell Phone Challenge to Survey Research.” Washington, DC: Pew Research Center.
Keeter, S., C. Kennedy, A. Clark, T. Tompson, and M. Mokrzycki. 2007. “What's Missing from National Landline RDD Surveys? The Impact of the Growing Cell-Only Population.” Public Opinion Quarterly 71(5). doi:10.1093/poq/nfm053.
Kennedy, C. 2007. “Evaluating the Effects of Screening for Telephone Service in Dual Frame RDD Surveys.” Public Opinion Quarterly 71(5). doi:10.1093/poq/nfm050.
Lavrakas, P. J. 1993. Telephone Survey Methods: Sampling, Selection, and Supervision. Newbury Park, CA: Sage.
Lavrakas, P. J. 1996. “To Err Is Human: Embrace a ‘Total Survey Error’ Perspective to Make the Most of Precious Resources.” Marketing Research, pp. 30–36.
Lavrakas, P. J. 2004. “Will a Perfect Storm of Cellular Forces Sink RDD Sampling?” Paper presented at the 59th Annual American Association for Public Opinion Research Conference, Phoenix, AZ.
Lavrakas, P. J., J. Lai, and J. Shepard. 2003. “CMOR's National Survey to Help Build an Advertising Campaign to Motivate Survey Response.” Paper presented at the 58th Annual American Association for Public Opinion Research Conference, Nashville, TN.
Lepkowski, J., C. Tucker, M. Brick, E. de Leeuw, L. Japec, P. J. Lavrakas, M. Link, and R. Sangster, eds. 2007. Advances in Telephone Survey Methodology. New York: John Wiley & Sons.
Link, M. W., M. P. Battaglia, M. R. Frankel, L. Osborne, and A. H. Mokdad. 2007. “Reaching the U.S. Cell Phone Generation: Comparison of Cell Phone Survey Results with an Ongoing Landline Telephone Survey.” Public Opinion Quarterly 71(5). doi:10.1093/poq/nfm051.
Schejbal, J. A., and P. J. Lavrakas. 1995. “Coverage Error and Cost Issues in Small Area Telephone Surveys.” In American Statistical Association 1994 Proceedings: Section on Survey Research Methods, pp. 1287–92.
Steeh, Charlotte. 2007. “Are Cellular-Only Subscribers Just Part of the Problem?” Paper presented at the 62nd Annual Conference of the American Association for Public Opinion Research, Anaheim, CA.
Steeh, C., and L. Piekarski. 2007. “Accommodating New Technologies: Mobile and VoIP Communication.” Chapter 20 in Advances in Telephone Survey Methodology, eds. J. M. Lepkowski, C. Tucker, J. M. Brick, E. de Leeuw, L. Japec, P. J. Lavrakas, M. W. Link, and R. L. Sangster, pp. 426–46. New York: Wiley.
1. This group includes: Mike Battaglia, Stephen Blumberg, Chet Bowie, John Boyle, Mike Brick, Trent Buskirk, Mario Callegaro, Ed Cohen, Brian Dautch, Howard Fienberg, Anna Fleeman-Elhini, Marty Frankel, Donna Gillin, Patrick Glaser, John Hall, Vince Innacchione, George Ivie, Scott Keeter, Courtney Kennedy, Dale Kulp, Paul Lavrakas, Jim Lepkowski, Greg Linder, Michael Link, Dan Merkle, Charlie Palit, Tom Piazza, Linda Piekarski, Charlotte Steeh, Chuck Shuttles, Trevor Tompson, Bob Totora, Jane Traub, and Clyde Tucker.

2. CMOR, formerly known as the Council of Marketing and Opinion Research, is a nonprofit organization that works on behalf of the survey research industry to (a) improve respondent cooperation in research and (b) promote positive legislation and prevent restrictive legislation that could adversely impact the survey research industry; www.cmor.org.

3. CASRO, the Council of American Survey Research Organizations, is an umbrella organization that represents over 300 companies and research operations in the United States and abroad, including promoting a rigorous code of conduct that strives to enhance the image of survey research and protect the public's rights and privacy; www.casro.org.

4. The actual wording in the TCPA says the cell phone owner can “recover actual monetary loss from such a violation, or receive $500 in damages for each such violation (whichever is greater)” and that “if the court finds that the defendant willfully or knowingly violated this section of the TCPA, the court may, in its discretion, increase the amount of the award to an amount equal to not more than 3 times the [$500].”

Author notes

PAUL J. LAVRAKAS, 382 Janes Lane, Stamford, CT 06903, USA.
CHARLES D. SHUTTLES is with Nielsen Media Research, 501 Brooker Creek Blvd, Oldsmar, FL 34677, USA.
CHARLOTTE STEEH is with Centers for Disease Control and Prevention, 1303 Iverson Street NE, Atlanta, GA 30307, USA.
HOWARD FIENBERG is with CMOR, 1111 16th St. NW, Suite 120, Washington, DC 20036, USA. We would like to thank Peter V. Miller for his helpful comments and editing suggestions on an earlier version of this article. We also would like to thank Rob Daves, Patricia Moy, and Peter V. Miller for their work with the AAPOR Executive Council to make this special issue of POQ possible.