Abstract

We conducted two experiments designed to evaluate several strategies for improving response to Web and Web/mail mixed-mode surveys. Our goal was to determine the best ways to maximize Web response rates in a highly Internet-literate population with full Internet access. We find that providing a simultaneous choice of response modes does not improve response rates (compared to only providing a mail response option). However, offering the different response modes sequentially, in which Web is offered first and a mail follow-up option is used in the final contact, improves Web response rates and is overall equivalent to using only mail. We also show that utilizing a combination of both postal and email contacts and delivering a token cash incentive in advance are both useful methods for improving Web response rates. These experiments illustrate that although different implementation strategies are viable, the most effective strategy is the combined use of multiple response-inducing techniques.

Introduction

Internet surveys are an increasingly popular alternative to traditional survey modes, but their response rates are typically lower than those of mail surveys (Manfreda et al. 2008). In this article, we explore utilizing mail methods to improve Web response rates. We examine multiple techniques that may be used when surveyors can contact potential respondents by both email and postal mail, as well as when only postal addresses are available.

We conducted two experiments within a population that has complete access to the Internet and is believed to be highly Web literate. Table 1 outlines the variables and treatment comparisons used to test the hypotheses in both experiments. In experiment 1, we compare the response rates of three treatments that, respectively, offer a choice of responding by either mail or Web, ask for response by mail only, or ask for response by Web only. Experiment 1 also tests the effects of initially offering one mode of response and then switching to the alternative mode in a final contact. Additionally, experiment 1 includes a treatment to determine whether augmenting a postal contact strategy with supportive email contacts improves Web response rates.

Table 1.

Variables Tested in Two Experiments

Variables and tests | Hypothesis | Experiment 1 | Experiment 2
Response mode (postal contacts only):
  Choice vs. mail only | 1 | 1 vs. 2 | 6 vs. 7
  Choice vs. Web only | 2 | 1 vs. 3 | —
  Mail only vs. Web only | 3 | 2 vs. 3 | —
Response mode switch:
  Switch from Web response to mail response | 4 | 3, 4 | —
  Switch from mail response to Web response | 5 | 2 | —
Email augmentation of postal contacts:
  Web response (email augmentation vs. only postal contacts) | 6 | 4 vs. 3 | —
  Choice response (email augmentation vs. only postal contacts) | 8 | — | 5 vs. 6
  Web response (email augmentation vs. postal invitation with all email follow-up contacts) | 9 | — | 8 vs. 9
Response mode with email augmentation:
  Mail response (postal contacts) vs. Web response with email augmentation | 7 | 2 vs. 4 | 7 vs. 8
Token cash incentive:
  $2 vs. no incentive | 10 | — | 9 vs. 10
Initial postal invitation:
  Initial postal invitation & email follow-ups vs. all email contacts | 11 | — | 10 vs. 11

The second experiment was designed in response to results from experiment 1. Experiment 2 reexamines offering a simultaneous choice of modes. It also determines how using a combination of mail and email contacts affects response rates when individuals are asked to respond via Web, and when they are given a choice of modes. The second experiment also tests the effectiveness of both an initial postal invitation contact and a token cash incentive sent in advance as means of improving response over the Web. Our overall goal is to evaluate the effectiveness of alternative strategies for improving response rates and to suggest means of conducting more effective surveys with populations accessible by postal mail only or by both mail and email.

Conceptual Framework and Research Questions

Although Web response rates can vary depending upon the survey population, topic, survey burden, and other survey characteristics (Dillman, Smyth, and Christian 2009), Internet survey response rates are often relatively low. Manfreda et al.’s (2008) meta-analysis of studies comparing response rates across modes concluded that Internet response rates are generally lower than those of mail.

Several decades of research have shown that response to mail self-administered surveys can be improved through the simultaneous application of several techniques, such as multiple contacts, token cash incentives delivered in advance, personalized communications, respondent-friendly construction, and other design features (Dillman, Smyth, and Christian 2009). A recent study confirms that these techniques remain effective; by using procedures aimed at improving trust that the survey is legitimate and useful, increasing benefits, and reducing perceived costs of responding, Smyth et al. (2010) achieved response rates as high as 71 percent in a general household survey. In this article, we apply similar ideas to Web surveys. We examine multiple implementation procedures in Web and mixed-mode designs, with the aim of producing higher Web response rates. The variables and treatment comparisons are summarized in table 1.

OFFERING A CHOICE OF RESPONSE MODES

Surveyors are increasingly providing potential respondents with the option of choosing among alternative modes for completing a questionnaire. There are multiple rationales for employing this methodology. First, in Web studies it is often necessary to utilize another mode to sample and/or contact potential respondents. This is especially relevant with general public samples, for which there are no available listings of email addresses and it is inappropriate to contact individuals via email without a prior established relationship. Second, many consider offering a choice of Web or mail preferable to using only Web because a sizable portion of the general public, about 26 percent of U.S. adults, does not use the Internet (Pew Research Center 2009), and research suggests that many individuals lack the skills needed for using it (Stern, Adams, and Elsasser 2009). Finally, a common assumption is that offering a choice improves response because it allows surveyors to cater to respondent preferences. Prior research indicates that survey respondents tend to prefer one data-collection mode over another (Groves and Kahn 1979; Millar, O'Neill, and Dillman 2009; Smyth, Olson, and Richards 2009; Tarnai and Paxon 2004). Thus, researchers often argue that offering a choice of response modes may improve response rates because individuals can select their preferred mode (see, e.g., Dillman, West, and Clark 1994; Diment and Garrett-Jones 2007; Shih and Fan 2007; Tarnai and Paxon 2004), whereas a single offered mode will appeal to fewer people.

However, several studies show that when individuals are given a choice of responding by Web or mail, the overall response rate is lower than when only a mail response option is offered (Gentry and Good 2008; Griffin, Fischer, and Morgan 2001; Grigorian and Hoffer 2008; Smyth et al. 2010). One possible explanation for these seemingly counterintuitive findings is Schwartz's (2004) argument that offering choices has negative consequences for the decision-making process. According to this psychological research, every option has opportunity costs associated with it, and when two options are compared to each other, individuals must consider tradeoffs. This makes each option appear less appealing than it would if offered alone, leading to no compelling reason to select either one (Brenner, Rottenstreich, and Sood 1999; Schwartz 2004; Tversky and Shafir 1992). This suggests that by offering a choice between Web and mail response, surveyors are not encouraging response and may, in fact, be discouraging it.

Although several prior studies indicate that offering a choice does not have a positive effect on survey response rates, the survey populations used were not completely accessible by Web. If members of the survey population lack Web access or the skills for using the Internet, this could affect response to a survey in which Web is one possible response mode. Therefore, we conducted an improved test of the effects of offering a choice of response modes by surveying a population of undergraduate college students who have access to the Internet and email and are expected by university faculty and administration to use them regularly. Because younger, highly educated individuals are more likely to use the Internet (Pew Research Center 2009), we expect less hesitation to respond to a Web survey in this demographic than there would be in a general public sample.

Hypothesis 1: Offering a choice of responding by mail or Web will produce a lower response rate than offering only mail response.

However, providing a choice may be more beneficial than offering only Web response when postal contacts are used (which is necessary in many situations, such as in Web surveys of the general public). Responding by Web may be more burdensome than responding by paper when the survey request is sent through postal communications. When a paper questionnaire is received via postal mail, responding is simple and can begin instantly. However, when using postal contacts for a Web survey, respondents must switch tasks, from opening the mail to working on the computer (Millar et al. 2009). This transfer of activities necessitates additional steps before responding can begin: turning on a computer, opening the Internet browser, and manually entering a URL followed by typing an individualized access code. Prior research shows that even among Internet-savvy populations, if a postal request includes a paper questionnaire option, respondents are more likely to choose to respond by mail (Schonlau, Asch, and Du 2003).

Hypothesis 2: Offering a choice of mail or Web response will produce a higher response rate than offering only Web response.

It follows from our first two hypotheses that we expect the mail response rate to be higher than the Web response rate, as is the trend in prior literature.

Hypothesis 3: A mail response treatment will produce a higher response rate than a Web response treatment.

PROVIDING MODE OPTIONS IN SEQUENCE

Instead of offering a simultaneous choice, providing two modes in sequence may be a more promising method of utilizing the advantages of both mail and Web modes of response (Dillman et al. 2009). In this strategy, one mode of response is used initially, and nonrespondents are later asked to respond via the other mode. Two prior general public studies showed that using a mail response option in a late contact increased Web survey response rates by 14 to 15 percentage points (Messer and Dillman forthcoming; Smyth et al. 2010). However, these studies found that switching from a mail to a Web version only increased overall response by about 1 percentage point. Switching from Web to mail was likely more effective in part because some individuals did not have Web access, so offering mail allowed those who could not respond by Web to participate. A mail follow-up to Web might also have been effective because the mail option is a more convenient way to respond, given that postal contacts were used. Conversely, switching from mail to Web did not provide a more convenient option. We examined whether a similar pattern occurs within an undergraduate student population. Based on mail's greater convenience, we expected to observe the same trends, although the benefits of offering mail last may not be as dramatic because a lack of Internet access is not an issue in our population.

Hypothesis 4: Offering a mail response option in a final contact will improve response to a Web survey.

Hypothesis 5: Offering a Web response option in a final contact of a mail survey will not significantly improve response.

EMAIL AUGMENTATION OF POSTAL CONTACTS

Due to the more burdensome nature of postal contacts for seeking response to an online questionnaire, there may be benefits to using email contacts for Web surveys. However, we believe that sending the initial survey request via postal mail is more desirable than sending an email invitation. A postal letter on official university stationery, especially if it includes a token cash incentive, can signal the importance and legitimacy of the study (Dillman, Smyth, and Christian 2009). Thus, in this research we examine the effects of using both email and postal contacts together. We implemented a series of multiple postal contacts, beginning with a postal invitation letter, and incorporated supportive email messages into this primarily postal contact strategy. We label the use of supportive email contacts within a primarily postal contact sequence as email augmentation of a postal contact strategy. We expect that these follow-up emails decrease the time and effort, or perceived burden, of responding by Web. Through email, respondents can simply click on a link to the Web site and cut and paste the access code from the message to the questionnaire. By making it easier to respond, an email following a postal invitation shows positive regard for participants; it suggests that the surveyors are actively attempting to make the survey more convenient. An all-postal contact strategy might garner more attention than emails, but responding online would be more inconvenient. Conversely, if all contacts are sent via email, it would be convenient to respond via the Web, but we would not be able to use the advance token cash incentive to encourage response. Therefore, we expect that email augmentation is the most effective contact strategy for encouraging people to respond over the Internet.

Hypothesis 6: The email augmentation strategy will produce a Web response rate that is higher than Web response rates when only postal contacts are used.

Furthermore, given that our survey population is highly Internet literate, we expect that once the inconvenience of the Web response option is removed (through the use of email augmentation), the Web response rate will be equivalent to a mail-only response rate.

Hypothesis 7: The email augmentation strategy will produce a Web response rate that is equivalent to mail response.

Email augmentation may also be beneficial when offering a choice of modes. Alternating two modes of contact could counteract the decision-making burden that a simultaneous choice of modes may impose, by alternately placing emphasis on one mode of response or the other. Thus, we expect this approach to improve response when both modes of response are offered.

Hypothesis 8: When offering a choice of response modes, the email augmentation strategy will produce a higher response rate than using only postal contacts.

In the email augmentation approach, the postal invitation is followed up by multiple email and multiple postal reminder contacts. Another possible approach to combining mail and email contacts is to use the initial postal invitation but send all follow-ups through email. This strategy is more cost effective, but it also might be less influential because recipients may more easily dismiss emails. Research shows that repeated email contacts are less effective for improving Web response than repeated postal contacts are for improving mail survey response (Manfreda et al. 2008). We expect the email augmentation approach, which intertwines multiple postal contacts (aimed at attracting more attention) with supportive email contacts (to make responding easier), to be superior to using all email follow-ups.

Hypothesis 9: The email augmentation approach will produce a higher Web response rate than using only one postal (invitation) contact and all email follow-ups.

TOKEN CASH INCENTIVES AND POSTAL INVITATION CONTACTS

A substantial limitation of typical Web surveys, which are commonly conducted solely via email, is the inability to deliver a token cash incentive in advance. Research shows that these incentives have a considerable effect on mail survey response (Church 1993; James and Bolstein 1990). In a meta-analysis of online surveys, Göritz (2006) found that incentives of various types increase Web response rates by an average of 2.8 percent and retention rates by 4.2 percent. However, these effects are relatively small, and the studies analyzed in this research did not include any experiments in which a token cash incentive was delivered in advance.

Other research indicates that an advance cash incentive is more effective than entering participants in a “chance to win” drawing only after the completion of the survey (Warriner et al. 1996), which is a common technique used in online surveys. Cash incentives delivered in advance are also more effective than providing other advance incentives, such as gift certificates (Birnholtz et al. 2004) or money distributed online through PayPal (Bosnjak and Tuten 2003). Sending cash incentives in advance deemphasizes the purely economic “payment” context of incentives and instead creates a type of social encouragement that stresses the importance of the survey (Dillman, Smyth, and Christian 2009). The benefits of cash incentives for improving mail response rates are clearly established, and research shows that advance cash incentives are more powerful for improving Web survey response than mail survey response (Messer and Dillman forthcoming). We therefore predict that including a token cash incentive will dramatically improve Web response.

Hypothesis 10: Including a token cash incentive in an invitation letter will significantly improve Web response.

To send an advance cash incentive, the initial survey request must be sent via postal mail. There also may be other benefits to using an initial postal contact in a Web survey. Emails have become an ephemeral form of communication that can easily be ignored, discarded, or forgotten. Furthermore, in some contexts it may be more difficult to establish the legitimacy of the surveyor through emails, which are often regarded as “spam” and viewed with some degree of suspicion. In a Pew Internet and American Life Project survey, 55 percent of respondents indicated that spam email has made them less trusting of email in general (Fallows 2007). In light of this, a postal invitation letter might be more effective in establishing survey legitimacy, drawing attention to any follow-up email messages, and encouraging response.

Indeed, prior research suggests that a pre-notice postcard significantly improves response to Web surveys, even though no incentive is delivered (Kaplowitz, Hadlock, and Levine 2004). Also, an analysis of 21 student surveys administered between 2005 and 2010 shows that the average response rate of the 10 surveys that used only email contacts was 24 percent, while the average response rate for the 11 surveys using a postal invitation and email contacts was 34 percent (Allen 2010).

Hypothesis 11: The response rate for a Web survey will be higher when a postal invitation letter is used, as opposed to using only email communications.

Experiment 1 Methods

The first experimental survey was conducted between February 13 and April 22, 2009, using a random sample of 2,800 undergraduate students at the main campus of Washington State University. We utilized a paper and an online version of the questionnaire, and the survey was implemented primarily through postal contacts (email contacts were used in one treatment; see below). The Web and paper questionnaires were constructed in similar fashion to minimize response differences between modes. In the paper version, each individual question was presented in its own enclosed region to emulate the page-by-page construction of the Web questionnaire. We employed cascading style sheets in the Web screen construction to ensure that the appearance of the questionnaire items would be similar in all Web browsers and would resemble the paper questionnaire appearance. The questionnaire contained 36 questions assessing students’ opinions about a variety of issues related to their educational experiences at WSU.

The sample was randomly divided into four treatment groups that provided different response options. Treatment 1 offered respondents a choice of responding by mail or Web, treatment 2 asked for response by mail only, and treatments 3 and 4 asked for response by Web only. All four treatment groups were contacted initially via postal letters and given a $2 bill as an incentive. The choice group (treatment 1) students received a paper questionnaire (with a stamped return envelope) as well as the Web site and individualized access codes for responding online. The mail group (2) students were given only the paper questionnaire with a stamped return envelope. The Web groups (3 and 4) were given only the Web site and individualized access codes, and no paper questionnaire. The third and fourth groups’ initial contacts were identical; the variation in these groups was the use of email augmentation in group 4.

Three days after the postal invitations were mailed, a supportive email contact was sent to treatment 4 students. This message built upon the invitation letter information and included a link to the survey Web site. The email explained that we hoped the electronic link made it “easier to respond.” Table 2 documents the contact implementation strategies, including dates of each contact and modes of response requested in each letter/email.

Table 2.

Response Options Offered for Each Treatment Group in Experiment 1, by Date and Mode of Contacta

Treatment group (n) | Feb 13: Postal invitation | Feb 18: Email augmentation | Feb 20: Postal thank you/reminder | Mar 6: Replacement | Mar 10: Email augmentation | Apr 6: Mode switch letter
1. Choice (700) | Mail/Web | — | Mail/Web | Mail/Web | — | Mail/Web
2. Mail (700) | Mail | — | Mail | Mail | — | Web
3. Web (700) | Web | — | Web | Web | — | Mail
4. Web + email augmentation (700) | Web | Web | Web | Web | Web | Mail
a. “Mail” indicates that response was requested by mail; “Web” indicates that response was requested by Web.

A second postal mailing to thank respondents and remind nonrespondents to participate was sent one week after the initial request. Another postal follow-up was sent to nonrespondents about three weeks after the invitation letter. This “replacement” mailing included a second copy of the questionnaire for mail and choice group nonrespondents. Three days after this mailing, a second email reminder was sent to nonrespondents in the Web with email augmentation group (4).

After the rate of returned questionnaires diminished to nearly zero for a week, suggesting that response had “flatlined,” we sent a final contact to test the effects of offering different modes in sequence. In this letter, the mail group nonrespondents were offered the opportunity to respond via Web and the two Web groups’ nonrespondents were offered an opportunity to respond through mail.1 The mode of response that had originally been offered to each of these groups was not mentioned in these letters.

EXPERIMENT 1 RESULTS

We calculated two sets of response rates for the first experiment. The “primary” response rates include completed questionnaires up until just before the final contact, when three of the treatment groups were invited to respond via an alternate mode. The “final” response rates include primary responses plus those obtained after the mode switch. In order to assess differences by mode of response, we compare the primary response rates across treatments, because they exclude responses that were submitted via the alternate mode. The final response rates of each treatment are compared to the primary response rates for their respective group to assess the impact of the mode switch.

The total primary response rate was 50.3 percent. The first column of data in table 3 displays the primary response rates by treatment and significance tests for the different response rates by group.2 The response rate for the choice group is slightly lower than the response rate for the mail treatment (47.7 vs. 51.3, p = 0.093). This provides modest support for hypothesis 1, that offering a choice of modes produces a lower response rate than offering only mail, even in this highly Internet-literate population with complete Internet access. Although the effect is not substantial, it nevertheless provides strong evidence against the common assumption that offering a choice of modes is superior to using only mail.

Table 3.

Experiment 1 Primary Response Rates and Increase in Response after Switching Modes in Final Contact, by Treatment Group

Treatment (sample size)a | Primary response rate (before mode switch)b, % | Increase after mode switchc, % | Final response rateb, % | Test of mode switch effectsd, z
1. Choice (669) | 47.7 | 4.6 | 52.3 | —
2. Mail (681) | 51.3 | 1.9 | 53.2 | 0.69
3. Web (676) | 42.3 | 7.8 | 50.2 | 2.89**
4. Web + email augmentation (678) | 59.7 | 4.7 | 64.5 | 1.80*

Tests for differences in response rates across treatment groups, before and after mode switche
Comparison | Primary response rates, z | Final response rates, z
Choice (1) vs. mail (2) | −1.32 | −0.31
Choice (1) vs. Web (3) | 1.99* | 0.80
Mail (2) vs. Web (3) | 3.32*** | 1.11
Web + email (4) vs. Web (3) | 6.40*** | 5.32***
Web + email (4) vs. mail (2) | 3.12** | 4.23***
a. Undeliverables are subtracted out of reported sample size.

b. Response rate = (number completed / sample size) × 100. This corresponds to AAPOR RR6.

c. These numbers reflect all additional responses obtained after the mode switch contact, even those submitted via the originally offered mode. After the mode switch contact, few responses were obtained via the originally offered mode for each group (the mail group received two additional mail responses, the Web group (3) received three additional Web responses, and the Web with email augmentation group (4) received two additional Web responses). The differences in primary and final response rates are substantively similar regardless of whether these additional responses via the originally offered mode are included in the analysis or not.

d. z-tests for differences in proportions. One-tailed tests used for groups 3 and 4; two-tailed test used for group 2 (p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001).

e. z-tests for differences in proportions. One-tailed tests used in all cases except for group 4 vs. group 2 (p < 0.10, * p < 0.05, ** p < 0.01, *** p < 0.001). Tests were adjusted for multiple comparisons between treatment groups using the Bonferroni-Holm correction method (Holm 1979).
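For readers who wish to reproduce tests of this kind, the two-proportion z-test and the Bonferroni-Holm adjustment described in the table notes can be sketched in a few lines of Python using only the standard library (an illustrative sketch, not the authors' code; the function names are ours):

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z-statistic for a difference in two proportions, using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def holm_adjust(pvals: list[float]) -> list[float]:
    """Bonferroni-Holm step-down adjusted p-values (Holm 1979)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices sorted by p-value
    adjusted, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        # multiply the k-th smallest p-value by (m - k), enforcing monotonicity
        running = max(running, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running)
    return adjusted

# Choice (1) vs. mail (2), primary response rates and sample sizes from table 3
z = two_prop_z(0.477, 669, 0.513, 681)   # ≈ -1.32, matching table 3
p_one_tailed = NormalDist().cdf(z)        # left tail, since z is negative; ≈ 0.09
```

With the figures from table 3, this reproduces z ≈ −1.32 for the choice-versus-mail comparison; the other comparisons follow the same pattern, with the resulting p-values passed through `holm_adjust` before judging significance.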

Also, the choice group response rate is significantly higher than that of the Web-only treatment, group 3 (47.7 vs. 42.3, p = 0.023), which provides support for hypothesis 2. These findings suggest that, when postal communications are used, responding by Web involves a greater burden than responding via mail, making the treatment that includes the more convenient mail option preferable to the Web-only treatment. Also, in agreement with numerous prior studies and in support of our third hypothesis, the mail group response rate (51.3 percent) is higher than the Web group (3) response rate (42.3 percent), and this difference is statistically significant (p < 0.001). The Web-only response rate is, nevertheless, substantially higher than the average response rate (24 percent) of email-administered Web surveys of students conducted by the sponsoring organization (Allen 2010). This study differs from the others in that we used postal contacts and included an advance token cash incentive.

Table 3 shows that the Web plus email augmentation group (treatment 4) produced the highest response rate of all treatments in experiment 1. Hypothesis 6 predicted that this group would outperform a Web group with mail-only contacts (group 3), and hypothesis 7 predicted that it would be equivalent to a mail response group (2). Significance tests verify that this response rate (59.7 percent) is significantly higher than those of the Web (3) and mail (2) groups. These results appear to provide support for our assertion that providing an email link to the Web questionnaire reduces the burden and inconvenience associated with responding online. However, although the response rate is higher, the email augmentation treatment had two more contacts than the other groups, making it difficult to interpret to what extent the increased response rate is due to the extra contacts. In the second experiment, we reexamine hypothesis 7 to determine if these results hold when the number of contacts is equalized across all treatments.

Table 3 also demonstrates how response rates increased after the implementation of the mode switch, when treatments 2, 3, and 4 were offered the alternate mode of response. Tests examining the differences between primary and final response rates show that the two Web groups’ response rates significantly improved with the addition of a final mail response option, providing support for hypothesis 4. Also, in support of hypothesis 5, switching from mail to Web did not significantly improve the response rate for the mail group (2).

Table 3 also contains an additional analysis of the differences between the final response rates of the different treatment groups. After the mode switch contact, the response rates for treatments 1, 2, and 3 are all statistically equivalent. The advantage the mail and choice groups held over the Web group 3 disappeared once Web nonrespondents were given the option to respond via mail. This supports the proposition that the reason the choice treatment response exceeded that of the Web treatment is because of the mail option in the choice group. Once the convenience of the mail option was offered to the Web group, there was no notable difference between these groups.

Also, the mail treatment does not maintain its advantage over the choice group after the mode switch. This is because the mail group's response rate did not increase significantly when Web response was offered, while the choice group's response rate continued to increase after the final contact, most likely because a mail option remained available in the choice group but not in the mail group. All these results suggest that the mail response option is more successful at drawing response when postal contacts are used. However, it is notable that in this experiment, using postal contacts to offer Web response, followed by a later mail response option (treatment 3), produced a response rate equivalent to that of a mail-only survey (treatment 2) while obtaining a high proportion of Web respondents.

Experiment 2 Methods

The second experimental survey was conducted between November 11, 2009, and January 5, 2010, using a random sample of 4,300 students. We used both paper and online versions of the questionnaire, which were constructed using the same methods employed in experiment 1. The questionnaire contained 33 questions and focused primarily on how students have been affected by the recent economic downturn and the university's resulting budget cuts.

The sample was divided into seven treatments, which are numbered beginning with 5 to distinguish them from those in the first experiment. The details of contact type and incentive use are outlined in table 4. Treatment groups 5 and 6 both offered respondents a choice of Web or mail response. The difference between these two groups was the use of the email augmentation strategy in treatment 5. Treatment 7 asked for response by mail, and the eighth treatment was a Web response group that used the email augmentation strategy. Groups 9–11 were also Web response treatments, but they relied primarily on email contacts rather than postal contacts. Treatment 9 had an initial postal invitation with a $2 incentive (just like groups 5–8), but all follow-up contacts were sent by email. Treatment 10 included an initial postal invitation letter but no incentive, and had all email follow-up contacts. Treatment 11 used all email contacts and no incentive.

Table 4.

Mode of Contact for Each Treatment Group in Experiment 2, by Date of Contact^a

Treatment group (n)^b  Nov 9/10^c: Invitation  Nov 12/13: Prompt after invitation  Nov 18/19: Thank you/reminder  Dec 7/8: Replacement  Dec 10/14: Prompt after replacement
5. Choice (500)  Letter with $  Email  Letter  Letter  Email
6. Choice (700)  Letter with $  Postcard  Letter  Letter  Postcard
7. Mail (700)  Letter with $  Postcard  Letter  Letter  Postcard
8. Web (500)  Letter with $  Email  Letter  Letter  Email
9. Web (600)  Letter with $  Email  Email  Email  Email
10. Web (600)  Letter  Email  Email  Email  Email
11. Web (700)  Email  Email  Email  Email  Email
a. “Letter” indicates that the contact was a paper letter sent via postal mail; “Postcard” indicates that the contact was a paper postcard sent via postal mail; “Email” indicates that the contact was sent via email.

b. Treatment group names represent the requested mode of response. Treatment group sample sizes varied based on expected response rate differences across groups.

c. Two dates are listed for each contact (invitation, invitation prompt, thank you/reminder, replacement, and replacement prompt) because, in order to allow postal and email versions to arrive at similar times, we sent postal versions out earlier than email versions.

In this experiment, all groups received five contacts: the invitation, invitation prompt, thank you/reminder, replacement, and replacement prompt. This design allowed us to control for number of contacts when assessing the response effects of email augmentation. The invitation contacts for this experiment mirrored those of the first experiment; the choice groups (5 and 6) were given both a paper questionnaire (with a stamped return envelope) and the Web site and individualized access code for responding online. The mail group (7) students were given only the paper questionnaire with a stamped return envelope. The Web groups were offered only the option of responding online; their initial letters (or email in the case of group 11) contained the Web site and individualized access codes. Treatments 5–9 included $2 bills in the initial request as an incentive, but groups 10 and 11 did not.

The second contact came in the form of either a supportive email (groups 5, 8–11) or supportive postcard (groups 6 and 7) sent a few days after the initial contact. In the email augmentation groups (5 and 8), this contact represents the first email augmentation. Just as in the first experiment, these emails stated that we hoped the electronic link makes it “easier to respond.” In order to keep the contact stimuli similar across all groups, at the same time we sent similar types of contacts to the remaining groups, which were not part of the email augmentation strategy. For the choice (group 6) and mail (group 7) treatments, we sent “postcard prompts” via postal mail. These postcards were meant to mirror the “easy to complete” themes found in the augmenting emails, but did not contain the Web address or access codes.

A short, third contact was sent about a week after the second to thank those who had responded and remind others to participate. The fourth contact was delivered about two and a half weeks later. This was a “replacement” contact, which included paper questionnaire replacements for the choice and mail groups (5–7). This contact was followed shortly by another “prompt” contact for all groups; the email augmentation treatments and those receiving only email follow-ups received emails, while the choice (6) and mail (7) groups received another postcard. This prompt referenced the preceding contact and indicated that the study would be coming to a close within the following weeks.

Experiment 2 Results

The overall response rate for the second experiment was 35.8 percent.^3 Table 5 contains the response rates for each treatment group in the second experiment and significance tests for differences in response proportions across groups. In experiment 2, we reexamined hypothesis 1, which predicted that the response rate of a choice group (6) would be lower than that of the mail group (7). The choice group response rate is slightly lower than the mail response rate, but this difference is not statistically significant. The equivalence of these two response rates nevertheless supports our assertion that offering a choice of modes is not superior to offering only mail response.

Table 5.

Final Response Rates for Experiment 2, by Treatment Group

Treatment (sample size)^a  Response rate^b  Tests for differences in response rates across treatment groups^c
5. Choice + email augmentation (492)  46.5%
5 vs. 6: 1.84*
6. Choice (683)  41.1%
6 vs. 7: −1.05
7. Mail (683)  43.9%
7 vs. 8: 0.48
8. Web + email augmentation (487)  42.5%
8 vs. 9: 1.43
9. Web, postal invite, $ (589)  38.2%
9 vs. 10: 6.38***
10. Web, postal invite, no $ (586)  21.2%
10 vs. 11: 0.31
11. Web, email only, no $ (699)  20.5%
a. Undeliverables are subtracted out of the reported sample size.

b. Response rate = (number completed / sample size) × 100. This corresponds to AAPOR RR6.

c. One-tailed z-tests for differences in proportions (p < 0.10, *p < 0.05, **p < 0.01, ***p < 0.001). Tests were adjusted for multiple comparisons between treatment groups using the Bonferroni-Holm correction method. Test statistics that were statistically significant only before this correction was made are in italics; statistics in boldface remained significant after the adjustment for multiple comparisons.
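For concreteness, the test statistics in table 5 can be reproduced (up to rounding) from the reported rates and sample sizes. The following is a minimal Python sketch, assuming the pooled-variance form of the two-proportion z-test; the function name `two_prop_z` is our own. Applying it to the treatment 9 vs. 10 comparison yields a value close to the reported 6.38:

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Z statistic for a difference in two proportions,
    using the pooled estimate of the common proportion."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Treatment 9 (38.2%, n = 589) vs. treatment 10 (21.2%, n = 586),
# using the rounded rates from table 5.
z = two_prop_z(0.382, 589, 0.212, 586)
print(round(z, 2))  # close to the reported 6.38
```

Small discrepancies from the published statistics arise because the table reports response rates rounded to one decimal place rather than raw completion counts.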

In the second experiment, we also reexamined hypothesis 7, which predicted that the response rates of a Web with email augmentation group (8) and a mail-only group (7) would be equivalent. We find support for this hypothesis; the Web response rate (42.5 percent) is not statistically different from the mail response rate (43.9 percent). In our first experiment, the Web with email augmentation group (4) actually outperformed the mail group (2); that advantage now appears attributable to the fact that the email augmentation group received two more contacts than its mail comparison group. Nevertheless, the findings of the second experiment support hypothesis 7, that using email augmentation will result in a response rate that is equivalent to mail. This is an important finding, as most Web surveys are unable to achieve the typically higher response rates of mail-only surveys. The equivalence of mail and Web response rates here suggests that email augmentation is a valuable strategy for producing Web response rates that can compete with mail only.

In this experiment, we also examined how email augmentation might affect response when a choice of modes is offered. Hypothesis 8 predicted that the email augmentation strategy would produce a choice group (5) response rate that is significantly higher than that of the choice group with only postal contacts (6). Table 5 shows that the response rate for the choice group with email augmentation is indeed higher than the regular choice group response rate (p = 0.033). However, this difference was no longer statistically significant after adjusting results to account for multiple comparisons across treatment groups. Thus, we cannot conclusively determine whether email augmentation results in a higher choice group response rate.

Nevertheless, the proportion of respondents who responded by Web (as opposed to mail) was significantly higher in the choice group with the email augmentation strategy: 53 percent of group 5's responses came via Web, compared with only 43 percent of group 6's. The email contacts were thus successful at drawing in a greater proportion of Web responses. This suggests that, as we expected, alternating postal and email contacts shifts emphasis between the two response modes and makes the choice aspect of this treatment less salient.

It is also notable that the choice with email augmentation response rate is the highest of all treatments in experiment 2. In supplementary statistical analyses (not shown), we compared this response rate to those of treatments 6–9. Group 5’s response rate is not statistically different from groups 6–8, but before adjusting for multiple comparisons it is significantly higher than the response rate of treatment 9, which offers Web response using an initial postal contact with a cash incentive and all email follow-ups. This suggests that the combination of offering a choice and utilizing email augmentation may be a promising method worthy of future exploration.

Groups 8 and 9 compare the effects of the email augmentation strategy to using only email follow-up contacts when asking for Web response. Table 5 shows that group 8 (email augmentation) produced a slightly higher response rate than using all email follow-ups (42.5 percent vs. 38.2 percent; p = 0.076). This difference provides modest support for hypothesis 9, which predicted that email augmentation is somewhat more effective because it continues to use mail contacts, which are less easily dismissed, in conjunction with the convenience of email links to the survey Web site. However, after adjusting for multiple comparisons, this difference is no longer statistically significant. Thus, we lack sufficient evidence to conclude confidently that email augmentation is superior to using only email follow-ups to a postal invitation.

Hypothesis 10 predicted that including a token cash incentive in a postal invitation would increase Web response. Table 5 confirms that the response rate for the incentive group (9), 38.2 percent, is substantially higher than the response rate for the no-incentive group (10), 21.2 percent (statistically significant at p < .001). However, in contrast to hypothesis 11, which predicted that a postal invitation alone (with no incentive) would produce a higher response rate than using only email contacts, we find no statistically significant difference between the response rates in groups 10 (21.2 percent) and 11 (20.5 percent). These results suggest that the primary benefit of a postal invitation letter is the ability to deliver a token cash incentive in advance.

Discussion and Conclusions

Our experiments illustrate important methods for improving the response outcomes in Web and mixed-mode surveys. The results suggest that there is considerable value to moving beyond email-only implementation strategies for Web surveys. Mail and Web methods can be combined in ways that reduce the burden and increase the rewards of responding, ultimately leading to dramatic improvement in Web response rates.

Despite the popular notion that offering a choice of modes can benefit survey response, we found that when using only mail contacts, a simultaneous choice of Web and mail response simply does not outperform a paper-only option, even in a highly Internet-literate population with complete Web access. Alternatively, our study illustrates that offering modes in sequence (following requests for Web response with a final request to respond by mail) can significantly increase the overall response rate, making it equivalent to the response rate when mail is the only response option.

Furthermore, by using a combination of multiple mail and email contacts (email augmentation), our Web-only response rate was equivalent to the mail response rate. We believe this strategy of augmenting multiple postal contacts with several supportive emails utilizes the advantages of both contact modes—it establishes memorability and presence through postal contacts and reduces the burden of responding via Web through emails with a convenient link directly to the survey Web site. Future research must continue to explore the interaction between using email augmentation and offering a choice of modes. This treatment had the highest response rate of all treatments in experiment 2, which suggests that there may be ways to use email and postal contacts to not only improve Web response but also reduce the burden associated with offering a choice.

The effectiveness of this email augmentation strategy in our experiments suggests that, in order to improve the potential of Web-only surveys, surveyors should consider utilizing a combination of multiple postal and multiple email contacts whenever possible. This approach is also beneficial from a cost standpoint; sending an additional contact via email is much less expensive than sending a postal letter. Alternatively, a mail-only contact approach can be effective if the requests for Web response are followed by a final request that asks for a response by mail. This method allows the surveyor to “push” as many respondents as possible to complete the survey via Web before offering the mail option to those who are unwilling to respond to the Web version.

Our study also confirms that delivering token cash incentives in advance is critical for establishing the survey's legitimacy and increasing the benefits of survey response. These incentives dramatically improved Web survey response (by 17 percentage points). The effectiveness of this incentive in our study appears much greater than the effects of other types of incentives in Web surveys, as prior studies illustrate (e.g., Göritz 2006). Taken together, our study's results suggest that combining an advance cash incentive, the email augmentation strategy with multiple postal and email contacts, and a switch to a mail response option in the final contact is a viable method for producing a Web survey response rate that is equivalent to a traditional mail-only approach.

Other implementation approaches are possible, and future research should continue to test a variety of methods to improve survey response. Our studies were conducted within a very specific population (undergraduate students at one university), and the survey was sponsored by an organization affiliated with the university. This prior relationship may have increased the legitimacy of the study and thus improved response in ways that are not possible in other settings. Our findings need to be examined in other contexts with different types of populations. We believe these methods have great potential in other populations in which the surveyor has access to both postal and email addresses (e.g., clients or conference registrants).

Future research must also carefully examine how our results apply to populations with less knowledge about the Internet, lower Internet-access rates, and no available email addresses, such as in the general public. Although email augmentation is not possible in these settings, other research suggests that using advance token cash incentives and switching from Web to mail response are useful in Web surveys in the general public (Messer and Dillman forthcoming; Smyth et al. 2010). As rates of Internet access continue to increase, our findings may become increasingly relevant in diverse settings.

In sum, these studies suggest that survey response is less dependent on the offering of a choice of modes, or of one particular mode, than it is on the implementation strategies associated with the offering of these modes. Surveyors should avoid thinking of Web surveying as synonymous with an email-only implementation strategy. Rather, building implementation systems that utilize both postal and email contacts, token cash incentives delivered in advance, and offering modes in sequence (Web then mail) can dramatically increase response. It is a combination of multiple techniques, not simply one, that is most effective.

References

Allen, Thom. 2010. “Student Surveys Conducted at SESRC (2005–2010).” Unpublished data from the Social and Economic Sciences Research Center, Washington State University, Pullman, WA.

Birnholtz, Jeremy P., Daniel B. Horn, Thomas A. Finholt, and Sung Joo Bae. 2004. “The Effects of Cash, Electronic, and Paper Gift Certificates as Respondent Incentives for a Web-Based Survey of Technologically Sophisticated Respondents.” Social Science Computer Review 22:355–62.

Bosnjak, Michael, and Tracy L. Tuten. 2003. “Prepaid and Promised Incentives in Web Surveys: An Experiment.” Social Science Computer Review 21:208–17.

Brenner, Lyle, Yuval Rottenstreich, and Sanjay Sood. 1999. “Comparison, Grouping, and Preference.” Psychological Science 10:225–29.

Church, Allan H. 1993. “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly 57:62–79.

Dillman, Don A., Glenn Phelps, Robert Tortora, Karen Swift, Julie Kohrell, Jodi Berck, and Benjamin L. Messer. 2009. “Response Rate and Measurement Differences in Mixed-Mode Surveys Using Mail, Telephone, Interactive Voice Response (IVR), and the Internet.” Social Science Research 38:1–18.

Dillman, Don A., Jolene D. Smyth, and Leah Melani Christian. 2009. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: John Wiley and Sons.

Dillman, Don A., Kirsten K. West, and Jon R. Clark. 1994. “The Influence of an Invitation to Answer by Telephone on Response to Census Questionnaires.” Public Opinion Quarterly 58:557–68.

Diment, Kieren, and Sam Garrett-Jones. 2007. “How Demographic Characteristics Affect Mode Preference in a Postal/Web Mixed-Mode Survey of Australian Researchers.” Social Science Computer Review 25:410–17.

Fallows, Deborah. 2007. “Spam 2007.” A Report for the Pew Internet and American Life Project.

Gentry, Robin, and Cindy Good. 2008. “Offering Respondents a Choice of Survey Mode: Use Patterns of an Internet Response Option in a Mail Survey.” Presentation at the Annual Conference of the American Association of Public Opinion Research, New Orleans, LA.

Göritz, Anja S. 2006. “Incentives in Web Studies: Methodological Issues and a Review.” International Journal of Internet Science 1:58–70.

Griffin, Deborah H., Donald P. Fischer, and Michael T. Morgan. 2001. “Testing an Internet Response Option for the American Community Survey.” Presentation at the Annual Conference of the American Association of Public Opinion Research, Montreal, Quebec, Canada.

Grigorian, Karen, and Thomas B. Hoffer. 2008. “2006 Survey of Earned Doctorates Mode Assignment Analysis Report.” Prepared for the National Science Foundation by the National Opinion Research Center, University of Chicago, Chicago, IL.

Groves, Robert M., and Robert L. Kahn. 1979. Surveys by Telephone: A National Comparison with Personal Interviews. New York: Academic Press.

Holm, Sture. 1979. “A Simple Sequentially Rejective Multiple Test Procedure.” Scandinavian Journal of Statistics 6:65–70.

James, Jeannine M., and Richard Bolstein. 1990. “The Effect of Monetary Incentives and Follow-Up Mailings on the Response Rate and Response Quality in Mail Surveys.” Public Opinion Quarterly 54:346–61.

Kaplowitz, Michael D., Timothy D. Hadlock, and Ralph Levine. 2004. “A Comparison of Web and Mail Survey Response Rates.” Public Opinion Quarterly 68:94–101.

Manfreda, Katja Lozar, Michael Bosnjak, Jernej Berzelak, Iris Haas, and Vasja Vehovar. 2008. “Web Surveys Versus Other Survey Modes: A Meta-Analysis Comparing Response Rates.” International Journal of Market Research 50:79–104.

Messer, Benjamin L., and Don A. Dillman. Forthcoming. “Surveying the General Public over the Internet Using Address-Based Sampling and Mail Contact Procedures.” Public Opinion Quarterly.

Millar, Morgan M., Don A. Dillman, Benjamin L. Messer, and Meredith Williams. 2009. “Summary of Student Experience Survey Cognitive Interviews.” Unpublished data from the Social and Economic Sciences Research Center, Washington State University, Pullman, WA.

Millar, Morgan M., Allison C. O'Neill, and Don A. Dillman. 2009. “Are Mode Preferences Real?” Technical Report 09-003 of the Social and Economic Sciences Research Center, Washington State University, Pullman, WA. Available online at http://sesrc.wsu.edu/dillman/.

Pew Research Center. 2009. “November 30–December 27, 2009, Tracking Survey.” Pew Internet and American Life Project.

Schonlau, Matthias, Beth J. Asch, and Can Du. 2003. “Web Surveys as Part of a Mixed-Mode Strategy for Populations That Cannot Be Contacted by E-Mail.” Social Science Computer Review 21:218–22.

Schwartz, Barry. 2004. The Paradox of Choice: Why More Is Less. New York: Harper Perennial.

Shih, Tse-Hua, and Xitao Fan. 2007. “Response Rates and Mode Preferences in Web-Mail Mixed-Mode Surveys: A Meta-Analysis.” International Journal of Internet Science 2:59–82.

Smyth, Jolene D., Don A. Dillman, Leah Melani Christian, and Allison O'Neill. 2010. “Using the Internet to Survey Small Towns and Communities: Limitations and Possibilities in the Early 21st Century.” American Behavioral Scientist 53:1423–48.

Smyth, Jolene D., Kristen Olson, and Ashley Richards. 2009. “Unraveling Mode Preference.” Paper presented at the Annual Conference of the American Association of Public Opinion Research.

Stern, Michael J., Alison E. Adams, and Shaun Elsasser. 2009. “Digital Inequality and Place: The Effects of Technological Diffusion on Internet Proficiency and Usage across Rural, Suburban, and Urban Counties.” Sociological Inquiry 79:391–417.

Tarnai, John, and M. Chris Paxon. 2004. “Survey Mode Preferences of Business Respondents.” Paper presented at the Annual Conference of the American Association for Public Opinion Research.

Tversky, Amos, and Eldar Shafir. 1992. “Choice under Conflict: The Dynamics of Deferred Decision.” Psychological Science 3:358–61.

Warriner, Keith, John Goyder, Heidi Gjertsen, Paula Hohner, and Kathleen McSpurren. 1996. “Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment.” Public Opinion Quarterly 60:542–62.
1. The choice group nonrespondents were simply sent another request to participate via the mode of their choice.
2. Throughout all the analyses, response rates are compared using z-tests for differences in proportions. Tests are one-tailed except in the case of non-directional hypotheses (hypotheses 5 and 7). Significance tests were adjusted to account for multiple comparisons using the Bonferroni-Holm method (Holm 1979).
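The Bonferroni-Holm step-down procedure (Holm 1979) is simple to apply in code. Below is an illustrative Python sketch of the method; the three p-values are hypothetical examples (chosen to echo values mentioned in the text), not the study's actual test output:

```python
def holm_correction(p_values, alpha=0.05):
    """Bonferroni-Holm step-down procedure (Holm 1979).

    Sort the p-values in ascending order; compare the i-th smallest
    (1-indexed) to alpha / (m - i + 1), stopping at the first
    non-rejection. Returns reject/accept flags in the original order.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):       # rank is 0-indexed
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break                            # all larger p-values also fail
    return reject

# Three hypothetical pairwise-comparison p-values:
print(holm_correction([0.033, 0.001, 0.076]))
# → [False, True, False]: only the smallest p-value survives, showing how
#   a p-value of 0.033 can be significant alone but not after correction.
```

The procedure is uniformly more powerful than the plain Bonferroni correction while still controlling the familywise error rate, which is why it is the adjustment applied throughout these analyses.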
3. This response rate is lower than experiment 1's (55 percent). One reason is that two of the treatments did not include incentives. In addition, the combination of Thanksgiving vacation, final examinations, and the end of the semester three weeks after the Thanksgiving break may have depressed overall response, and the change of survey topic may also have made a difference.

Author notes

MORGAN M. MILLAR is a graduate research assistant in the Social and Economic Sciences Research Center and a Ph.D. candidate in the Department of Sociology at Washington State University, Pullman, WA, USA. Don A. Dillman is a Regents Professor in the Social and Economic Sciences Research Center and Department of Sociology at Washington State University, Pullman, WA, USA. An earlier draft of this article was presented at the annual meeting of the American Association of Public Opinion Research, Chicago, IL, May 2010. The authors wish to acknowledge with thanks the contributions of Benjamin Messer, Shaun Genter, Meredith Williams, Thom Allen, and other SESRC staff members in the design and implementation of these experiments. They also express thanks to Edith de Leeuw, Joop Hox, and participants in the August 2008 MESS Workshop in Zeist, the Netherlands, for encouraging this research. This work was supported by the United States Department of Agriculture–National Agricultural Statistics Service and the National Science Foundation Division of Science Resources Statistics [under cooperative agreement no. 43-3AEU-5-80039 to D. A. D.]. Additional support for data collection was provided by the SESRC and Department of Community and Rural Sociology at Washington State University.