
What is an acceptable survey response rate?

November 12, 2014

2019 UPDATE: We now have a series of explainer videos that explore this topic more thoroughly. You can find them here.

I’ve been investigating this question. What follows is a very un-academic and incomplete summary of information from a few articles and websites.

Using a Google Scholar search for “survey response rates,” limited to articles published from 2010 to 2014, I found a 2010 review article in the journal Computers in Human Behavior:

Fan, W., & Yan, Z. (2010). Factors affecting response rates of the web survey: A systematic review. Computers in Human Behavior, 26(2), 132-139.

Here’s an interesting fact I gleaned from that article: “Based on a recent meta-analysis (Manfreda, Bosnjak, Berzelak, Haas, & Vehovar, 2008) of 45 studies examining differences in the response rate between web surveys and other survey modes, it is estimated that the response rate in the web survey on average is approximately 11% lower than that of other survey modes.”

The other article of great interest, and particular relevance to web-based surveys of college populations, is a 2011 article in Public Opinion Quarterly:

Millar, M. M., & Dillman, D. A. (2011). Improving response to web and mixed-mode surveys. Public Opinion Quarterly, 75(2), 249-269.

Abstract: We conducted two experiments designed to evaluate several strategies for improving response to Web and Web/mail mixed-mode surveys. Our goal was to determine the best ways to maximize Web response rates in a highly Internet-literate population with full Internet access. We find that providing a simultaneous choice of response modes does not improve response rates (compared to only providing a mail response option). However, offering the different response modes sequentially, in which Web is offered first and a mail follow-up option is used in the final contact, improves Web response rates and is overall equivalent to using only mail. We also show that utilizing a combination of both postal and email contacts and delivering a token cash incentive in advance are both useful methods for improving Web response rates. These experiments illustrate that although different implementation strategies are viable, the most effective strategy is the combined use of multiple response-inducing techniques.

This is from a 2009 web paper by Kathy Biersdorff, a business consultant in the Calgary area.

 When I said that there is no simple answer to the question of how many is enough, this does not mean that people have been unwilling to go on record with a numerical answer. Here are some expert opinions as to what is considered good or adequate as a mail survey response rate:

25% – Dr. Norman Hertz when asked by the Supreme Court of Arizona

30% – R. Allen Reese, manager of the Graduate Research Institute of Hull U. in the United Kingdom

36% – H. W. Vanderleest (1996) response rate achieved after a reminder

38% – in Slovenia where surveys are uncommon

50% – Babbie (1990, 1998)

60% – Kiess & Bloomquist (1985) to avoid bias by the most happy/unhappy respondents only

60% – AAPOR study looking at minimum standards for publishability in key journals

70% – Don A. Dillman (1974, 2000)

75% – Bailey (1987) cited in Hager et al. (2003 in Nonprofit and Voluntary Sector Quarterly, pp. 252-267)

In addition, various studies described their response rate as “acceptable” at 10%, 54%, and 65%, while others on the American Psychological Association website reported caveats regarding non-responder differences for studies with 38.9%, 40% and 42% response rates.

  I went to the fount of all knowledge, Wikipedia, and found a rather nice summary of some articles investigating the effect of response rate:

 One early example of a finding was reported by Visser, Krosnick, Marquette and Curtin (1996) who showed that surveys with lower response rates (near 20%) yielded more accurate measurements than did surveys with higher response rates (near 60 or 70%).[2] In another study, Keeter et al. (2006) compared results of a 5-day survey employing the Pew Research Center’s usual methodology (with a 25% response rate) with results from a more rigorous survey conducted over a much longer field period and achieving a higher response rate of 50%. In 77 out of 84 comparisons, the two surveys yielded results that were statistically indistinguishable. Among the items that manifested significant differences across the two surveys, the differences in proportions of people giving a particular answer ranged from 4 percentage points to 8 percentage points.[3]

 A study by Curtin et al. (2000) tested the effect of lower response rates on estimates of the Index of Consumer Sentiment (ICS). They assessed the impact of excluding respondents who initially refused to cooperate (which reduces the response rate 5–10 percentage points), respondents who required more than five calls to complete the interview (reducing the response rate about 25 percentage points), and those who required more than two calls (a reduction of about 50 percentage points). They found no effect of excluding these respondent groups on estimates of the ICS using monthly samples of hundreds of respondents. For yearly estimates, based on thousands of respondents, the exclusion of people who required more calls (though not of initial refusers) had a very small one.[4]

 Holbrook et al. (2005) assessed whether lower response rates are associated with less unweighted demographic representativeness of a sample. By examining the results of 81 national surveys with response rates varying from 5 percent to 54 percent, they found that surveys with much lower response rates decreased demographic representativeness within the range examined, but not by much.[5]
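
One way to reconcile these findings is the standard back-of-the-envelope expression for nonresponse bias: the bias in a respondent mean is roughly the nonresponse rate multiplied by the difference between respondents and nonrespondents. A low response rate therefore only hurts you to the extent that the people who did not respond actually differ from those who did. Here is a minimal Python sketch of that relationship; the numbers are purely hypothetical and are not taken from any of the studies above:

```python
# Approximate nonresponse bias in a sample mean:
#   bias ≈ (1 - response_rate) * (respondent_mean - nonrespondent_mean)
# Low response rates are harmful only insofar as nonrespondents differ.

def nonresponse_bias(response_rate: float,
                     respondent_mean: float,
                     nonrespondent_mean: float) -> float:
    """Approximate bias of the respondent mean relative to the full population."""
    return (1 - response_rate) * (respondent_mean - nonrespondent_mean)

# A 60% response rate with very different nonrespondents (hypothetical values)...
print(nonresponse_bias(0.60, respondent_mean=4.2, nonrespondent_mean=5.4))  # -0.48
# ...can yield more bias than a 20% response rate whose nonrespondents are similar.
print(nonresponse_bias(0.20, respondent_mean=4.2, nonrespondent_mean=4.5))  # -0.24
```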

Finally, and to further complicate matters, let me remind you of a few non-statistical or quasi-statistical factors that will affect decisions about what constitutes an adequate sample size and response rate:

  1. Perceived believability: We all know how influential perceptions are. Will your audience believe that your survey data truly represents them?
  2. Need to look at subgroups: We know that there are consistently three high-risk groups on college campuses: incoming freshmen, fraternity members, and varsity athletes. It is difficult in a survey as large and costly as the ACHA-NCHA to achieve adequate representation of fraternity members and varsity athletes, so you may have to plan smaller-scale surveys specifically for those groups if you want to track changes in perception, use, and negative outcomes for those high-risk groups.
  3. Bias: The lower the response rate, the greater the chance that the respondent group is biased in some way. This can make longitudinal differences particularly difficult to interpret: if there is a change from previous survey years, is that a real change, or is it due to some bias in the response group (particularly if the respondents are not representative in terms of exposure to the intervention or risk)?
  4. Demographic representativeness: This is actually a subcategory of bias, but it deserves special mention since we know that demographic factors (gender, age, race/ethnicity) affect drinking rates and patterns. Even with a relatively high response rate, you should always check whether your sample is demographically similar to your population; a minimal sketch of one such check follows this list.
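
For item 4, here is a minimal sketch of a demographic representativeness check, assuming Python with scipy is available. The category labels, counts, and population shares are hypothetical, and a chi-square goodness-of-fit test is just one simple way to make the comparison:

```python
# Hypothetical check of whether survey respondents match the population
# on one demographic variable, using a chi-square goodness-of-fit test.
from scipy.stats import chisquare

# Observed respondent counts by class year (hypothetical numbers).
observed = {"freshman": 180, "sophomore": 140, "junior": 120, "senior": 110}

# Known population proportions for the same categories (hypothetical numbers).
population_share = {"freshman": 0.28, "sophomore": 0.25, "junior": 0.24, "senior": 0.23}

n = sum(observed.values())
expected = [population_share[k] * n for k in observed]  # expected counts if the sample mirrored the population

stat, p_value = chisquare(list(observed.values()), f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the respondent mix differs from the population,
# so weighting or more cautious interpretation may be warranted.
```

In practice you would repeat a check like this for each demographic variable of interest (gender, age, race/ethnicity) and consider weighting the results if the respondent mix differs noticeably from the population.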
