What is the response rate? How is it calculated? Is it "good"?

The precise figure varies from year to year and by campus, but at Berkeley it has been:

  • 2005: 52% (11,673 out of 22,450)
  • 2006: 48% (10,717 out of 22,430)
  • 2007: 51% (11,957 out of 23,278)
  • 2008: 50% (11,833 out of 23,904)
  • 2009: 37% (9,016 out of 24,379)
  • 2010: 45% (11,203 out of 24,967)
  • 2011: 33% (7,750 out of 23,656)
  • 2012: 39% (9,732 out of 25,203)
  • 2013: Survey not administered

The numerator is the number of undergraduate students who logged into the survey system and submitted their responses. The denominator is the population of eligible undergraduates.
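As a simple illustration, each rate in the table above is just the number of submitted responses divided by the eligible population, expressed as a rounded percentage. A minimal sketch (using two of the year figures reported above):

```python
def response_rate(responses: int, eligible: int) -> int:
    """Response rate as a percentage, rounded to the nearest whole point.

    numerator:   undergraduates who logged in and submitted responses
    denominator: all eligible undergraduates (including those with
                 inactive email addresses, per the note below)
    """
    return round(100 * responses / eligible)

# 2012: 9,732 submissions out of 25,203 eligible undergraduates
print(response_rate(9_732, 25_203))   # 39
# 2005: 11,673 submissions out of 22,450 eligible undergraduates
print(response_rate(11_673, 22_450))  # 52
```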

Not every student answers every item presented, and in general some items (such as parental income and open-ended items that require a typed response) are more likely to be skipped than others. One departure from common practice in reporting web survey response rates is that the denominator includes students with inactive email addresses on the assumption that they may have been contacted about the survey through other means such as advertising, direct communications from academic departments and other units, or word of mouth.

Opinions vary greatly as to what makes a "good," "high," or "adequate" response rate and what relationship, if any, there is between response rate and the accuracy of the results. While conventional wisdom holds that low response rates yield biased results and that increasing response rates tends to reduce bias, recent research suggests that neither of these claims is necessarily the case. See, for example:

Groves, Robert M. 2006. "Nonresponse rates and nonresponse bias in household surveys." Public Opinion Quarterly 70:646-675. [Also see other articles from the same special issue.]

Keeter, Scott, Carolyn Miller, Andrew Kohut, Robert M. Groves, and Stanley Presser. 2000. "Consequences of reducing nonresponse in a large national telephone survey." Public Opinion Quarterly 64:125-148.

Krosnick, Jon A. 1999. "Survey research." Annual Review of Psychology 50:537-567.

Visser, Penny S., Jon A. Krosnick, Jesse Marquette, and Michael Curtin. 1996. "Mail surveys for election forecasting? An evaluation of the Columbus Dispatch poll." Public Opinion Quarterly 60:181-227.