Volunteer-based internet surveys should not be used to represent broader public opinion, given their lack of grounding in the fundamental principles of survey research, according to a detailed report from the nation’s leading association of public opinion researchers.
“Researchers should avoid nonprobability online panels when one of the research objectives is to accurately estimate population values,” says the 81-page report from the American Association for Public Opinion Research. Beyond the lack of a theoretical underpinning, it cites “inherent and significant coverage error” (mainly by missing those without internet access) and a suspected “extremely high level of nonresponse,” saying this combination “presumably results in substantial bias.”
Nearly a year and a half in the works, the AAPOR Report on Online Panels presents challenging findings for the multi-billion-dollar business of so-called “opt-in online” surveys, in which participants sign up to click through questionnaires on the internet, generally in exchange for cash and gifts. Such surveys are particularly common in market research, given the speed and low cost with which they can be produced, but they also are often used to purport to represent public opinion.
AAPOR previously has said opt-in online panels should not claim a margin of sampling error, advice frequently ignored by some producers of such data. Its new report, by a 20-member committee, goes further: “There currently is no generally accepted theoretical basis from which to claim that survey results using samples from nonprobability online panels are projectable to the general population,” the report says. “Thus, claims of ‘representativeness’ should be avoided when using these sample sources.”
The report notes that probability-based measures of public attitudes (commonly referred to as random samples) have been “an established practice for valid survey research for over 50 years.” In contrast, it says, “The nonprobability character of volunteer online panels runs counter to this practice and violates the underlying principles of probability theory.”
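For context (this is standard survey statistics, not language from the report), the familiar margin of sampling error is derived under the assumption of probability sampling, which is exactly why AAPOR says it cannot properly be claimed for volunteer panels. A minimal sketch, assuming a simple random sample:

```latex
% Margin of sampling error for an estimated proportion p from a
% simple random sample of size n, with critical value z
% (z = 1.96 for 95 percent confidence):
\[
  \mathrm{MOE} = z \sqrt{\frac{p(1-p)}{n}}
\]
% The derivation requires every member of the population to have a
% known, nonzero chance of selection. In a volunteer opt-in panel
% those probabilities are unknown (and zero for anyone offline),
% so the formula has no defined value for such a sample.
```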
Opt-in online surveys have been rated as “not airworthy” by ABC News standards for more than a decade. Several other national news organizations have followed suit in avoiding such data.
I’ve reported before on the problems with opt-in online data, including a report on a groundbreaking academic study on the issue led by Stanford University researchers David Yeager and Jon Krosnick, here; their replies to critics, here and here; and previous postings, such as this and this.
Noting that “not all survey research is intended to produce precise estimates of population values,” the AAPOR report says opt-in online panels may be suitable for other uses, such as evaluating relationships among variables – for instance, how personal characteristics may interact with attitudes and behaviors. While it says that may be appropriate “especially when budget is limited and/or time is short,” the report also warns of potential biases in these uses.
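To make that distinction concrete, here is a minimal sketch of the kind of relationships-among-variables analysis the report describes. The data are synthetic and hypothetical, not from any study; the point is that an association between two measures can be informative even when neither measure's level is a trustworthy estimate of a population value.

```python
# A minimal sketch with synthetic, hypothetical data (not from the
# AAPOR report): estimating a relationship between variables rather
# than a population value.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical opt-in panel responses: hours online per day and a
# 0-100 attitude score, constructed so the two are genuinely related.
hours_online = rng.uniform(0, 8, size=1_000)
attitude = 40 + 4 * hours_online + rng.normal(0, 10, size=1_000)

# The correlation can be meaningful even if the sample's mean attitude
# is a biased estimate of the population mean, because selection into
# the panel may distort levels more than it distorts associations.
r = np.corrcoef(hours_online, attitude)[0, 1]
print(f"sample correlation: {r:.2f}")
```

Even here, though, the report's caution applies: if joining the panel is related to both variables, the estimated relationship itself can be biased.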
Elsewhere, it reports, “Empirical evaluations of online panels abroad and in the U.S. leave no doubt that those who choose to join online panels differ in important and nonignorable ways from those who do not.” The differences are not merely demographic: “In sum, the existing body of evidence shows that online surveys with nonprobability panels elicit systematically different results than probability sample surveys in a wide variety of attitudes and behaviors.” And in the available evidence, the report adds, the opt-in online results “are generally less accurate.”
In a further problem for users of such data, the report cites several studies that it says found “considerable variation” in results across different opt-in panels, “raising questions about the accuracy of the method.”
The report also decries a lack of adequate, publicly released research by online panel providers themselves. It says they could study nonresponse by comparing prospective and actual panel members; “however, very little has been reported.” Nonresponse also could be compared between panel members and participants in specific surveys; yet, “Despite a great deal of data being available to investigate this issue, little has been publicly reported.” It suggests studying panel attrition, adding, “We are aware of no published accounts to verify that this is being done.” It says nonresponse at recruitment is “only sparsely documented in the literature” (but believed to be “very high”); and on item-level nonresponse, again: “As far as we can tell, this type of analysis is seldom done.”
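One reason recruitment-stage nonresponse matters so much is that it compounds with every later stage: recruitment, profiling, retention, and survey completion each multiply down the share of originally invited people who end up in a given survey's data. A generic sketch with hypothetical, illustrative rates (not figures from the report):

```python
# Hypothetical stage-wise rates for an opt-in panel; each stage
# multiplies down the share of originally invited people who appear
# in a given survey's data. Illustrative numbers only.
recruitment_rate = 0.05  # share of invitees who join the panel
profile_rate     = 0.60  # share of joiners who complete profiling
retention_rate   = 0.50  # share still active after attrition
completion_rate  = 0.40  # share of invited actives who finish a survey

cumulative = (recruitment_rate * profile_rate
              * retention_rate * completion_rate)
print(f"cumulative participation rate: {cumulative:.1%}")  # 0.6%
```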
Similarly, in a section on efforts to adjust opt-in panel data, the report says “there appears to be no research” on one such effort, purposive sampling intended to reach targeted types of individuals within these panels. It also examines model- and weighting-based efforts to adjust opt-in online data, raising questions about both approaches.
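As a concrete example of the weighting-based approach, here is a minimal sketch of raking (iterative proportional fitting) to demographic margins. It is a generic illustration with hypothetical data and targets, not any panel vendor's actual procedure:

```python
# A generic raking (iterative proportional fitting) sketch with
# hypothetical data and population targets; not any vendor's method.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical opt-in sample, skewed young and female versus targets.
sample = pd.DataFrame({
    "age": rng.choice(["18-34", "35-54", "55+"], n, p=[0.55, 0.30, 0.15]),
    "sex": rng.choice(["F", "M"], n, p=[0.60, 0.40]),
})

# Hypothetical population margins (in practice, e.g., census figures).
targets = {
    "age": {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
    "sex": {"F": 0.51, "M": 0.49},
}

weights = np.ones(n)
for _ in range(50):  # cycle through the margins until they converge
    for var, margin in targets.items():
        total = weights.sum()
        for level, share in margin.items():
            mask = sample[var].eq(level).to_numpy()
            weights[mask] *= share * total / weights[mask].sum()

# Verify the weighted sample now matches each target margin.
for var, margin in targets.items():
    achieved = {lvl: round(weights[sample[var].eq(lvl).to_numpy()].sum()
                           / weights.sum(), 3) for lvl in margin}
    print(var, achieved)
```

Raking forces the weighted sample to match the chosen margins exactly; what it cannot do, per the evidence the report cites on how panel joiners differ from non-joiners, is correct for characteristics left out of the weighting model.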
Again, the full report’s here, highly recommended reading. Given the length of this post, I’ll reach out separately to producers of opt-in online data to invite their response to the AAPOR committee’s conclusions.
(Disclosure: I’m an AAPOR member. And its report was distributed last week – hey, I was on vacation.)
Written By: Matthew
Source: http://abcnews.go.com/blogs/politics/2010/04/study-group-issues-a-warning-on-optin-online-surveys/