IASS Webinar 49: Comparing Data Quality and Sources of Error in Probability-based Online Panels and Online Opt-in Samples

Date: 26 Feb 2025
Time: 13:00–14:30 GMT+01:00
Level of instruction: Intermediate
Instructor: Dr. Andrew Mercer
Registration fee:

Over the past decade, online surveys, both those recruited using traditional probability-based methods and those drawn from online opt-in sources, have grown to become the most common means of conducting public opinion research in the United States. During this same period, the Pew Research Center has conducted several studies examining data quality and sources of error in online probability-based and opt-in samples. In this webinar, Andrew Mercer will review the findings from this line of research and discuss the Center’s most recent such study, which compares the accuracy of six online surveys of U.S. adults: three from probability-based panels and three from opt-in sources. This is the first such study to include samples from multiple probability-based panels, allowing for their side-by-side comparison. The study was also designed to permit an in-depth comparison of accuracy not only for full-sample estimates but also for estimates within key demographic subgroups. Consistent with previous studies, it found that probability-based samples generally yielded more accurate estimates. While previous studies have tended to assume that differences between probability-based and opt-in samples are due to differences in the selection mechanism, the findings from this study suggest that many of the large biases found in online opt-in samples are instead due to measurement error stemming from the presence of “bogus respondents” who make no effort to answer questions truthfully. Critically, the study finds that errors from bogus respondents are especially large for subgroup estimates, particularly for 18- to 29-year-olds and Hispanic adults.


About the instructor

Andrew Mercer is a senior research methodologist at Pew Research Center. He is an expert on probability-based online panels, nonprobability survey methods, survey nonresponse, and statistical analysis. His research focuses on methods of identifying and correcting bias in survey samples. He leads the Center’s research on nonprobability samples and has co-authored several reports and publications on the subject. He also served on the American Association for Public Opinion Research’s task force on Data Quality Metrics for Online Samples. He has authored blog posts and analyses making methodological concepts such as margin of error and oversampling accessible to a general audience. Prior to joining the Center, Mercer was a senior survey methodologist at Westat. He received a bachelor’s degree in political science from Carleton College and master’s and doctoral degrees in survey methodology from the University of Maryland. His research has been published in Public Opinion Quarterly and the Journal of Survey Statistics and Methodology.