Summary
This article investigates the degree to which alternative forms of demographic screening within non-probability samples more accurately represent well-established benchmark norms from large-scale studies such as:
- National Health Interview Survey
- American National Election Studies
- General Social Survey
- American Community Survey.
In addition, the article examines whether model-based methods of screening non-probability samples improve data accuracy.
Using the ARF’s Foundations of Quality survey data, the authors demonstrate that:
- Increasing the number of demographic selection quotas did not reduce bias or improve accuracy.
- Adding race/ethnicity and education quotas did not lessen bias (estimates did not move closer to the benchmarks).
- The primary utility of oversampling under-represented groups is that it enables results to be reported for those groups; oversampling did not appear to decrease bias.
- Demographic weighting generally did not reduce bias in non-probability samples that were already selected on demographics (a sketch of this kind of weighting follows this list).
- Some model-based sample selection approaches showed promise in reducing bias and improving accuracy; variables other than demographics can help narrow the gap between non-probability sample results and probability-based estimates.
- However, these approaches generally required screening considerably more respondents than were ultimately selected for participation.
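To make the demographic-weighting finding concrete, the sketch below shows one common way such weights are computed: iterative proportional fitting ("raking") of unit weights to population margins. This is an illustrative assumption, not the authors' implementation; the variable names, categories, and target proportions are made up for the example.

```python
# Illustrative sketch only: raking survey weights to demographic margins.
# Column names, categories, and target proportions below are assumed for
# demonstration and do not come from the Foundations of Quality data.
import pandas as pd

def rake_weights(df, margins, max_iter=50, tol=1e-6):
    """Adjust unit weights so weighted category shares match each margin.

    df      : DataFrame with one row per respondent.
    margins : dict mapping column name -> {category: target proportion}.
    Returns a Series of weights scaled to mean 1.0.
    """
    weights = pd.Series(1.0, index=df.index)
    for _ in range(max_iter):
        max_shift = 0.0
        for col, targets in margins.items():
            # Current weighted share of each category of this variable.
            shares = weights.groupby(df[col]).sum() / weights.sum()
            # Scale weights by target/current ratio, category by category.
            ratio = df[col].map(lambda c: targets[c] / shares[c])
            weights = weights * ratio
            max_shift = max(max_shift, (ratio - 1.0).abs().max())
        if max_shift < tol:
            break
    return weights / weights.mean()

# Example with made-up respondents and census-style targets.
sample = pd.DataFrame({
    "sex":  ["F", "F", "M", "F", "M", "M", "F", "M"],
    "educ": ["BA+", "HS", "HS", "BA+", "HS", "BA+", "HS", "HS"],
})
targets = {
    "sex":  {"F": 0.51, "M": 0.49},
    "educ": {"HS": 0.65, "BA+": 0.35},
}
print(rake_weights(sample, targets).round(3))
```

Weighting of this kind can only correct the demographic composition of a sample; as the findings above indicate, it does not by itself remove bias that is unrelated to the weighting variables.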