Research & Data Quality

Validating Programmatic Audiences

  • Laura Lewellyn – Senior Director, Market Innovation, Lotame; Jim Warner – CTO, Survata

With so much third-party data available, how can you tell how well that data represents your intended consumer target? Lotame and Survata partnered to link first- and third-party data, boosting accuracy rates for identifying auto intenders on a publisher’s website by nearly one-third.


ARF: How to Improve Digital Targets

The ARF has announced two new efforts to tackle data quality questions:

  • Data Labeling Initiative – with the ANA’s Data Marketing & Analytics division, the IAB Tech Lab, and the Coalition for Innovative Media Measurement (CIMM).
  • Data Validation Initiative – independently conducting research to determine whether surveys can be used to measure the accuracy of digital targets.

Determining What’s Better: Single or Multiple Attitudinal Measures

  • Lawrence Ang, Martin Eisend, Journal of Advertising Research

Academic advertising researchers argue that multiple measures are needed for accuracy, while practitioners argue that a single measure of attitudes is sufficient. Who’s right? This article suggests it is better to have one valid measurement item that fully captures the semantic meaning of the construct than multiple poor ones, no matter how internally consistent the resulting scale may be.


Representativeness in an Era of Nonresponse and Nonprobability Samples

  • Andrew Mercer, Pew Research Center

The continuing rise in nonresponse rates has led to growing concern about the representativeness of surveys. Andrew Mercer, Senior Research Methodologist at the Pew Research Center, provides an overview of the issue along with best practices in modeling representativeness, such as thinking through your modeling assumptions before collecting data.

Quota Controls in Survey Research

  • Steven H. Gittelman, Randall K. Thomas, Paul J. Lavrakas, Victor Lange, Journal of Advertising Research

Non-probability samples have become the de facto norm in online survey and marketing research. But can improved demographic screening tools reduce non-probability error and bring results closer to those of random-probability samples? This JAR article shows that additional demographic screening did not reduce bias or improve data accuracy when measured against benchmarks from large-scale random-probability studies.
