Improving Viewership Projections: Forecasting for Data-Driven Audience Segments

Diana Saafi, Data Science Lead at Discovery, built on Cliff Young’s comments about the importance of multiple indicators for forecasting. Saafi forecasts linear television audiences in segments of interest to advertisers (beyond age-sex segments), rather than political outcomes. She and her team have found that their models benefit from drawing on multiple sources of data. She recommended identifying the signals that are most predictive, experimenting with different types of models (such as ARIMA models and AI models), continually refreshing the data in the models, and continually updating the models themselves. While this process is now automated at Discovery, people still monitor changes in the predictions, an arrangement she referred to as “human-in-the-loop automation.”
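The workflow Saafi describes, fitting a forecasting model, refreshing it as data arrives, and routing surprising predictions to a human reviewer, can be sketched in a few lines. The AR(1) baseline, the toy audience series, and the 15% review threshold below are illustrative assumptions, not Discovery’s actual models:

```python
# Sketch of "human-in-the-loop automation": fit a simple autoregressive
# baseline on recent data, forecast the next period, and flag large
# prediction errors for a human analyst to inspect.
# The AR(1) model and the 15% tolerance are illustrative assumptions.

def fit_ar1(series):
    """Least-squares AR(1) fit: y[t] ~ a + b * y[t-1]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    a = my - b * mx
    return a, b

def forecast_next(series):
    """Refit on the latest window and forecast one step ahead."""
    a, b = fit_ar1(series)
    return a + b * series[-1]

def needs_review(predicted, actual, tolerance=0.15):
    """Human-in-the-loop gate: flag forecasts off by more than 15%."""
    return abs(actual - predicted) / actual > tolerance

# Toy weekly audience figures (thousands) for one advertiser segment.
history = [820, 804, 811, 798, 790, 785, 779]
pred = forecast_next(history)
flagged = needs_review(pred, actual=640)  # a sharp, unexplained drop
```

In a production setting the AR(1) fit would be replaced by the ARIMA or AI models Saafi mentions, but the monitoring gate works the same way: the automation runs unattended until a forecast misses badly, at which point a person investigates.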

In Defense of Polling

In an interview with ARF CEO &amp; President Scott McDonald, Ph.D., David Dutwin, SVP of Strategic Initiatives at NORC, a past president of AAPOR, and a survey research expert, encouraged the advertising and marketing industry to maintain its faith in survey research. Surveys for marketing and advertising do not have to contend with two problems that afflict election forecasting based on polls:

  1. Unlike market research surveys, pre-election polls are “measuring a population that doesn’t [yet] exist”: the population that will actually vote in the election.
  2. Given that lack of trust in major media is stronger at one end of the political spectrum than the other, non-response to surveys may well be correlated with political opinions, but not with the subjects of most media and advertising surveys. Non-response is therefore likely to be less damaging for market research surveys.

There Were Always Mixed Signals: Triangulating Uncertainty

Cliff Young, President of US Public Affairs at Ipsos, proposed that multiple indicators be used to forecast elections, not data from the horse-race question alone. In particular, leaders’ approval ratings are strong predictors of their probabilities of winning, and Trump’s approval rating exceeded his horse-race support in several swing states. Taking this variable and others (incumbency, perceptions of the most important problems facing the country) into account, Ipsos created a model based on the results of over 800 elections across the globe. The model predicted that a narrow Biden win with Republicans retaining control of the Senate was more likely than a Blue Wave.
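A multi-indicator model of this kind is often a logistic combination of the inputs Young lists. The sketch below shows the general shape only; the weights and bias are invented for illustration and are in no way Ipsos’s fitted coefficients from the 800-election dataset:

```python
import math

def win_probability(approval, incumbent, horse_race_margin,
                    weights=(0.08, 0.6, 0.12), bias=-4.0):
    """Logistic combination of election indicators.

    approval: leader's approval rating (0-100)
    incumbent: 1 if the leader is the incumbent, else 0
    horse_race_margin: lead in the horse-race question (points)

    All weights and the bias are illustrative assumptions, not
    values fitted to any real cross-national election data.
    """
    w_a, w_i, w_h = weights
    z = bias + w_a * approval + w_i * incumbent + w_h * horse_race_margin
    return 1 / (1 + math.exp(-z))
```

The point of such a model is that the horse-race margin is only one input: a candidate trailing slightly in the horse race can still have a solid win probability if approval and incumbency point the other way, which is how approval exceeding horse-race support becomes informative.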

What Did Pollsters Learn from the 2020 Election Polls?

Kathy Frankovic, a polling expert who led the survey unit at CBS News for over 30 years and now consults for YouGov, highlighted two plausible hypotheses for the polling industry’s over-estimation of Democratic strength in the election:

  1. Likely voter models built on past voting practices: Likely voter models were based on the norm of Election-Day voting, and were unprepared for an election in which two-thirds of 2020 votes were not cast on Election Day. Typically, one to two percent of mail-in votes do not get counted, but in seven states about 10% or more were rejected.
  2. The “missing” Trump voter (as opposed to the “shy” Trump voter): In states which Trump carried with 55% or more of the vote, YouGov pre-election polls showed him tied with Biden. Trump’s bashing of the polls may have discouraged his supporters from participating in polls.

The Exploding Complexity of Programming Research, and How to Measure It, When Content is King

Programming researchers are not getting the data they need to make informed decisions, and Joan FitzGerald (Data ImpacX) used streaming’s complex ecosystem to explain the conundrum facing programmers. Despite an inundation of new forms of data, key insights into monetization and performance remain unsupported, leaving programmers without a comprehensive picture of their audience. Together with Michael McGuire of MSA, FitzGerald outlined a methodology funnel that combines 1st-, 2nd-, and 3rd-party data to create equivalized metrics that, once leveraged, could meet critical programming research demands.

How Researchers Can Learn from Recent Political Polling Challenges

This event was suggested by the ARF’s LA Media Research Council in the aftermath of the poor performance of last November’s election polls. Council members felt that the discussions about polling could impact trust in consumer and media research, and that we should explore what research suppliers are doing to implement best practices. Research quality has always been a key issue for the ARF. Most recently, an ARF event about the November polls found that while some issues are unique to political polling, many affect all survey research: for example, obtaining representative samples while response rates are declining, ensuring the validity of responses, and predicting behavior and attitude trends.

Contending with Algorithmic Bias

On March 16, 2022, the ARF Cultural Effectiveness Council hosted a discussion on bias in the algorithms and models used by organizations, particularly those in advertising and marketing, to make selection or recommendation decisions. Speakers from Publicis Media, Twitter, Wunderman Thompson, Cassandra, and the University of Southern California shed light on why this issue arises, what its effects can be, and how to contend with it. The session was moderated by Council Co-Chair Janelle James of Ipsos.