This is only a short sketch on a topic that may be under the microscope for years to come.
Here is a synopsis of key takeaways from the 2016 presidential election:
WHAT EXACTLY HAPPENED?
There are several firms that generate national models using an array of polling sources and data, as well as proprietary techniques. The most salient question they answer is: “What are the chances of candidate X winning the election?” (i.e., an Electoral College victory). Estimates of the popular vote for each candidate are usually provided as well.
On Election Eve, the models gave Clinton a 71% to 99% chance of becoming the next president; the median was in the low 90s. The correct answer, however, turned out to be 0%.
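Summarizing those headline probabilities is a two-line calculation. A minimal sketch, where the individual figures are hypothetical stand-ins chosen to fall inside the quoted 71%–99% range, not any named model’s actual output:

```python
from statistics import median

# Hypothetical win probabilities from five forecast models, chosen to
# fall inside the 71%-99% range quoted above (not real model outputs).
win_probs = [0.71, 0.85, 0.92, 0.95, 0.99]

# The median of the five models' probabilities.
print(f"median chance of a Clinton win: {median(win_probs):.0%}")  # → 92%
```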
The National Aggregators
Their process is simple and transparent: pollsters are selected (subjectively), and the average of their estimates produces a single number, i.e., the spread in the popular vote between the top two candidates. RCP (Real Clear Politics) has provided these averages for several election cycles.
For the eleven polls in the RCP average, the final estimate was a 3.3% popular-vote lead for Clinton. However, the actual margin is expected to be about 1.3% (several million votes were still uncounted in areas favoring Democrats). Net result: RCP data for 2016 showed around a 2% “error”.
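The poll-of-polls arithmetic is nothing more than a mean of reported spreads. A minimal sketch in Python, using invented spread values rather than the actual eleven RCP polls:

```python
# Each entry is one poll's Clinton-minus-Trump spread, in percentage
# points. These six values are hypothetical, for illustration only.
spreads = [5.0, 4.0, 3.0, 3.0, 2.0, 1.0]

poll_of_polls = sum(spreads) / len(spreads)  # simple unweighted average
actual_margin = 1.3                          # final margin cited above

print(f"average spread: {poll_of_polls:.1f} pts")           # → 3.0 pts
print(f'"error": {poll_of_polls - actual_margin:.1f} pts')  # → 1.7 pts
```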
Here is an overlooked irony – the RCP 2012 “error” was a bit higher, with results off by 3.2%. Obama was estimated to be ahead of Romney by 0.7%, but he won the popular vote by just under 4%.
The 2008 RCP poll of polls was incredibly accurate, missing the final results by 0.3%.
“State polls were off in a way that has not been seen in previous presidential election years” – so says Sam Wang, a neuroscience professor at Princeton, whose model predicted a substantial Clinton Electoral College win, with a 99% chance of success. State polls are a critical input to such models.
The surfeit of state polls generated a good deal of data and lots of noise. For example, the final survey from the highly respected Marquette Law School showed Clinton ahead by a prodigious 46 to 40 in what turned out to be the crucial state of Wisconsin. On Election Night, final results showed Clinton down by 1, a net seven-point error.
Election Night Exit Polls
This survey covered over 24,000 voters, combining phone interviews (representing early and absentee voters) with interviews of about 20,000 Election Day voters leaving 350 polling places.
There has been a tradition of being wary of “early exit polls” (in 2004, they showed Kerry ahead of Bush by 4%). Nevertheless, selected national exit poll data were made available around 6:30 p.m. EST. These included estimates for younger and older voters, college- and non-college-educated voters, white voters, etc. A series of simple calculations seemed to foreshadow about a 4% Clinton popular-vote lead, mirroring other estimates from the preceding few days.
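The “simple calculations” referred to above amount to weighting each demographic group’s candidate margin by that group’s share of the electorate. A hedged sketch, with group names, shares, and margins invented purely for illustration (not the actual exit poll crosstabs):

```python
# Maps a demographic group to (share of electorate, Clinton-minus-Trump
# margin in points). All numbers below are invented for illustration.
groups = {
    "college-educated white": (0.37,   4.0),
    "non-college white":      (0.34, -30.0),
    "nonwhite":               (0.29,  50.0),
}

# Overall spread = sum of each group's margin weighted by its share.
estimated_spread = sum(share * margin for share, margin in groups.values())
print(f"implied popular-vote spread: {estimated_spread:+.1f} pts")  # → +5.8 pts
```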
By 8:30 p.m. EST, state exit polls for Georgia, Virginia, and Ohio appeared even stronger for Clinton. However, less than an hour later, exit poll data began to change, in many cases significantly.
Final exit results showed an extremely close popular vote and razor-thin margins in several states, including the “Democratic firewall” states that Obama had won easily. The rest is history.
The American Association for Public Opinion Research, as it has done in the last several elections, has already convened a panel of survey research and election polling experts (list on website) to conduct a post-hoc analysis of the 2016 polls. The goal of this committee is to prepare a report that summarizes the accuracy of 2016 pre-election polling (for both primaries and the general election), reviews variation by different methodologies, and identifies differences from prior election years.
There is much speculation today about what led to these errors, and already a chorus of concerns about a “crisis in polling” has emerged in headlines on news and social media sites. As final results continue to be tabulated, it would be inappropriate for us to engage in conjecture.
Media Reactions and Implications
Commentary on the situation is ubiquitous; here are four links to articles that focus on potential implications for the polling industry, and for market research and analytics in general:
From AdAge: Why Pollsters Got the Election So Wrong, and What It Means for Marketers http://adage.com/article/campaign-trail/pollsters-wrong-means-marketers/306697/
From CIO: Is Trump’s unexpected victory a failure for big data? Not really
From the Chicago Tribune: Failed polls call into question the profession of prognostication http://www.chicagotribune.com/news/nationworld/politics/ct-presidential-polls-failed-20161109-story.html
From Bloomberg: Failed polls in 2016 call into question a profession’s precepts http://www.bloomberg.com/politics/articles/2016-11-09/failed-polls-in-2016-call-into-question-a-profession-s-precepts
After Ted Cruz won the Iowa Caucus, reports suggested his campaign’s sophisticated use of data and analytics to target voters with messages customized to their psychological proclivities had a lot to do with it. A few months later the Texas Senator left the race.
Donald Trump captivated primary voters with simple mass-marketed brand messaging through earned media rather than spending on precisely-targeted digital and TV media. This will have many pundits wondering what it all means for the use of data in politics.
Chris Wilson, director of research & analytics for the Cruz campaign, said that the Senator survived amid a flood of 17 candidates: “So, no, it’s not magic. But a sophisticated data operation sure can make things easier along the way.”