data analytics

How AI & ML Are Being Used Now

  • Joan FitzGerald, Data ImpacX; Hillary Haley, RPA; Anita Lynch, Disney|ABC Television Group; Alice Sylvester, Sequent Partners; and Audrey Thompson, Oracle Data Cloud

Hot terms or hotly contested terms? Although “artificial intelligence” and “machine learning” (aka AI and ML) are increasingly part of the marketing conversation, experts debate how extensively they are actually being used today. See how leading marketers, agencies and researchers define AI and ML and how they are currently applying these concepts to drive business.


Improving Consumer Profiles with Data Signals

  • John Gim – SVP, Advanced Analytic Solutions, RAPP; David Popkin – Head of Data Strategy for Brands, LiveRamp

Addressable consumer profiles across media. Actionable data. Connected technology. LiveRamp and RAPP offer suggestions on how to leverage these inputs, using signals from a range of sources including location, transactional and online behavioral data to create more robust consumer/customer profiles and segments that lead to better outcomes.


The Top Three Data Trends of 2016 via AdAge

If there’s one phrase that could be used to describe the momentum behind marketing data business ventures in 2016, it’s full steam ahead. Companies helping marketers connect consumer data dots, tech firms turning mobile data exhaust into targeting tools for advertisers, and firms fostering partnerships to share the data wealth moved at a fast pace this year.

Among the key trends in data-driven marketing were:

  • The proliferation of location data sensors
  • The use of personally identifiable information to target and measure ad campaigns
  • A wave of partnerships complicating the web of data dissemination even more

Access full article from AdAge

Catching Their Ad Tech in Bed with Fake News, Marketers Ask Fraud Fighters for Help via AdAge


Ad buyers are joining the fight against the “fake news” that many people blame for misinforming voters during the presidential campaign.

Although the focus initially fell on Facebook and Google, where made-up headlines became easy to find, pressure has also come to bear on lesser-known companies that provide the financial motivation for fake news.

Now ad-fraud fighters, usually hired to prevent scam artists from stealing ad budgets with fake traffic, are being asked to help brands avoid websites with real audiences but with fake stories.

“Brand safety” online has historically involved making sure ads don’t appear on pornographic or vulgar websites. That’s changed, as fake news sites have found themselves working with nearly every major player in ad tech.

“This is a new frontier in the fraud war and it came out of a weird place,” said Scott Meyer, CEO of Ghostery. “And it’s going to be a challenge for these companies for exactly that reason.”

Access full article from AdAge

NYCU – Special Election 2016 Edition – Opinion by David Marans, ARF Consultant: A Polling Fiasco?

This is only a short sketch on a topic that may be under the microscope for years to come.

Here is a synopsis of key takeaways from the 2016 presidential election:

  • Electoral College – modeling, based on a variety of inputs, especially national and state polling data, predicted around a 90% chance of Clinton winning (and collecting well over 300 electoral votes)
  • State polling – these polling errors were largely to blame for the underestimating of Trump’s chances of winning the electoral college vote
  • National polling – an aggregator, throughout the year, provided the average estimated national popular vote from a selected group of polls. The final number (the gap between the two leading candidates) was off by about 2.3%. However, this gap was actually a slight “improvement” over 2012, which showed a 3.3% differential between the final estimate and the actual results
  • Early exit polls – data from the consortium released in the early evening seemed to mirror the aggregator averages, i.e. a Clinton victory
  • Why this went wrong – there are a plethora of topics under consideration, here is a sample:
    • Undecided and late switchers favoring Trump (undetected in last published polls)
    • “Likely voter” construct, i.e. sample composition understated turnout among more rural/no college degree voters (which helped Trump) concomitant with decreases in Democratic participation
    • Methodological issues in a smartphone era; challenges in recruiting individuals (very small response rates which created more potential non-response error)
    • Respondents not comfortable about telling pollsters their actual preference
  • Brexit – only months earlier, the UK vote on whether to Leave or Remain in the EU confounded pollsters as well. The actual vote had “Leave” 4% higher than “Remain,” while the poll of polls had shown “Leave the EU” 2% behind, a net 6-point miss

WHAT EXACTLY HAPPENED?

The Modelers

There are several firms that generate national models using an array of polling sources/data, as well as applying proprietary techniques. The most salient question they answer is: “What are the chances of XXX winning the election?” (i.e., an Electoral College victory). Estimates of the popular vote for each candidate are usually provided as well.

On Election Eve, the models showed Clinton with a 71% to 99% chance of becoming the next president; the median was in the low 90s. However, the correct answer turned out to be 0%.

The National Aggregators

Their process is simple and transparent. Pollsters are selected (subjectively) and the average of all of their estimates produces a single number, i.e. the spread in popular vote between the top two candidates. RCP (Real Clear Politics) has been providing these for several election cycles.

For the eleven polls in the RCP average, the final estimate was a 3.3% popular-vote lead for Clinton. However, the actual number is expected to be about 1.3% (several million votes were still uncounted in areas favoring Democrats). Net result: RCP data for 2016 showed around a 2% “error.”

Here is an overlooked irony – the RCP 2012 “error” was a bit higher, with results off by 3.2%. Obama was estimated to be ahead of Romney by 0.7%, but he won the popular vote by just under 4%.

The 2008 RCP poll of polls was incredibly accurate, missing the final results by 0.3%.
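The aggregation and “error” arithmetic described above is simple enough to sketch. The following minimal illustration uses the approximate figures quoted in this piece (the function names are ours, and the individual poll spreads in the usage example are invented for demonstration):

```python
# Sketch of the poll-of-polls arithmetic: average individual poll spreads
# into one estimate, then compare that estimate to the final result.
# A "spread" is the margin in percentage points between the two leading
# candidates; positive values here mean a Democratic lead.

def aggregate_spread(poll_spreads):
    """Average the individual poll spreads into a single estimate."""
    return sum(poll_spreads) / len(poll_spreads)

def polling_error(estimated_spread, actual_spread):
    """Absolute gap between the aggregated estimate and the final result."""
    return abs(estimated_spread - actual_spread)

# 2016: RCP final estimate Clinton +3.3; actual popular vote about +1.3
print(round(polling_error(3.3, 1.3), 1))   # -> 2.0
# 2012: Obama estimated +0.7; he won by just under 4 (about +3.9)
print(round(polling_error(0.7, 3.9), 1))   # -> 3.2
```

Note that a low aggregate error can still mask large offsetting state-level misses, which is exactly what the Wisconsin example below shows.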

State Polls

“State polls were off in a way that has not been seen in previous presidential election years” – so says Sam Wang, a neuroscience professor at Princeton, whose model predicted a substantial Clinton Electoral College win, with a 99% chance of success. State polls are a critical input.

The surfeit of state polls generated a good deal of data and lots of noise. For example, the final survey from the highly respected Marquette Law School Poll showed Clinton ahead by a prodigious 46 to 40 in what turned out to be the crucial state of Wisconsin. On Election Night, final results showed Clinton down by 1, a net seven-point error.

Election Night Exit Polls

This survey comprised over 24,000 voters, combining phone interviews (representing early and absentee voters) with about 20,000 Election Day voters interviewed as they left 350 polling places.

There has been a tradition of being wary of “early exit polls” (in 2004, they showed Kerry ahead of Bush by 4%). Nevertheless, selected national exit poll data were made available around 6:30pm EST. These included estimates for younger and older voters, those with and without college educations, white voters, etc. A series of simple calculations seemed to foreshadow about a 4% Clinton popular-vote lead, mirroring other estimates from the preceding few days.
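The “series of simple calculations” on exit-poll crosstabs amounts to a share-weighted sum of subgroup margins. A minimal sketch follows; the subgroup shares and margins are invented for illustration and are not the actual 2016 exit-poll numbers:

```python
# Weight each subgroup's candidate margin (in points) by its share of the
# electorate, then sum, to estimate the overall popular-vote margin.
# Positive margins here mean a Clinton lead.

def overall_margin(subgroups):
    """subgroups: list of (share_of_electorate, margin_in_points) pairs."""
    return sum(share * margin for share, margin in subgroups)

# Two invented education subgroups, purely for illustration:
groups = [
    (0.50, +9.0),   # college-educated voters, Clinton +9
    (0.50, -1.0),   # non-college voters, Clinton -1
]
print(overall_margin(groups))  # -> 4.0
```

The same arithmetic also shows why the estimate is fragile: if the assumed subgroup shares are wrong (e.g., non-college turnout is understated), the weighted sum shifts accordingly.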

By 8:30pm EST, state-level exit poll data for Georgia, Virginia and Ohio appeared even stronger for Clinton. However, less than an hour later, exit poll data began to change, in many cases significantly.

Final exit results showed a very close popular vote and extremely close races in several states, including “Democratic firewall” states that Obama had won easily. The rest is history.

AAPOR

The American Association for Public Opinion Research, as it has done in the last several elections, has already convened a panel of survey research and election polling experts (list on website) to conduct a post-hoc analysis of the 2016 polls. The goal of this committee is to prepare a report that summarizes the accuracy of 2016 pre-election polling (for both primaries and the general election), reviews variation by different methodologies, and identifies differences from prior election years.

There is much speculation today about what led to these errors, and already a chorus of concern about a “crisis in polling” has emerged in headlines on news and social media sites. As final results continue to be tabulated, it would be inappropriate for us to engage in conjecture.

Reactions from Media Outlets and Implications

Commentary on the situation seems ubiquitous; here are four links to articles that focus on potential implications for this industry and market research and analytics in general:

From AdAge: Why Pollsters Got the Election So Wrong, and What It Means for Marketers  http://adage.com/article/campaign-trail/pollsters-wrong-means-marketers/306697/

From CIO: Is Trump’s unexpected victory a failure for big data? Not really  http://www.cio.com/article/3140172/big-data/is-trumps-unexpected-victory-a-failure-for-big-data-not-really.html

From the Chicago Tribune: Failed polls call into question the profession of prognostication http://www.chicagotribune.com/news/nationworld/politics/ct-presidential-polls-failed-20161109-story.html

From Bloomberg: Failed polls in 2016 call into question a profession’s precepts  http://www.bloomberg.com/politics/articles/2016-11-09/failed-polls-in-2016-call-into-question-a-profession-s-precepts


Executives still mistrust insights from data and analytics – via CIO

Data and analytics are increasingly becoming central to business decision-making. But even as organizations push to make their decision-making more data-driven, business leaders accustomed to making decisions based on gut-instincts and experience are having trouble trusting insights from data and analytics (D&A).

Forrester Consulting, commissioned by the Data and Analytics Global team at professional services firm KPMG, recently surveyed 2,165 data and analytics decision-makers from a range of industries in nine nations.

KPMG recommends organizations address seven key areas to close the trust gaps:

  • Assess the trust gaps
  • Create purpose by clarifying goals
  • Raise awareness to increase internal engagement
  • Develop an internal data and analytics culture
  • Open up the ‘black box’ to encourage greater transparency
  • Provide a 360-degree view by building ecosystems
  • Stimulate innovation and analytics R&D to incubate new ideas and maintain a competitive stance

Access full article from CIO


Extracting Insights from Vast Stores of Data via Harvard Business Review (Rishad Tobaccowala & Sunil Gupta, authors)

Companies have invested millions of dollars in big data and analytics, but recent reports suggest most have yet to see a payoff on these investments. In an age where data is the new oil, how are smart companies extracting insights from these vast data reservoirs in order to fuel profitable decisions?

Companies that have been successful in harnessing the power of data start with a specific business problem and then seek data to help in their decision-making. Contrary to what Anderson preached, the process starts with a business problem and a specific hypothesis, not data.

Access full article from the Harvard Business Review

How brands like Netflix and Spotify use data visualization for social campaigns (source: ClickZ, from Asia)


The article points out that visualization of data is often overlooked or deprioritized, especially in Asia, and describes two consumer-facing campaign examples that should put data visualization back on your radar.

To promote its TV show Narcos, which tells the story of Pablo Escobar and the Medellín cartel, Netflix created infographics that brought the economics of the Colombian cocaine trade to life in a socially engaging way.

Spotify’s Found Them First gave music fans a way to prove that they were really into certain bands and singers before they became famous. Listening data was used to show users all the artists they had discovered ahead of other Spotify users. Within weeks of the site’s campaign launch, it had received more than 1 million visits and more than 100 million social media impressions, without any media spend.