
Building a Multi-Currency Future

This dialogue between Scott McDonald and Colleen Fahey Rush (Paramount) covered the rebranding of Colleen's company and three broader issues facing the television industry: the rise of streaming services, her perceptions of the currency environment and the upcoming upfronts.

Nielsen One Comes to Market

Scott McDonald opened the session by discussing how the Census uses sampling to correct for issues like undercounts in big data. Pete Doe (Nielsen) responded by commenting on those who ask whether Nielsen has a Big Data solution or a panel solution. He doesn't see it that way; rather, you take all the signals you have and put them together in the way best suited to the problem at hand.

JIC: Coalescing Around Standards for Cross-Platform Currencies

Brittany Slattery (OpenAP), who opened this discussion, explained that the new JIC was created by national programmers and media agencies for three main purposes: (1) to bring buyers and sellers to the table with equal voices; (2) to create baseline requirements for cross-platform measurement solutions; and (3) to create a harmonized census-level streaming service data set across all of the programmers in the JIC. Fox, NBCU, Paramount and Warner Bros. Discovery are all JIC members, as are Dentsu, GroupM, IPG Mediabrands, OMG and Publicis. The members hope to foster competition among multiple video ad measurement currencies. After her introduction, Danielle DeLauro (VAB) moderated a discussion with representatives of three networks and GroupM.

Unlocking the Value of Alternative Linear TV Currencies with Universal Forecasting

Spencer Lambert, Director, Product & Partnership Success, datafuelX

Matt Weinman, Senior Director of Product Management, Advanced Advertising Product, TelevisaUnivision



Matt Weinman (TelevisaUnivision) and Spencer Lambert (datafuelX) shared the methodology and results from testing TelevisaUnivision's initiative that, with datafuelX's technology, enabled their advertising partners to choose their preferred currency when forecasting both long- and short-term audiences for their programming. Implementation involved adjusting the business workflow to handle multiple measurement sources, with each source ingested, validated and normalized to the tech standard separately. Forecasting incorporated a programming-schedule imputation process, which was fed into a mixed model estimation (MME) and then optimized with granular linear data. Their model revealed gaps that they addressed with a variety of tactics, including a ratings-adjustment approach that updated network viewership trends, a proportional-weight method for advanced audiences, recency weighting to avoid stale rate cards, and relying less on forecasting viewers and more on the scheduled content. The MME drove strong predictive forecasts and increased the use of long-tail inventory.
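
The recap describes a per-source pipeline: each currency feed is ingested, validated and normalized to a common tech standard before forecasting. The sketch below is a minimal illustration of that flow under assumed source names, schemas and calibration constants; it is not datafuelX's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical currency feeds; the real source names and schemas will differ.
SOURCES = ["currency_a", "currency_b", "currency_c"]

@dataclass
class AudienceRecord:
    network: str
    program: str
    quarter_hour: str  # e.g. "2023-04-25T20:00"
    impressions: float

def ingest(source_name: str) -> list[AudienceRecord]:
    # Stub: in practice this reads the currency provider's delivered feed.
    return [AudienceRecord("NET1", "Show A", "2023-04-25T20:00", 1200.0)]

def validate(records: list[AudienceRecord]) -> list[AudienceRecord]:
    # Drop records that fail basic sanity checks.
    return [r for r in records if r.impressions >= 0 and r.network]

def normalize(records: list[AudienceRecord], scale: float) -> list[AudienceRecord]:
    # Rescale one source's impressions toward the common tech standard;
    # `scale` stands in for a per-source calibration constant.
    return [AudienceRecord(r.network, r.program, r.quarter_hour,
                           r.impressions * scale) for r in records]

def build_forecast_inputs(scales: dict[str, float]) -> dict[str, list[AudienceRecord]]:
    # Each currency is ingested, validated and normalized separately, so a
    # buyer's preferred currency stays intact through to the forecast.
    return {s: normalize(validate(ingest(s)), scales[s]) for s in SOURCES}

inputs = build_forecast_inputs({s: 1.0 for s in SOURCES})
```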

Key Takeaways

  • Forecasting should always be done based on content.
  • In reviewing the accuracy of predicting exact programming, the forecast-to-actuals comparison showed a 71% program match; in predicting programming type, accuracy was 94%.
  • Long-term audience forecasts improved overall by 42% in MAPE (mean absolute percentage error) when big data sources were used (see the worked MAPE example below).
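
For readers unfamiliar with the metric, MAPE averages the absolute percentage errors of a set of forecasts. The sketch below works through the computation with invented numbers chosen so the improvement lands near the 42% figure above; none of the data are TelevisaUnivision's.

```python
def mape(actuals: list[float], forecasts: list[float]) -> float:
    # Mean absolute percentage error over paired actual/forecast values.
    assert len(actuals) == len(forecasts) and actuals
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actuals, forecasts)) / len(actuals)

# Invented quarter-hour audiences (thousands), purely illustrative.
actual   = [120.0, 95.0, 130.0, 80.0]
baseline = [100.0, 110.0, 150.0, 95.0]   # forecast without big data
enhanced = [108.0, 103.6, 117.0, 87.6]   # forecast with big data sources

old_err, new_err = mape(actual, baseline), mape(actual, enhanced)
improvement = 100.0 * (old_err - new_err) / old_err
print(f"MAPE {old_err:.1f}% -> {new_err:.1f}% ({improvement:.0f}% better)")
# MAPE 16.6% -> 9.6% (42% better)
```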


Complexities of Integrating Big Data and Probability Sample People Meter Data

Pete Doe, Chief Research Officer, Nielsen

Nielsen compared the implied ratings from ACR data and STB data in homes where it also has meters. The correlation was quite high, though panel adjustments raised the rating levels by about 1%. Big Data sources are limited in different ways: not all sets in a house provide ACR or STB data, the data carry no persons information, and STBs are often powered on while the TV is off. Nielsen presented how a panel of 40,000 homes can be used to correct those biases. A critical finding was that projecting MVPD data outside of its geographic footprint significantly changed network shares. That said, Big Data can significantly improve local market data, where samples are necessarily much smaller.

Key Takeaways

  • Nielsen presented its view on how best to use Big Data. Nielsen uses a wide variety of data sets as part of its Big Data solution and identified several gaps in the use of ACR data, including when native apps are used and when channels are not monitored.
  • Nielsen has about 30 million RPD and ACR homes and a panel of 40,000 homes, which it uses to adjust its Big Data. Where possible, it matches panel homes to RPD/ACR data in those same homes. The r² values against the panel data are .98 and .96, respectively, which is quite good. However, using the panel to adjust for anticipated missing data raises overall viewing levels by about 1%.
  • However, other TVs in these homes are not ACR-capable, and there is no persons data associated with ACR-only homes. The panel and Experian are used to model who is in the household, and the panel and Gracenote are used to model who among them is viewing.
  • Similar modeling and correction procedures are used for STB data. However, one of the most significant adjustments to STB data is modeling when the TV is off but the STB is still powered on.
  • Nielsen uses a fusion process for viewer assignment, the process of modeling who is viewing when the set is known to be tuned. A simple way to understand this is to look for a similar panel home and assign that home's viewing characteristics to the ACR or RPD home (a simplified sketch of this matching appears after this list). The use of Gracenote has significantly improved the viewer assignment model.
  • Nielsen showed data on the shifts in share that occur when a Big Data set like an MVPD's RPD data is projected beyond that MVPD's footprint. The share shifts are much greater than when the projection is limited to the MVPD's geographic footprint.
    • Statistically, comparing shares projected within versus beyond the footprint, 28% of shares projected beyond the MVPD's footprint differed from the panel at the .05 level of statistical significance.
  • Nielsen also showed that including Big Data in local market ratings can eliminate quarter-hours with zero ratings (in one market, zero-rated quarter-hours fell from 62 to 0).
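
As a rough picture of the viewer-assignment idea in the takeaways above, the sketch below matches a big-data home to its most similar panel home and borrows that home's viewers. It is a toy nearest-neighbor version with assumed household features and invented data, not Nielsen's production fusion (which also draws on Experian and Gracenote).

```python
# Toy nearest-neighbor viewer assignment: give a big-data (ACR/RPD) home the
# persons-level viewing of its most similar panel home.

def distance(home_a: dict, home_b: dict) -> float:
    # Squared distance over shared numeric household features.
    keys = (home_a.keys() & home_b.keys()) - {"viewers"}
    return sum((home_a[k] - home_b[k]) ** 2 for k in keys)

def assign_viewers(big_data_home: dict, panel_homes: list[dict]) -> list[str]:
    # Pick the closest panel home and inherit its viewer composition.
    best = min(panel_homes, key=lambda p: distance(big_data_home, p))
    return best["viewers"]

panel = [
    {"hh_size": 4, "sets": 3, "viewers": ["F35-44", "M8-12"]},
    {"hh_size": 1, "sets": 1, "viewers": ["M25-34"]},
]
acr_home = {"hh_size": 1, "sets": 1}  # big-data home with no persons info
print(assign_viewers(acr_home, panel))  # -> ['M25-34']
```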


Harnessing the Superpower of Personalization in a Privacy-Safe World

Michael Tscherwinski, Principal, Media, Circana

Gregory Younkie, Sr. Data Scientist & Data Strategy, Kraft Heinz

Michael Tscherwinski (Circana) and Greg Younkie (Kraft Heinz) explored how personalization of quality data can take performance to the next level and shared learnings from a two-year journey that found personalized impressions, supported by the right data, can drive stronger impact. Kraft started with its CRM recipe-focused data and grew it 17% with sweeps and games. Its in-house consumer insights platform, Kraft-O-Matic, had three core competencies: a consumer database incorporating 1P (first-party), 2P (second-party) and 3P (third-party) data; insights and analytics; and agile marketing. Using identity resolution to match 1P data to devices and content, and enriching 3P data to create high-value audiences, Kraft then activated personalized marketing campaigns with speed to capture engagement and ROAS. Driving 1P acquisition, data enrichment, more personalized activation and uplift measurement resulted in a 93% lift in ROAS impact and secured an increase in media spend from leadership.
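
As a concrete picture of the identity-resolution step described above, the sketch below joins 1P CRM rows to a device graph through a hashed-email key. The hashing convention, field names and data are illustrative assumptions, not the actual Kraft-O-Matic or Circana stack.

```python
import hashlib

def match_key(email: str) -> str:
    # Privacy-safe join key: normalize, then hash. SHA-256 of the lowercased
    # address is an assumed convention here.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# 1P CRM rows (invented data)
crm = [{"email": "pat@example.com", "recipe_views": 12}]

# Device/segment graph keyed on the same hashed email (invented 2P/3P data)
graph = {match_key("pat@example.com"): {"device_id": "d-123",
                                        "segment": "meal-planners"}}

def resolve(crm_rows: list[dict], device_graph: dict) -> list[dict]:
    # Enrich each CRM row that matches the graph with device and segment data.
    out = []
    for row in crm_rows:
        hit = device_graph.get(match_key(row["email"]))
        if hit:
            out.append({**row, **hit})
    return out

print(resolve(crm, graph))  # one enriched, activation-ready record
```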

Key Takeaways

  • Enriching datasets with demographics, psychographics and purchase-based data powered their modeling across different machine learning techniques, spanning the consumer funnel from awareness to performance marketing.
  • Within CPG, purchase-based data proved to be the best predictor of future performance.
  • Utilize household (HH) transaction-level data to enable more sophisticated media approaches and gain deeper insights about your consumers.
  • Select the right clean room partner for your goals.


AUDIENCExSCIENCE 2023

The ARF hosted its annual flagship conference, AUDIENCExSCIENCE 2023, on April 25-26, 2023. The industry's biggest names and brightest minds came together to share new insights on the impact of changing consumer behavior on brands, TV consumption, campaign measurement and effectiveness, whether all impressions are equal, join-up solutions across multiple media, the validity, reliability and predictive power of attention measures, targeting diverse audiences, privacy's effect on advertising, and the impact of advertising in new formats. Keynotes were presented by Tim Hwang, author of Subprime Attention Crisis, Robert L. Santos of the U.S. Census Bureau, Brian Wieser of Madison and Wall, LLC and Andrea Zapata of Warner Bros. Discovery.


Attribution & Analytics Accelerator 2022

The boldest and brightest minds joined us November 14-17 for Attribution & Analytics Accelerator 2022, the only event focused exclusively on attribution, marketing mix models, in-market testing and the science of marketing performance measurement. Experts led discussions to answer some of the industry's most pressing questions and shared new innovations that can bring growth to your organization.


Conversations with “Great Minds”

Two 2022 ARF Great Mind Award recipients, Martin Renaud, winner of the Chief Marketing Officer (CMO) Award, and Dr. Duane Varan, winner of the Erwin Ephron Demystification Award, shared insights on how research should be used to improve marketing.


Experts on Data and Creative

The Drum hosted a roundtable with six select members to dissect the ways in which data informs creativity, and how that relationship is transforming in light of market movements and industry sea changes. The conversation surfaced three big takeaways.
