
Panel Discussion

In this session, Elea McDonnell Feit (Drexel University) led a panel discussion with the day’s speakers on innovations in marketing experiments, which she described as a “mature part of the measurement system.” Panel members offered ideas and examples of how to deploy randomized controlled trials (RCTs) effectively and discussed the benefits of using experiments for attribution. They examined the lack of consistent patterns in advertising incrementality results and attributed it to the changing nature of the consumer journey and to factors unique to each strategy, business life cycle and product being sold. The panel also explored processes for ensuring that an experiment is deployed successfully and effectively, and considered geo-based tests. Other topics included the cost-effectiveness of running experiments and the value of failed experiments.

Measurement with Large-Scale Experiments: Lessons Learned

In this session, Ross Link (Marketing Attribution) and Jeff Doud (Ocean Spray Cranberries) examined a large-scale experiment conducted with Ocean Spray. They applied randomized controlled trials (RCTs) to approximately 10 million households (30–40 million people) whose participants consumed ads across a variety of devices. Jeff explained that the experiment measured the impact of suppressing certain ads for some participants. A multi-touch attribution (MTA) logit model was then applied, yielding KPIs such as ROI: the MTA-RCT experiment supplied results refreshed monthly, and daily ROI results for the campaign were collected from the MTA modeling. Outcomes from the experiment centered on retargeting and on recent versus lagged buyers. The study also explored creative treatments and platform effectiveness.
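As a rough illustration of the holdout logic behind such an ad-suppression RCT (the function names and numbers below are hypothetical, not from the study):

```python
def incremental_lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Incremental conversion rate: exposed group (ads shown) minus
    holdout group (ads suppressed)."""
    return exposed_conv / exposed_n - holdout_conv / holdout_n

def roi(incremental_revenue, ad_spend):
    """Return on investment implied by the incremental revenue the ads drove."""
    return (incremental_revenue - ad_spend) / ad_spend

# Illustrative only: 5% conversion when exposed vs. 3% in the holdout
lift = incremental_lift(500, 10_000, 300, 10_000)
```

The point of suppressing ads for a randomized holdout is that the difference in conversion rates can be read causally, which is what makes the RCT a useful check on the MTA model’s attributed ROI.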

MRC’s Outcomes and Data Quality Standard

The MRC’s Ron Pinelli outlined the scope of the Outcomes and Data Quality Standard, recently completed in September 2022. Part of MRC’s mission is setting standards for high-quality media and advertising measurement, and Ron walked through the phased approach and iterative process that included the ANA, the 4A’s and other industry authorities.

Unlocking the Value of Alternative Linear TV Currencies with Universal Forecasting

Spencer Lambert – Director, Product & Partnership Success, datafuelX

Matt Weinman – Senior Director of Product Management, Advanced Advertising Product, TelevisaUnivision



Matt Weinman (TelevisaUnivision) and Spencer Lambert (datafuelX) shared the methodology and results from testing TelevisaUnivision’s initiative that, with datafuelX’s technology, enabled advertising partners to choose their preferred currency when forecasting both long- and short-term audiences for TelevisaUnivision programming. Implementation required adjusting the business flow for multiple measurement sources, with each source ingested, validated and normalized to the tech standard separately. Forecasting incorporated a programming schedule imputation process, which fed into a mixed model estimation (MME) that was then optimized with linear granular data. The model revealed gaps that the team addressed with a variety of tactics, including a ratings adjustment approach that updated network viewership trends, a proportional weight method for advanced audiences, recency weighting to avoid stale rate cards and a shift toward forecasting scheduled content rather than viewers. The MME drove strong predictive forecasts and increased the use of long-tail inventory.

Key Takeaways

  • Forecasting should always be done based on content.
  • In reviewing forecast accuracy for exact programming, the forecast-to-actuals comparison showed a 71% program match; for programming type, accuracy was 94%.
  • Long-term audience forecasts showed a 42% overall improvement in MAPE (mean absolute percentage error) when using big data sources.
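For readers unfamiliar with the metric, MAPE averages the absolute percentage gap between forecasts and actuals, and an "improvement" is a relative reduction in that error. A minimal sketch with illustrative values (not the study's data):

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent. Actuals must be nonzero."""
    errors = [abs((a - f) / a) for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

def mape_improvement(baseline_mape, new_mape):
    """Relative improvement, in percent: a 42% improvement means the new
    forecast's MAPE is 42% lower than the baseline's."""
    return 100 * (baseline_mape - new_mape) / baseline_mape
```

So a baseline MAPE of, say, 10% improved to 5.8% would correspond to the 42% figure cited above.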

Download Presentation

Member Only Access

Holistic Cross-Media Measurement

Brendan Kroll – VP Performance Measurement, Nielsen

Anne Ori – Measurement Lead, CG&E, Google

Daniel Sacks – Incrementality Lead, US Agency, Google



Brendan Kroll of Nielsen and Anne Ori and Daniel Sacks, both of Google, explained that their study’s objective was to identify potential improvements to marketing mix models by utilizing enhanced prior beliefs (priors) based on sales lift studies and exploring the resulting changes in campaign-level sales lift once those priors were incorporated. Incrementality experiments are widely accepted as the gold standard for causal measurement, and calibrating individual channels via experimentation helps optimize model outcomes. However, the results of incrementality experiments are often not part of marketing mix model (MMM) design. Nielsen utilized NCS sales lift studies as the source of the experimental data for this analysis. NCS determined the causal effects of advertising on incremental sales while controlling for targeting and other covariates. The study design involved 10 brands with existing MMMs and available NCS results for corresponding periods, model re-estimation using NCS lift priors, refinement of the priors and scaling. Applying this methodology to a YouTube campaign resulted in significant sales lift, as well as revenue and ROAS increases, including a 2.6x median increase in effectiveness in the adjusted model. The adjusted model showed greater marketing contribution overall; marketers are therefore at risk of undervaluing their overall marketing if experimental results are not included.
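One way to picture the calibration step: treat the experiment’s lift estimate as a Bayesian prior on a channel’s coefficient, so the model is pulled toward the experimental result in proportion to the prior’s precision. A minimal single-channel sketch (the function, names and numbers are hypothetical, not Nielsen’s actual model):

```python
def map_coefficient(spend, sales, prior_mean, prior_var, noise_var=1.0):
    """MAP estimate of beta in sales = beta * spend + noise, with a Gaussian
    prior beta ~ N(prior_mean, prior_var) derived from a sales lift experiment.
    A tighter prior (smaller prior_var) pulls the estimate toward the experiment."""
    sxx = sum(x * x for x in spend)
    sxy = sum(x * y for x, y in zip(spend, sales))
    return (sxy / noise_var + prior_mean / prior_var) / (sxx / noise_var + 1 / prior_var)
```

With a very diffuse prior the estimate reduces to ordinary least squares on the historical data; as `prior_var` shrinks, the coefficient converges to the experiment’s lift, which is the behavior the study exploits for nascent channels with little history.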

Key Takeaways

  • Brands can effectively leverage experiment-based priors to strengthen marketing mix models.
  • For nascent channels, the inclusion of experimentation results proved fundamental, especially if those campaigns showed strong initial results, since MMMs cannot rely solely on historical anchoring to measure true impact.
  • When experiments reveal high-performing channels or campaigns, the use of testing can aid more accurate MMM measurements as investment scales.
  • Even for channels with long histories and relative stability, experimentation can serve as a way to validate models and may give models a chance to remain flexible in case of strategic shifts and/or changes in consumer behavior.


Complexities of Integrating Big Data and Probability Sample People Meter Data

Pete Doe – Chief Research Officer, Nielsen

Nielsen compared the implied ratings from ACR data and STB data in homes where they also have meters. The correlation was quite high, though panel adjustments raised the rating levels by about 1%. Big Data are limited in different ways: not all sets in a home provide ACR or STB data, the data carry no persons-level information, and STBs are often powered on while the TV is off. Nielsen presented how a panel of 40,000 homes can be used to correct those biases. A critical finding was that projecting MVPD data outside of its geographic footprint significantly changed network shares. That said, Big Data can significantly improve local market data, where samples are necessarily much smaller.

Key Takeaways

  • Nielsen presented their view on how to best use Big Data. Nielsen uses a wide variety of data sets as part of its Big Data solution. It identified several gaps in the use of ACR data, including when native apps are used and in cases where channels are not monitored.
  • Nielsen has about 30 million RPD and ACR homes and a panel of 40,000 homes, which it uses to adjust its Big Data. Where possible, it matches its panel homes to RPD/ACR data in those same homes. The r² with the panel data is .98 and .96, respectively, which is quite good. However, using the panel to adjust for anticipated missing data raises overall viewing levels by about 1%.
  • However, there are other TVs in these homes that are not ACR-capable, and there is no persons data associated with ACR-only homes. The panel and Experian are used to model who is in the household, and the panel and Gracenote are used to model who among them would be viewing.
  • Similar modeling and correction procedures are used for STB data. However, one of the most significant adjustments to STB data is modeling when the TV is off but the STB is still powered on.
  • Nielsen uses a fusion process to conduct its viewer assignment, the process of modeling who is viewing when the set is known to be tuned. A simple way to understand this is to look for a similar panel home and assign the viewing characteristics of the panel home to the ACR or RPD home. The use of Gracenote has made a significant improvement in the viewer assignment model.
  • Nielsen showed data on the shifts in share that occur when you take a Big Data set like an MVPD’s RPD data and project it beyond the footprint of that MVPD. The share shifts are much greater than when one limits the projection to the geographic footprint of that MVPD.
    • Statistically, when one compares the differences between the shares projected within and not within the footprint, 28% of the shares when projected beyond the footprint of the MVPD were different from the panel at the .05 level of statistical significance.
  • Nielsen also showed that including Big Data in local market ratings can eliminate quarter-hours with zero ratings (in one market, quarter-hours with zero ratings fell from 62 to 0).
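The “look for a similar panel home” step in viewer assignment can be sketched as a nearest-neighbor match; the trait set and viewer labels below are hypothetical stand-ins, not Nielsen’s actual feature space or fusion procedure:

```python
def assign_viewers(target_traits, panel_homes):
    """Assign persons-level viewing to an ACR/RPD home by copying the viewer
    characteristics of the most similar panel home (squared-distance match)."""
    def distance(home):
        return sum((target_traits[k] - home["traits"][k]) ** 2 for k in target_traits)
    return min(panel_homes, key=distance)["viewers"]

# Hypothetical panel: household traits alongside observed viewers
panel = [
    {"traits": {"hh_size": 2, "income_band": 3}, "viewers": ["M35-49"]},
    {"traits": {"hh_size": 4, "income_band": 1}, "viewers": ["F18-34", "child"]},
]
```

In practice the match would draw on the Experian and Gracenote enrichments described above rather than two raw traits, but the mechanic is the same: the big-data home inherits the viewing characteristics of its closest panel analog.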


Expanding Spanish Language Audiences

Sergey Fogelson – Head of Data Science, TelevisaUnivision

Edouardo Vitale – Data Scientist, TelevisaUnivision

Sergey Fogelson and Edouardo Vitale, both from TelevisaUnivision, outlined their motivations for developing a custom lookalike model (LAM) to expand Spanish-language audiences, which were under-represented:
  • Misidentification: 4 in 10 Hispanics are excluded from 3p datasets.
  • Waste: 70% of impressions targeted at Hispanics are wasted.
  • Scale: The true scale of the Hispanic population within a given brand’s 1p dataset is hard to identify without extensive validation.
In order to address this audience underrepresentation, data sources were leveraged to create a household graph incorporating 1P (first-party) viewership, 3P (third-party) viewership and demographics on age, gender, income and education from TelevisaUnivision’s partner. The combination of these sources created a robust household level representation of approximately 17MM Hispanic households in the U.S. Embeddings were used to create a latent space that allowed for the comparison of user similarities. Mathematically, very similar users have very similar embeddings. Similarities between individuals may be based on what content they viewed, where the content was viewed (zip code) or demographics of the viewer. Additional details of the embedding process as well as the autoencoder architecture steps and validation process were presented.
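The comparison of user similarities in the latent space is typically done with a vector distance such as cosine similarity; a minimal sketch (the household IDs, embeddings and threshold are illustrative, not TelevisaUnivision’s values):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def lookalikes(seed, candidates, threshold=0.9):
    """Households whose embedding is close enough to the seed audience's."""
    return [hid for hid, emb in candidates.items() if cosine(seed, emb) >= threshold]
```

Because very similar users have very similar embeddings, thresholding on this similarity is what lets the model expand a small seed segment into a larger, transactable audience.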

Key Takeaways

  • Developing a lookalike model (LAM) to expand Spanish-language audiences corrected for the underrepresentation of this consumer target.
  • Expanding an audience with LAM identifies individuals who look and act just like a given target audience. These look-alike models are used to build larger audiences from smaller segments in order to create reach for marketers and advertisers and enable them to transact on an expanded audience.
  • Use of LAM can overcome the challenges of misidentification, waste and scale. LAM plus the household graph achieves significant increases in overall audience scale.


Better Attribution Research

This year’s Attribution & Analytics Accelerator event again focused on the science of marketing performance measurement. Experts presented and discussed insights from attribution studies, how to use marketing mix models, experiments and in-market testing, as well as new developments such as AI.

Attribution & Analytics Accelerator 2022

The boldest and brightest minds joined us November 14–17 for Attribution & Analytics Accelerator 2022—the only event focused exclusively on attribution, marketing mix models, in-market testing and the science of marketing performance measurement. Experts led discussions to answer some of the industry’s most pressing questions and shared new innovations that can bring growth to your organization.
