Effectiveness & ROI

Panel Discussion

In this session, Elea McDonnell Feit (Drexel University) led a panel discussion with the day’s speakers on innovations in marketing experiments, referring to these experiments as a “mature part of the measurement system.” In this discussion, panel members brought up ideas and examples of how to effectively employ randomized controlled trials (RCTs) and the benefits of using experiments for attribution. They examined the lack of consistent patterns in advertising incrementality, attributing it to the changing nature of the consumer journey and to factors unique to strategy, the business life cycle and the product being sold. The panel also explored processes for ensuring the deployment of a successful and effective experiment. Geo-based tests were considered as well, along with the cost-effectiveness of running experiments and the value of failed experiments.

Measurement with Large-Scale Experiments: Lessons Learned

In this session, Ross Link (Marketing Attribution) and Jeff Doud (Ocean Spray Cranberries) examined a large-scale experiment conducted with Ocean Spray. They applied randomized controlled trials (RCTs) to approximately 10 million households (30–40 million people) whose members consumed ads across a variety of devices. Jeff explained that the experiment was designed to measure the impact of suppressing certain ads for some participants. A multi-touch attribution (MTA) logit model was then applied, yielding KPIs such as ROI. The MTA-RCT experiment refreshed its results monthly, while the MTA model supplied daily ROI results from the campaign. Outcomes from the experiment centered on retargeting and on recent and lagged buyers. The study also explored creative treatments and platform effectiveness.
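For readers unfamiliar with how a logit model turns RCT-style exposure data into attribution KPIs, the following is a minimal sketch in Python using invented household-level data and scikit-learn; the actual model, channels, spend figures and data pipeline presented by Marketing Attribution were not disclosed.

```python
# A minimal sketch of a multi-touch attribution (MTA) logit model on
# simulated household exposure data. All numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000  # households

# Hypothetical touchpoints: 1 if the household saw an ad on that channel.
X = rng.integers(0, 2, size=(n, 3))  # columns: tv, display, social
channels = ["tv", "display", "social"]

# Simulated purchase outcome with a small lift from each channel.
logits = -2.0 + X @ np.array([0.30, 0.15, 0.20])
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# Incremental conversions per channel: predicted conversions with the
# channel "on" minus with it suppressed (mirroring the RCT holdout idea).
spend = {"tv": 50_000, "display": 20_000, "social": 30_000}  # hypothetical
value_per_conversion = 4.0  # hypothetical revenue per purchase

for i, ch in enumerate(channels):
    X_on, X_off = X.copy(), X.copy()
    X_on[:, i], X_off[:, i] = 1, 0
    lift = (model.predict_proba(X_on)[:, 1]
            - model.predict_proba(X_off)[:, 1]).sum()
    roi = lift * value_per_conversion / spend[ch]
    print(f"{ch}: incremental conversions = {lift:,.0f}, ROI = {roi:.2f}")
```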

How to Cut Waste and Fuel Growth with Incrementality-Based Attribution

With Trevor Testwuide (Measured) providing context and Ian Yung (Tonal) guiding case studies from Pinterest and Google, the two presenters tested whether ads and channels were working and how far marketers could scale them. Trevor compared last-touch attribution to incremental ROAS, showing the significant discrepancies between platform-reported conversions and media-driven incremental conversions. The incrementality methodologies used in the case studies were cohort-based first-party audience split testing and geo-matched market incrementality testing, which Trevor called must-haves for determining where to cut waste and where to scale. Results from the holdout-cohort case studies revealed overinvestment in Pinterest, driven by organic conversions, and a 13% increase in ROAS on Google Shopping, where incrementality had been under-reported.
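As a rough illustration of why platform-reported ROAS diverges from incremental ROAS, here is a back-of-the-envelope sketch with invented figures; Measured’s actual methodology, cohort sizes and results were not shared.

```python
# A back-of-the-envelope sketch of incremental ROAS (iROAS) from a
# holdout split test. All figures are invented for illustration.

def iroas(treated_revenue, holdout_revenue, treated_n, holdout_n, spend):
    """Revenue lift per ad dollar: scale the holdout's revenue up to the
    treated group's size and subtract it as the organic baseline."""
    baseline = holdout_revenue * (treated_n / holdout_n)
    return (treated_revenue - baseline) / spend

# Hypothetical cohort split: 90% exposed to ads, 10% held out.
treated_rev, holdout_rev = 900_000.0, 85_000.0
treated_n, holdout_n = 900_000, 100_000
spend = 120_000.0

platform_roas = treated_rev / spend  # credits all revenue to ads
incremental = iroas(treated_rev, holdout_rev, treated_n, holdout_n, spend)

print(f"platform-reported ROAS: {platform_roas:.2f}")
print(f"incremental ROAS:       {incremental:.2f}")
# A large gap signals organic conversions the platform is claiming credit for.
```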

MRC’s Outcomes and Data Quality Standard

The MRC’s Ron Pinelli outlined the scope of the Outcomes and Data Quality Standard, completed in September 2022. Part of MRC’s mission is setting standards for high-quality media and advertising measurement, and Ron walked through the phased approach and iterative process that included the ANA, the 4A’s and other industry authorities.

Panel Discussion: Perspectives on the Rise of Retail Media

This panel discussion, moderated by Circana’s Michael Ellgass, explored the current state of and emerging opportunities in retail media, including measurement and organizational issues. The following are edited highlights from their conversation.

Beyond Measurement: How Coca-Cola Uses Attention Metrics to Increase Efficiency & ROAS

This presentation discussed a collaboration between Coca-Cola and Adelaide. Adelaide created AU, a single metric that brands can use to ensure their media gets the most attention. The companies ran a campaign with Diet Coke to understand how attention metrics could be incorporated into a campaign measurement system to improve efficiency and to identify what drives success. Coca-Cola has an end-to-end (E2E) framework for measuring its campaigns. The E2E metrics have key components that measure human impact: head (the various measures of whether an experience was noticed and recalled), heart (resonance and relevance), hand (purchase and shopping) and mouth (consumption). AU is part of the “head” component. An earlier trial in Europe compared two campaigns, one for Aquarius and one for Coke: half of the media was optimized for attention and the other half for exposure. The results clearly showed higher ad recall, recognition and impact when optimizing for attention rather than viewability.

Contribution of Media vs. Creative vs. Brand

Across all platforms, creative continues to have the dominant effect, accounting for 46% to 49% of a campaign’s impact. The proportional effects of media and brand vary by platform, depending on the targetability of the medium, its ability to build reach and its appeal to younger audiences, as is the case for social media.

Using Attention AI to Predict Real-World Outcomes

Mars and Realeyes demonstrated a connection between creative attention and sales performance. Mars’ Agile Creative Expertise (ACE) tool tracks visual attention and emotional responses to digital video ads. Visual attention AI and facial coding measure how long participants watch a video and how their attention changes as they watch. The model has been proven to work: optimizing content lifted sales by up to 18% across 19 markets and delivered $30 million in ad optimizations in 18 months.

Demystifying Cross-Media Ad Impact

In this session, Yannis Pavlidis of consumer insights and CX firm DISQO tackled the challenges of benchmarking cross-media outcomes and brand lift given the incomplete data that comes from siloed platforms and media channels. Yannis opened with a refresher on the importance of benchmarks and the obstacles in existing approaches to benchmarking (e.g., inconsistent methodologies, outdated data and collection techniques). The discussion examined solutions that streamline data collection for benchmarking ad impact by using consented, single-source data. The presentation also examined calculating benchmarks from a single source group (rather than two unaffiliated groups), taking into account the recency of the campaign and panelists’ subsequent behaviors, which can then be correlated with survey responses. The advantage of consented single-source data is that it can lead to more insightful, relevant and consistent benchmark outcomes.
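To make the single-source idea concrete, here is a minimal sketch of a brand-lift read from a consented panel, with invented records and column names; DISQO’s actual pipeline and survey instruments were not shown.

```python
# A minimal sketch of brand lift from single-source panel data, where
# ad exposure and the survey answer come from the same consented
# panelists. All records and field names are hypothetical.
import pandas as pd

panel = pd.DataFrame({
    "panelist_id": range(8),
    "exposed":     [1, 1, 1, 1, 0, 0, 0, 0],  # saw the campaign?
    "aware":       [1, 1, 1, 0, 1, 0, 0, 0],  # survey: aware of brand?
})

# Awareness rate among exposed vs. unexposed panelists.
rates = panel.groupby("exposed")["aware"].mean()
lift = rates[1] - rates[0]  # absolute brand lift in awareness

print(f"exposed awareness:   {rates[1]:.0%}")
print(f"unexposed awareness: {rates[0]:.0%}")
print(f"brand lift:          {lift:+.0%}")
```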

A Two-Pronged Approach

In this session, speakers Bennett M. Kaufman, Kyle Holtzman and Michelle Smiley of Google explored a two-pronged approach to cross-media measurement and planning that considered the full-funnel impact across traditional TV and streaming video (YouTube), making sense of the “disparate forms of data and measurement.” The approach combined a geo-based experiment with audience incrementality to address three challenges: retaining current loyal customers, aging down the brand and appealing to new consumers (Generation Z). The speakers presented a study done by Google in partnership with Burger King to test a new experimentation strategy for understanding and measuring the relationship between linear TV and YouTube. They touted the method as repeatable and customizable across a variety of media channels, as well as timely, omnichannel and privacy-safe.
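As a rough illustration of how a geo-based experiment of this kind is typically read out, the sketch below applies a difference-in-differences calculation to invented weekly sales for matched markets; Google’s actual design and data with Burger King were not disclosed.

```python
# A minimal sketch of a geo-matched market test read-out using
# difference-in-differences. All sales figures are invented.
import numpy as np

# Weekly sales for matched test markets (YouTube added to linear TV)
# and control markets (linear TV only), before and during the test.
test_pre    = np.array([102.0,  98.0, 105.0, 101.0])
test_during = np.array([118.0, 121.0, 117.0, 124.0])
ctrl_pre    = np.array([100.0,  97.0, 103.0,  99.0])
ctrl_during = np.array([104.0, 106.0, 102.0, 108.0])

# Change in test markets, minus the change the control markets say
# would have happened anyway.
lift = (test_during.mean() - test_pre.mean()) - (
    ctrl_during.mean() - ctrl_pre.mean()
)
lift_pct = lift / test_pre.mean()

print(f"estimated incremental weekly sales: {lift:.1f}")
print(f"estimated lift: {lift_pct:.1%}")
```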