
Cross Channel Measurement in a Time of Data Collection Challenges

The average home has over 300,000 items, and consumers may be exposed to 6,000-10,000 ads daily. We need to overcome measurement silos to truly understand what triggers the different paths to purchase for the same products. Third-party data sources need to be vetted on their sources and collection techniques, their validation methods, and how they help us understand traditional metrics such as recency, frequency, and consistency. Loyalty-card data can help CPG companies track the 90% of purchases that still occur offline. IRI’s retailer and other partnerships offer a more holistic view of purchase behavior. In a masked case study, COVID reduced linear and cable viewing but increased connected TV viewing, putting a premium on “equity spots” for in-home occasions such as food preparation and consumption. Multi-touchpoint fractional attribution (MTA) can distinguish the impact of creative from other aspects of digital ads.

MODERATED TRACK DISCUSSIONS: Attention Measures: What Counts & How Much Does it Cost

Jane Clarke (CIMM) followed up with each of this session’s presenters on the goals and data points of their discrete studies. The following are edited highlights from the discussions.

  • Consumers must pay attention to advertising for it to initiate any kind of effect sequence, according to Shuba (Boston University). Only when consumers attend to ads is an advertising effect triggered through a hierarchical sequence, so attention is a necessary condition, but it is not sufficient to say which of the intermediate factors will affect sales. Not all of these metrics drive sales equally; know the sequence for your brand and advertisers.
  • Gen Z and Millennials consumed more content overall, yet still had a higher rate of aided recall than older generations (Gen X, Boomers), shared Heather (Snap). Last year, Snap conducted a study with Kantar to evaluate information processing across generations and see whether there were any differences. Each generation used Snap as they normally would, with ad exposure controlled. Younger participants showed superior ad processing when measured on ad message recall, a surprising result: the industry may be underestimating what to expect from younger generations.
  • Advertisers are getting better at creating six-second ads. According to Kara (Magna Global), when advertisers first started building :06 ads, they simply took a :15 or :30 and cut it down to :06, leaving them at the mercy of what had already been shot for another purpose. Cutting the original down to :06 while maintaining branding and storytelling was very difficult. Now advertisers create :06 ads either on a custom basis or by shooting with the :06 in mind, knowing that the longer versions will be cut down. Overall, that has led to more efficient short ads, because with the right material and testing advertisers have learned what works in a shorter amount of time.
  • The historical econometric model approach won’t garner the most accurate view of cross-platform reach or delivery, noted Heather. This research offers a different way of thinking: a :06 ad isn’t half as effective as a :12 ad, and a :12 ad isn’t equivalent to a :06 ad at a frequency of 2; that kind of thinking no longer holds true. Other descriptors, such as platform, device, and attention, can and should be used to better equivalize impressions across platforms. She hopes this research challenges the industry’s way of thinking.
  • TVision and Lumen have just launched a new tool called the Attention Calculator. Yan (TVision) explained that the tool is based on their study and is intended for anyone interested in attention for media planning and duration-based metrics. It is a free, interactive tool that calculates the cost of attention from the user’s CPMs, showing the average cost per impression across platforms, based on Ebiquity data.
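The equivalization point above can be made concrete with a minimal Python sketch contrasting a duration-ratio view of impressions with an attention-weighted one. All weights, platform labels, and function names below are hypothetical illustrations, not values or methods from the Snap/Magna study.

```python
# Contrast: duration-based vs. attention-based impression equivalization.
# Every weight below is hypothetical; the study did not publish these numbers.

def duration_equivalized(impressions, ad_seconds, baseline_seconds=12):
    """Naive view: a :06 counts as half of a :12 impression."""
    return impressions * ad_seconds / baseline_seconds

# Attention-based view: weight impressions by descriptors such as
# platform, device, and measured attention (toy weights only).
ATTENTION_WEIGHT = {
    ("snap", "mobile"): 0.5,
    ("fep", "tv"): 0.75,
    ("aggregator", "desktop"): 0.25,
}

def attention_equivalized(impressions, platform, device):
    """Weight each impression by its context's average attentive share."""
    return impressions * ATTENTION_WEIGHT[(platform, device)]

# Under the naive view, 1,000 :06 impressions equal 500 baseline impressions;
# under the attention view, the same inventory can be worth more or less
# depending on where it runs.
print(duration_equivalized(1000, 6))             # 500.0
print(attention_equivalized(1000, "fep", "tv"))  # 750.0
```

The point of the sketch is only that equivalized value depends on context descriptors, not on a fixed duration ratio.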

Does Every Second Count?

Kara Manatt (Magna) and Heather O’Shea (Snap) presented research that compared :06 second and :15 second ad lengths across three video platforms – Snap, video aggregators, and full episode players (FEPs) – to determine the optimum ad length for an effective ad strategy.

 

In testing the same :06 and :15 ads for the same four brands, the study factored in the characteristics of each platform (pre-roll vs. mid-roll, skippable vs. non-skippable, and device) as it tracked 7,500+ panelists’ viewing behaviors and measured brand awareness, brand perception, and purchase intent.

Understanding the True Cost of Attention Across Media

Lumen Research and TVision came together with Ebiquity to study differences in the way advertising generates visual attention across varied media and how much it costs to buy that attention. By combining their individual datasets – Lumen’s visual attention to digital advertising on desktop and smartphones, TVision’s TV attention data, and Ebiquity’s cost data – the researchers devised a new currency, the aCPM, a proxy metric representing the cost per thousand seconds of attention.
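As a rough illustration of how a cost-per-thousand-seconds-of-attention proxy could be computed: if a CPM buys 1,000 impressions and each impression averages s attentive seconds, then 1,000 attentive seconds cost CPM / s. The derivation and numbers below are assumptions for illustration, not the published Lumen/TVision/Ebiquity methodology.

```python
# Illustrative aCPM sketch: cost per thousand seconds of visual attention.
# This is one plausible derivation from CPM and attention data, not the
# researchers' actual formula.

def acpm(cpm: float, avg_attentive_seconds: float) -> float:
    """Cost per 1,000 attentive seconds.

    1,000 impressions cost `cpm` dollars and earn
    1,000 * avg_attentive_seconds attentive seconds, so the cost of
    1,000 attentive seconds is cpm / avg_attentive_seconds.
    """
    return cpm / avg_attentive_seconds

# Hypothetical inputs: a $20 CPM channel averaging 2.5 attentive seconds
# per impression.
print(acpm(20.0, 2.5))  # 8.0 dollars per 1,000 attentive seconds
```

On this toy math, a cheap channel with fleeting attention can carry a higher aCPM than an expensive channel that holds the eye, which is the comparison the metric is meant to enable.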

Fast(er) Causal Attribution

The presenters analyzed Chipotle’s past attribution challenges and proposed new measurement solutions. Chipotle wanted proof that its TV commercials drove sales, so Chipotle and WarnerMedia developed an outcome-guaranteed deal based on sales lift rather than impressions. For Chipotle, incremental transactions matter most, and these transactions were also measured for competitors. The goal was to provide purposeful, visible, and accountable results.

The Future of Media

The last year-and-a-half has presented the media industry with a number of unique challenges when it comes to measuring how content is being consumed across platforms. In this one-on-one conversation, Comscore’s Bill Livek joins the ARF’s Scott McDonald to discuss how today’s viewership behaviors have signaled the need for a more modern cross-screen measurement approach and outline the fundamental changes necessary to effectively transact on media today and into the future.

The Exploding Complexity of Programming Research, and How to Measure It, When Content is King

Programming researchers are not getting the data they need to make informed decisions, and Joan FitzGerald (Data ImpacX) used streaming’s complex ecosystem to explain the conundrum facing programmers. Despite an inundation of new forms of data, key insights into monetization and performance remain unsupported, leaving programmers without a comprehensive picture of their audience. Together with Michael McGuire of MSA, Joan outlined a methodology funnel that combines first-, second-, and third-party data to create equivalized metrics that, once leveraged, could meet critical programming research demands.

Enabling Alternative TV Measurement for Buyers and Sellers

Pete Doe (Xandr) and Caroline Horner (605) presented a case study of their partnership, sharing results from alternative-currency measurement from both buy-side and sell-side perspectives. Xandr’s nimble workflow enabled 605’s shift from advanced targeting to a very specific, custom-built “persuadable” target audience, yielding a 2x to 10x increase in outcomes.

 

Concurrent Track Panel Discussions: INNOVATION IN VIDEO MEASUREMENT

John Watts (CIMM) moderated a panel examining presentations on innovations and changes in video measurement on day three of AUDIENCExSCIENCE 2022. Topics included the decline of linear television, measuring new viewing habits, challenges created by the new viewing ecosystem, and getting access to more personalized one-on-one data.