
The Value of “Other” Media

Given the current focus on social media and streaming services as advertising vehicles, it is worth paying attention to studies that remind us of the value of radio and Out-Of-Home (OOH). 


The Value of Attention is Nuanced by the Size of the Brand

Karen Nelson-Field, Ph.D., CEO, Amplified Intelligence

This presentation discussed the importance of nuance and interaction effects, and why understanding interaction effects is critical in building attention-measurement products. There were four use cases: campaign strategy, planning, verification and buying. Two sets of data were contrasted: inward-facing, tag-based impression data, and outward-facing, device-based panel data (gaze tracking, pose estimation, etc.). One is machine-observed while the other is human. Both are valuable, and each set has limitations: looking at actual humans has a scale issue, whereas impression data has limited ability to predict behavior. Human behavior is complex, and it also varies by platform; metrics without ground truth miss this. Three types of human attention were measured: active attention (eyes directly on an ad), passive attention (eyes on screen but not directly on the ad) and non-attention (eyes on neither the screen nor the ad). Attention and attention outcomes are not always related. Underneath how attention data works there is a hierarchy of attention: ad units, scroll speeds and other interaction effects all mediate one another. It is not as simple as saying that a given ad unit will yield a given amount of attention, and products that do not account for these factors fail. Amplified Intelligence built a large-scale validation model for interaction effects and brand "choice" using Pepsi. They fit a logistic regression by Maximum Likelihood Estimation (MLE), analyzed the observations and tested critical factors (brand size and attention type), demonstrating strong predictive accuracy under cross-validation. They found significant interaction effects, with brand size and attention type as key influencers of consumer brand choice. Key findings:
  1. Passive and active attention work differently. Passive attention works harder for bigger brands, while active attention works harder for smaller brands. Put differently, small brands need active attention to get more brand choice outcomes.
  2. Attention switching (focus) mediates outcomes. The nature of viewing behavior mediates outcomes: not just whether attention occurred, and at what level, but how it unfolds across time. This is why time-in-view fundamentally fails, even though it is considered one of the critical measures of attention: humans are constantly switching between attention and non-attention. There is attention decay, how quickly attention diminishes (sustained attention × time), and attention volume, the number of people attentive (attentive reach × time).
  3. Eyes-on-brand attention is vital for outcomes. If the brand is not present at the moment people are looking (or listening), outcomes suffer. When the brand is missing, existing buyers fill in the blanks, but the next generation of buyers is being "untrained."
  4. Human attention is nuanced and complicated, making it difficult to rely merely on aggregated non-human metrics for accuracy. These models must be constantly trained, just like GenAI, to ensure that all these nuances are fit into the model. A human-first approach is critical.
  5. Outcomes cannot predict attention. Attention can predict outcomes, but not the other way around.
  6. Attention strategies should be tailored to campaign requirements (not binary quality or more/less time). Over time, attention performance segments will begin to inform other AI models.
Key takeaways:
  • Human attention is nuanced. This makes it difficult to rely only on aggregated non-human metrics for accuracy.
  • A human-first approach is critical.
  • Outcomes cannot predict attention.
  • Attention strategies should be tailored to campaign requirements.
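The modeling approach described above (a logistic regression fit by MLE, with brand size and attention type interacting to drive brand choice) can be sketched on synthetic data. Everything below is illustrative: the dataset, coefficient values and variable names are assumptions, not Amplified Intelligence's actual data or model. Only the sign pattern, active attention helping smaller brands more than larger ones, follows the finding reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Hypothetical predictors: brand size and attention type (binary for simplicity).
brand_large = rng.integers(0, 2, n)   # 1 = large brand, 0 = small brand
active_attn = rng.integers(0, 2, n)   # 1 = active attention, 0 = passive

# Assumed ground-truth model with a negative interaction term: active
# attention adds less for large brands than for small ones.
logit = -0.5 + 0.8 * brand_large + 1.2 * active_attn - 0.9 * brand_large * active_attn
choice = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Design matrix with an explicit interaction column.
X = np.column_stack([np.ones(n), brand_large, active_attn, brand_large * active_attn])

# Maximum likelihood fit via Newton-Raphson iterations.
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))        # predicted choice probabilities
    grad = X.T @ (choice - mu)              # score (gradient of log-likelihood)
    hess = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

# beta[3] is the brand-size x attention-type interaction; a clearly negative
# estimate reproduces the "active attention works harder for small brands" pattern.
print(beta)
```

A real analysis would also report standard errors and cross-validated accuracy, as the presentation describes; this sketch only shows how an interaction term is specified and recovered.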


Member Only Access

How Attention Measurement Optimizes Marketing Campaigns for Success

Neala BrownSVP of Strategy and Insights, Teads

Laura ManningSVP of Measurement, Cint

This presentation focused on the intersection of attention and brand lift. The partnership between Teads and Cint addresses the challenges of scalability, access to data and insights, and collaboration and innovation. They used normative data sets to examine performance beyond viewability. In partnership with Adelaide, whose AU is an omnichannel metric that predicts the probability that a placement will capture attention and drive subsequent impact, they conducted 17 studies in 2023. Results varied in statistical significance, outcomes and other respects. In the first case study, media that scored as highly attentive showed higher product familiarity and favorability; in the second, for a flat and neutral campaign, higher attention drove higher brand lift across every brand-funnel metric. From a media-planning perspective, this learning can be leveraged toward outcomes. Aggregating across all 17 case studies: frequency matters, because lower frequencies require more AU to move metrics. People who are already familiar with a brand react to lower-AU media. For favorability, more "energy," or AU, is needed to move people, and it is easier to move people with a high level of familiarity. For ad recall, even at higher exposure levels, the ad needs to be high quality and needs more attention. Notably, these case studies can be replicated. Key takeaways:
  • Frequency matters: lower frequencies require more AU to move metrics.
  • Familiarity reacts to lower AU media.
  • Favorability: it’s easier to move people with a high level of familiarity.
  • For ad recall, the ad needs to be high quality.
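The frequency-and-AU tradeoff above (lower frequencies require more AU to move metrics) can be illustrated with a toy response curve. This is purely a hypothetical sketch: the functional form, constants and function names are invented for illustration and are not the Teads/Cint or Adelaide model; it only encodes the qualitative relationship reported above.

```python
import math

def lift(au, freq, k=0.8, c=3.0):
    """Toy brand-lift curve: lift rises with both attention (AU) and frequency.

    k and c are arbitrary illustrative constants, not measured values.
    Requires freq > 0.
    """
    return 1 / (1 + math.exp(-(k * au * math.log1p(freq) - c)))

def min_au_for_lift(freq, target=0.5, step=0.1):
    """Smallest AU (in 0.1 increments) that reaches the target lift at a given frequency."""
    au = 0.0
    while lift(au, freq) < target:
        au += step
    return round(au, 1)

# Under this toy model, the AU needed to hit the same lift target
# shrinks as exposure frequency grows.
for freq in (1, 3, 6):
    print(freq, min_au_for_lift(freq))
```

The design choice here is deliberate: AU and frequency enter multiplicatively, so each can partially substitute for the other, which is one simple way to express "lower frequencies require more AU."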



Retail Media Networks, Generative AI Top JAR’s Industry-Informed Research Priorities


Retail media networks, generative AI across creative, market research and trust, ad effectiveness and attention: These are among the topics highlighted on the Journal of Advertising Research’s list of 2024 research priorities. The list is a result of one-on-one interviews with advertising professionals by Editor-in-Chief Colin Campbell, who asked: "What are your biggest needs and challenges?"


2023 Attribution & Analytics Accelerator

The Attribution & Analytics Accelerator returned for its eighth year as the only event focused exclusively on attribution, marketing mix models, in-market testing and the science of marketing performance measurement. The boldest and brightest minds took the stage to share their latest innovations and case studies. Modelers, marketers, researchers and data scientists gathered in NYC to quicken the pace of innovation, fortify the science and galvanize the industry toward best practices and improved solutions. Content is available to event attendees and ARF members.


One Size [Does Not] Fit All: Optimizing Audio Strategies for Success

What spot length works best? Audacy partnered with Veritonic to compare frequent radio listeners’ responses to 15-, 30- and 60-second ads across multiple categories, such as auto, financial, retail and professional services, to address this frequently asked question. Jenny Nelson (Audacy) and Korri Kolesa (Veritonic) presented the results of this study, which were measured by Veritonic’s audio score components, such as attribute score, intent score and engagement score. This survey-based study of a panel of 2,400 radio listeners pointed to a variety of recommendations, such as running multiple 30-second ads instead of fewer 60-second ads, testing creative before launch and deploying a total audio strategy to reach omnichannel listeners.

Concurrent Track Panel Discussions: ATTENTION MEASURES

These presenters were all true believers in the value of attention. Their key takeaways from the presentations in this track were:

  • Attention is “ready for prime time,” as Marc Guldimann (Adelaide) put it. It has risen to prominence on the industry’s agenda, and he expects it to spread into media mix modeling and programmatic. Attention, he believes, should free the industry from “invasive” attribution practices by giving advertisers confidence in the quality of the media they are buying.
  • Jon Waite (Havas) was encouraged to see attention move from theory to practice for optimizing campaigns. He believes that the focus on attention would encourage publishers to improve experiences on the web, which, in turn, would lead to better results for brands.
  • Mike Follett (Lumen) cautioned that there was still much to learn about attention in different contexts: flighting, frequency, differences between B-to-B and B-to-C, the role of creative and long-term effects. What he found interesting in Joanne Leong’s presentation (to which he contributed) is the possibility of developing models that can predict attention for any campaign.
  • Publishers have come up with innovative formats to optimize for attention on television, according to Kelsey Hanlon (TVision).


There was some disagreement among the panelists about the prospects for an attention currency. Marc saw it as an “obvious next step.” Mike regarded attention as more of a buy-side “trading tool.” Jon said that it will become a key planning metric for Havas.

Attention Everywhere

The NBA tested an attention metric for digital media ad placements developed by Adelaide (called the AU) to increase tune-in to the NBA Finals and to improve its brand metrics. They leveraged the AUs of their CTV and digital placements to optimize a large campaign across CTV, digital, social and OOH with over two billion video impressions. They also used the AUs, which they obtain in near real time, to adjust those placements in-flight. They found that AUs lifted their KPIs, and they will incorporate them into their media mix models. Adelaide is also working with TVision to get AUs for linear television by daypart and genre. The NBA’s tune-in data was provided by SambaTV.

Dentsu’s Attention Economy Project: From Theory to Practice

Dentsu conducted multi-phase research on visual attention to advertising in the U.S. and U.K. across channels, platforms, formats and devices. For digital ads, they worked with Lumen, and for television ads, they worked with TVision. There were two components to the research: exploring how much attention consumers pay to advertising “in the wild,” and exploring attention to ads in a structured design with forced exposure to pre-selected ads for varying amounts of time. They learned, for example, that uplift in outcomes was stronger for viewing four seconds of a six-second ad than for viewing four seconds of a 20-second ad. This research has provided Dentsu with an extensive data set on attention and an understanding of the drivers of attention that can be applied to future plans.

The Future of Attention Measurement

Horst Stipp, ARF’s EVP, moderated a panel discussion with the speakers from the second half of the event. He asked panelists if they found new insights or surprises in the presentations and discussions, if there is room for improvement in attention measures and “what’s next”, specifically if attention measures should become currency. Here are edited highlights from their conversation.