
OOH Measurement’s Game Has Changed

Christina Radigan, SVP, Research & Insights, Outfront

Christina Radigan of Outfront explored the advantages of out-of-home advertising (OOH) and discussed advancements in its measurement. Christina noted that with the loss of cookies and third-party data, contextual ad placement will take on renewed importance, and in OOH, location is a proxy for context, driving content. She further highlighted the benefits of OOH, citing a recent Omnicom study using marketing mix modeling (MMM), which found that increased OOH spend drives revenue return on ad spend (RROAS). This research also highlighted that OOH is underfunded, representing only 4% to 5% of the total media marketplace. Following up on this, Christina pointed to attribution metrics, which measure the impact of OOH ad exposure on brand metrics and consumer behaviors, to demonstrate OOH's effectiveness at the campaign level. Expanding on their work in attribution, she noted changes stemming from the pandemic: format proliferation and greater digitization; privacy-compliant mobile measurement ramping up (opt-in survey panels and SDKs); and performance marketing and measurement becoming table stakes for budget allocations. New measurement opportunities from OOH intercepts include brand lift studies, footfall, website visitation, app downloads, app activity and tune-in. Finally, she examined brand studies conducted for Nissan and Professional Bull Riders (PBR), showcasing the effectiveness of OOH advertising in driving recall, ticket sales and revenue. Key takeaways:
  • MMMs return to the forefront, as models become more campaign-sensitive and privacy compliant (powered by ML and AI).
  • A study from Omnicom, using MMM, found that optimizing OOH spend in automotive increased brand consideration (11%) and brand awareness (19%). In CPG food, optimizing OOH spend increased purchase intent (24%), and in retail grocery it increased awareness (9%).
  • OOH now represents a plethora of formats (e.g., roadside ads, rail and bus ads, digital and print) and has the ability to surround the consumer across their journey, providing the ability to measure up and down the funnel, in addition to fueling behavioral research.
  • Key factors for successful measurement in OOH: feasibility (e.g., scale and scope of the campaign, reach and frequency), the right KPIs (e.g., campaign goal) and creative best practices (Is the creative made for OOH?).
  • OOH advertising is yielding tangible outcomes by boosting consumer attention (+49%). Additionally, there has been a notable surge in advertiser engagement (+200%).
  • Ad recall rates in OOH continue to increase (e.g., 30% in 2020 vs. 44% in 2023).
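To make the MMM logic above concrete, below is a minimal sketch of how a marketing mix model estimates a revenue return on ad spend (RROAS) per channel from weekly spend and revenue data. The data, channel mix and coefficients are invented for illustration and are not from the Omnicom study.

```python
import numpy as np

# Illustrative weekly data: revenue vs. spend by channel (not the Omnicom figures).
rng = np.random.default_rng(0)
weeks = 104
spend = {
    "ooh":     rng.uniform(20, 60, weeks),    # $k per week
    "tv":      rng.uniform(200, 400, weeks),
    "digital": rng.uniform(150, 300, weeks),
}
assumed_betas = {"ooh": 4.0, "tv": 1.2, "digital": 1.5}   # assumed incremental $ per $ spent
revenue = 500 + sum(assumed_betas[c] * spend[c] for c in spend) + rng.normal(0, 50, weeks)

# Fit a simple linear marketing mix model: revenue ~ intercept + spend per channel.
X = np.column_stack([np.ones(weeks)] + [spend[c] for c in spend])
coefs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
betas = dict(zip(spend, coefs[1:]))

# RROAS per channel = estimated incremental revenue per incremental dollar of spend.
for channel, beta in betas.items():
    print(f"{channel:8s} estimated RROAS: {beta:.2f}")

# Share of total spend going to OOH (the study put OOH at only 4% to 5% of budgets).
ooh_share = spend["ooh"].sum() / sum(s.sum() for s in spend.values())
print(f"OOH share of spend: {ooh_share:.1%}")
```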


Neuro: TV Brand Attraction Advantage Over Digital

Bill Harvey, Executive Chairman, Bill Harvey Consulting

Elizabeth Johnson, Ph.D., Executive Director & Senior Fellow, Wharton Neuroscience Initiative, UPenn

Michael Platt, Ph.D., Director, Wharton Neuroscience Initiative, UPenn

Audrey Steele, EVP, Sales Research Insights & Strategy, FOX Corp.

The presenters discussed their study on the link between attention and sales. Attention is required for engagement, but eyes on screen alone do not predict sales well. Instead, three main brain measurement dimensions account for sales and branding effects: brand attraction/joy (motivational signals in fMRI and EEG), memory (theta power in EEG) and synchrony (collective resonance across audience brains, in fMRI and EEG); all three require more than 1-2 seconds to unfold and measure. Neuroanalysis can help unmask hidden thoughts and feelings (via fMRI), and, scaled up, other predictive bio and neuro metrics can be just as predictive. The research shows that patterns of brain activity predict sales best: the sum of all perceptual, attentional, emotional, social and memory processes. EEG can also tell us about frustration, attention, memory and sleep/introspection. The research shows that EEG measures of brand attraction/joy can predict 80% of the variance in sales; notably, brand attraction/joy takes 15 seconds to peak. Brain memory also predicts sales, and memory encoding picks up after 10 seconds. Finally, synchrony (the collective audience response) predicts more than 90% of sales but also has temporal dynamics: it peaks at 5 seconds and picks up again after 15-20 seconds. Wharton Neuroscience investigated how different content and platforms impact sales lift. The study design: eight ads in eight verticals were tested in each of 10 experimental cells (7 TV, 2 smartphone, 1 control condition). Four ads at a time were shown during breaks in a TV show, and each viewer saw only one kind of content. Early findings from 3% of the total sample: attraction and memory are sustained for ads shown in premium channels compared to YouTube. The value of context is enormous: YouTube shows a drop at 4 seconds, whereas TV continues. Key takeaways:
  • Attention is an incomplete measure by which to select media contexts and platforms for specific campaigns.
  • Premium longform content and contexts have more sales and branding impact than digital, especially in new customer growth due to emotional immersion in TV context vs. brevity of ad attention/engagement in digital.
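The "80% of variance in sales" figure is effectively an R-squared claim. The sketch below shows, on simulated data rather than the Wharton study's, how a per-ad EEG-derived brand attraction score would be regressed against sales lift and the explained variance computed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-ad data (illustrative only): an EEG-derived brand attraction/joy
# score averaged over each ad, and the observed sales lift for the same ad.
n_ads = 40
brand_attraction = rng.normal(0.0, 1.0, n_ads)
sales_lift = 2.5 * brand_attraction + rng.normal(0.0, 1.2, n_ads)

# Ordinary least squares fit: sales_lift ~ brand_attraction.
slope, intercept = np.polyfit(brand_attraction, sales_lift, 1)
predicted = slope * brand_attraction + intercept

# R^2 = share of the variance in sales lift explained by the EEG signal.
ss_res = np.sum((sales_lift - predicted) ** 2)
ss_tot = np.sum((sales_lift - sales_lift.mean()) ** 2)
print(f"Variance in sales explained by brand attraction: {1 - ss_res / ss_tot:.0%}")
```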


The Power of Radio Through the Lenses of Emotional Engagement

Pedro Almeida, CEO, MediaProbe

Pierre Bouvard, Chief Insights Officer, Cumulus Media | Westwood One

The presentation focused on determining the emotional impact of AM/FM radio ads. MediaProbe was retained by Cumulus Media to measure second-by-second electrodermal activity (EDA), a measure of sympathetic nervous system activation, to see when it is activated and whether listeners were excited by the stimulus they heard. The result is termed the Emotional Impact Score (EIS), an impact metric that captures how excited people are on a second-by-second basis and which elements drive that emotion. This is an objective way of quantifying emotion in media and advertising content, capturing implicit emotional data (what people feel). Throughout the session, participants also dial the moments they like or dislike (the conscious, explicit capture of likes and dislikes) and answer pre- and post-session questions about recall and purchase intent. Methodology: 36 AM/FM radio ads in a simulated 30-minute broadcast across four genres (urban, news, adult contemporary and rock/oldies). Each "broadcast" had three ad breaks, and the average commercial break had three ads. In total, 227 people participated; each "broadcast" had a sample of 75 people, consumers listened to at least three of the four broadcasts, and each ad was exposed to 225 people. Findings:
  1. AM/FM radio programming outperforms MediaProbe’s U.S. TV norms by 13%. Put differently, the emotional impact score is higher when listening to radio.
  2. Carry-over effect: radio advertising commercial pods receive a 12% higher Emotional Impact Score than TV advertising commercial pods, making radio a premium platform.
  3. Across genres, people are most engaged when listening to news: they are processing what is being said and paying attention. There is no valence contamination between what is said on the news and the emotional engagement with ads.
  4. People are more engaged during radio advertising: 4% more than during radio content.
  5. Looking at 32 individual radio ads measured by MediaProbe, the emotional impact score is on average 5% higher than for the 4,670 individual TV ads in MediaProbe's database. This research is consistent with other lab-based studies.
  6. MediaProbe also conducted a physical feature analysis of the creative and found that: 1) higher pitch contrast between programming content and ads leads to higher impact (if the content has low pitch, ads should be higher pitch and vice versa); 2) louder ads lead to higher impact.
  7. Using a regression analysis, MediaProbe found the following best-performing creative elements in radio ads: 1) female voiceover; 2) jingles/background music; 3) five brand mentions (optimal); 4) no disclaimers. This too is consistent with other research.
Key takeaways:
  • AM/FM radio programming is more engaging than TV, according to MediaProbe.
  • They also found that AM/FM radio advertising outperforms TV advertising.
  • News is the most impactful genre as a high-quality contextual environment for advertising.
  • Sound contrast between radio programming and ads drives higher attention and brand recall.
  • Creative best practices: female voiceover, jingles, one voiceover and five brand mentions.
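As a rough illustration of how a second-by-second EDA trace can be rolled up into an impact score and indexed against a TV norm, consider the sketch below. The aggregation and the norm value are assumptions made for illustration, not MediaProbe's proprietary EIS formula.

```python
import numpy as np

def emotional_impact_score(eda_trace: np.ndarray, baseline: float) -> float:
    """Aggregate a second-by-second EDA trace into a single score.
    The normalization (mean activation above a resting baseline) is an
    illustrative assumption, not MediaProbe's proprietary EIS formula."""
    arousal = np.clip(eda_trace - baseline, 0, None)   # count only activation above rest
    return float(arousal.mean())

rng = np.random.default_rng(2)

# Simulated 30-minute "broadcast": 1,800 one-second EDA samples (microsiemens).
baseline = 2.0
radio_pod = baseline + np.abs(rng.normal(0.45, 0.15, 1800))
tv_norm_score = 0.40                                   # assumed TV norm for comparison

radio_score = emotional_impact_score(radio_pod, baseline)
print(f"Radio EIS: {radio_score:.2f}  index vs. TV norm: {radio_score / tv_norm_score * 100:.0f}")
```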
 


How Co-viewing and Other Factors Impact Viewer Attention to CTV

Monica Longoria, Head of Marketing Insights, LG Ad Solutions

Tristan Webster, Chief Product Officer, TVision

The research presented combined an online survey of over 1,000 respondents with data from TVision's 5,000+ U.S. home panel. Questions asked: 1. Does CTV garner more attention? 2. Are consumers more likely to co-view CTV? 3. Does co-viewing negatively affect attention? TVision's equipment is always on in panel homes: a webcam captures how many people are in the room and eyes on screen second by second, and a router meter identifies which CTV device is on and detects apps. TVision's measurement engine includes remote device management and an ACR engine. Findings:
  1. CTV in general has a 13% higher attention index; attention increases due to purposeful watching. Co-viewing has a stronger impact on CTV than on linear (75% higher).
  2. Streaming is a popular co-viewing experience, with a mostly non-negative impact on attention. Households with kids are more likely to pay attention to streaming content and ads, and are 36% more likely to discuss what is seen on TV. There are three different types of co-viewing: a family setup with different age groups (increased attention depends on genre), an adults-only setup with similar gender and age (biggest impact on attention) and a mixed adults-only setup.
  3. Streaming is gaining ground as a co-viewing method for watching sports, which is typically done with other people.
Implications for brands and marketers:
  1. CTV offers the opportunity to create more engaging ads with higher levels of attention; its digital capabilities garner more attention. Ads should be created specifically for CTV (in contrast to linear).
  2. Co-viewing can be an opportunity to turn your brand into a discussion.
  3. Measurement providers give us new insights into viewer behavior.
Key takeaways:
  • Attention is higher with CTV than with linear.
  • Positive impact of co-viewing: Co-viewing on streaming platforms is popular and generally maintains or increases attention.
  • Streaming is increasingly preferred for watching sports in a co-viewing context, offering new opportunities for targeted advertising and engagement in sports content.
  • Implications for brands and advertisers: The engaging nature of CTV offers ample opportunities for more impactful ads. Co-viewing experiences can transform ads into discussion points among viewers, enhancing brand engagement.
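A "13% higher attention index" is most naturally read as eyes-on-screen time relative to time on screen, indexed against a baseline. The sketch below computes such an index from per-exposure viewing seconds; the definition and the numbers are illustrative assumptions, not TVision's methodology or panel data.

```python
from dataclasses import dataclass

@dataclass
class AdExposure:
    platform: str            # "ctv" or "linear"
    seconds_on_screen: int   # seconds the ad was on screen
    seconds_eyes_on: int     # seconds with eyes on screen during the ad

def attention_rate(exposures: list, platform: str) -> float:
    """Share of ad time with eyes on screen for one platform."""
    subset = [e for e in exposures if e.platform == platform]
    return sum(e.seconds_eyes_on for e in subset) / sum(e.seconds_on_screen for e in subset)

# Illustrative exposure log (not TVision panel data).
log = [
    AdExposure("ctv", 30, 21), AdExposure("ctv", 15, 11),
    AdExposure("linear", 30, 17), AdExposure("linear", 15, 9),
]

ctv, linear = attention_rate(log, "ctv"), attention_rate(log, "linear")
print(f"CTV attention index vs. linear: {ctv / linear * 100:.0f}")  # >100 = higher attention
```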


Human Experience: Why Attention AI Needs Human Input

Dr. Matthias Rothensee, CSO & Partner, eye square

Stefan Schoenherr, VP Brand and Media & Partner, eye square

Speakers Matthias Rothensee and Stefan Schoenherr of eye square discussed the need for a human element and oversight of AI. Opening the discussion on the state of attention and AI, Matthias acknowledged that the race for attention is one of the defining challenges of our time for modern marketers. He quoted author Rex Briggs, who noted the "conundrum at the heart of AI: its greatest strength can also be its greatest weakness." Matthias indicated that AI is incredibly powerful at recognizing patterns in big data sets, but at the same time there are risks attached to it (e.g., finding spurious patterns and hallucinating). Stefan examined a case study using an advertisement for M&M's candy, which measured real humans with eye tracking technology and compared the results to AI predictions. The goal was to better understand where AI is good at predicting attention and where it still has to improve. Results from the case study indicated areas for AI improvement in terms of gaze cueing, movement, contrast, complexity and nonhuman entities (e.g., a dog). The static nature of AI (prediction models are often built on static attention databases) can become a challenge when comparing against dynamic attention trends. Key takeaways:
  • Predictive AI is good at replicating human attention for basic face and eye images, high-contrast scenes (e.g., probability of looking at things that stand out) and slow-paced scene cuts where AI can detect details.
  • AI seems unaware of a common phenomenon called the "cueing effect" (e.g., humans not only pay attention to people's faces but also to where they're looking), which leads to an incorrect prediction.
  • AI has difficulty deciphering scenes with fast movement (AI shows inertia), in contrast to slow-paced scenes, where it excels at replicating human feedback; for fast-moving scenes, human measurement is more accurate.
  • AI is drawn to contrast (e.g., in an ad featuring a runner, AI gave attention to the trees surrounding the runner), whereas humans can decipher the main subject of an image.
  • AI decomposes human faces (e.g., AI is obsessed with human ears), whereas humans detect the focal point of a face. In addition, AI hallucinates, underestimating facial effects.
  • AI has difficulties interpreting more complex visual layouts (e.g., complex product pack shots are misinterpreted).
  • AI is human centric and does not focus well on nonhuman entities such as a dog (e.g., in scenes where a dog was present, AI disregarded the dog altogether).
  • AI tends to be more static in nature (e.g., AI prediction models are often built based on static attention databases), which can be a problem when comparing this to dynamic attention trends.
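eye square's comparison boils down to scoring how well an AI-predicted saliency map agrees with an observed human fixation map. One common way to do that is a simple correlation between the two maps, sketched below on toy data rather than eye square's model or stimuli.

```python
import numpy as np

def attention_map_correlation(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Pearson correlation between an AI-predicted saliency map and a human
    fixation-density map of the same shape, a common agreement metric."""
    return float(np.corrcoef(predicted.ravel(), observed.ravel())[0, 1])

rng = np.random.default_rng(5)

# Toy 20x20 attention maps: humans fixate the face and where it is looking
# (the cueing effect); the "AI" map responds mainly to local contrast.
human = np.zeros((20, 20))
human[5:9, 5:9] = 1.0        # the face
human[5:9, 12:16] = 0.6      # where the face is looking (cueing effect)
ai = np.zeros((20, 20))
ai[5:9, 5:9] = 0.9           # face found
ai[14:18, 1:5] = 0.8         # high-contrast background object humans ignore

noise = rng.normal(0, 0.02, (20, 20))
print(f"AI vs. human agreement: r = {attention_map_correlation(ai + noise, human):.2f}")
```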


The Impact of Co-Viewing on Attention to Video Advertising

Duane Varan, Ph.D., CEO, MediaScience

Impressions are measured everywhere; however, not all impressions are equal, and as such, we need to think about how to weight them appropriately. The problem with CTV is that there is more than one viewer, and the device itself doesn't tell you this. The question, then, is how to account for these added impressions and what their value is. A meta-analysis of MediaScience studies (n=11) on co-viewing was presented. This is exploratory rather than conclusive, because the studies were commissioned by clients; these are premium publishers, and not all TV is at that level of quality. The conceptual model of co-viewing: device-level exposure data → add additional co-viewers → estimated additional co-viewers. How do we know that these additional co-viewers have the same value? We need to factor in what could be a diminished ad impact. To do this, we adjust the audience (factoring in diminished ad impact) → adjusted additional co-viewers (by impact). Results:
  1. Attention and memory effects are the two areas that matter most when addressing co-viewing. The attention effect is small, and there is not a lot of variability in that effect. The real story is in memory: if you're talking to someone, it is difficult to process the ad. Memory retrieval when co-viewing decreases by 15-52% depending on the content.
  2. Co-viewing composition effect: Mixed gender viewing has a more detrimental effect than same sex viewing (decrease by 27%).
  3. Age effects: There are big differences by age but not a lot of difference in terms of the decline that is associated with co-viewing by age.
  4. Program effects: The majority of the variability is in the program effects, between 22% and 58%. The co-viewing problem cannot be solved by industry averaging; program-level measurement is needed. For instance, the effect is worse with sitcoms than with sports. One theory is that in sports a lot of human interaction happens in the moment, whereas in comedy it is saved for the ad break.
  5. Number of co-viewers effects: What happens when you increase the number of people in the room? In the studies, the maximum co-viewing is two. Looking at TVision data, three or more viewers impacts the level of visual attention: from a 3% drop with two viewers, to an 18% drop with three viewers and a 23% drop with four or more viewers. However, this has limited practical impact because 97% of TV viewing occurs with one or two viewers, and only 3% with three or more (TVision data).
  6. Implications in terms of the value proposition: in the worst-case scenario (a detrimental effect of 58%), the net effect of co-viewers is negative 40; in the average scenario (a detrimental effect of 15%), the net is 140 viewers in value.
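The adjustment flow described above (device-level exposures, plus estimated co-viewers, discounted for diminished ad impact) can be sketched as a simple weighting. The co-viewer counts below are illustrative, and while the discount values reflect the 15% to 58% detriment range reported, the outputs are not MediaScience's estimates.

```python
def adjusted_impressions(device_impressions: float,
                         co_viewers_per_device: float,
                         impact_discount: float) -> float:
    """Device-level impressions plus estimated co-viewers, with each additional
    co-viewer discounted for diminished ad impact (e.g., weaker memory encoding)."""
    extra = device_impressions * co_viewers_per_device
    return device_impressions + extra * (1 - impact_discount)

# Illustrative scenarios; the co-viewer rate is an assumption, and the discounts
# span the reported detriment range.
base = 100.0
print(adjusted_impressions(base, co_viewers_per_device=1.0, impact_discount=0.15))
print(adjusted_impressions(base, co_viewers_per_device=1.0, impact_discount=0.58))
```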
Future research will focus on second-screen device usage; the hypothesis is that the scale of this problem is bigger than the scale of co-viewing. Key takeaways:
  • Focus on co-viewing to understand the value of additional viewers.
  • The effect is seen in the memory domain rather than the attention domain.
  • Variability by program means that the equation will differ between programs.


EEG Illuminates Social Media Attention Outcomes

Shannon Bosshard, Ph.D., Lead Scientist, Playground XYZ

Bill Harvey, Executive Chairman, Bill Harvey Consulting

Advertising starts with attention. If attention is gained and sustained long enough, brain engagement occurs. Once this happens, memory encoding may follow, and that is when incremental brand equity and sales occur. The attention economy has now reached a pivotal moment: what drives attention, and how is it related to outcomes? Is it media platforms or creative? The presenters took two approaches. The first was brand lift studies (focusing on the conscious) with 20,000 participants, 35 well-established brands and 60 ads on social media platforms, using eye tracking and post-exposure surveys. The second was a neuro study focusing on the subconscious, with 50 participants across 150 sessions exposed to over 1,800 ads, using a combination of eye tracking, EEG and the RMT method for measuring motivations. Hypotheses:
  1. Some ads achieve their desired effects with lower attention than others.
  2. Platform attention averages mislead media selection because they leave out the effect of the creative and the effect of motivations.
  3. Higher order effects add to our understanding of what is “optimal”: motivation, memory encoding, immersion, cognition load.
By isolating the impact of the platform (same creatives across multiple channels), the research shows that platform is not the largest driver of outcomes: in only 25% of cases is there a statistically significant difference between media platforms. Instead, the creative determines outcomes: in 96% of cases there is a statistically significant difference between creatives. Creative presents the best opportunity for behavior change. The platform might be the driver of attention, but creative is the driver of outcome. Put differently, platforms dictate the range of attention and how the consumer interacts, but it's the creative that drives outcomes. Attention and non-attention are affected by motivations and subconscious decisions (to be proven in future work). Neuroscience taps into the subconscious: memory encoding, immersion (engagement), approach (attitude) and cognitive load. The presenters compiled overall averages to make inferences about where to place an ad. The RMT methodology uses human coders to code an ad (or any piece of content) with driver tags and see how many of those tags belong to the ad; it was used to examine the resonance between the ad and the person. Key takeaways:
  • Attention drives outcomes—there’s a need to understand how it is related within that cycle.
  • Creative is key—there is a need to understand how much attention is needed to drive outcome.
  • Consider consumer motivation—this correlates with neuroscience metrics and allows for more nuanced understanding of the importance of creative in driving outcomes.
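The platform-versus-creative finding is, at bottom, a question of which factor explains more variance in outcomes. The sketch below runs that comparison on simulated data using a simple eta-squared calculation; it illustrates the idea and is not Playground XYZ's analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated outcome lift for each (platform, creative) cell. Effect sizes are
# assumptions chosen so that creative matters far more than platform.
platforms, creatives = 4, 15
platform_effect = rng.normal(0, 0.2, platforms)   # weak platform effect
creative_effect = rng.normal(0, 1.0, creatives)   # strong creative effect
lift = (platform_effect[:, None] + creative_effect[None, :]
        + rng.normal(0, 0.3, (platforms, creatives)))

def eta_squared(values: np.ndarray, factor_axis: int) -> float:
    """Share of total variance explained by the factor along `factor_axis`."""
    grand = values.mean()
    level_means = values.mean(axis=1 - factor_axis)   # mean per level of the factor
    n_per_level = values.shape[1 - factor_axis]
    ss_between = n_per_level * ((level_means - grand) ** 2).sum()
    ss_total = ((values - grand) ** 2).sum()
    return float(ss_between / ss_total)

print(f"Variance explained by platform: {eta_squared(lift, 0):.0%}")
print(f"Variance explained by creative: {eta_squared(lift, 1):.0%}")
```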


Mapping the Impact: When, How and Why TV Commercials Work Best

Jeff Bander, President, eye square

Sandra Schümann, Senior Advertising Researcher, RTL Data & Screenforce

Marvin Vogt, Senior Research Consultant, eye square

Screenforce conducted a series of studies beginning in 2020, examining reach, success, mood mapping and impact in relation to attention. They mapped the impact by investigating when which type of communication works best, and why. There were 8,304 in-home ad contacts from 285 participants viewing in a natural setting (their living rooms), and 64 brands were examined in three countries. The largest media-ethnographic study in Europe examined usage situations and scenarios. There were four scenarios: 1) Busy Day (2-6PM Mon-Fri, people are distracted and focused on other things), 2) Work is Done (after 6PM, a first dip in concentration, seeking a better mood), 3) Quality Time (8-10PM, prime time, high activation, the "Super Bowl moment," high focus on the screen), 4) Dreaming Away (10PM-1AM, typically alone, before sleep, a dreamlike state). Each of the 64 ads was tested in all four scenarios. The study included technical objective criteria, subjective feelings and creative approaches. eye square developed an approach that needs no additional material other than an instruction book, a webcam and GSR. Key findings:
  1. Visual attention is highest late at night (86%). Ad recall works best in the evening (75% for Quality Time and Dreaming Away). However, advertising is shown to fit better earlier in the day.
  2. Characteristics per scenario: spot liking rises when a brand jingle (audio) is used in the Busy Day scenario, because people are distracted and the jingle helps retain their focus.
  3. On a Busy Day, use strong brands with strong branding. When work is done, use ads to create a good mood. During Quality Time, it’s time for the big stories. During Dreaming Away, less is more.
  4. In sum, it is possible to determine which scenario works best for a spot, optimize the ad accordingly and find the best possible time and placement to air it.
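Because the four usage scenarios are defined by daypart, matching a spot to a scenario is essentially a lookup on air time. The sketch below encodes the dayparts described above; the exact boundaries (e.g., where Work is Done ends) and the handling of hours outside the studied window are assumptions.

```python
from datetime import datetime

def usage_scenario(air_time: datetime) -> str:
    """Map an air time to one of the four Screenforce usage scenarios.
    Boundaries follow the dayparts described above; hours outside them
    return 'other' (an assumption, since the scenarios cover 2PM-1AM)."""
    hour, weekday = air_time.hour, air_time.weekday()   # weekday(): 0=Mon ... 6=Sun
    if 14 <= hour < 18 and weekday < 5:
        return "Busy Day"        # distracted, focused on other things
    if 18 <= hour < 20:
        return "Work is Done"    # unwinding, seeking a better mood
    if 20 <= hour < 22:
        return "Quality Time"    # prime time, high focus on the screen
    if hour >= 22 or hour < 1:
        return "Dreaming Away"   # late night, typically alone
    return "other"

print(usage_scenario(datetime(2024, 3, 6, 21, 15)))   # Wednesday 9:15PM -> Quality Time
```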
Key takeaways:
  • TV ads have a strong effect, but there are ways to improve this impact.
  • The audience's usage scenario has an impact on ad effectiveness.
  • TV ads can achieve a higher effect if they take the usage scenario into account.


The Value of Attention is Nuanced by the Size of the Brand

Karen Nelson-Field, Ph.D., CEO, Amplified Intelligence

This presentation discussed the importance of nuance and interaction effects, and how understanding interaction effects is critical in building products. There were four use cases: campaign strategy, planning, verification and buying. Two sets of data were considered: inward facing (tag-based data collected through tags) and outward facing (device-based panel data such as gaze tracking and pose estimation). One is observed while the other is human; both are valuable, and each has limitations. Looking at actual humans has a scale issue, whereas impression data has limited ability to predict behavior. Human behavior is complex and varies by platform; metrics without ground truth miss this. Three types of human attention were measured: active attention (looking directly at an ad), passive attention (eyes not directly on the ad) and non-attention (eyes not on the screen or the ad). Attention and outcomes are not always related. Underneath attention data there is a hierarchy of attention: ad units, scroll speeds and other interaction effects all mediate one another. It is not as simple as saying that a given ad unit will yield a given amount of attention, and products that don't include these factors fail. Amplified Intelligence built a large-scale validation model for interaction effects and "choice" using Pepsi. They employed logistic regression using maximum likelihood estimation (MLE), analyzing observations and testing critical factors (brand size and attention type), and demonstrated strong predictive accuracy under cross-validation. They found significant interaction effects, particularly brand size and attention type as key influencers of consumer brand choice. Key findings:
  1. Passive and active attention work differently. Passive attention works harder for bigger brands, while active attention works harder for smaller brands. Put differently, small brands need active attention to get more brand choice outcomes.
  2. Attention switching (focus) mediates outcomes. The nature of viewing behavior mediates outcomes: not just whether attention occurred and at what level, but behavior across time. This is why time-in-view fundamentally fails, even though it is considered one of the critical measures of attention. Humans are constantly switching between attention and non-attention. There is attention decay, how quickly attention diminishes (sustained attention x time), and attention volume, the number of people attentive (attentive reach x time).
  3. Eyes-on-brand attention is vital for outcomes. If the brand is not on screen at the point when people are looking (or listening), this impacts outcomes. When the brand is missing, we fill in the blanks, but the next generation of buyers is being "untrained."
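The interaction model described above (logistic regression fitted by maximum likelihood, with a brand size by attention type interaction predicting brand choice) can be sketched as follows. The data and effect sizes are simulated to mirror the reported direction of the interaction and are not Amplified Intelligence's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 5000

# Simulated exposure-level data (illustrative, not the Pepsi validation dataset):
big_brand = rng.integers(0, 2, n)          # 1 = big brand, 0 = small brand
active_attention = rng.integers(0, 2, n)   # 1 = active attention, 0 = passive

# Assumed effects: active attention helps small brands more than big ones,
# i.e., a negative brand_size x attention_type interaction.
logit = -0.5 + 1.0 * big_brand + 0.9 * active_attention - 0.6 * big_brand * active_attention
chose_brand = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression fitted by maximum likelihood, with the interaction term.
X = sm.add_constant(np.column_stack([big_brand, active_attention,
                                     big_brand * active_attention]))
fit = sm.Logit(chose_brand, X).fit(disp=False)

for name, coef in zip(["const", "big_brand", "active_attention", "interaction"], fit.params):
    print(f"{name:17s} {coef:+.2f}")
```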
Implications:
  1. Human attention is nuanced and complicated, making it difficult to rely solely on aggregated non-human metrics for accuracy. We must constantly train these models, just like GenAI, to ensure that all these nuances are fit into the model. A human-first approach is critical.
  2. Outcomes cannot predict attention. Attention can predict outcomes but not the other way around.
  3. Attention strategies should be tailored to campaign requirements (not binary quality or more/less time). Over time, attention performance segments will start to take other AI into account.
Key takeaways:
  • Human attention is nuanced. This makes it difficult to rely only on aggregated non-human metrics for accuracy.
  • A human-first approach is critical.
  • Outcomes cannot predict attention.
  • Attention strategies should be tailored to campaign requirements.


The Power of AI for Effective Advertising in an ID-free World

Rachel Gantz, Managing Director, Proximic by Comscore

Amidst heightened regulations in the advertising ecosystem, Rachel Gantz of Proximic by Comscore delved into diverse AI applications and implementation tactics for effectively reaching audiences in an increasingly ID-free environment. Rachel highlighted signal loss as a "massive industry challenge," framing the research she examined. She remarked that the digital advertising environment was built on ID-based audience targeting, but with the loss of this data and the increase in privacy regulations, advertisers have shifted their focus to first-party and contextual targeting (which includes predictive modeling). Her discussion focused on the many impacts predictive AI is having on contextual targeting in a world increasingly devoid of third-party data, providing results from a supporting experiment. The research aimed to understand how the performance of AI-powered, ID-free audience targeting tactics compares to their ID-based counterparts. The experiment considered audience reach, cost efficiency (eCPM), in-target accuracy and inventory placement quality. Key takeaways:
  • Fifty to sixty percent of programmatic inventory has no IDs associated with it, and that includes alternative IDs.
  • Specific to mobile advertising, many advertisers saw 80% of their iOS scale disappear overnight.
  • In an experiment, two groups were exposed to two simultaneous campaigns focused on holiday shoppers. The first group (campaign A) was an ID-based audience, while the second group was an ID-free predictive audience.
    • Analyzing reach: ID-free targeting nearly doubled the advertiser's reach vs. the same audience with ID-based tactics.
    • Results from cost efficiency (eCPM): ID-free AI-powered contextual audiences saw 32% lower eCPMs than ID-based counterparts.
    • In-target rate results: Significant accuracy was confirmed (84%) when validating if users reached with the ID-free audience matched the targeting criteria.
    • Inventory placement quality: ID-free audience ads appeared on higher quality inventory, compared to the same ID-based audience (ID-free 27% vs. ID-based 21%).
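The comparison metrics in the experiment are straightforward to compute from delivery logs. The sketch below shows eCPM, unique reach and in-target rate for two campaigns; all figures are invented for illustration and are not Comscore's results.

```python
from dataclasses import dataclass

@dataclass
class CampaignDelivery:
    name: str
    spend: float        # total media cost in dollars
    impressions: int
    unique_reach: int   # deduplicated people/households reached
    in_target: int      # impressions validated as matching the targeting criteria

    @property
    def ecpm(self) -> float:
        """Effective cost per thousand impressions."""
        return self.spend / self.impressions * 1000

    @property
    def in_target_rate(self) -> float:
        return self.in_target / self.impressions

# Illustrative figures only, not the Proximic by Comscore experiment's data.
id_based = CampaignDelivery("ID-based audience", 50_000, 10_000_000, 1_200_000, 7_900_000)
id_free = CampaignDelivery("ID-free predictive audience", 50_000, 14_700_000, 2_300_000, 12_300_000)

for c in (id_based, id_free):
    print(f"{c.name:28s} eCPM ${c.ecpm:.2f}  reach {c.unique_reach:,}  in-target {c.in_target_rate:.0%}")
```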
