
Human Experience: Why Attention AI Needs Human Input

Dr. Matthias Rothensee, CSO & Partner, eye square

Stefan Schoenherr, VP Brand and Media & Partner, eye square

Speakers Matthias Rothensee and Stefan Schoenherr of eye square discussed the need for a human element in, and oversight of, AI. Opening the discussion on the state of attention and AI, Matthias acknowledged that the race for attention is one of the defining challenges of our time for modern marketers. He quoted author Rex Briggs, who noted the "conundrum at the heart of AI: its greatest strength can also be its greatest weakness." Matthias indicated that AI is incredibly powerful at recognizing patterns in big data sets, but at the same time carries risks (e.g., finding spurious patterns, hallucinations). Stefan examined a case study of an advertisement for M&M's candy, which measured real humans with eye tracking technology and compared those results to AI predictions. The goal was to better understand where AI is good at predicting attention and where it still has to improve. Results indicated areas for AI improvement in terms of gaze cueing, movement, contrast, complexity and nonhuman entities (e.g., a dog). The static nature of AI (prediction models are often built on static attention databases) can become a challenge when comparing dynamic attention trends. Key takeaways:
  • Predictive AI is good at replicating human attention for basic face and eye images, high-contrast scenes (e.g., probability of looking at things that stand out) and slow-paced scene cuts where AI can detect details.
  • AI seems unaware of a common phenomenon called the "cueing effect" (i.e., humans not only pay attention to people's faces but also to where those faces are looking), which leads to incorrect predictions.
  • AI has difficulty deciphering scenes with fast movement (AI shows inertia), in contrast to slow-paced scenes, where it excels at replicating human feedback. For fast-moving scenes, human measurement is more accurate.
  • AI is drawn to contrast (e.g., in an ad featuring a runner, AI gave attention to the trees surrounding the runner), whereas humans can identify the main subject of an image.
  • AI decomposes human faces (e.g., fixating on ears), whereas humans detect the focal point of a face. In addition, AI hallucinates, underestimating facial effects.
  • AI has difficulties interpreting more complex visual layouts (e.g., complex product pack shots are misinterpreted).
  • AI is human-centric and does not focus well on nonhuman entities such as a dog (e.g., in scenes where a dog was present, AI disregarded the dog altogether).
  • AI tends to be more static in nature (e.g., AI prediction models are often built based on static attention databases), which can be a problem when comparing this to dynamic attention trends.
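eye square's prediction models are not public, but the contrast bias described above can be illustrated with a deliberately naive saliency model that scores only local contrast (a hypothetical stand-in for contrast-driven attention prediction, not their method):

```python
import numpy as np

def contrast_saliency(gray: np.ndarray, win: int = 3) -> np.ndarray:
    """Score each pixel by local contrast (std. dev. in a small window).

    A naive stand-in for contrast-driven attention models: it rewards
    anything that 'stands out', with no notion of semantic importance
    (faces, gaze direction, the main subject of the scene).
    """
    h, w = gray.shape
    pad = win // 2
    padded = np.pad(gray.astype(float), pad, mode="edge")
    sal = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            sal[y, x] = padded[y:y + win, x:x + win].std()
    return sal / (sal.max() or 1.0)  # normalize to [0, 1]

# A flat region next to a high-contrast checkerboard: the model
# 'attends' to the busy half regardless of what it depicts, just as
# the study's AI attended to trees rather than the runner.
img = np.zeros((8, 16))
img[:, 8:] = (np.indices((8, 8)).sum(axis=0) % 2) * 255
sal = contrast_saliency(img)
print(sal[:, :8].mean() < sal[:, 8:].mean())  # True: busy half wins
```

A purely contrast-based score cannot distinguish "visually busy" from "semantically important", which is exactly the failure mode the case study reports.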

Download Presentation

Member Only Access

The Impact of Co-Viewing on Attention to Video Advertising

Duane Varan, Ph.D., CEO, MediaScience

Impressions are measured everywhere; however, not all impressions are equal, and as such, we need to think about how to weigh them appropriately. The problem with CTV is that there is often more than one viewer, and the device itself doesn't tell you this. The question, then, is how we account for these added impressions. From a value point of view, we need to understand the value of these additional viewers. A meta-analysis of MediaScience studies (n=11) on co-viewing was presented. It is exploratory rather than conclusive, because the studies were commissioned by clients; these are premium publishers, and not all TV is at that level of quality. The conceptual model of co-viewing: device-level exposure data → add additional co-viewers → estimated additional co-viewers. How do we know that these additional co-viewers have the same value? We need to factor in what could be a diminished ad impact: adjust the audience (factoring for diminished ad impact) → adjusted additional co-viewers (by impact). Results:
  1. Attention and memory effects are the two areas that matter most when addressing co-viewing. The attention effect is small, with little variability. The real story is in memory: if you're talking to someone, it is difficult to process the ad. Memory retrieval when co-viewing decreases by 15-52%, depending on the content.
  2. Co-viewing composition effect: Mixed-gender viewing has a more detrimental effect than same-sex viewing (a 27% decrease).
  3. Age effects: There are big differences by age, but not much difference in the co-viewing decline across ages.
  4. Program effects: The majority of the variability is in the program effects—between 22% and 58%. The co-viewing problem cannot be solved by industry averaging; it requires program-level measurement. For instance, the effect is worse with sitcoms than with sports. One theory is that in sports, much of the human interaction happens in the moment, whereas in comedy it is saved for the ad break.
  5. Number of co-viewers effects: What happens when you increase the number of people in the room? In the studies, the maximum co-viewing is two. TVision data shows that three or more viewers impacts the level of visual attention: from a 3% drop with two viewers, to an 18% drop with three viewers and a 23% drop with four or more. However, this matters little in practice, because 97% of TV viewing occurs with one or two viewers and only 3% with three or more.
  6. Implications for the value proposition: in the worst-case scenario (a detrimental effect of 58%), the net effect of co-viewers is negative 40. In the average scenario (a detrimental effect of 15%), the net is 140 viewers in value.
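The conceptual adjustment (device-level impressions → estimated co-viewers → value-weighted co-viewers) can be sketched in a few lines. The figures below are illustrative placeholders, not MediaScience's estimates:

```python
def adjusted_impressions(device_impressions: float,
                         co_view_rate: float,
                         detriment: float) -> float:
    """Weight additional co-viewer impressions for diminished ad impact.

    device_impressions: impressions reported by the device (one per set).
    co_view_rate: additional viewers per device impression (e.g. 0.5
                  means 50 extra viewers per 100 device impressions).
    detriment: fractional loss of ad impact when co-viewing (0..1).
    """
    extra = device_impressions * co_view_rate            # raw co-viewers
    return device_impressions + extra * (1 - detriment)  # value-weighted

# Illustrative: 100 device impressions, half of them co-viewed.
print(adjusted_impressions(100, 0.5, 0.15))  # mild detriment -> 142.5
print(adjusted_impressions(100, 0.5, 0.58))  # heavy detriment -> 121.0
```

The point of the model is that naively counting every co-viewer as a full impression (detriment = 0) overstates value whenever co-viewing degrades ad processing.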
Future research will focus on second-screen device usage. The hypothesis is that the scale of this problem is bigger than the scale of co-viewing. Key takeaways:
  • Focus on co-viewing to understand the value of additional viewers.
  • The effect is seen in the memory domain rather than the attention domain.
  • Variability by program means the equation will differ between programs.


EEG Illuminates Social Media Attention Outcomes

Shannon Bosshard, Ph.D., Lead Scientist, Playground XYZ

Bill Harvey, Executive Chairman, Bill Harvey Consulting

Advertising starts with attention. If attention is gained and sustained long enough, brain engagement occurs. Once this happens, memory encoding might follow, and that is when incremental brand equity and sales occur. The attention economy has now reached a pivotal moment: what drives attention, and how is it related to outcomes? Is it media platforms or creative? The presenters took two approaches. The first was brand lift studies (focusing on the conscious) with 20,000 participants, 35 well-established brands and 60 ads on social media platforms, using eye tracking and a post-exposure survey. The second was a neuro study focusing on the subconscious, with 50 participants across 150 sessions, exposed to over 1,800 ads. They used a combination of eye tracking and EEG, plus the RMT method for measuring motivations. Hypotheses:
  1. Some ads achieve their desired effects with lower attention than others.
  2. Platform attention averages mislead media selection because they leave out the effect of the creative and the effect of motivations.
  3. Higher-order effects add to our understanding of what is "optimal": motivation, memory encoding, immersion, cognitive load.
By isolating the impact of the platform (same creatives across multiple channels), the research shows that platform is not the largest driver of outcomes: in only 25% of cases is there a statistical difference between media platforms. Instead, the creatives determine outcomes: in 96% of cases there is a statistical difference between creatives. Creatives present the best opportunity for behavior change. The platform might be the driver of attention, but creative is the driver of outcome. Put differently, platforms dictate the range of attention and how the consumer interacts, but it's the creative that drives outcomes. Attention/non-attention is affected by motivations and subconscious decisions (to be examined in future research). Neuroscience taps into the subconscious: memory encoding, immersion (engagement), approach (attitude) and cognitive load. The presenters compiled overall averages to make inferences about where to place an ad. The RMT methodology uses driver tags to code an ad (or any piece of content), with human coders determining how many of these tags belong to the ad; it was used to examine the resonance between the ad and the person. Key takeaways:
  • Attention drives outcomes—there’s a need to understand how it is related within that cycle.
  • Creative is key—there is a need to understand how much attention is needed to drive outcome.
  • Consider consumer motivation—this correlates with neuroscience metrics and allows for more nuanced understanding of the importance of creative in driving outcomes.


Mapping the Impact: When, How and Why TV Commercials Work Best

Jeff Bander, President, eye square

Sandra Schümann, Senior Advertising Researcher, RTL Data & Screenforce

Marvin Vogt, Senior Research Consultant, eye square

Screenforce conducted a series of studies beginning in 2020, examining reach, success, mood mapping and impact in relation to attention. They mapped the impact by investigating when which type of communication works best, and why. There were 8,304 in-home ad contacts and 285 participants viewing in a natural setting (their living rooms). They also examined 64 brands in three countries. The largest media-ethnographic study in Europe examined usage situations and scenarios. There were four scenarios: 1) Busy Day (2-6PM Mon-Fri, people are distracted and focused on other things), 2) Work is Done (after 6PM, concentration dips at first, people seek a better mood), 3) Quality Time (8-10PM, prime time, high activation, the "Super Bowl moment," high focus on the screen), 4) Dreaming Away (10PM-1AM, typically alone, before sleep, a dreamlike state). Each of the 64 ads was tested in all four scenarios. The study included technical objective criteria, subjective feelings and creative approaches. eye square developed a setup requiring no additional material beyond an instruction book, a webcam and GSR. Key findings:
  1. Visual attention is highest late at night (86%). Recall for ads works best in the evening (75% in Quality Time and Dreaming Away). However, advertising is shown to fit better earlier in the day.
  2. Characteristics per scenario: spot liking rises when a brand jingle (audio) is used in the Busy Day scenario, because people are distracted then and the jingle helps retain their focus.
  3. On a Busy Day, use strong brands with strong branding. When work is done, use ads to create a good mood. During Quality Time, it's time for the big stories. During Dreaming Away, less is more.
  4. In sum, it is possible to find out which scenario works best for a spot, optimize the ad and find the best possible time and place to air it.
Key takeaways:
  • TV ads have a strong effect, but there are ways to improve this impact.
  • The audience's usage scenario has an impact on ad effectiveness.
  • TV ads can achieve a higher effect if they take the usage scenario into account.


The Value of Attention is Nuanced by the Size of the Brand

Karen Nelson-Field, Ph.D., CEO, Amplified Intelligence

This presentation discussed the importance of nuance and interaction effects, and how understanding them is critical to building products. There were four use cases: campaign strategy, planning, verification and buying. Two sets of data were compared: inward facing (tag-based impression data) and outward facing (device-based panel data: gaze tracking, pose estimation, etc.). One is observed machine data, while the other is human data. Both are valuable, and each has limitations: looking at actual humans has a scale issue, whereas impression data has limited ability to predict behavior. Human behavior is complex and varies by platform; metrics without ground truth miss this. Three types of human attention were measured: active attention (looking directly at an ad), passive attention (eyes not directly on the ad) and non-attention (eyes not on the screen or the ad). Attention and outcomes are not always related. Underneath attention data there is a hierarchy of attention: ad units, scroll speeds and other interaction effects all mediate one another. It is not as simple as saying this ad unit will get this amount of attention; products that do not include these factors fail. Amplified Intelligence built a large-scale validation model for interaction effects and "choice" using Pepsi. They employed logistic regression using maximum likelihood estimation (MLE), testing critical factors—brand size and attention type—and demonstrating strong predictive accuracy under cross-validation. They found significant interaction effects, particularly brand size × attention type, as key influencers of consumer brand choice. Key findings:
  1. Passive and active attention work differently. Passive attention works harder for bigger brands, while active attention works harder for smaller brands. Put differently, small brands need active attention to get more brand choice outcomes.
  2. Attention switching (focus) mediates outcomes. The nature of viewing behavior mediates outcomes: it is not just whether attention happens and at what level, but behavior across time. This is why time-in-view fundamentally fails, even though it is considered one of the critical measures of attention: humans are constantly switching between attention and non-attention. There is attention decay—how quickly attention diminishes (sustained attention × time)—and attention volume—the number of people attentive (attentive reach × time).
  3. Eyes-on-brand attention is vital for outcomes. If the brand is not present at the moment people are looking (or listening), this impacts outcomes. When the brand is missing, we fill in the blanks, but the next generation of buyers is being "untrained."
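The talk names logistic regression fit by MLE with a brand size × attention type interaction. A minimal self-contained sketch on synthetic data (not Amplified Intelligence's panel; coefficients and effect sizes are invented to echo finding #1) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the panel: brand_large (0/1), active
# attention (0/1), and their interaction, predicting brand choice.
n = 2000
brand_large = rng.integers(0, 2, n)
active = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), brand_large, active,
                     brand_large * active])  # intercept + interaction

# Simulated 'truth': active attention helps small brands more, so
# the interaction coefficient is negative.
true_beta = np.array([-1.0, 1.2, 1.5, -1.3])
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.random(n) < p  # did the respondent choose the brand?

# Maximum-likelihood fit by gradient ascent on the log-likelihood.
beta = np.zeros(4)
for _ in range(5000):
    p_hat = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (y - p_hat) / n  # gradient of mean log-lik.

print("interaction coefficient:", round(float(beta[3]), 2))  # negative
```

A significant negative interaction is exactly the shape of result reported: the payoff of active attention depends on brand size, so a single pooled attention coefficient would mislead.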
Implications:
  1. Human attention is nuanced and complicated, making it difficult to rely merely on aggregated non-human metrics for accuracy. We must constantly train these models, just like GenAI, to ensure that all these nuances are captured. A human-first approach is critical.
  2. Outcomes cannot predict attention. Attention can predict outcomes, but not the other way around.
  3. Attention strategies should be tailored to campaign requirements (not binary quality or more/less time). Over time, attention performance segments will start to inform other AI.
Key takeaways:
  • Human attention is nuanced. This makes it difficult to rely only on aggregated non-human metrics for accuracy.
  • A human-first approach is critical.
  • Outcomes cannot predict attention.
  • Attention strategies should be tailored to campaign requirements.


The Power of AI for Effective Advertising in an ID-free World

Rachel Gantz, Managing Director, Proximic by Comscore

Amidst heightened regulation of the advertising ecosystem, Rachel Gantz of Proximic by Comscore delved into diverse AI applications and implementation tactics for effectively reaching audiences in an increasingly ID-free environment. Rachel highlighted signal loss as a "massive industry challenge" to frame the research she examined. She remarked that the digital advertising environment was built on ID-based audience targeting, but with the loss of this data and the increase in privacy regulations, advertisers have shifted their focus to first-party and contextual targeting (which includes predictive modeling). She focused on the many impacts predictive AI is having on contextual targeting in a world increasingly devoid of third-party data, providing results from a supporting experiment. The research aimed to understand how AI-powered ID-free audience targeting tactics performed compared to their ID-based counterparts. The experiment considered audience reach, cost efficiency (eCPM), in-target accuracy and inventory placement quality. Key takeaways:
  • Fifty to sixty percent of programmatic inventory has no IDs associated with it, and that includes alternative IDs.
  • Specific to mobile advertising, many advertisers saw 80% of their iOS scale disappear overnight.
  • In an experiment, two groups were exposed to two simultaneous campaigns focused on holiday shoppers. The first group (campaign A) was an ID-based audience, while the second (campaign B) was an ID-free predictive audience.
    • Analyzing reach: ID-free targeting nearly doubled the advertisers’ reach, vs. the same audience, with ID-based tactics.
    • Results from cost efficiency (eCPM): ID-free AI-powered contextual audiences saw 32% lower eCPMs than ID-based counterparts.
    • In-target rate results: Significant accuracy was confirmed (84%) when validating if users reached with the ID-free audience matched the targeting criteria.
    • Inventory placement quality: ID-free audience ads appeared on higher quality inventory, compared to the same ID-based audience (ID-free 27% vs. ID-based 21%).
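eCPM is simply cost per 1,000 impressions. Assuming equal spend, a lower eCPM corresponds to the ID-free campaign delivering proportionally more impressions; the numbers below are illustrative, not the study's actual spend or delivery figures:

```python
def ecpm(total_cost: float, impressions: int) -> float:
    """Effective cost per mille: cost per 1,000 impressions."""
    return total_cost / impressions * 1000

# Illustrative: the same $5,000 spend. With ~47% more impressions
# delivered, the ID-free campaign's eCPM comes out ~32% lower,
# matching the magnitude of the reported gap.
id_based = ecpm(5000, 1_000_000)   # $5.00 eCPM
id_free = ecpm(5000, 1_470_588)    # ~$3.40 eCPM
print(round(1 - id_free / id_based, 2))  # 0.32
```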


CTV Ads: Viewer Attention & Brand Metrics

Rohan Castelino, CMO, IRIS.TV

Mike Treon, Programmatic Lead, PMG

Representing the Alliance for Video Level Contextual Advertising (AVCA), Rohan Castelino (IRIS.TV) and Mike Treon (PMG) examined research conducted with the eye tracking and attention-computing company Tobii. The research focused on the impact of AI-enabled contextual targeting on viewer attention and brand perception in CTV. Opening the discussion, Rohan examined challenges with CTV advertising. He noted that advances in machine learning (ML) have empowered advertisers to explore AI-enabled contextual targeting, which analyzes video frame by frame, using computer vision, natural language understanding, sentiment analysis, etc., to create standardized contextual and brand-suitability segments. Highlighting a study of participants in U.S. households, the research specifically aimed to understand whether AI-enabled contextual targeting outperformed standard demo and pub-declared metadata in CTV. Additionally, they wanted to understand whether brand suitability had an impact on CTV viewers' attention and brand perception. The research found that AI-enabled contextual targeting outperformed standard demo and pub-declared metadata in CTV and increased viewer engagement. In closing, Mike provided the marketer's perspective on the use of AI-enabled contextually targeted ads and their practical applications. Key takeaways:
  • Challenges with CTV advertising: ads can be repetitive, offensive and sometimes irrelevant, in addition to being placed in problematic contexts.
  • In addition, buyers are unsure who saw the ad or what type of content it appeared within. A recent study by GumGum showed that 20% of CTV ad breaks in children's content were illegal (e.g., ads shown for alcohol and casino gambling).
  • Advertisers have begun experimentation with contextual targeting in CTV, as a path to relevance.
  • A study conducted with U.S. participants that examined the effects of watching 90 minutes of control and test advertisements, using a combination of eye tracking, microphones, interviews and surveys to gather data found that:
    • AI-enabled contextual targeting attracts and holds attention (e.g., 4x fewer ads missed, 22% more ads seen from the beginning and 15% more total ad attention).
    • AI-enabled contextual targeting drives brand metrics (e.g., 2x higher unaided recall and 4x higher aided recall).
    • AI-enabled contextual targeting increases brand interest (e.g., 42% more interested in the product, 38% gained a deeper understanding).
  • Research to understand if brand suitability had an impact on CTV viewers’ attention and brand perception found that:
    • Poor brand suitability makes CTV viewers tune out ads and reduces brand favorability (e.g., 54% were less interested in the product, 31% liked the brand less).
    • AI-enabled contextual targeted ads are as engaging as the show.


How Attention Measurement Optimizes Marketing Campaigns for Success

Neala Brown, SVP of Strategy and Insights, Teads

Laura Manning, SVP of Measurement, Cint

This presentation focused on the intersection of attention and brand lift. The partnership between Teads and Cint addresses the challenges of scalability, access to data and insights, and collaboration and innovation. They used normative data sets to examine performance. Going beyond viewability, in partnership with Adelaide—whose AU is an omnichannel metric that predicts the probability of a placement capturing attention and driving subsequent impact—they conducted 17 studies in 2023. There was variance in results, statistical significance, outcomes, etc. Results: in case study #1, media that scored as highly attentive showed higher product familiarity and favorability; in case study #2, for a flat and neutral campaign, higher attention drove higher brand lift across every brand-funnel metric. In terms of applicability, from a media-planning perspective, this learning can be leveraged toward outcomes. Aggregating across all 17 case studies: frequency matters—lower frequencies require more AU to move metrics. People who are already familiar with a brand react to lower-AU media. For favorability, more "energy" (AU) is needed to move people, and it is easier to move people with a high level of familiarity. For ad recall, even at higher exposure levels, the ad needs to be high quality and needs more attention. Notably, these case studies can be replicated. Key takeaways:
  • Frequency matters: lower frequencies require more AU to move metrics.
  • People already familiar with a brand react to lower-AU media.
  • Favorability: it’s easier to move people with high level of familiarity.
  • For ad recall, the ad needs to be high quality.


Retail Media Networks, Generative AI Top JAR’s Industry-Informed Research Priorities


Retail media networks, generative AI across creative, market research and trust, ad effectiveness and attention: These are among the topics highlighted on the Journal of Advertising Research’s list of 2024 research priorities. The list is a result of one-on-one interviews with advertising professionals by Editor-in-Chief Colin Campbell, who asked: "What are your biggest needs and challenges?"

Member Only Access

Too Much Attention?

One expert argues that attention alone does not bring ad success and that we should not forget the other important levels of the “ARF Model.”
