
neuro/biometrics

Unlocking Reach in Premium Content

Mike Levin, Product Management, NBCU

Emily Kwok, Senior Director, Ad Experience Measurement, NBCU



NBCU’s Mike Levin and Emily Kwok tested brand safety in premium video content from the viewer’s perspective, using NBCU’s proprietary AI technology for automating brand safety and suitability decisions. The study had three objectives: whether increasingly violent episodes influence viewers’ experiences; whether viewers then assign blame to marketers for knowingly advertising in explicit or violent content; and whether there are specific instances where adjacency affects viewer sentiment toward an ad. The team measured unconscious response to nine episodes across two seasons, tagged with three levels of risk, using facial coding and eye-gaze technology complemented by traditional surveys on a nationally representative sample of 1,800 respondents. Violent episodes maintained stable levels of attention, and the study also found that traces of negative emotion were scarcer in the more violent episodes.

Key Takeaways

  • From the mildest episodes to the most violent, viewer attention remained stable: attention to high-risk episodes measured 51.5%, versus 51.4% for low-risk episodes.
  • Viewers don’t attribute blame to advertisers. “There’s more reward than risk,” according to Emily Kwok. Viewers tend to enjoy brands that sponsor the content they love, controversial or not: 8 in 10 agree that they don’t distrust brands that advertise in graphic TV shows.
  • In several rare cases, gratuitous violence immediately preceding an ad break did carry negative sentiment into the first seconds of the ad.

Download Presentation

Member Only Access

Putting Cinema in the Frame

Mike Follett, CEO, Lumen Research

Manu Singh, SVP, Insights/Analytics & Sales Data Strategy, National CineMedia (NCM)



Lumen has a comprehensive visual-attention dataset covering most media, yet cinema was missing. How much attention goes to cinema advertising? Comparison between media is important, and clients have been asking Lumen to complete the picture: to “put cinema in the frame.” Methodology: 151 respondents across six screenings, with infrared head-tracking cameras used to record, second by second, when people were and were not looking at the screen. This is the first-ever eye-tracking data for cinema in the U.S. For context, Lumen’s attention funnel: of 1,000 ads, 51% were technically viewable, only 9% were actually looked at, and those that were held the eye for just 1.6 seconds. Findings:
  • In contrast to TV and other media, and digital display in particular, cinema ads are functionally unmissable: almost all cinema ads are viewed.
  • On the premise that more attention is usually better (the longer viewers engage, the more likely they are to remember the ad), the findings show that cinema ads hold attention for dramatically more time than any other medium, as much as 36x that of some digital formats.
  • A high proportion of cinema ads are watched all the way through: cinema ads are often longer than other formats, yet they outperform other media on the proportion of viewable time actually viewed.
  • Cinema is both audio and visual: even if you’re not looking at the screen, you might still be listening to it.
  • Cinema drives more attentive seconds per 1,000 impressions than any other medium.
Attention and recall showed a 1:1 correlation via survey: for every percentage-point increase in attention there is a corresponding lift in brand recall and brand choice. Implications for advertising: APM x CPM = ACPM; ACPM is embedded within agency planning tools; and ACPM + BLS (brand lift study) = the “value of attention.”
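The funnel numbers above translate directly into attentive seconds per 1,000 impressions, the APM figure in the planning math. A minimal sketch of that arithmetic, assuming (my reading, not Lumen's stated definition) that the 9% looked-at rate applies to all served impressions:

```python
# Hedged sketch: turning the attention-funnel numbers above into attentive
# seconds per 1,000 impressions (APM). Assumes the 9% "looked at" rate is
# measured against all served impressions, as the funnel reads.

def attentive_seconds_per_mille(looked_at_rate, avg_dwell_s, impressions=1000):
    """Attentive seconds generated per `impressions` ads served."""
    return impressions * looked_at_rate * avg_dwell_s

# Digital funnel from the talk: 9% looked at, 1.6 s average dwell.
apm = attentive_seconds_per_mille(0.09, 1.6)  # ~144 attentive seconds per mille
```

By the talk’s logic, a medium like cinema, where nearly all ads are viewed and held far longer, generates many times more attentive seconds from the same 1,000 impressions.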

Key Takeaways

  • The first ever eye tracking data for cinema ads in the U.S.
  • Ads in cinema are functionally unmissable; they hold attention for dramatically more time than any other media; cinema ads are often longer than other formats but outperform other media as a proportion of viewable time; and cinema drives more “attentive seconds per 1,000 impressions” than any other medium.
  • Three main drivers of attention: 1) size of screen; 2) lack of distraction; 3) mood of audience.

Download Presentation

Member Only Access

Let’s Face It: Facial Coding Isn’t Up to the Task

Elise Temple, Ph.D., VP, Neuroscience & Client Service, NielsenIQ



There are limitations in measuring the emotional impact of video. Facial expressions are powerful: neuroscience has shown that specialized parts of the brain are dedicated to faces, and developmental psychology and consumer neuroscience likewise demonstrate their importance. The question is whether the Facial Action Coding System (FACS) is the best tool to measure consumers’ emotional responses while viewing ads. NielsenIQ found that measuring people’s facial expressions requires high-quality video, good lighting and robust algorithms. NielsenIQ has tested across over 2,000 ads in 15 countries: initial R&D, then global deployment, until facial coding became a standardized diagnostic integrated with eye tracking. The first phase found that facial expressions are only reproducible in strong and consistent moments. Happiness is reproducible only if it is really big, namely if it includes strong smiles; for negative emotions, reliability and reproducibility were as low as 0.25. People watching TV, however, are mostly neutral, and reproducible expressions don’t occur often. Brain measures such as EEG, by contrast, fluctuate much more and are highly dynamic. Put differently, there is a gap between facial expressions and EEG waves: emotional expression does not equal emotion in the brain (r = 0.07). Is FACS predictive of anything meaningful? No: there is no significant relation between ad-driven sales lift and any emotional expression. Facial coding is not the right tool for the job of measuring emotional response to video ads. It is not reproducible (when measured continuously, you don’t get the same answer twice); not sensitive (at reproducible levels, expressions are rare); not meaningful (expressions do not equal emotion and do not reflect the dynamic emotion in the brain); and not predictive (smiles do not equal sales and don’t correlate with outcomes).

Key Takeaways

  • Facial coding is not a good measure of emotional response to video ads.
  • Only smiles were found to register in a somewhat reliable manner with facial coding.
  • There is no significant relation between ad-driven sales lift and any emotional expression.

Download Presentation

Member Only Access

Using Attention AI to Predict Real-World Outcomes

Max Kalehoff, VP Marketing Growth, Realeyes

Johanna Welch, Global Mars Horizon Comms Lab Senior Manager, Mars



Mars and Realeyes demonstrated the connection between creative attention and sales performance. Mars’ Agile Creative Expertise (ACE) tool tracks visual attention and emotional responses to digital video ads: visual-attention AI and facial coding measure how long participants watch a video and how their attention changes as they watch. The model has been proven to work, optimizing content and lifting sales up to 18% across 19 markets, with $30 million in ad optimizations in 18 months. ACE can: 1. Predict sales accurately while learning how consumers behave and think. 2. Optimize, improving performance through creative selection. 3. Scale, as a fast, scalable solution. The model links attention, emotion and memory: attention is access to the brain and enables the brand to enter consciousness; facial reactions build memories; and the impact is higher consideration, conversions and sales. The ACE solution works in three steps: 1. Participant exposure: completion in 24-48 hours, with 150-500 viewers drawn from a pool of 200m+ people. 2. Attention detection: deep learning collects viewer attention through a natural viewing experience. 3. Actionable scores: ML and AI analytics assess performance and deliver scores. Company impact: validated predictions proved connections to behavioral and sales data via over 4,000 sales/ad data points and benchmarks. Mars also used ACE to improve performance on TikTok, Facebook, Instagram and YouTube, achieving an 18% cumulative sales lift, and scored over 1,000 creatives in 18 months at global scale. In conclusion, ACE is the biggest attention database and has received a U.S. patent for visual attention detection. Mars hopes to share ACE with other companies; the next step is taking a pre-testing tool to in-flight content and examining brand equity.

Key Takeaways

  • Establishing the connection between creative attention and sales performance is key.
  • Mars’ Agile Creative Expertise (ACE) tool tracks visual attention as well as emotional responses to digital video ads.
  • The model has been proven to work, optimizing content and lifting sales.

Download Presentation

Member Only Access

In-Home Psychophysiological Television Measurement

Pedro Almeida, CEO, MindProber

Lauren Zweifler, SVP Insights & Research, Client Strategy & Insights Org, NBCU



How do we measure attention? This presentation introduced MindProber, a tool that combines biometrics/physiology, a conscious dial and survey methods to assess attention. MindProber’s measurement is both passive (second-by-second emotional engagement captured through electrodermal activity (EDA), aka galvanic skin response (GSR)) and active (cognitive response captured via an optional feature for indicating liked or disliked content through an app). The panel numbers 1,500 and is growing to 3,000 by the end of the year. Pedro Almeida (MindProber) emphasized three indicators of the value of a metric: 1. Validity: it measures what it is supposed to measure. GSR is linearly related to perceived arousal in video formats and measures emotional intensity (for example, spikes during key moments of a soccer game, such as goals). 2. Reliability: metrics are stable across samples, with 98.3% accuracy in ranking high- vs. low-impact ads. 3. Predictive power: metrics predict events of interest. There is carry-over: the more involved viewers are with the content, the more they respond to the ads. Premium content leads to higher advertising impact, engagement with content carries over to engagement with ads, and emotional impact indicates the likelihood of remembering the brand. MindProber generates macro insights through a robust taxonomy: 150+ sessions analyzed, 250+ hours of programming watched, 20,000+ individual hours monitored, 10,000+ individual participations, 22,700+ events tagged and 8,500+ ads. Ninety-eight percent of commercial activity is as engaging as the content. The findings confirmed all three indicators: validity; reliability (the 98.3% ranking accuracy above); and predictive validity, in that higher emotional impact for content means higher emotional impact for advertising, and a higher emotional impact score means higher brand recall.
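Second-by-second EDA is typically processed by separating a slow “tonic” baseline from the fast “phasic” spikes that mark emotional moments. A generic sketch of that decomposition (a common textbook approach with invented data and sampling rate, not MindProber’s proprietary pipeline):

```python
# Generic tonic/phasic decomposition of an EDA trace: subtract a slow
# rolling-mean baseline so short-lived arousal spikes stand out.
# Common textbook approach with invented data, not MindProber's pipeline.
import numpy as np

def phasic_component(eda, fs=4, window_s=10):
    """Detrend an EDA trace sampled at fs Hz with a centered rolling mean."""
    win = int(fs * window_s)
    padded = np.pad(eda, (win // 2, win - win // 2 - 1), mode="edge")
    tonic = np.convolve(padded, np.ones(win) / win, mode="valid")  # slow baseline
    return eda - tonic                                             # fast responses

# Simulated 60 s trace at 4 Hz: drifting baseline plus one arousal burst.
t = np.arange(0, 60, 0.25)
eda = 5.0 + 0.01 * t                  # slow tonic drift
eda[120:132] += 0.8                   # burst around t = 30 s (a "goal" moment)
phasic = phasic_component(eda)
peak_second = float(t[int(np.argmax(phasic))])   # lands inside the burst
```

Tagging such phasic peaks against a content timeline (goals, ad breaks) is one plausible way to arrive at the kind of event-level engagement scores described above.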
Lauren Zweifler (NBCU) showed that MindProber successfully identified the most crucial moments in both scripted and unscripted shows (iconic holiday TV moments for the former; Top Chef and America’s Next Top Model for the latter) with validity, reliability and predictability. What’s next?
  • How can we best predict lower funnel outcomes?
  • What elements of creative really matter?
  • How do we optimize for CTV?

Key Takeaways

  • New tool that captures emotional engagement through skin response.
  • Three fundamental metrics are validity, reliability and predictability.
  • Macro insights were generated through a robust taxonomy; the power of premium video content yields strong emotional engagement (EE).

Download Presentation

Member Only Access

Why Visual Attention is a Flawed Measure of Attention

Duane Varan, Ph.D., CEO, MediaScience



Because of the complexity of attention, Duane Varan proposes focusing on inattention and regarding attention as the absence of inattention. Attention is the threshold above which cognitive processing of stimuli occurs; it is not linear and is best understood as occurring above a threshold (of inattention), after which other variables can deliver their value. There are many different kinds of attention, and different measures capture different types; inattention, on the other hand, is a constant construct. Phase 1 of the study focused on literature-proven measures (except fMRI) and found the following four methods to be the best: heart rate (HR), visual fixations, blink duration and EEG (alpha wave). MediaScience and the Ehrenberg-Bass Institute concluded that eyes-on-screen is not best-in-class because: 1. People are often paying attention even when not looking at the screen. 2. Even when people look at the screen, they aren’t always paying attention; with fixations, at least, you see movements. Phase 2 used ads as stimuli and found that two measures worked best: heart rate and EEG. Varan pointed out that, as an industry, we have been fixating on eye tracking, but many ways of measuring eye tracking are not accurate, and they lose accuracy over time. There is also a misalignment between what we are measuring (and on which platforms) and what this means for different platforms. Even when accurate, eye-tracking measures are not the best-in-class measure of attention. While EEG is a great measure, it is not scalable; heart rate, on the other hand, can scale. A pilot project, massive in scale, covered 50 programs with n > 4,000 using the Stream Pulse platform: 1,576 in-lab sessions and 2,437 in-home sessions. Results demonstrate that HR via fitness trackers is viable, with correlations as high as 0.81, although the overall correlation was only 0.32. While promising, HR is a complex measure that requires more work to clean out noise and harness its full potential. Next steps: 1. Continue mining existing data to devise strategies for identifying and cleaning noise factors. 2. Run a large pilot “in the wild” using natural fitness-tracker data combined with ACR. Open questions remain: Where is attention being oriented? Toward the brand? If not, what is the value in that? And what effect does this have? Finally, not all measures lend themselves to AI. Trained against what reference data? Whether a model measures attention or something else depends on that reference data. Many companies use sales-lift measures: you need attention to get sales lift, but that does not make sales lift a measure of attention. And with what validation?
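The noise-cleaning step flagged above can be illustrated with a toy example: smoothing a tracker-grade heart-rate trace before correlating it with a reference attention signal raises the correlation. Everything here (signal shapes, noise level, window size) is invented to illustrate the principle; it is not MediaScience’s method.

```python
# Toy illustration of noise cleaning for fitness-tracker heart rate:
# a moving average removes high-frequency sensor noise, which raises the
# correlation with a slow-moving reference signal. Invented data throughout.
import numpy as np

def smoothed(series, win=5):
    """Centered moving average (zero-padded at the edges)."""
    return np.convolve(series, np.ones(win) / win, mode="same")

rng = np.random.default_rng(7)
reference = np.sin(np.linspace(0, 6, 120))           # stand-in attention measure
tracker_hr = reference + rng.normal(0, 0.8, 120)     # noisy HR proxy

r_raw = float(np.corrcoef(reference, tracker_hr)[0, 1])
r_clean = float(np.corrcoef(reference, smoothed(tracker_hr))[0, 1])
# Smoothing discards noise but keeps the slow trend, so r_clean exceeds r_raw.
```

The gap between the pilot’s best-case (0.81) and overall (0.32) correlations suggests how much such cleaning, done well, might be worth.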

Key Takeaways

  • Inattention should be the focus.
  • Eye tracking is not the best measure; heart rate and EEG are better but only heart rate is scalable.
  • Heart rate is viable and scales so there is promise in its application.
  • Be skeptical (but hopeful) of AI measures of attention. Proper scientific validation is necessary before we adopt them.

Download Presentation

Member Only Access

Decode Digital Video Attention by Environments in the Wild

Bill Harvey, Executive Chairman, Bill Harvey Consulting, Inc.

Sophie MacIntyre, Ads Research Lead, Meta



This research had two main objectives: to establish whether there is evidence for distinct environment types within digital media, and to understand the implications for the use of attention metrics. The study focused on a mobile “in the wild” test using simulated media environments, conducted and analyzed by Realeyes in partnership with eye square and Bill Harvey Consulting. It was scoped to video ads on six media platforms (Meta’s Facebook and Instagram, Hulu, Snapchat, TikTok, Twitter and YouTube) and three environment categories: Feed (Facebook, Instagram, Twitter), Short Form (Facebook, Instagram and Snapchat Stories; TikTok Takeover, Feed and TopView; and YouTube Shorts) and Stream (Facebook In-Stream, Hulu pre- and mid-roll, and YouTube skippable and non-skippable). The study was conducted in three stages: a pre-exposure survey, followed by an in-context view (capturing ad visibility, skips, scrolls, on-screen attention and reactions), and then a post-exposure survey (examining brand recognition, ad recall, brand trust, ad liking and persuasion). Several factors were held constant: 1) the effect of creative was isolated; 2) the audience was held constant through randomization and isolation (each person sees only one ad at a time). Main findings include:
  1. Across digital environments relaxation is the key mode.
  2. While attention norms vary between environments, brand recognition is comparable and delivers the same effect.
  3. Results were similar across the funnel for different format types, apart from ad recall.
  4. The effect of attention on brand outcomes differs across environments. More attention = higher ad recall and better brand recognition. Feed and short form environments saw the same ad recall with shorter average time and fewer long exposures.
  5. Marketers should tailor creative to take advantage of different second-by-second attention profiles. In Stream there is a more constant level of attention across the creative, with more time to tell the story. In Feed and Short Form the viewer makes a conscious decision to concentrate attention and enjoys the fun of choosing; Stream is more like riding in the back seat of a limo, with less choice. These are different types of attention.
  6. Across all environments, consumer attentive behavior decreases with increasing familiarity.
  7. In a study like this it is important to filter out people who aren’t typical users of the platform: newbies don’t know how to escape the ads.
In conclusion, the effectiveness of attention varies across environments, so caution is needed when averaging across categories, and attentive behavior evolves over time. Open questions:
  • How can we distinguish between positive and negative attention? Perhaps through facial attention measurement.
  • How to understand forced vs. earned time?
  • What are the tradeoffs between attention, brand outcomes and cost?
  • How can we layer on emotion data and/or additional data?

Key Takeaways

  • There are different profiles of consumer behavior across environments.
  • Despite this, brand outcomes are comparable, suggesting each environment attains value in a different way.
  • Attention has a different relationship with outcomes across environments and users.

Download Presentation

Member Only Access

Meaningful Attention – The Complexity of Human Perception

Jeff Bander, Chief Revenue Officer USA, eye square

Matthias Rothensee, Ph.D., Chief Scientific Officer & Partner, eye square

Attention is a hot topic, but what is most important, according to these researchers, is identifying “meaningful” attention: attention that is focused, purposeful, deeply processed and effective. eye square’s research focuses on the attention hierarchy (motion and novelty), ambiguity that attracts both brain and eye, and the pop-out effect. To measure attention, one needs:
  1. A sound methodology for measuring attention through eye tracking: a decent, accurate, noise-free signal.
  2. Authentic media exposure: live, in-context portfolio.
Major results (through meta-analysis):
  • Attention span is short. Only 10% of all advertising contacts are longer than 8 seconds.
  • The relationship between attention and recognition is not linear: the first 2.5 seconds are where the greatest effect is created.
  • Longer viewing time does not always lead to more memory. An Instagram vs. YouTube study shows that Instagram, despite lower attention, has higher recognition. Perhaps this is related to the exposure situation: voluntary attention on Instagram vs. forced exposure on YouTube.
  • There are different qualities of attention and these in turn impact/shape behavioral effects.
  • There are cultural, gender, geographic, age, race and ethnicity differences that guide attention.
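The nonlinear pattern in these findings can be illustrated with a simple saturating model: if recognition accumulates with diminishing returns, most of the memory effect arrives in the first seconds of gaze. This is a toy model of my own, not eye square’s fitted curve; the time constant is chosen only to echo the 2.5-second finding.

```python
# Illustrative saturating model (not eye square's fitted curve) of the
# nonlinear attention-to-recognition relationship: recognition rises
# steeply over the first seconds of gaze, then flattens out.
import math

def recognition_lift(dwell_s, tau=2.5):
    """Diminishing-returns curve: most of the effect arrives within ~tau seconds."""
    return 1.0 - math.exp(-dwell_s / tau)

early = recognition_lift(2.5) - recognition_lift(0.0)   # gain from the first 2.5 s
late = recognition_lift(8.0) - recognition_lift(5.5)    # gain from an equal slice later
# early ~ 0.63 vs late ~ 0.07: an equal slice of later viewing adds far less memory.
```

A curve like this would also be consistent with the Instagram vs. YouTube result above: once the steep early segment is banked, extra forced seconds buy little additional recognition.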

Key Takeaways

  • Begin with the end in mind.
  • Focus on goal of attention.
  • Tie meaning to goal.

Download Presentation

Member Only Access

Context Matters

Heather Coghill, VP, Audience, Warner Bros. Discovery

Daniel Bulgrin, Director, Research Operations & Insights, MediaScience

Heather Coghill (WBD) and Daniel Bulgrin (MediaScience) shared methodologies and results from two in-lab studies that sought to understand how impactful category priming can be without a brand mention, and whether viewers associate brands with adjacent unsuitable content. Their presentation focused on two types of contextual effects within program context: “excitation transfer” and “brand priming.” To see whether these effects carried over to ad content through excitement or brand recognition in the content, the research team used distraction-free viewing stations that enabled neurometrics and facial coding, followed by post-exposure surveys. Impact on brand perception was measured via lifts in brand attitude, attention and memory. Results showed that brand priming did change how viewers experienced the ad by lifting brand recognition, with stronger effects in heavier ad loads. The research also concluded that although brands are not harmed by adjacency to perceived unsuitable content, context effects still need to be considered.

Key Takeaways

  • Even moderate category primes can push effects through, despite modest impact, in both linear and CTV. Category priming in streaming with limited ads impacted middle- and lower-funnel metrics, with 31% of viewers noticing a connection between the ad and the program.
  • Although viewers agreed that low intensity “unsuitable” content was most acceptable for advertisers, there were no adverse effects as intensity levels increased—all levels were deemed suitable for advertisers, with no significant differences in brand recall or purchase intent.
  • More research is required to understand what is unsuitable for brands. The current guidelines are based on what is thought to be unsuitable—not social science.

Member Only Access