
Let’s Face It: Facial Coding Isn’t Up to the Task

Measuring the emotional impact of video has its limitations. Facial expressions are powerful: neuroscience has shown that specialized regions of the brain are dedicated to processing faces, and both developmental psychology and consumer neuroscience demonstrate their importance. The question is whether the Facial Action Coding System (FACS) is the best tool for measuring consumers' emotional responses when viewing ads.

Using Attention AI to Predict Real-World Outcomes

Mars and Realeyes have demonstrated a connection between creative attention and sales performance. Mars' Agile Creative Expertise (ACE) tool tracks visual attention and emotional responses to digital video ads. Visual attention AI and facial coding measure how long participants watch a video and how their attention changes as they watch. The model has been proven to work: optimizing content has lifted sales by up to 18% across 19 markets and driven $30 million in ad optimizations over 18 months.

In-Home Psychophysiological Television Measurement

How should attention be measured? This presentation introduced MindProber, a tool that combines biometrics, physiology, and conscious/dial and survey methods to assess attention. MindProber's measurement is both passive (second-by-second emotional engagement is measured through electrodermal activity (EDA), also known as galvanic skin response (GSR)) and active (cognitive response is captured via an optional feature that lets participants indicate liked or disliked content through an app). The panel currently numbers N = 1,500 and is growing to 3,000 by the end of the year.

Why Visual Attention is a Flawed Measure of Attention

Because of the complexity of attention, Duane Varan proposes focusing on inattention instead, regarding attention as the absence of inattention. Attention is not linear; it is best understood as the threshold above which cognitive processing of stimuli occurs, after which other variables can deliver their value. There are many different kinds of attention, and different measures capture different types. Inattention, on the other hand, is a constant construct.

Charting the Course for Third Party, Cross-Media Audience Measurement

In this session, Tina Daniels and Nicole Gileadi examined Google's principles for charting the course for third-party, cross-media audience measurement. Tina acknowledged that more third-party measurement companies were expressing interest in working closely with Google, given its stature as the world's largest video provider. This interest, she noted, created the need for Google to develop a set of principles to offer both measurement companies and key clients to guide the process. After reviewing the principles, Tina and Nicole held an open discussion. Topics included premium and high-quality content, long-form versus short-form video, and how to measure this content. In addition, Nicole touched on the importance of content and the context surrounding an ad. Other areas included exposure metrics (e.g., Where is my audience? Did I reach them?) as well as providing signals to conduct an impact analysis.

Nielsen One Comes to Market

Scott McDonald opened the session by discussing how the Census uses samples to correct for issues like undercounts in big data. Pete Doe (Nielsen) responded to those who ask whether Nielsen has a big data solution or a panel solution: he doesn't see it that way. Rather, you take all the signals you have and put them together in the best way for the problem at hand.

Making Sense of Multi-Currency Initiatives

Jon Watts (CIMM) led a conversation with the CEOs of an organization that helps manage the JIC (OpenAP) and one that participates in it (the VAB), the EVP of an organization that does not belong to the JIC but has met with it, and the CEO of the MRC. The participants clarified their relationships with each other, discussed Nielsen and expressed their hopes for the future of television measurement.

Use AI to Automate These Aspects of Market Research

  • MSI

Adopting AI into business functions seems to be the new trend, but can it be used to increase efficiency in market research? This study finds that it can help reduce costs and improve speed by automating some aspects of the process. One approach is to use Large Language Models (LLMs) as stand-ins for human survey respondents, a timely option given that the pool of human respondents is shrinking. Moreover, the research finds that LLMs can realistically reflect consumer preferences because they have been trained on extensive online data. This process also takes a fraction of the cost and time required by conventional methods.
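The LLM-as-respondent idea described above can be sketched in a few lines. Everything here is illustrative: `query_llm` is a hypothetical stand-in for a call to a real model API, and the personas, question, and answer options are invented for the example.

```python
import random
from collections import Counter

def query_llm(persona: str, question: str, options: list[str]) -> str:
    """Hypothetical stand-in for a real LLM API call.

    In practice this would prompt a model with the persona and the survey
    question; here it returns a deterministic-within-run canned choice so
    the sketch runs offline.
    """
    rng = random.Random(hash((persona, question)) % (2**32))
    return rng.choice(options)

def simulate_survey(personas: list[str], question: str, options: list[str]) -> Counter:
    """Ask each synthetic 'respondent' the question and tally the answers."""
    answers = [query_llm(p, question, options) for p in personas]
    return Counter(answers)

# Invented personas standing in for a sampled human panel.
personas = [
    f"A {age}-year-old {region} shopper on a {budget} budget"
    for age in (25, 40, 60)
    for region in ("urban", "rural")
    for budget in ("tight", "flexible")
]
question = "Which cereal brand would you buy?"
options = ["Brand A", "Brand B", "Brand C"]

tally = simulate_survey(personas, question, options)
shares = {opt: tally[opt] / len(personas) for opt in options}
```

The aggregation step is the point: once each synthetic respondent returns an answer, preference shares are computed exactly as they would be for a human sample, which is what lets the approach slot into an existing survey workflow.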


JIC: Coalescing Around Standards for Cross-Platform Currencies

Brittany Slattery (OpenAP), who opened this discussion, explained that the new JIC was created by national programmers and media agencies for three main purposes: (1) to bring buyers and sellers to the table with equal voices; (2) to create baseline requirements for cross-platform measurement solutions; and (3) to create a harmonized, census-level streaming service data set across all of the programmers in the JIC. Fox, NBCU, Paramount and Warner Brothers Discovery are all JIC members, as are Dentsu, Group M, IPG Mediabrands, OMG and Publicis. The members hope to foster competition among multiple video ad measurement currencies. After her introduction, Danielle DeLauro (VAB) moderated a discussion with representatives of three networks and Group M.

Inclusion by Design in Pharma Research and Marketing

  • Pharma Council

This ARF Pharma Council event followed up on the Council's podcast episode on "Inclusive Futures of Humancare," focusing on the importance of inclusiveness in pharma research and marketing with respect to both demographic characteristics and health conditions. Four speakers delivered brief presentations, followed by a discussion moderated by Pharma Council Co-Chair Marjorie Reedy of Merck.
