Research Methods

The ARF Member AI Workshop

The ARF Member AI Workshop introduced members to the potential of various AI platforms and tools to boost their work productivity. The workshop covered how LLM-based tools such as Copilot, ChatGPT, Gemini and Claude can be employed in three main areas: presentations and reports, advertising research, and meetings. It also addressed the privacy and security implications of using AI, along with the technology’s current limitations and challenges. The hands-on, interactive workshop gave anyone interested in best practices and guidelines for using AI an opportunity to learn how to weave these tools into their daily work processes.

Member Only Access

Research to Improve AI

Yes, AI is a great tool for marketers. But how can we solve the “AI Conundrum” of taking advantage of its strengths while avoiding its errors and risks?


Experimentation Unleashed: Driving Transformation Using Cutting-Edge Data

Cesar Brea, Partner, Bain & Company

James Slezak, CEO, Swayable

Cesar Brea (Bain & Co.) and James Slezak (Swayable) shared lessons learned from running randomized controlled trials (RCTs) to help transform organizations through new data technologies. They drew on their experiences with CPG, online, event and retailer clients to illustrate how organizations should embrace transformation through experimentation and data. Their resulting experimentation maturity framework outlines important conditions for success. Key takeaways:
  • Orchestration is more important than sophistication—think end to end from problem formulation to alignment on execution and measurement with the CFO. Are the conditions right to have a successful experimentation program? What are the underlying organizational and political dynamics that need to be managed? Does the organization have the right tools to support interpretation and adoption?
  • Work with data from practical sources that will be useful to the company in decision making.

Download Presentation


Foundations of Incrementality

Sophie MacIntyre, Ads Research Lead, Marketing Science, Meta

Randomized controlled trials (RCTs) are the gold standard for unbiased measurement of incrementality, according to Sophie MacIntyre (Meta). However, there are situations where RCTs are not available, so Meta explored other methods to improve the measurement of incrementality. Meta’s researchers wanted to know how close they could get to the experimental result using non-experimental methods. Even with sophisticated observational methods, the researchers were unable to accurately measure an ad campaign’s effect, and non-experimental models such as propensity score matching and double machine learning were difficult to use and produced large errors. Sophie presented incrementality as a ladder of options that get closer to measuring true business value as the ladder is ascended; the rungs are based on how well a given measurement approach can isolate the effect of a campaign from all other factors. This research, undertaken in collaboration with the MMA, analyzed non-incrementality models, quasi-experiments with incrementality models, and randomized experiments. Meta found that incrementality could be approximated with modeling if the research included some RCTs: using Predictive Incrementality by Experimentation (PIE) estimates for decision making led to results similar to experiment-based decisions. Sophie noted that academic collaborations provide quantitative evidence of the value of incrementality methods. Key takeaways:
  • Incrementality matters because it is the foundation of good business decisions and should be the “North Star.”
  • Randomized controlled trials (RCTs) are the gold standard for determining incrementality.
  • Using a significant amount of data and complex models can improve the performance of observational methods but does not accurately measure an ad campaign’s effect.
  • Using Predictive Incrementality by Experimentation (PIE) estimates for decision making leads to results similar to experiment-based decisions.
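The gap between observational and experimental measurement can be made concrete with a small simulation. The sketch below is purely illustrative (it is not Meta’s PIE methodology, and all numbers are invented): ads are targeted at users who were already likely to convert, so a naive exposed-vs.-unexposed comparison overstates the ad effect, while a randomized holdout recovers the true lift.

```python
import random

random.seed(7)

N = 100_000
TRUE_LIFT = 0.02  # ads add 2 percentage points of conversion probability

def converts(base_propensity, exposed):
    """Simulate one user's conversion outcome."""
    return random.random() < base_propensity + (TRUE_LIFT if exposed else 0.0)

# Each simulated user has a latent baseline purchase propensity.
users = [random.uniform(0.01, 0.10) for _ in range(N)]

# Observational comparison: targeting sends ads to high-propensity users,
# so exposure is confounded with baseline behavior.
exposed_obs = [converts(p, True) for p in users if p > 0.055]
control_obs = [converts(p, False) for p in users if p <= 0.055]
naive_lift = (sum(exposed_obs) / len(exposed_obs)
              - sum(control_obs) / len(control_obs))

# RCT: a coin flip decides exposure, independent of propensity.
test, holdout = [], []
for p in users:
    if random.random() < 0.5:
        test.append(converts(p, True))
    else:
        holdout.append(converts(p, False))
rct_lift = sum(test) / len(test) - sum(holdout) / len(holdout)

print(f"naive observational lift: {naive_lift:.3f}")  # overstates the ad effect
print(f"RCT lift:                 {rct_lift:.3f}")    # close to TRUE_LIFT
```

Because randomization breaks the link between targeting and baseline propensity, the RCT estimate lands near the true 2-point lift, while the naive comparison attributes the targeted users’ higher baseline behavior to the ads.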

Download Presentation


Tune-In to Discover What is Making Audiences Tune-Out

Travis Flood, Executive Director of Insights, Comcast Advertising

Duane Varan, Ph.D., CEO, MediaScience

Travis Flood (Comcast Advertising) and Duane Varan (MediaScience) presented research on improving ad pod architecture, aimed at better engaging audiences by understanding what makes them tune out. To frame the research process, Travis explained that they started with a literature review to understand the existing viewer experience, focusing on the quantity, quality and relevance of ads, in addition to media effectiveness studies (e.g., pod architecture, ad creative, reaching the right viewers). Duane noted that the literature review revealed gaps, particularly around the content in the middle section of an ad pod. The goal of the subsequent research was therefore to understand the ad pod duration that optimizes both the viewer experience and brand impact, the difference in impact between more versus fewer ads in the same break duration, and the impact of frequency on viewers and brands. The study included 840 participants who watched a 30-minute program with structured ad breaks; feedback was measured using a post-exposure survey, neurometrics and facial coding. Results revealed that shorter pods, consistent ad lengths within a pod, and capping frequency at two to three ads per program were most effective. Key takeaways:
  • Optimal pod length: Two minutes or less leads to better results. After viewing 2 minutes of ads, recall begins to decrease. Recall is 2x higher at 2 minutes vs. 3 minutes, and after 3 minutes, recall is at its lowest point.
  • Viewers are more engaged as ads begin. Facial coding data showed that in a heavy-clutter cell there was marginally less joy in the first 5 seconds of an ad, indicating that ad load affects how viewers experience ads.
  • Facial coding data revealed that ad clutter can diminish how funny scenes are for viewers.
  • Consistency is key in ad lengths within a pod. Viewer testing showed that pods mixing ads of different lengths made the ad break feel longer than pods with ads of the same length.
  • Ad frequency was optimized at two exposures per program. There was a significant boost in ad recognition and purchase intent going from one to two exposures in a program. Capping frequency at two to three per program can positively impact recognition and purchase intent.

Download Presentation
