
Human Experience: Why Attention AI Needs Human Input

Dr. Matthias Rothensee, CSO & Partner, eye square

Stefan Schoenherr, VP Brand and Media & Partner, eye square

Speakers Matthias Rothensee and Stefan Schoenherr of eye square discussed the need for a human element in, and human oversight of, AI. Opening the discussion on the state of attention and AI, Matthias acknowledged that the race for attention is one of the defining challenges of our time for modern marketers. He quoted author Rex Briggs, who noted the "conundrum at the heart of AI: its greatest strength can also be its greatest weakness." Matthias indicated that AI is incredibly powerful at recognizing patterns in big data sets, but that this strength comes with risks (e.g., finding spurious patterns, hallucinations). Stefan examined a case study of an M&M's advertisement that measured real humans with eye tracking technology and compared those results to AI attention predictions. The goal was to better understand where AI is good at predicting attention and where it still has to improve. Results from the case study indicated areas for AI improvement in gaze cueing, movement, contrast, complexity and nonhuman entities (e.g., a dog). The static nature of AI (prediction models are often built on static attention databases) can become a challenge when comparing against dynamic attention trends. Key takeaways (a sketch of how predicted and measured attention maps are typically compared follows the list):
  • Predictive AI is good at replicating human attention for basic face and eye images, high-contrast scenes (e.g., probability of looking at things that stand out) and slow-paced scene cuts where AI can detect details.
  • AI seems unaware of a common phenomenon called the "cueing effect" (i.e., humans not only pay attention to people's faces but also to where those faces are looking), which leads to incorrect predictions.
  • AI has difficulty deciphering scenes with fast movement (it shows inertia), in contrast to slow-paced scenes, where it excels at replicating human responses; for fast scenes, human measurement is more accurate.
  • AI over-allocates attention to contrast (e.g., in an ad featuring a runner, AI gave attention to the trees surrounding the runner), whereas humans can identify the main subject of an image.
  • AI decomposes human faces into parts (e.g., fixating on ears), whereas humans detect the focal point of a face as a whole. In addition, AI hallucinates, underestimating facial effects.
  • AI has difficulties interpreting more complex visual layouts (e.g., complex product pack shots are misinterpreted).
  • AI is human-centric and does not focus well on nonhuman entities such as dogs (e.g., in scenes where a dog was present, AI disregarded the dog altogether).
  • AI tends to be more static in nature (e.g., AI prediction models are often built based on static attention databases), which can be a problem when comparing this to dynamic attention trends.
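
To make the comparison concrete: the method described contrasts AI-predicted attention maps with maps measured from real viewers. The sketch below is a minimal, hypothetical illustration (not eye square's actual pipeline) of two standard saliency-benchmark metrics, correlation coefficient (CC) and normalized scanpath saliency (NSS); all data here is randomly generated stand-in data.

```python
import numpy as np

def correlation_coefficient(pred_map: np.ndarray, human_map: np.ndarray) -> float:
    """Pearson correlation (CC) between a predicted saliency map and a
    human fixation-density map, a standard saliency benchmark metric."""
    p = (pred_map - pred_map.mean()) / (pred_map.std() + 1e-8)
    h = (human_map - human_map.mean()) / (human_map.std() + 1e-8)
    return float((p * h).mean())

def normalized_scanpath_saliency(pred_map: np.ndarray, fixations: np.ndarray) -> float:
    """NSS: mean z-scored predicted saliency at human fixation points.
    `fixations` is an (N, 2) array of (row, col) gaze coordinates."""
    z = (pred_map - pred_map.mean()) / (pred_map.std() + 1e-8)
    return float(z[fixations[:, 0], fixations[:, 1]].mean())

# Illustrative usage with random arrays standing in for one video frame:
rng = np.random.default_rng(0)
pred = rng.random((90, 160))                     # AI-predicted attention map
human = rng.random((90, 160))                    # smoothed human gaze heatmap
fix = rng.integers(0, [90, 160], size=(50, 2))   # 50 recorded fixations
print(f"CC={correlation_coefficient(pred, human):.3f}, "
      f"NSS={normalized_scanpath_saliency(pred, fix):.3f}")
```

Low CC or NSS scores on frames containing gaze cues, fast movement or nonhuman subjects would flag exactly the weak spots listed above.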


Evidence-Based Social Media Advertising: Two Field Experiments

Prof. Rachel Kennedy, Associate Director (Product Development), Ehrenberg-Bass Institute for Marketing Science

Beginning her discussion, Rachel Kennedy (Ehrenberg-Bass Institute) noted that Artificial Intelligence (AI) and other developments in computational advertising could mean that key media principles, developed for traditional advertising, no longer apply. She reviewed empirical evidence, primarily from traditional media, supporting the principle that for media to work, it must consistently reach category buyers with both continuity and recency. Nevertheless, she acknowledged the evolving media landscape. Building on that notion, she detailed two field experiments on social media, conducted with Stephen Bellman and Zachary Anesbury, also of the Ehrenberg-Bass Institute. The experiments aimed to assess: (1) whether AI-based optimization outperformed simpler, evidence-based optimization by implementing algorithms on YouTube and Meta platforms and (2) whether bursting, compared to continuous advertising, was more effective at reaching category buyers. The experimental design used matched cells (randomized zip codes matched on demographics such as people per household, median weekly income, monthly repayments and motor vehicles per dwelling) with equal budgets per cell; a sketch of one way to construct such cells follows the list of takeaways. Rachel noted that the standing principles will likely still have a role, but the research aimed to understand which ones, and how they contribute, in the current media landscape. Results from the experiments were uneven and varied, indicating room for improvement. Key takeaways:
  • AI and ML in programmatic advertising may be discovering and using new media principles, generating results from a wider variety of data points than any human could.
  • Experiment 1 (platform optimizer vs. simple reach principle): on the results reported by the digital agency responsible for scheduling the media (impressions, clicks and reach), AI-based optimization beat the simpler, evidence-based reach optimization.
    • However, AI did not outperform the simple media principles on the experiment's own outcome measures.
    • These findings suggest that traditional media placement strategies can be just as effective as AI-based strategies for certain goals.
  • Experiment 2: Bursting is better than continuous advertising for reaching as many category buyers as possible.
    • However, neither campaign performed significantly better than the unexposed control cell.
  • Overall results from these experiments were messy, indicating a need for improvement, particularly in platform-side tools (e.g., inadequate capping options, high budget spending and forecasting tools that need enhancement).
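
The matched-cell design described above pairs geographic units on demographic covariates before randomizing them into treatment cells. The sketch below is a minimal, hypothetical illustration of that idea, not the Ehrenberg-Bass team's actual procedure; the covariates mirror the matching variables named in the summary, but all values are invented.

```python
import numpy as np
import pandas as pd

# Invented zip-code covariates mirroring the matching variables mentioned
# in the talk (people per household, income, vehicles per dwelling).
rng = np.random.default_rng(1)
zips = pd.DataFrame({
    "zip": [f"Z{i:04d}" for i in range(200)],
    "people_per_hh": rng.normal(2.6, 0.4, 200),
    "median_weekly_income": rng.normal(1500, 300, 200),
    "vehicles_per_dwelling": rng.normal(1.8, 0.3, 200),
})

# Standardize the covariates and sort zips along the combined profile so
# that adjacent rows are demographically similar, then pair neighbors.
covs = ["people_per_hh", "median_weekly_income", "vehicles_per_dwelling"]
z = zips[covs].apply(lambda c: (c - c.mean()) / c.std())
ordered = zips.assign(profile=z.sum(axis=1)).sort_values("profile").reset_index(drop=True)

# Randomly assign one zip from each matched pair to each experimental cell,
# so cells are balanced on demographics while assignment stays random.
ordered["cell"] = ""
for pair, coin in enumerate(rng.integers(0, 2, len(ordered) // 2)):
    a, b = 2 * pair, 2 * pair + 1
    ordered.loc[a, "cell"] = "AI_optimizer" if coin else "reach_principle"
    ordered.loc[b, "cell"] = "reach_principle" if coin else "AI_optimizer"

print(ordered.groupby("cell")[covs].mean())  # check covariate balance
```

Equal budgets per cell then ensure that any outcome differences reflect the scheduling strategy rather than spend or demographics.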


Business Outcomes in Advertising Powered by Machine Learning

Brett Mershmann, Sr. Director, Research & Development (R&D), NCSolutions

Brett Mershmann’s (NCSolutions) discussion focused on quantifying the incremental advantages of contemporary machine learning (ML) frameworks over more traditional incrementality measurements. Beginning the presentation, Brett provided an overview of traditional modeling techniques as well as contemporary ML campaign measurements. To understand the differences, he detailed an 11-experiment process using real observational household data intersected with real campaign impression data, but with simulated outcomes generated by a defined outcome function. The experiments measured accuracy, validity and power. Additionally, the team compared ML with randomized controlled trials (RCTs), noting that RCTs are the gold standard but are not always feasible. To accomplish this, they ran both an RCT and an ML analysis, creating test and control groups on real but limited data and applying the same outcome function, which depended on a larger set of variables, to each. In closing, Brett shared results from these experiments, which supported ML as a powerful measurement method and a viable alternative to RCTs. He highlighted the importance of feeding the correct data into these models for optimal results. Key takeaways:
  • A survey from the CMO Council indicated that 56% of marketers want to improve their campaign measurement performance in the next 12 months.
  • Traditional campaign measurement techniques use household matching (nearest-neighbor), household matching (propensity) and inverse propensity weighting (IPW), based on simple statistical models applied uniformly. These methods simulate balanced test and control groups to estimate the group-wise counterfactual.
  • The ML measurement technique, using NCSolutions’ measurement methodology, is computationally robust for large, complex data sets, understanding that data is not one-size-fits-all and estimates counterfactual for individual observations.
  • Simple A/B testing does not capture the true effect; the counterfactual approach instead uses a "what-if" model to estimate it (see the sketch after this list).
  • The experiments comparing ML to traditional methods, measuring accuracy, validity and power showed that:
    • Accuracy: ML was the most accurate method in 55% of scenarios, compared with inverse propensity weighting (9%), propensity match (27%) and nearest-neighbor match (8%).
    • Validity: measured as the percentage of scenarios with the true effect inside the confidence interval, ML gave valid estimates most often (91%), compared with inverse propensity weighting (82%), propensity match (64%) and nearest-neighbor match (73%).
    • Power: ML is more statistically powerful; its average confidence-interval width was 1.48, compared to inverse propensity weighting (1.56), propensity match (1.78) and nearest-neighbor match (1.72).
  • Results from ML vs. RCTs: Both ML and RCT are accurate in campaign measurement, both methods are generally valid, but ML is more powerful.
    • Overall, ML can be an adequate substitute for RCTs, providing meaningful estimates when running an RCT is not possible.
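
The contrast between simple A/B differences and counterfactual ("what-if") estimation can be illustrated in a few lines. The sketch below is a hypothetical toy version of the idea, not NCSolutions' methodology: data is simulated with a known effect, exposure is deliberately confounded with a covariate, and an outcome model trained on unexposed households predicts what exposed households would have done without the ad.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Simulated households with a planted incremental effect of 0.25.
# Exposure probability depends on X[:, 0], which also drives baseline
# sales, so a naive A/B comparison is confounded by design.
rng = np.random.default_rng(2)
n = 20_000
X = rng.normal(size=(n, 5))                            # household covariates
exposed = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))   # confounded exposure
baseline = 2.0 + X @ np.array([0.5, 0.3, 0.0, -0.2, 0.1])
true_effect = 0.25
y = baseline + true_effect * exposed + rng.normal(size=n)

# Naive A/B difference: absorbs the confounding, overstating the effect.
naive = y[exposed].mean() - y[~exposed].mean()

# Counterfactual ("what-if") estimate: fit an outcome model on unexposed
# households only, then predict each exposed household's no-ad outcome.
model = GradientBoostingRegressor(random_state=0)
model.fit(X[~exposed], y[~exposed])
lift = y[exposed].mean() - model.predict(X[exposed]).mean()

print(f"naive A/B: {naive:.3f}, counterfactual: {lift:.3f}, true: {true_effect}")
```

On this simulated data the naive estimate lands well above 0.25, while the counterfactual estimate recovers something close to the planted effect, which is the point of the "what-if model" bullet above.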


2023 Attribution & Analytics Accelerator

The Attribution & Analytics Accelerator returned for its eighth year as the only event focused exclusively on attribution, marketing mix models, in-market testing and the science of marketing performance measurement. The boldest and brightest minds took the stage to share their latest innovations and case studies. Modelers, marketers, researchers and data scientists gathered in NYC to quicken the pace of innovation, fortify the science and galvanize the industry toward best practices and improved solutions. Content is available to event attendees and ARF members.


Optimize Early & Often

The presenter reviewed the current challenges of optimizing and reporting on brand campaigns mid-flight. Specifically, statistical significance ("stat sig") was never intended for mid-flight optimization and reporting, and misapplying it there causes errors. The question marketers ask mid-flight is "how much confidence can I have that a tactic is helping the campaign, given the other campaign tactics?" Stat sig cannot accurately answer this question (a small simulation of the mid-flight "peeking" problem follows).
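
One concrete, widely known reason stat sig breaks down mid-flight is "peeking": checking for significance repeatedly as data accumulates inflates the false-positive rate well past the nominal 5%. The simulation below is a minimal illustration of that effect and is not taken from the presentation.

```python
import numpy as np
from scipy import stats

# Simulate campaigns where the tactic truly has NO effect, then run a
# significance test at several mid-flight checkpoints. Declaring a "win"
# the first time p < 0.05 shows how peeking inflates false positives.
rng = np.random.default_rng(3)
n_campaigns, n_obs = 2_000, 1_000
checkpoints = [250, 500, 750, 1_000]

false_positives = 0
for _ in range(n_campaigns):
    a = rng.normal(size=n_obs)   # control metric, no true lift
    b = rng.normal(size=n_obs)   # tactic metric, no true lift
    for k in checkpoints:
        if stats.ttest_ind(a[:k], b[:k]).pvalue < 0.05:
            false_positives += 1
            break

print(f"false-positive rate with 4 peeks: {false_positives / n_campaigns:.1%}")
# Typically around 11-13%, i.e., more than double the nominal 5% a single
# end-of-campaign test would deliver.
```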

Enabling Alternative TV Measurement for Buyers and Sellers

Pete Doe (Xandr) and Caroline Horner (605) presented a case study of their partnership, which derived results from alternative-currency measurement with both buy- and sell-side perspectives. Xandr’s nimble workflow enabled 605’s shift from advanced targeting to a very specific, custom-built “persuadable” target audience, yielding a 2x to 10x increase in outcomes.


Not All Frequency is Created Equal

The presenters shared new insights from their research on effective frequency, which they consider so valuable to the industry that NCS decided to offer a free license to their patent.

Measurement with Large-Scale Experiments: Lessons Learned

In this session, Ross Link (Marketing Attribution) and Jeff Doud (Ocean Spray Cranberries) examined a large-scale experiment conducted with Ocean Spray. They applied randomized controlled trials (RCTs) to approximately 10 million households (30–40 million people), whose participants consumed ads across a variety of devices. Jeff explained that the experiment measured the impact of suppressing certain ads for some participants. A multi-touch attribution (MTA) logit model was subsequently applied, yielding KPIs such as ROI; the MTA-RCT experiment supplied results refreshed monthly, with daily ROI results collected from the MTA modeling. Outcomes from the experiment centered on retargeting and on recent versus lagged buyers. In addition, the study explored creative treatments and platform effectiveness (a schematic sketch of a logit MTA model follows).
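
A "logit MTA model" here means a logistic regression that relates each household's touch exposures to its conversion probability. The sketch below is a hypothetical toy version, not Marketing Attribution's actual model: the touch types, coefficients and data are all invented, and attribution is read off by counterfactually suppressing one touch type at a time, echoing the ad-suppression design of the experiment.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented household exposure log: impression counts per platform plus a
# recency flag, with conversion as the binary outcome the logit explains.
rng = np.random.default_rng(4)
n = 50_000
X = np.column_stack([
    rng.poisson(1.2, n),    # CTV impressions
    rng.poisson(2.0, n),    # linear TV impressions
    rng.poisson(0.8, n),    # digital impressions
    rng.integers(0, 2, n),  # recent-buyer flag
])
logit = -2.0 + 0.15 * X[:, 0] + 0.05 * X[:, 1] + 0.10 * X[:, 2] + 0.8 * X[:, 3]
converted = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit the logit MTA model, then attribute incremental conversions to each
# touch type as the drop in predicted conversions when it is suppressed.
mta = LogisticRegression(max_iter=1000).fit(X, converted)
base = mta.predict_proba(X)[:, 1].sum()
for j, name in enumerate(["CTV", "linear TV", "digital"]):
    X0 = X.copy()
    X0[:, j] = 0                     # counterfactual: remove this touch type
    attributed = base - mta.predict_proba(X0)[:, 1].sum()
    print(f"{name}: ~{attributed:,.0f} incremental conversions attributed")
```

Dividing each touch type's attributed conversions by its media cost would then yield per-platform ROI figures of the kind the summary mentions.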

Standardizing and Scaling Cross-Platform Measurement

Lindsey Woodland (605) and Jes Santoro (Cadent) presented a case study of a big box retailer to demonstrate their standardized, scalable process for cross-platform measurement and reporting. The retailer’s 2020 holiday campaign benefitted from the identification, scaling and targeting of a selected custom-curated audience. Activation within premium inventory involved broadcast, cable and CTV ads served to targeted households. Including CTV in the media plan added many medium and light linear viewers.